Ill-Posed Problems: Theory and Applications
Mathematics and Its Applications
Volume 301
by
A. Bakushinsky, Institute for System Studies, Russian Academy of Sciences, Moscow, Russia
and
A. Goncharsky
SPRINGER SCIENCE+BUSINESS MEDIA, B.V.
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-94-010-4447-9 ISBN 978-94-011-1026-6 (eBook)
This monograph is a new and original work based on two books by the same authors previously published in Russian: Iterative Methods for Solving Ill-Posed Problems, Moscow, Nauka © 1989, and Ill-Posed Problems: Numerical Methods and Applications, Moscow, Moscow State University Press © 1989.
Printed on acid-free paper
All Rights Reserved © 1994 Springer Science+Business Media
Dordrecht Originally published by Kluwer Academic Publishers in
1994 Softcover reprint of the hardcover 1st edition 1994
No part of the material protected by this copyright notice may be
reproduced or utilized in any form or by any means, electronic or
mechanical, including photocopying, recording or by any information
storage and retrieval system, without written permission from the
copyright owner.
DOI 10.1007/978-94-011-1026-6
Contents

1 General problems of regularizability  4
  1.1 Definition of regularizing algorithm (RA)  4
  1.2 General theorems on regularizability and principles of constructing the regularizing algorithms  7
  1.3 Estimates of approximation error in solving the ill-posed problems  16
  1.4 Comparison of RA. The concept of optimal algorithm  19
2 Regularizing algorithms on compacta  23
  2.1 The normal solvability of operator equations  24
  2.2 Theorems on stability of the inverse mappings  26
  2.3 Quasisolutions of the ill-posed problems  28
  2.4 Properties of δ-quasisolutions on the sets with special structure  36
  2.5 Numerical algorithms for approximate solving the ill-posed problem on the sets with special structure  40
3 Tikhonov's scheme for constructing regularizing algorithms  43
  3.1 RA in Tikhonov's scheme with a priori choice of the regularization parameter  43
  3.2 A choice of regularization parameter with the use of the generalized discrepancy  47
  3.3 Application of Tikhonov's scheme to Fredholm integral equations of the first kind  57
  3.4 Tikhonov's scheme for nonlinear operator equations  61
  3.5 Numerical implementation of Tikhonov's scheme for solving operator equation  68
4 General technique for constructing linear RA for linear problems in Hilbert space  73
  4.1 General scheme for constructing RA for linear problems with completely continuous operator  74
  4.2 General case of constructing the approximating families and RA  77
  4.3 Error estimates for solutions of the ill-posed problems. The optimal algorithms  90
  4.4 Regularization in case of perturbed operator  100
  4.5 Construction of linear approximating families and RA in Banach space  114
  4.6 Stochastic errors. Approximation and regularization of the solution of linear problems in case of stochastic errors  122
5 Iterative algorithms for solving non-linear ill-posed problems with monotonic operators. Principle of iterative regularization  127
  5.1 Variational inequalities as a way of formulating non-linear problems  128
  5.2 Equivalent transforms of variational inequalities  131
  5.3 Browder-Tikhonov approximation for the solutions of variational inequalities  136
  5.4 Principle of iterative regularization  141
  5.5 Iterative regularization based on the zero-order techniques  142
  5.6 Iterative regularization based on the first-order technique (regularized Newton technique)  150
  5.7 RA for solving variational inequalities  155
  5.8 Estimates of convergence rate of the iterative regularizing algorithms  160
6 Applications of the principle of iterative regularization  164
  6.1 Algorithms for minimizing convex functionals. Solving the non-linear equations with monotonic operators  164
  6.2 Algorithms for minimizing quadratic functionals. Non-linear procedures for solving linear problems  168
  6.3 Iterative algorithms for solving general problems of mathematical programming  172
  6.4 Algorithms to find the saddle points and equilibrium points in games  178
7 Iterative methods for solving non-linear ill-posed operator equations with non-monotonic operators  185
  7.1 Iteratively regularized Gauss-Newton technique for operator equations  186
  7.2 The other ways of constructing iterative algorithms for general ill-posed operator equations  195
8 Application of regularizing algorithms to solving practical problems  199
  8.1 Inverse problems of image processing  200
  8.2 Reconstructive computerized tomography  208
  8.3 Computerized tomography of layered objects  213
  8.4 Tomographic examination of objects with focused radiation  218
  8.5 Seismic tomography in engineering geophysics  222
  8.6 Inverse problems of acoustic sounding in wave approximation  227
  8.7 Inverse problems of gravimetry  236
  8.8 Problems of linear programming  238
Bibliography  242
Index  254
Preface
Recent years have been characterized by an increasing number of publications in the field of so-called ill-posed problems. This is easily understandable, because we observe the rapid progress of a relatively young branch of mathematics whose first results date back only about 30 years.
By now, impressive results have been achieved both in the theory of solving ill-posed problems and in the applications of algorithms using modern computers. To mention just one field, one can name computer tomography, which could not possibly have been developed without modern tools for solving ill-posed problems.
When writing this book, the authors tried to define the place and role of ill-posed problems in modern mathematics. In a few words, we define the theory of ill-posed problems as the theory of approximating functions with approximately given arguments in functional spaces. The difference between well-posed and ill-posed problems is that the latter are associated with discontinuous functions. This approach is followed by the authors throughout the whole book. We hope that the theoretical results will be of interest to researchers working in approximation theory and functional analysis.
As for particular algorithms for solving ill-posed problems, the
authors paid general attention to the principles of constructing
such algorithms as the methods for approximating discontinuous
functions with approximately specified arguments. In this way it
proved possible to define the limits of applicability of
regularization techniques. The possibility of constructing
iterative procedures for approximating
solutions of the ill-posed problems is thoroughly investigated for
a wide range of linear and non-linear problems.
The authors have acquired extensive experience in applying the methods of solving ill-posed problems. This allowed them to demonstrate the efficiency of the algorithms on model and applied practical problems of mathematical programming, linear algebra, functional optimization, and operator equations.
Some new approaches concerned with computer tomography in the framework of both geometrical optics and the wave approximation are presented in a monograph for the first time ever. We hope that these sections of the book will attract the interest of researchers specializing in computer tomography or seismic geophysics, or those applying tomographic approaches in industry. The presentation of the material in this book was strongly affected by many years of scientific contacts with colleagues from the USA, Japan, France and Italy. It was also influenced by numerous discussions with Academician A. Tikhonov and scientists of his school, with which the authors associate themselves.
This book is based on two monographs of the same authors which were published earlier in Russian (Bakushinsky et al., 1989a, 1989b). Compared to the Russian editions, the text of this book has been substantially changed and extended to include some of the most recent results. Chapters 7 and 8 have been specially prepared for the present edition. The list of references has been substantially extended. The authors are grateful to Dr. I.V. Kochikov for translating this book into English, and to E.O. Drevyatnikova for word-processing the book.
A. Bakushinsky A. Goncharsky
Introduction
The impressive and steadily increasing opportunities provided by
modern computers stimulate a continuous growth of the fields of
human activities where mathematical models are applied.
In some of these models the algorithms for approximate solution of the corresponding mathematical problems can be formulated within the framework of traditional computational mathematics. In this case it is often possible to prove the formal convergence of the algorithms and to evaluate the approximation errors. The technical problems related to the numerical implementation of such algorithms are associated with rounding errors, data representation, etc., and do not usually lead to significant difficulties.
However, it is often the case that the data available to a researcher can only be interpreted with a formal model which does not allow the application of traditional computational algorithms.
Such models usually lead to formulation of ill-posed problems, for which there exist no theorems on solvability in some natural functional spaces. Moreover, these problems lack stability (in the classical sense) of the solution with respect to errors in the input data. This is significant, since we almost never deal with absolutely exact values of the input parameters. The theory of solving ill-posed problems is a relatively new branch of computational mathematics which began to separate into an independent discipline shortly after the publications by A. Tikhonov (Tikhonov, 1963a, 1963b). In these publications the fundamental concept of a regularizing algorithm was introduced.
Here we formulate this concept for a mathematical model represented by an operator equation $Az = u$, where the operator $A$ acts from a metric space $Z$ into a metric space $U$. The peculiarity of ill-posed problems is that the operator $A$ is not assumed to be continuously invertible (in a local or in a global sense). Let us assume that for the "exact" values of $u$ and $A$ there exists the "exact" solution $\bar z$ which we are interested in. Let, for the sake of clarity, the operator $A$ be known exactly, and instead of $u$ we are given its approximation $u_\delta \in U$ such that $\rho(u_\delta, u) \le \delta$. Here $\delta$ is a numeric parameter characterizing the errors of the input data $u_\delta$. It is natural to require that a numerical algorithm for solving the operator equation should have the following main property: the smaller the error $\delta$, the closer the approximation to $\bar z$ that can be obtained. This approach underlies the formal definition of a regularizing algorithm.
The regularizing algorithm (RA) is an operator $R$ which puts into correspondence to any pair $(u_\delta, \delta)$ an element $z_\delta \in Z$ such that $z_\delta \to \bar z$ (in the metric of $Z$) as $\delta \to 0$. For a given set of input data, $R(u_\delta, \delta)$ can be treated as the approximate solution of the problem.
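The convergence requirement in this definition can be checked numerically. Below is a minimal sketch (not from the book): Tikhonov's variational scheme, treated in detail in Chapter 3, yields one concrete $R(u_\delta, \delta)$ once an a priori parameter choice $\alpha(\delta) \to 0$ is tied to the error level. The test matrix, the "exact" solution, and the choice $\alpha = \delta$ are all illustrative assumptions.

```python
import numpy as np

# Illustrative ill-posed problem: A is a discretized Gaussian smoothing
# operator, hence severely ill-conditioned; z_bar is the "exact" solution
# and u = A z_bar the "exact" right-hand side.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-10.0 * (t[:, None] - t[None, :]) ** 2) / n
z_bar = np.sin(np.pi * t)
u = A @ z_bar

def R(u_delta, delta):
    """A regularizing algorithm R(u_delta, delta): Tikhonov's scheme with
    the (assumed) a priori parameter choice alpha(delta) = delta."""
    alpha = delta
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ u_delta)

rng = np.random.default_rng(0)
errors = []
for delta in (1e-2, 1e-4, 1e-6):
    noise = rng.standard_normal(n)
    u_delta = u + delta * noise / np.linalg.norm(noise)   # rho(u_delta, u) = delta
    errors.append(np.linalg.norm(R(u_delta, delta) - z_bar))

# z_delta = R(u_delta, delta) approaches z_bar as delta -> 0
print(errors)
```

The printed errors shrink as $\delta \to 0$, which is exactly the property $z_\delta \to \bar z$ required of an RA; note that nothing is claimed about the *rate* of convergence, in line with Sec. 1.3 below.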
The definition of the RA given above and its generalized versions constitute the basis for the modern theory and practice of solving ill-posed problems.
It proved possible to classify ill-posed problems into regularizable (i.e. those for which an RA can in principle be constructed) and non-regularizable. Besides, general principles for constructing RAs for a wide class of mathematical models have been created. It was discovered that many classical methods (e.g. iterative techniques for solving linear operator equations) can be used to solve ill-posed problems as well or, more exactly, to construct RAs for such problems. The only necessary improvement is to formulate an appropriate stopping rule dependent on the errors of the input data. Significant progress has also been achieved in constructing RAs for non-linear problems. The principle of iterative regularization has allowed the generation of special iterative sequences which are characterized by only slight formal differences from the traditional procedures.
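As an illustration of this point (a sketch under assumed data, not an algorithm taken from the book): the classical Landweber iteration for a linear equation $Az = u$ becomes a regularizing algorithm once it is equipped with a discrepancy-based stopping rule, iterating only until the residual drops to the noise level $\tau\delta$. The matrix (a discretized integration operator), the test solution, and the constant $\tau = 1.1$ are illustrative assumptions.

```python
import numpy as np

# Illustrative discretized ill-posed problem: A approximates the
# integration operator (Az)(t) = integral of z(s) over [0, t].
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.tril(np.ones((n, n))) / n
z_bar = t * (1.0 - t)            # "exact" solution
u = A @ z_bar                    # "exact" right-hand side

def landweber(u_delta, delta, tau=1.1, max_iter=200_000):
    """Classical Landweber iteration z_{k+1} = z_k - beta A^T (A z_k - u_delta),
    turned into an RA by the stopping rule ||A z_k - u_delta|| <= tau * delta."""
    beta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size < 2 / ||A||^2
    z = np.zeros(n)
    for _ in range(max_iter):
        residual = A @ z - u_delta
        if np.linalg.norm(residual) <= tau * delta:
            break
        z -= beta * (A.T @ residual)
    return z

rng = np.random.default_rng(1)
noise = rng.standard_normal(n)
results = {}
for delta in (1e-2, 1e-3):
    u_delta = u + delta * noise / np.linalg.norm(noise)
    z_delta = landweber(u_delta, delta)
    results[delta] = np.linalg.norm(A @ z_delta - u_delta)
```

Without the stopping rule the iteration would eventually fit the noise (the semiconvergence typical of iterative methods on ill-posed problems); the stopping index, which grows as $\delta$ decreases, is the only "improvement" the classical scheme needs.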
By now, many monographs dealing with ill-posed problems have been published (see, for example, (Groetsch, 1984, Louis, 1989, Tikhonov et al., 1977)). However, they are all concerned with certain fragments of the theory of RAs. As a rule, only Tikhonov's classical variational scheme is described. Many general problems and new schemes of constructing RAs are mentioned briefly, and readers are referred to journal publications. The rapidly growing interest in both theory and practice of solving ill-posed problems has stimulated the authors to write a monograph which would include all the main currently available results related to the existence and construction of RAs.
The general structure of this book is as follows. Chapter 1
contains formulations and discussion of the main definitions used
used
throughout the book. The general theorems on regularizability and
on the error estimates are proved. In particular, the theorem on
necessary and sufficient conditions of regularizability (Vinokurov,
1971) is proved, which previously could be found only in journal
publications.
Chapters 2 and 3 are devoted to the more widely known problems of
regularizability on compacta and the general scheme of constructing
RA proposed by A. Tikhonov.
In Chapter 4 we present a general scheme, first given in (Bakushinsky, 1967), of constructing linear RA for linear problems (e.g. solving linear equations and calculating the values of a linear unbounded operator) in Hilbert and Banach spaces. The properties of
approximations and the RAs generated within such a scheme are
studied. The relations between traditional iterative techniques for
solving linear equations and some of the newly created RAs are
revealed.
A separate section deals with regularizing linear problems under
the conditions of stochastic errors. The latter problem is only
briefly discussed, attention being paid to the questions that do
not require significant extension of the general "deterministic"
approach of the book. A complete state-of-the-art reference in
linear ill-posed problems with stochastic errors can be found in
(Fedotov, 1982).
Chapters 5, 6 and 7 are devoted to various aspects of implementing the principle of iterative regularization, which allows one to construct iterative RAs for non-linear problems. Such problems, in particular, embrace all the problems of convex optimization. With the use of the mentioned principle, it proved possible (for the first time) to formulate strongly convergent iterative procedures and RAs for solving the general infinite-dimensional problem of convex optimization. The general problem of linear programming is considered as an example for which many stable (in the sense of regularization theory) iterative algorithms can be constructed in this way. The last chapter (Chapter 8) contains examples of practical application and
numerical implementation of some of the methods developed in
Chapters 2-7. In selecting the examples, the authors tried to
demonstrate the opportunities that RAs provide for solving a wide
range of problems (from linear integral equations to linear
programming). A relatively significant space is devoted to
applications of RAs to image reconstruction and reconstructive
tomography. This is justified by the importance of these problems
for modern engineering and reflects the particular scientific
interests of the authors.
The statements formulated in the monograph are, as a rule, provided
with detailed proofs. Some more special results as well as theorems
of functional analysis are given without proofs. However,
references are always provided for the readers interested in more
detailed information on the subject. The reading of this book
requires a certain knowledge of the basic concepts
of functional analysis (acquaintance with, for example, (Dunford et
al., 1958, 1963) is sufficient). As for the reference list, it does
not aim to be complete. The authors have
included in the reference list only publications directly related
to problems discussed in the text.
Chapter 1
General problems of regularizability
1.1 Definition of regularizing algorithm (RA)
Any mathematical model sets some correspondence between two kinds of objects. One class of objects includes characteristics of the model, and the other consists of experimentally observed attributes of the studied phenomena. The problems of processing experimental data are always concerned with inevitable experimental errors. For the purposes of the present analysis, we shall consider the objects of the second kind as belonging to some metric space $X$, and those of the first kind to a space $Y$. The mathematical model establishes a certain correspondence $y = G(x)$ between input data $x \in X$ and characteristics of the model $y \in Y$. Modeling is aimed at obtaining model characteristics $y$ using approximately specified input data $x$.
The above problem is often formulated in other terms. Let $z$ represent the characteristics of the model, and $u$ the attributes of the observed phenomenon. The values of $z$ and $u$ are related by an operator equation $Az = u$, where the operator $A$ is known. It is necessary to obtain an approximation to $z$ using the knowledge of approximately specified $u$. The difficulties here arise from the following facts: 1) the operator $G = A^{-1}$, in general, is not specified explicitly; 2) the domain of $G = A^{-1}$ does not include the whole space; 3) the operator $G = A^{-1}$, defined on its domain, is not continuous. Only items 2) and 3) are concerned with the principal problems of the theory.
Therefore we shall assume that the mapping $y = G(x)$ is given, though not
everywhere in $X$, but rather on some subset $D_G \subset X$. The mapping $G$ is not continuous on $D_G$.
Function $G$ may as well represent a one-to-many mapping. In particular, the operator equation $Az = u$ may have different solutions for the same right-hand side $u$. It follows that, in general, $G = A^{-1}$ is a one-to-many mapping. In principle, it is possible to formulate the regularizability theory for one-to-many mappings. In most cases, however, there exists a natural way of selecting a one-to-one cross-section $G_0$ of the mapping $G$ defined by the mathematical model. All the subsequent analysis is usually performed for such a cross-section. For this reason, when formulating the general problems of regularizability later in this chapter, we shall restrict our considerations to the case of a one-to-one mapping $G$. This is sufficient for the subsequent applications of the results.

Definition 1. The mathematical model is absolutely well-posed (correct) on $D_G$ if it defines a mapping $G$ which may be continued from $D_G$ to all of $X$ continuously. If $G$ is relatively continuous on a subset $D \subseteq D_G$, the model is called conditionally well-posed.

The above definition of correctness is somewhat broader than the classical one given by Hadamard.

Example 1. Consider the linear algebraic equation
$$Az = u, \quad z \in E^m, \ u \in E^n,$$
where $E^m$, $E^n$ are Euclidean spaces of the corresponding dimensions. Let us consider the vector $u$ as input data distorted by experimental errors. Let us introduce the mapping $G(u)$ which puts into correspondence to any $u$ the solution of the above equation with the minimal norm, in the sense of the least-squares technique. The function $G(u)$ is defined and continuous everywhere in $E^n$, i.e. it represents an absolutely well-posed model.
If the elements of the matrix $A$ are also considered as input data and are known approximately, the mapping $z = G(A, u)$ is no longer continuous with respect to $A$, if the distance between matrices is measured, for example, by the usual operator norm. This model is therefore ill-posed with respect to perturbations of the matrix. These and related problems are discussed in more detail in Chapter 3.
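A small numerical check of both statements (a sketch; NumPy's `pinv`, which computes the Moore-Penrose minimal-norm least-squares solution, stands in for the mapping $G$, and the matrices are illustrative choices):

```python
import numpy as np

# G(u): minimal-norm least-squares solution of Az = u, via the
# Moore-Penrose pseudoinverse.  For fixed A this map is linear and
# bounded, hence continuous in u: the model is well-posed in u.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # rank 2: solutions are non-unique

def G(u):
    return np.linalg.pinv(A) @ u

u = np.array([1.0, 2.0])
z = G(u)
z_perturbed = G(u + 1e-8 * np.array([1.0, -1.0]))
continuity_gap = np.linalg.norm(z - z_perturbed)    # tiny: continuity in u

# With the matrix itself as data the picture changes: near a
# rank-deficient matrix the pseudoinverse is discontinuous.
A0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
A_eps = np.array([[1.0, 0.0],
                  [0.0, 1e-10]])                    # tiny perturbation of A0
v = np.array([1.0, 1.0])
matrix_gap = np.linalg.norm(np.linalg.pinv(A_eps) @ v - np.linalg.pinv(A0) @ v)
print(continuity_gap, matrix_gap)
```

Here `continuity_gap` is of the order of the data perturbation, while `matrix_gap` is enormous: an arbitrarily small change of the matrix produces an arbitrarily large change of the minimal-norm solution.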
Example 2. Consider the operator equation
$$Az = u, \quad z \in Z, \ u \in U,$$
where $Z$, $U$ are linear normed spaces, and $A$ is a completely continuous operator from $Z$ to $U$. The inverse mapping $G = A^{-1}$ is only defined on $D_G = AZ$, which does not include the whole of $U$ in the case of infinite-dimensional spaces. It is well known that the mapping $G = A^{-1}$ is unbounded on $D_G = AZ$, i.e. this problem is ill-posed on $AZ$. However, if we could a priori specify a compact subset $M \subset Z$, then the mapping $G = A^{-1}$ defined on $D = AM \subset D_G$ is relatively continuous, i.e. the problem is conditionally well-posed on $D = AM \subset D_G$.
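The unboundedness of $G = A^{-1}$ on $AZ$ shows up numerically as a condition number that grows without bound as the discretization of a completely continuous operator is refined. A sketch with an assumed kernel $K(s, t) = \min(s, t)$ (the kernel is an illustrative choice, not one from the book):

```python
import numpy as np

# Midpoint-rule discretization of the completely continuous operator
# (Az)(s) = integral over [0, 1] of min(s, t) z(t) dt.
conds = []
for n in (20, 40, 80):
    t = (np.arange(n) + 0.5) / n
    A = np.minimum(t[:, None], t[None, :]) / n
    s = np.linalg.svd(A, compute_uv=False)
    conds.append(s[0] / s[-1])          # condition number of the discretization

print(conds)   # grows roughly like n^2: the inverse is unbounded in the limit
```

Each refinement of the grid makes the discrete problem harder to invert, mirroring the fact that the continuous inverse is an unbounded operator.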
In computational mathematics, it is essential to develop methods of calculating $G(x)$ using an approximately specified argument $x$. In this case $G(x)$ may also be found approximately. It is natural to refer to $y$ as an approximation of $G(x)$ if $\rho_Y(G(x), y)$ is "small" ($\rho_Y(f, g)$ is the distance between $f$ and $g$ in the space $Y$).
It is necessary to clarify the concept of "approximately specified data $x$". We shall represent the approximate data by a pair $(x_\delta, \delta)$ such that $\rho_X(x, x_\delta) \le \delta$. The element $x_\delta$ does not necessarily belong to $D_G$. The theory of regularization is based on the assumption that such a pair is available. Obviously, it would be very attractive to obtain an approximation of $G(x)$ as a pair $(y_\varepsilon, \varepsilon)$ where $\rho_Y(G(x), y_\varepsilon) \le \varepsilon$. It proves, however, that such approximations of the solution of an ill-posed problem cannot be constructed. For these problems it is only possible to find $y(x_\delta)$ such that $y(x_\delta) \to G(x)$ when $\delta \to 0$. Constructing $y(x_\delta)$ is what creating the so-called regularizing algorithms is concerned with.
For the ill-posed models (i.e. discontinuous mappings $G$) the approximation in the sense of Tikhonov regularizability has special significance. Let some mapping $G : X \to Y$ be defined on a subset $D(G) \subseteq X$ of a metric space $X$, with $G(D(G)) \subseteq Y$ ($Y$ also a metric space). Let us introduce the approximation error of $G(x)$ for a certain fixed $\delta > 0$ and an a priori given mapping $R_\delta : X \to Y$ as
$$\Delta(R_\delta, \delta, x) = \sup_{\rho_X(x, x_\delta) \le \delta} \rho_Y(R_\delta(x_\delta), G(x)).$$
Definition 2 (Vinokurov, 1971, 1987). A function $G$ is regularizable on $D_G \subset X$ (or on a non-empty subset $D \subseteq D_G$) if there exists a mapping $R_\delta(\cdot)$ (or $R(\cdot, \delta)$, which is the same) from the direct product of the spaces $X \otimes R_+^1$ into $Y$ such that
$$\lim_{\delta \to 0} \Delta(R_\delta, \delta, x) = 0 \quad \forall x \in D_G \ (\forall x \in D). \tag{1.1}$$
The operator $R_\delta(x)$ is called a regularizing algorithm (RA) or a regularizing operator for calculating $G(x)$. Note that $R_\delta(x)$ is not necessarily continuous.
If $R_\delta$ is an RA for $G(\cdot)$, it is natural to regard $R_\delta(x_\delta)$ as an approximation to $G(x)$ using the input data $(x_\delta, \delta)$. In the regularization theory, it is essential that the pair $(x_\delta, \delta)$ is used for approximating $G(x)$. We cannot use the triple $(x_\delta, \delta, x)$ because the exact value of the argument is not known. Then, maybe, it is possible to create an RA with the use of $x_\delta$ only? Let $x_\delta$ be an arbitrary element from the $\delta$-vicinity of $x$, i.e. $\rho(x, x_\delta) \le \delta$. Does there exist any function $R(x_\delta)$ which does not explicitly depend on $\delta$ and such that Eq. (1.1) holds? It proves that only problems well-posed on $D$ may be regularized with such an RA.
Theorem 1.1 The mapping $G$ is regularizable on $D$ ($D_G$) by a family $R_\delta = R(\cdot, \delta) = R(\cdot)$, $0 < \delta \le \delta_0$, if and only if $G(x)$ may be continued to all of $X$, the continuation being continuous on $D$ ($D_G$) in $X$.
Proof. To prove the sufficiency, one may take $R(\cdot) = \bar G$, where $\bar G$ is a continuation of $G$, continuous on $D$ ($D_G$). For the necessity, $R(\cdot)$ itself is the continuation we are searching for.
According to the above theorem, the pair $(x_\delta, \delta)$ is a minimal set of data which can secure approximation of $G(x)$ in the sense of Def. 2. Unfortunately, it is generally impossible to estimate the deviation of $R_\delta(x_\delta)$ from $G(x)$ without additional information concerning $x$. This is a specific feature of the ill-posed problems (see Sec. 1.3). In general, an RA guarantees only asymptotic convergence of the approximations to the exact solution as $\delta$ tends to zero.
1.2 General theorems on regularizability and principles of constructing the regularizing algorithms
Let us try to find out how wide the class of regularizable mappings $G$ is. Obviously, it is not empty. It can be seen immediately from the definition that all the mappings $G$ corresponding to well-posed models are regularizable (i.e. all functions $G(x)$ continuous on $D$ are regularizable). In this case it may be assumed that $R_\delta = G$ for all $\delta$. Since $G$ is continuous, Eq. (1.1) is valid. All of classical computational mathematics is based on the
fact that $G(x_\delta)$ approximates $G(x)$ when $\rho(x_\delta, x) \le \delta$, if $G$ corresponds to a well-posed mathematical model.
Let us deduce an immediate consequence of Def. 2 which guarantees that the class of nonregularizable mappings is not empty.
Theorem 1.2 (Vinokurov, 1971). Suppose that $G$ maps $D_G \subseteq X$ onto $Y$, i.e. $G(D_G) = Y$, where $X$, $Y$ are metric spaces. If $G$ is regularizable on $D_G$, and $N$ is an everywhere-dense set in $X$, then $\bigcup_{n=1}^{\infty} R_{1/n}(N)$ is everywhere dense in $Y$.
Proof. Let $y \in Y$ and $x \in G^{-1}(y)$. Since $N$ is dense in $X$, there exists $x_n \in N$ such that $\rho(x_n, x) \le 1/n$. In accordance with Def. 2,
$$\rho_Y(R_{1/n}(x_n), y) \le \Delta(R_{1/n}, 1/n, x) \to 0 \quad \text{as } n \to \infty.$$
In other words, we have found a sequence of points $R_{1/n}(x_n) \in \bigcup_{n=1}^{\infty} R_{1/n}(N)$ that converges to $y$. Since $y$ is chosen arbitrarily, the theorem is proved.

Consequence 1. Let $G$ map $D_G$ onto $Y$, let the mapping $G$ be regularizable on $D_G$, and let $X$ be a separable space. Then $Y$ is also separable.

This follows directly from Theorem 1.2 if we select a countable everywhere-dense set $N$ in $X$. This fact makes it easy to construct examples of nonregularizable
examples of nonregularizable
mappings. Example 1. Let X = 100 be a space of bounded sequences,
and
Y = J2 be a space of square-summable sequences; the norm of element
x = (Xl, •.. ,xn, ... ) in 100 is given by equation "xlIl~ = sup
Ixnl·
n
We define a linear operator A from 100 to J2 as
It is easily seen that KerA =0, and the inverse mapping G =A-I from
J2 to 100 exists. In accordance with Cons. 1, it is not
regularizable on D G C J2 because [00 is an inseparable Banach
space (Dunford et al., 1958), while J2 is separable.
Example 2. Let $A$ be a linear one-to-one operator from the space $V$ of functions with bounded variation to $L_2$. The mapping $A^{-1}$ cannot be regularizable on $AV \subset L_2$, because $V$ is not separable, while $L_2$ is.

We have already mentioned that continuous mappings are regularizable. It is a remarkable fact that some discontinuous mappings can also be regularized. In particular, this is true for all the mappings which are pointwise limits of continuous mappings. It follows that the approximate solution of ill-posed problems is possible (in the above-specified sense).
Theorem 1.3 Let a function $G : X \to Y$ be a pointwise limit (on a set $D \subseteq X$) of functions $G_n$ which are continuous everywhere on $X$. Then $G$ is regularizable on $D$.
Proof. Suppose that $G_n$ is the given approximating sequence. Let us define a family of coverings $\Pi_n$ of the space $X$ by open sets as follows: $\Pi_1$ consists of all the open sets $U \subseteq X$ with diameters $\le 1$ such that
$$d\{G_1(U)\} \le 1;$$
$\Pi_2$ consists of all the open sets $U \subseteq X$ with diameters $\le 1/2$ such that
$$d\{G_1(U)\} \le 1/2, \quad d\{G_2(U)\} \le 1/2;$$
$\Pi_n$ consists of all the open sets $U \subseteq X$ with diameters $\le 1/n$ such that
$$d\{G_1(U)\} \le 1/n, \ \ldots, \ d\{G_n(U)\} \le 1/n.$$
The continuity of every $G_n$ implies that any point $x \in X$ belongs to some element of each $\Pi_n$.
Let us construct the regularizing family $R_\delta(x)$. Suppose we have fixed the point $x$ and some $\delta > 0$. For small $\delta \le \delta_0$ there exists $n = n(\delta, x)$ such that we may find a set $U \in \Pi_{n(\delta, x)}$ with $S(x, \delta) \subset U$.¹ Let $n(\delta, x)$ be the maximal number for which the latter condition is valid. We define
$$R_\delta(x) = \begin{cases} G_{n(\delta, x)}(x), & n(\delta, x) < \infty, \\ G_1(x), & \text{otherwise.} \end{cases}$$
It is clear that the necessary $n$ can always be found for sufficiently small $\delta \le \delta_0$. Now let us show that the above-defined family $R_\delta(x)$ regularizes $G$ on the set $D \subseteq D_G \subseteq X$. We select an arbitrary point $x$ from $D$. From the triangle inequality we get
$$\sup_{\rho(x', x) \le \delta} \rho(R_\delta(x'), G(x)) \le \rho(G_{n(\delta, x)}(x), G(x)) + \sup_{\rho(x', x) \le \delta} \rho(G_{n(\delta, x)}(x'), G_{n(\delta, x)}(x)) \le \rho(G_{n(\delta, x)}(x), G(x)) + 1/n. \tag{1.2}$$
¹ Here $S(x, \delta)$ denotes the open ball with center $x$ and radius $\delta$.
Note that $n = n(\delta, x) \to \infty$ as $\delta \to 0$. In fact, since any ball $S(x, \delta)$ contains the point $x$, it is included in an arbitrarily small vicinity of $x$ when $\delta$ tends to zero. Hence, for arbitrarily large $N$ we can choose $\delta$ sufficiently small so that $S(x, \delta)$ is included in one of the open sets constituting the covering $\Pi_N$. It follows that $n(\delta, x) \ge N$. With this taken into account, the theorem follows from inequality (1.2).
In the following important case the condition of continuous approximation is also a necessary condition for regularizability of $G$.
Theorem 1.4 (Vinokurov, 1971). Suppose a separable space $Y$ is a convex subset of a linear normed space, and $G$ is regularizable on $D_G$. Then $G$ is a pointwise limit of a sequence of mappings $G_n$ which are continuous on $X$.
Proof. The proof would appear trivial if the RA mentioned in the formulation of the theorem were continuous. It happens that such an RA can be constructed.
Lemma 1.1 Given the conditions of Theorem 1.4, there exists an RA such that each $R_\delta$ takes only a finite number of values in $Y$.
Proof of Lemma 1.1. Any separable metric space $Y$ is homeomorphic to some totally bounded metric space $Y'$ (i.e. a space in which a finite $\delta$-net exists for every $\delta$) (Dunford et al., 1958). Let $l$ be the mentioned homeomorphism. The function $lG$ is obviously regularizable, and the fact that $R_\delta$ regularizes the mapping $G$ implies that the family $lR_\delta$ is a regularizing algorithm for $lG$. Suppose that $L_\delta$ ($\delta > 0$) is a mapping $Y' \to Y'$ which maps each element $y' \in Y'$ to one of the nearest points of a $\delta$-net in $Y'$. Using the triangle inequality, it is easy to show that $L_\delta l R_\delta$ is also a regularizing algorithm for $lG$. The family of functions $l^{-1} L_\delta l R_\delta$ is the RA whose existence is declared by Lemma 1.1.

Remark. The number of values that $R_\delta$ can take in Lemma 1.1 depends on $\delta$.
Lemma 1.2 Given the conditions of Theorem 1.4, there exists an RA such that every $R_\delta$ is a continuous function on $X$.
Proof of Lemma 1.2. Let $R_\delta$ be an RA for $G$ having only a finite number of values $y_1, \ldots, y_{k(\delta)}$ for any $\delta > 0$. It is proved in Lemma 1.1 that such an RA exists. Let us fix some $\delta^* > 0$. For the sake of simplicity we shall assume that $k(\delta^*) = 2$. The complete preimage of the point $y_1$ will be denoted by $R_{\delta^*}^{-1}(y_1) = A_1$.
Consider the open set $O(A_1)$ of all the points with distances from $A_1$ not exceeding $\delta^*/8$. If $R_{\delta^*}^{-1}(y_2) \subset O(A_1)$, we stop with this set. Otherwise, define $A_2 = R_{\delta^*}^{-1}(y_2) \setminus O(A_1)$ and construct $O(A_2)$ in full analogy with $O(A_1)$. It is obvious that
$$O(A_1) \cup O(A_2) = X. \tag{1.3}$$
Define the functions
$$f_i(x) = \begin{cases} 0, & \rho(x, A_i) \ge \delta^*/4, \\ 1 - \dfrac{\rho(x, A_i)}{\delta^*/4}, & \rho(x, A_i) < \delta^*/4. \end{cases}$$
Note that for any $x \in X$
$$\frac{1}{2} \le \sum_{i=1}^{p} f_i(x) \le p, \tag{1.4}$$
where $p$ is the number of constructed sets $A_i$ (in this particular case $p \le 2$). The first inequality in (1.4) follows from the fact that, because of (1.3), $\rho(x, A_i) < \delta^*/8$ for at least one $i$; hence the value of the function $f_i \ge 1 - \dfrac{\delta^*/8}{\delta^*/4} = \dfrac{1}{2}$.
Now we can introduce the obviously continuous functions $R_{\delta^*/8}(x)$ with values $R_{\delta^*/8}(x) = y_1$ if the set $A_2 = \emptyset$, and
$$R_{\delta^*/8}(x) = \frac{\sum_{i=1}^{2} y_i f_i(x)}{\sum_{i=1}^{2} f_i(x)} \tag{1.5}$$
otherwise. The procedure of constructing $R_{\delta^*/8}(x)$ for the case when $R_{\delta^*}$ has more than two values is absolutely analogous. Because of the assumptions on $Y$ (linearity and convexity), $R_{\delta^*/8}(x)$ is defined correctly, and $R_{\delta^*/8}(x) \in Y$.
Now we are to show that Eq. (1.5) defines an RA for the mapping $G$. Let us fix some $x \in D_G$. Since $R_\delta$ is an RA for $G$, for any $\varepsilon > 0$ there exists $\delta(\varepsilon, x)$ such that
$$\rho(R_{\delta(\varepsilon,x)}(x'), G(x)) \le \varepsilon \qquad (1.6)$$
if $\rho(x', x) \le \delta(\varepsilon, x)$. Select some $\varepsilon > 0$. Let, for the sake of simplicity, $R_{\delta(\varepsilon,x)}$ have two values ($y_1$ and $y_2$), and let two sets $A_1, A_2$ be constructed. If, for instance,
$$\|y_1 - G(x)\| > \varepsilon,$$
then $f_1(x') = 0$ for every $x'$ from a vicinity of $x$. To prove this, note that in this case
$$R_{\delta(\varepsilon,x)}^{-1}(y_1) \cap \{x' : \rho(x', x) \le \delta(\varepsilon, x)\} = \emptyset. \qquad (1.7)$$
It follows that $A_1 \cap \{\rho(x', x) \le \delta(\varepsilon, x)\} = \emptyset$ and, consequently, that $O(A_1) \cap \{x' : \rho(x', x) \le \delta(\varepsilon, x)/8\} = \emptyset$, which yields $f_1(x') = 0$. Relation (1.7) cannot hold simultaneously for all the $y_i$ participating in the construction procedure (1.5) because of Eq. (1.3) and $A_i \subseteq R_{\delta(\varepsilon,x)}^{-1}(y_i)$: if all the $y_i$ satisfied (1.7) simultaneously, it would follow that $x \notin X$. Hence, if (1.7) is satisfied for $y_1$, then
$$R_{\delta(\varepsilon,x)/8}(x') = y_2 \quad \text{if} \quad \rho(x', x) \le \delta(\varepsilon, x)/8.$$
In both cases we obtain
$$\|R_{\delta(\varepsilon,x)/8}(x') - G(x)\| \le \varepsilon, \quad \rho(x', x) \le \delta(\varepsilon, x)/8.$$
This means that $R_{\delta/8}(x)$ is the RA for $G$ that we are searching for.
The proof of Theorem 1.4 now becomes simple. Let $\delta_n \to 0$ be a sequence of positive numbers. The family of continuous mappings $G_n \equiv R_{\delta_n/8}$ is a pointwise approximation of $G$ on $D_G$.
This enables us to formulate a criterion of regularizability (Vinokurov, 1971):
Theorem 1.5 For the function $G : X \to Y$ ($Y$ a separable linear normed space) to be regularizable on $D \subseteq D_G \subset X$, it is necessary and sufficient that $G$ be a pointwise limit (on $D$) of a sequence of functions $G_n$ that are continuous on $D$.
For the majority of computational applications it is sufficient to consider linear normed separable spaces, or even separable Hilbert spaces $X$, $Y$. These cases are completely covered by the above theorem. Still, it is possible to formulate a more general criterion of regularizability for metric spaces.
Definition 3. A function $G : X \to Y$ ($X$, $Y$ are metric spaces) is called a first-class B-measurable function if the preimage of each open set in $Y$ is a set of type $F_\sigma$, i.e. it can be represented as a countable union of closed sets.
Theorem 1.6 For the function $G$ from $D_G \subseteq X$ into a separable space $Y$ to be regularizable on $D \subseteq D_G \subseteq X$, it is necessary and sufficient that the restriction of $G$ to $D$ be a first-class B-measurable function.
Hence, the regularizability of the function $G$ on $D$ (or on $D_G$) depends only on the properties of this function on the mentioned subset. We omit the proof of this theorem. It is based on the results of Theorem 1.5 and the existence of certain standard homeomorphisms.
Example. The Dirichlet function from $[0,1]$ to $[0,1]$ is not regularizable according to Theorem 1.5 because it is not a pointwise limit of a sequence of continuous functions (this is proved in the theory of functions of a real variable). On the contrary, it is obviously regularizable on the subset of rational points of $[0,1]$, the RA being $R_\delta \equiv 0$.
Theorem 1.3 shows the close correspondence between the problem of constructing the RA and the problem of constructing an approximating sequence of continuous mappings for $G$. Later in this book we shall consider not only families $G_n(x)$, where $n$ is a positive integer and $n \to \infty$, but also the more general family $\{G_\alpha(x)\}$, where $\alpha$ belongs to some ordered infinite numerical set, and
$$\lim_{\alpha \to \alpha_0} \rho(G_\alpha(x), G(x)) = 0 \quad \text{for all } x \in D_G. \qquad (1.8)$$
Constructing a family $G_\alpha$ possessing property (1.8) is a classical problem of computational mathematics. In classical computational mathematics, different classes of continuous mappings $G$ are considered which originate from mathematical models (solving differential, algebraic, integral equations, etc.). Constructing approximating sequences (1.8) for them is generally equivalent to creating an approximate technique for calculating $G(x)$.
To clarify this point, suppose $\{\alpha\}$ to be the set of positive numbers and $\alpha_0 = 0$. Since the $G_\alpha$ are all continuous, it is true that
$$\lim_{\alpha \to 0}\, \lim_{\delta \to 0} \rho(G_\alpha(x'), G(x)) = 0, \quad \rho(x', x) \le \delta, \qquad (1.9)$$
regardless of whether $G$ is continuous or not. However, for continuous mappings $G$ the limits in (1.9) can be interchanged:
$$\lim_{\delta \to 0}\, \lim_{\alpha \to 0} \rho(G_\alpha(x'), G(x)) = 0. \qquad (1.10)$$
Eq. (1.10) to some extent justifies the technique frequently used in classical computational mathematics, when for the approximately given $x'$, such that $\rho(x', x) \le \delta$, the approximation $G_\alpha(x')$ with the smallest possible $\alpha$ is taken. For example, this is the case when the iterations of an iterative process are continued for as long as time allows. If we neglect the accumulation of rounding errors in such a process, this technique can be justified by the consideration that $G_{\alpha_{\min}}(x')$ is "close" to $G(x')$, while $G(x')$ is close to $G(x)$ because of the continuity of $G$. Eq. (1.10) may in many cases be strengthened for continuous mappings to a joint limit:
$$\lim_{\alpha,\, \delta \to 0} \rho(G_\alpha(x'), G(x)) = 0, \quad \rho(x', x) \le \delta. \qquad (1.11)$$
Sometimes the error $\rho(G_\alpha(x'), G(x))$ may be estimated by some function $\varphi(\alpha, \delta)$ such that $\varphi(\alpha, \delta) \to 0$ when $\alpha, \delta \to 0$:
$$\rho(G_\alpha(x'), G(x)) \le \varphi(\alpha, \delta).$$
If condition (1.11) is satisfied, it is possible to choose any dependence $\alpha(\delta)$ compatible with (1.11), e.g. $\alpha = \delta$. If the estimate $\varphi(\alpha, \delta)$ is known, $\alpha(\delta)$ may be defined from the condition
$$\alpha(\delta) = \arg\min_{\alpha} \varphi(\alpha, \delta).$$
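To make the last rule concrete, here is a minimal sketch. The bound $\varphi(\alpha, \delta) = \delta^2/\alpha + \alpha$ is a purely illustrative model (a stability term growing as $\alpha \to 0$ plus an approximation term vanishing with $\alpha$), not a bound taken from the text; the code picks $\alpha(\delta)$ by grid minimization and checks that it tracks the analytic minimizer $\alpha = \delta$.

```python
import math

def phi(alpha, delta):
    # Model error bound (illustrative assumption): stability term delta**2/alpha
    # grows as alpha -> 0, approximation term alpha vanishes as alpha -> 0.
    return delta**2 / alpha + alpha

def choose_alpha(delta, grid):
    # a priori choice: alpha(delta) = argmin over the grid of phi(alpha, delta)
    return min(grid, key=lambda a: phi(a, delta))

grid = [10 ** (k / 10.0) for k in range(-60, 1)]   # alphas from 1e-6 to 1
for delta in (1e-1, 1e-2, 1e-3):
    a = choose_alpha(delta, grid)
    # analytic minimizer of this model bound is alpha = delta, with phi = 2*delta
    assert abs(math.log10(a / delta)) < 0.11       # grid resolution: 0.1 decades
    assert phi(a, delta) <= 2.2 * delta
print("alpha(delta) tracks delta; total error bound ~ 2*delta -> 0")
```

For this model bound the chosen parameter decays with the data error, so the total error bound vanishes together with $\delta$, which is exactly the behavior the a priori rule is meant to deliver.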
The way from constructing the family (1.8) to obtaining some kind of approximation to a continuous mapping $G$ is, in general, clear and is widely studied in classical computational mathematics (e.g. in constructing finite-difference methods for solving ordinary and partial differential equations).
It turns out that efficient approximating sequences of continuous mappings may also be created for a wide range of discontinuous mappings, i.e. it is possible to approximate the solutions of ill-posed problems. When the approximating family is constructed, Theorem 1.3 allows us to speak of $G$ being regularizable. The natural way of passing from $G_\alpha$ to $R_\delta$ (or from $G_n$ to $R_\delta$) is to specify the function $\alpha(x', \delta)$ (or $n(x', \delta)$) such that the family $G_{\alpha(x',\delta)}(x') \equiv R_\delta(x')$ (or $G_{n(x',\delta)}(x') \equiv R_\delta(x')$) satisfies the definition of RA (1.1). This method has already been implemented in proving Theorem 1.3. Unfortunately, the approach used in that proof may not be applicable to a real problem. Therefore other methods of specifying the parameter $\alpha(x', \delta)$ (or $n(x', \delta)$ for iterative techniques) have been developed. When these methods are considered, it is very important to ensure that $\alpha$ (or $n$) depends only on $x'$, $\delta$, and that no additional information about $x$ is used. If such information (implicit or explicit) is used, it is only possible to speak of an RA on a certain subset $D \subseteq D_G$ defined by the information used.
It must be specially mentioned that there exists a wide range of mappings and approximations allowing one to specify the function $\alpha(x', \delta)$ (or $n(x', \delta)$) independently of $x'$.
Theorem 1.7 Let the mapping $G(x)$ be approximated on $D_G$ by mappings $G_\alpha(x)$ satisfying a Lipschitz condition with constant $L_\alpha$ on $X$, i.e. $\rho(G_\alpha(x), G_\alpha(x_1)) \le L_\alpha\, \rho(x, x_1)$ for any $x, x_1 \in X$. Then it is possible to specify $\alpha = \alpha(\delta)$ independently of $x'$ so that $G_{\alpha(\delta)}(x')$ is an RA for $G(x)$ on $D_G$.
Proof. Let $x$ be an arbitrary fixed element of $D_G$ such that $\rho(x', x) \le \delta$. Then
$$\rho(G_\alpha(x'), G(x)) \le \rho(G_\alpha(x'), G_\alpha(x)) + \rho(G_\alpha(x), G(x)) \le L_\alpha \delta + \rho(G_\alpha(x), G(x)).$$
Suppose we define $\alpha(\delta)$ so that
$$\lim_{\delta \to 0} \alpha(\delta) = 0, \qquad (1.12)$$
$$\lim_{\delta \to 0} L_{\alpha(\delta)}\, \delta = 0. \qquad (1.13)$$
Combining (1.12) and (1.13), we find that $R_\delta \equiv G_{\alpha(\delta)}$ is the RA for $G$ on $D_G$.
An absolutely analogous treatment is valid if every $G_\alpha$ satisfies a Lipschitz condition with constant $L_\alpha(S)$ in some open ball $S(0, \rho)$ including a $\delta_0$-neighborhood of $D(G)$ (or of its subset $D$). In this case we again specify $\alpha(\delta)$ so that (1.13) holds, which enables us to construct an RA $G_{\alpha(\delta)}$ on $D$ (or on the whole of $D_G$). Running a little ahead, we note here that the conditions of Theorem 1.7 are satisfied for mappings generated by linear operator equations in a pair of arbitrary Hilbert spaces: $G(x) = A^{-1}x$ if $\operatorname{Ker} A = 0$ or, in the general case of a one-to-many mapping, $G(x) = A_0^{-1}x$, where $A_0^{-1}$ is an appropriate cross-section of the one-to-many mapping. Theorem 1.7 is also applicable to non-linear mappings associated with the solution of monotone variational inequalities, e.g. problems of convex optimization, etc.
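The a priori parameter choice of Theorem 1.7 can be sketched on the simplest linear case just mentioned. Everything below is an illustrative toy model, not the text's construction: $A$ is a diagonal $2 \times 2$ operator, the approximating family is the Tikhonov-type $G_\alpha(u) = (A^*A + \alpha I)^{-1}A^*u$ (each $G_\alpha$ is Lipschitz with $L_\alpha \le 1/(2\sqrt{\alpha})$), and the choice $\alpha(\delta) = \delta$ then satisfies conditions of the form (1.12)-(1.13).

```python
# Diagonal model of an ill-conditioned linear operator A = diag(sigmas) in R^2
# (illustrative assumption); z_bar is the exact solution, u_bar = A z_bar.
sigmas = [1.0, 1e-3]
z_bar = [1.0, 1.0]
u_bar = [s * z for s, z in zip(sigmas, z_bar)]

def G_alpha(u, alpha):
    # Tikhonov-type family G_alpha(u) = (A*A + alpha*I)^{-1} A* u; each G_alpha
    # is Lipschitz with L_alpha <= 1/(2*sqrt(alpha)), so the a priori choice
    # alpha(delta) = delta gives L_alpha(delta)*delta <= sqrt(delta)/2 -> 0.
    return [s * ui / (s * s + alpha) for s, ui in zip(sigmas, u)]

def recovery_error(delta):
    u_delta = [ui + delta for ui in u_bar]   # perturbed data, error of size delta
    z = G_alpha(u_delta, delta)              # alpha chosen independently of u_delta
    return max(abs(a - b) for a, b in zip(z, z_bar))

errors = [recovery_error(d) for d in (1e-2, 1e-8, 1e-12)]
assert errors[0] > errors[1] > errors[2]     # error decreases as delta -> 0
assert errors[2] < 1e-4
print("errors for decreasing delta:", errors)
```

The point of the sketch is that the parameter is fixed from $\delta$ alone, with no information about the exact solution, and the reconstruction error still tends to zero together with the data error.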
1.3 Estimates of approximation error in solving the ill-posed
problems
In the theory of regularization, a new concept, the approximation error of the solution, has been introduced. By definition, if the values of $G$ are calculated with the use of some regularizing algorithm, it is the function $\Delta$ in Def. 1 that is called the approximation error of the RA at the point $x$. If the mapping $G$ is regularizable on $D$ ($D_G$), the error tends to zero as $\delta \to 0$ at any point $x \in D$ ($D_G$). Like the traditional theory of estimating errors, the theory of RA studies the behavior of $\Delta(\delta, x)$ as a function of $\delta$. Various estimates are constructed to determine the rate of convergence of the error to zero as $\delta \to 0$. A simple but very characteristic result concerning the approximation error is that, in general, there exists no uniform (for all $x \in D_G$) estimate of $\Delta$ if the problem of calculating $G(x)$ is ill-posed.
Theorem 1.8 If the function $G(x)$ is regularizable on $D$ by mappings $R_\delta$, and there exists a uniform on $D$ estimate of the approximation error $\Delta(R_\delta, \delta, x) \le \varphi(\delta)$ such that $\lim_{\delta \to 0} \varphi(\delta) = 0$, then the restriction $G|_D$ is uniformly continuous on $D$.
Proof. Let $x_1, x_2 \in D$, and $\rho(x_1, x_2) \le \delta$. Due to the triangle inequality, $\rho(G(x_1), G(x_2)) \le \rho(R_\delta(x_1), G(x_1)) + \rho(R_\delta(x_1), G(x_2)) \le 2\varphi(\delta)$. This means that $G|_D$ is uniformly continuous.
Consequence 1. Let the mapping $G = A^{-1}$ be the inverse of an injective linear completely continuous operator acting in a Hilbert space $H$.
It is shown in Chapter 3 that this mapping is regularizable everywhere in the domain of $A^{-1}$, and the regularizing mappings $R_\delta(x)$ are continuous. However, $A^{-1}$ is not uniformly continuous in its domain; hence there exists no uniform estimate of the error of such an RA on the whole of $D_{A^{-1}}$.
Let us now clarify which linear functionals of the solutions of an ill-posed problem may be approximately calculated with a uniform (relative to the input data) estimate of the error, depending on the error $\delta$ of the input data.
Consequence 2. Let $A$ be a linear continuous operator in a Banach space $B$ such that $A^{-1}$ exists and is densely defined. Let $l$ be a linear continuous functional on $B$. Consider the mapping $G(u) = l(A^{-1}u)$. A regularizing algorithm for calculating $G(u)$ with a uniform estimate of the approximation error (everywhere in $D_{A^{-1}}$) exists if and only if $l \in D_{A^{*-1}}$, i.e. the equation $A^*q = l$ is solvable. Indeed, if $l \in D_{A^{*-1}}$, then $G(u) = l(A^{-1}u) = q(u)$, where $q$ is a linear continuous functional. Such a mapping is no doubt regularizable uniformly on $D_{A^{-1}}$ (and even on the whole of $B$), the RA being $R_\delta(u) \equiv q(u)$. Vice versa, if $R_\delta(u)$ with the formulated properties exists, then $l(A^{-1}u)$ is uniformly continuous on $D_{A^{-1}}$ due to the results of Theorem 1.8. It is easy to show that in this case the linear functional $l(A^{-1}u)$ may be extended to a linear continuous functional everywhere on $B$. Using the definition of the conjugate operator, we come to $l \in D_{A^{*-1}}$. This result completely defines the class of linear functionals for which calculating their values on the solutions of ill-posed problems is an (absolutely) well-posed problem.
It is of great significance, from the point of view of both applications and theory, to classify the sets which allow a uniform estimate of the error. The results obtained in this field so far are not absolutely complete. It seems that a set allowing the uniform estimate is very "slim". This not very precise statement is illustrated by
Theorem 1.9 Let $G$ be a linear mapping inverse to a linear completely continuous operator in a Banach space, and let $G$ be unbounded in its domain. Then the set $D$ allowing a uniform estimate of the approximation error is a first-category set in $X$, i.e. $D$ is a countable union of nowhere dense sets.
We omit the proof of this theorem. It is of interest how far the conditions of this theorem can be extended (e.g. to non-linear mappings and more general spaces).
In general, the set of uniform estimation depends on $X$, $G$ and $R_\delta$. Some more special problems allow one to describe efficiently (using the operator of the problem) the set of uniform estimates of convergence in Eq. (1.1) and to obtain uniform estimates of the errors on this set. Examples of such problems are to be found in Chapters 3-7.
So far we have spoken only about upper bounds of the error in Eq. (1.1). It happens that there exist universal uniform lower bounds of this error, dependent on the mapping $G$ and independent of the particular RA (Vinokurov, 1973).
Theorem 1.10 Let the set $E \subseteq D_G \subseteq X$ consist of more than one point, and let there exist at least two points $x_1, x_2 \in E$ such that $\rho(x_1, x_2) \le \delta$. Then
$$\sup_{x \in E} \Delta(R_\delta, \delta, x) \ge \frac{1}{2}\, \omega(\delta, G, E), \qquad (1.14)$$
where
$$\omega(\delta, G, E) = \sup_{\substack{x_1, x_2 \in E \\ \rho(x_1, x_2) \le \delta}} \rho(G(x_1), G(x_2)).$$
(The function $\omega$ in the right-hand side of (1.14) does not depend on the particular RA and is called the continuity modulus of the mapping $G$ on $E$.)
Proof. Let $x_1, x_2 \in E$, and $\rho(x_1, x_2) \le \delta$. Then
$$\Delta(R_\delta, \delta, x_1) \ge \rho(R_\delta(x_1), G(x_1)),$$
$$\Delta(R_\delta, \delta, x_2) \ge \rho(R_\delta(x_1), G(x_2)).$$
Using the triangle inequality, we obtain
$$\Delta(R_\delta, \delta, x_1) + \Delta(R_\delta, \delta, x_2) \ge \rho(R_\delta(x_1), G(x_1)) + \rho(R_\delta(x_1), G(x_2)) \ge \rho(G(x_1), G(x_2)), \qquad (1.15)$$
where $x_1, x_2$ are the points from $E$ that we have chosen. Taking into account that $x_1, x_2$ may be chosen arbitrarily (as long as the distance between them does not exceed $\delta$), we obtain inequality (1.14). Note that nowhere in the above theorem did we use the condition that $R_\delta$ is an RA for $G$. For any fixed $\delta$ we could as well use any mapping $R : X \to Y$ defined everywhere on $X$ instead of $R_\delta$.
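The universality of this lower bound is easy to observe numerically. The sketch below rests on illustrative assumptions not taken from the text: $G(u) = u^{1/3}$ (the inverse of $A(z) = z^3$ on $[0, 1]$), the sup defining $\Delta$ is approximated on a finite grid, and three arbitrary reconstruction maps $R$ are tried; each of them satisfies inequality (1.15).

```python
import math

def G(u):
    return u ** (1.0 / 3.0)        # G = A^{-1} for A(z) = z**3 on [0, 1]

delta = 1e-3
x1, x2 = 0.0, delta                # two data points with rho(x1, x2) <= delta

def Delta(R, x, n=200):
    # approximation error of R at x: sup over x' with |x' - x| <= delta,
    # approximated on a grid clipped to the domain [0, 1]
    pts = [max(0.0, x - delta + 2 * delta * k / n) for k in range(n + 1)]
    return max(abs(R(xp) - G(x)) for xp in pts)

lower = abs(G(x1) - G(x2))         # rho(G(x1), G(x2)) <= omega(delta, G, E)
for R in (G, lambda u: 0.0, lambda u: u):   # three arbitrary maps R : X -> Y
    assert Delta(R, x1) + Delta(R, x2) >= lower - 1e-12
print("every map R obeys the modulus-of-continuity lower bound")
```

Since $|G(x_1) - G(x_2)| = \delta^{1/3} = 0.1$ here while $\delta = 10^{-3}$, the example also shows how large the unavoidable error can be compared with the data error itself.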
1.4 Comparison of RA. The concept of optimal algorithm
The function $\sup_{x \in E} \Delta(R_\delta, \delta, x)$ or, more generally, $\sup_{x \in E} \Delta(R, \delta, x)$, where $R$ is an arbitrary mapping $X \to Y$, provides the opportunity to compare algorithms with respect to the approximation error on the set $E$ (at some fixed $\delta = \delta_0$). Algorithm $R^{(1)}_{\delta_0}$ is better than $R^{(2)}_{\delta_0}$ on $E$ at $\delta = \delta_0$ if
$$\sup_{x \in E} \Delta(R^{(1)}_{\delta_0}, \delta_0, x) \le \sup_{x \in E} \Delta(R^{(2)}_{\delta_0}, \delta_0, x).$$
Definition 4. Algorithm $R_\delta$ is optimal on $E$ at a fixed $\delta = \delta_0$ if
$$\sup_{x \in E} \Delta(R_{\delta_0}, \delta_0, x) = \inf_{R : X \to Y}\, \sup_{x \in E} \Delta(R, \delta_0, x). \qquad (1.16)$$
Algorithm $R_\delta$ is called optimal if it is optimal in the sense of this definition for any $\delta$. Analysis of the behavior of the error as a function of $\delta$ when $\delta \to 0$ justifies the following definition:
Definition 5. Algorithm $R_\delta(x, \delta)$ is of the optimal order (or order-optimal) on $E$ if for any $\delta$ such that $0 < \delta \le \delta_0$
$$\sup_{x \in E} \Delta(R_\delta, \delta, x) \le k\, \inf_{R : X \to Y}\, \sup_{x \in E} \Delta(R, \delta, x), \qquad (1.17)$$
where $k \ge 1$ is a constant independent of $\delta$.
The general idea underlying the construction of optimal and order-optimal algorithms is as follows. First, we try to obtain lower estimates of the right-hand side of Eq. (1.16) that are as accurate as possible. Then we are to construct an algorithm with an upper error bound equal to the above-mentioned estimate multiplied by a constant independent of $\delta$ (this leads to an order-optimal algorithm). It happens that the lower bound given by Theorem 1.10 is in many cases sufficient to obtain order-optimal algorithms.
Let $E$ again be a subset of the metric space $X$, and $E \subset D_G$. Consider the following technique for approximately calculating $G(x)$ (Vinokurov, 1973): for any $x \in X$
$$R_\delta(x) = G(P_\delta(x)), \qquad (1.18)$$
where
$$P_\delta(x) = \begin{cases} x, & \text{if } x \in E, \\ \text{an element } x^* \in E \cap S(x, \delta), & \text{if } x \notin E,\ E \cap S(x, \delta) \ne \emptyset, \\ \text{an element } x_0 \in E, & \text{if } x \notin E,\ E \cap S(x, \delta) = \emptyset. \end{cases}$$
Let $\omega(\delta, x)$ be the local continuity modulus of $G|_E$ at the point $x \in E$.
Lemma 1.3 The following estimate is true:
$$\Delta(R_\delta, \delta, x) \le \omega(2\delta, x), \quad x \in E. \qquad (1.19)$$
Proof. For $x \in E$ and any $x'$ with $\rho(x', x) \le \delta$ we have $P_\delta(x') \in E$ and $\rho(P_\delta(x'), x) \le 2\delta$; hence
$$\rho(R_\delta(x'), G(x)) = \rho(G(P_\delta(x')), G(x)) \le \omega(2\delta, x).$$
The lemma is proved. Calculating the upper bounds of both sides of inequality (1.19) leads to
$$\sup_{x \in E} \Delta(R_\delta, \delta, x) \le \omega(2\delta, G, E). \qquad (1.20)$$
Inequalities (1.20) and (1.14) cannot be compared directly because, in general, the relation between $\omega(2\delta, G, E)$ and $\omega(\delta, G, E)$ is not known. The method based on Eq. (1.18) is obviously order-optimal for those mappings $G$ and sets $E$ for which there exists $k \ge 1$ such that
$$\omega(2\delta, G, E) \le k\, \omega(\delta, G, E). \qquad (1.21)$$
We can say, at least, that the class of pairs $G, E$ defined by Eq. (1.21) is not empty.
Example. Let $X = Y = B$ be a Banach space, and let $A$ be a linear injective operator in $B$. Consider the mapping $G = A^{-1}$ and the set $E = A(S)$, where $S = \{x : \|x\| \le 1\}$. In this case
$$\omega(2\delta, G, E) = \sup_{\substack{x_1, x_2 \in E \\ \|x_1 - x_2\| \le 2\delta}} \|A^{-1}(x_1 - x_2)\|. \qquad (1.22)$$
Suppose that $v_1, v_2$ are arbitrary elements from $S$ such that $\|Av_1 - Av_2\| \le 2\delta$; then, setting $w_1 = v_1/2$, $w_2 = v_2/2$,
$$\|v_1 - v_2\| = 2\|v_1/2 - v_2/2\| = 2\|w_1 - w_2\|,$$
where $w_1, w_2 \in S$ and $\|Aw_1 - Aw_2\| \le \delta$. Finally we obtain, for any pair $(v_1, v_2)$ in Eq. (1.22), the inequality
$$\|v_1 - v_2\| \le 2\, \omega(\delta, G, E).$$
We have found that (1.21) with $k = 2$ is true for the considered example. The fact that $E$ is the image of a unit ball is not essential. It was used only to ensure that the element $w/2$ belongs to $S$ if $w$ does. In particular, the set $S$ may be replaced by $L(S)$, where $L(S)$ is the image of the ball under a linear mapping $L$.
The technique based on Eq. (1.18), being an order-optimal algorithm, is not necessarily an RA for the mapping $G$, even on the subset $E$. It is therefore natural to ask whether there exist optimal and order-optimal methods among the regularizing algorithms. The general answer to this question is not known. It is obvious, however, that the mapping defined by Eq. (1.18) is an RA for $G$ on $E$ if $\lim_{\delta \to 0} \omega(\delta, G, E) = 0$.
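A minimal sketch of the quasi-projection method (1.18), under illustrative assumptions not taken from the text: $E = [0, 1] \subset \mathbb{R}$ and $G(u) = \sqrt{u}$, whose continuity modulus on $E$ is $\omega(t) = \sqrt{t}$, so $\lim_{\delta\to 0}\omega(\delta) = 0$ and the method is an RA here. The code projects the approximate data onto $E$, applies $G$, and checks the bound of Lemma 1.3.

```python
import math

E = (0.0, 1.0)                 # correctness set E = [0, 1] in X = R (assumption)
G = math.sqrt                  # G uniformly continuous on E with omega(t) = sqrt(t)

def P(x, delta, x0=0.0):
    # P_delta from (1.18): identity on E, the nearest point of E if it is
    # within delta of x, otherwise a fixed element x0 of E
    if E[0] <= x <= E[1]:
        return x
    proj = min(max(x, E[0]), E[1])
    return proj if abs(proj - x) <= delta else x0

def R(x, delta):
    # the quasi-projection method R_delta = G composed with P_delta
    return G(P(x, delta))

delta = 1e-4
for x in (0.0, 0.3, 1.0):      # exact data points lying in E
    worst = max(abs(R(x + t * delta, delta) - G(x))
                for t in (-1.0, -0.5, 0.0, 0.5, 1.0))
    assert worst <= math.sqrt(2 * delta) + 1e-12   # Lemma 1.3: Delta <= omega(2*delta)
print("error of (1.18) is bounded by the modulus omega(2*delta)")
```

Note that the worst error near $x = 0$ is about $\sqrt{\delta} = 10^{-2}$, much larger than the data error $\delta = 10^{-4}$; this Hölder-type loss of accuracy is characteristic of ill-posed problems.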
Certain results concerned with constructing order-optimal algorithms can be found in (Groetsch, 1984; Ivanov et al., 1978; Vainikko et al., 1986). It is shown, for example, that Tikhonov RAs are order-optimal.
In the rest of this section we shall briefly describe the problem of constructing optimal methods. The lower-bound estimate given by (1.14) is too rough for this purpose. In some more special cases it is possible
to replace this estimate by a more accurate one. This latter estimate is sometimes sufficient for obtaining the optimal algorithm. At present, optimal algorithms have been developed only for linear problems in Hilbert spaces. Examples of such algorithms are to be found in Chapter 4. We shall give here the estimate of the error which will be of use later, instead of (1.14) (Strakhov, 1970).
Theorem 1.11 Let $Z$, $U$ be Hilbert spaces, and let $G = A^{-1}$ be the inverse of a linear bounded operator $A$ defined on $Z$. Let the set $E = A(M)$ be the image of a centrally symmetric set $M \subset Z$ produced by the mapping $A$. Then for any mapping $P : U \to Z$
$$\sup_{z \in E} \Delta(P, \delta, z) \ge \omega_1(\delta, M),$$
where
$$\omega_1(\delta, M) = \sup\{\|z\| : z \in M,\ \|Az\| \le \delta\}.$$
The proof of this theorem can be found in (Vainikko et al., 1986; Morozov, 1987).
Chapter 2
Regularizing algorithms on compacta
Henceforth we consider how to construct regularizing algorithms for solving ill-posed problems. The typical ill-posed problem is the operator equation of the first kind
$$Az = u, \quad z \in Z,\ u \in U. \qquad (2.1)$$
Here $Z$, $U$ are Banach spaces, and $A$ is a continuous injective mapping of $Z$ into $U$. In this chapter we consider the case when a priori considerations allow us to specify a certain set $M \subset Z$ (the correctness set) containing the exact solution $\bar z$ of Eq. (2.1), the set $M$ being such that the inverse operator $A^{-1}$ is defined and continuous throughout $AM \subset U$. In this case approximations to $\bar z$ may be easily constructed. It should be noted, however, that $A^{-1}u_\delta$ may not be used as such an approximation, because $u_\delta$ does not necessarily belong to the set $AM$.
The idea of approximating the solution of an ill-posed problem on a special set was first suggested by A. N. Tikhonov as early as 1943 (Tikhonov, 1943). In the mentioned paper he formulated the concept of a conditionally well-posed (or Tikhonov well-posed) problem. The problem is conditionally well-posed if the following requirements are met:
1) it is known a priori that a solution of Eq. (2.1) exists and belongs to a specified set $M$;
2) the operator $A$ is a one-to-one mapping of $M$ onto $AM$;
3) the operator $A^{-1}$ is continuous on $AM \subset U$.
This definition is really a reformulation of Def. 1 given in Chapter 1.
In contrast to absolutely well-posed (Hadamard well-posed (Hadamard, 1932)) problems, for a conditionally well-posed problem:
a) it is not required that Eq. (2.1) be solvable over the whole space;
b) the requirement of continuity of $A^{-1}$ over all of $U$ is replaced by the requirement that $A^{-1}$ be continuous over the image of the correctness set.
Establishing the links between the concept of a conditionally well-posed problem and that of regularizability (see Sec. 1.1), we could say that the main result of Chapter 2 is the proof of regularizability of the mapping $G = A^{-1}$ on the set $AM \subset U$. For conditionally well-posed problems, it is possible (due to Theorem 1.1) to construct a regularizer $R_\delta(\cdot)$ on $AM$ depending only on $u_\delta$, i.e. $R_\delta(\cdot) = R(\cdot)$. Moreover, the isolation of the correctness set allows us not only to construct an RA, but also to obtain a uniform estimate of the approximation error on $M$.
2.1 The normal solvability of operator equations
The aim of this section is to disclose the relations between the structure of the range of the operator $A$, $AZ = R(A)$, and the stability of the solution of Eq. (2.1). More precisely, it is to be found out when the correctness set $M$ coincides with the whole space $Z$. Let us consider the simplest case of a linear injective operator $A$ and Banach spaces $Z$, $U$.
Definition 1. The graph of an operator $A$ defined on $D(A) \subset Z$ is the set of pairs $(z, Az)$, where $z \in D(A)$. The graph of an operator is a subset of the direct sum of the spaces $Z$ and $U$.
Definition 2. A linear operator $A$ is called closed if its graph is a closed set in the direct sum of the spaces $Z$ and $U$.
The operator being closed means that if $z_n \in D(A)$, $z_n \to z$ and $Az_n \to u$, then $z \in D(A)$ and $u = Az$. It is clear that the operator $A$ is closed if $D(A) = Z$ and $A$ is linear and bounded. It is also easy to see that the closedness of the operator $A$ implies that the operator $A^{-1}$ is closed. Really, the graph of the operator $A^{-1}$ may be written as $(Az, z)$, $z \in D(A)$, i.e. it is obtained by interchanging $z$ and $Az$ in the graph of the operator $A$. Therefore the graph of $A^{-1}$ is also a closed set in the direct sum of $Z$ and $U$. The following statement is true:
Lemma 2.1 The domain $D(A)$ of a linear continuous closed operator $A$ is a closed set.
Proof. Let $z$ be a limit point of $D(A)$, and $z_n \to z$, where $z_n \in D(A)$. The sequence $Az_n$ is fundamental in $U$ because
$$\|Az_n - Az_m\| \le \|A\|\, \|z_n - z_m\|.$$
Using the completeness of $U$, we get $Az_n \to f$, $f \in U$. To prove the lemma, it is sufficient now to use the closedness of the operator $A$: $z \in D(A)$.
Theorem 2.1 Let $A$ be a linear continuous injective operator defined on $D(A) = Z$ with the range $R(A) \subseteq U$, where $Z$ and $U$ are Banach spaces. For the inverse operator $A^{-1}$, acting from $R(A)$ to $Z$, to be bounded ($\|A^{-1}\| < +\infty$), it is necessary and sufficient that $R(A) = \overline{R(A)}$.
Proof. Necessity. Taking into account that the operator $A$ is continuous and $D(A) = Z$, we obtain that $A$ is a closed operator. Therefore $A^{-1}$ is closed as well and, due to the conditions of the theorem, it is continuous. Applying Lemma 2.1 to $A^{-1}$, whose domain is $R(A)$, we arrive at $R(A) = \overline{R(A)}$.
Sufficiency. If $R(A) = \overline{R(A)}$, then the linear operator $A$ is a one-to-one continuous mapping of the Banach space $Z$ onto the Banach space $R(A)$. It follows from the Banach theorem (Dunford et al., 1958) that the operator $A^{-1}$ is continuous and therefore bounded, i.e. $\|A^{-1}\| < +\infty$.
We have found that, for linear one-to-one continuous mappings, the stability of the solution and the solvability of the problem are closely related. In particular, for ill-posed problems, when $A^{-1}$ is unbounded in its domain, the set $R(A)$ is not closed and, consequently, Eq. (2.1) cannot be solved throughout the whole space $U$. For example, the common Fredholm integral equation of the first kind with a closed square-summable kernel (the operator acts from $L_2$ to $L_2$) is obviously not solvable in the whole space. This follows from the fact that the operator of the direct problem is completely continuous and therefore may not have a continuous inverse. The above results mean that for ill-posed problems, when $A^{-1}$ is unbounded in its domain, two correctness conditions of Hadamard are violated: that of solvability and that of continuity of the inverse operator.
The normally solvable equations (i.e. those for which $R(A)$ is closed) may yield ill-posed problems in $U$ only due to the condition $R(A) = \overline{R(A)} \ne U$. In this case the approximately given element $u_\delta \in U$ such that $\|u_\delta - \bar u\| \le \delta$ does not necessarily belong to $R(A)$. However, $G = A^{-1}$ is regularizable on $R(A) \subset U$, and the regularizing algorithm may be constructed, for example, in the following way.
Let $P_\delta$ be an operator mapping any element $u_\delta \in U$ from the $\delta$-vicinity of $R(A)$ into an arbitrary element $u_\delta' \in R(A)$ such that $\|u_\delta' - u_\delta\|_U \le \delta$. Then $R_\delta = A^{-1} P_\delta$ is an RA for the problem (2.1).
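This construction can be sketched numerically in a hypothetical finite-dimensional setting (all details below are illustrative assumptions): $A$ is a full-column-rank $3 \times 2$ matrix, so that $R(A)$ is a closed subspace of $\mathbb{R}^3$; the map $P_\delta$ is realized as the orthogonal projection onto $R(A)$; and the composition $A^{-1}P_\delta$ collapses into the ordinary least-squares solution.

```python
# A : R^2 -> R^3, injective with closed range R(A) = {(a, b, a + b)} (assumption).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
z_bar = [2.0, -1.0]
u_bar = [sum(A[i][j] * z_bar[j] for j in range(2)) for i in range(3)]

def solve_normal(u):
    # Least-squares solution z = (A^T A)^{-1} A^T u: this projects u
    # orthogonally onto the closed subspace R(A) and then inverts A there,
    # i.e. it realizes the composition A^{-1} P_delta in one step.
    ATA = [[sum(A[i][r] * A[i][c] for i in range(3)) for c in range(2)]
           for r in range(2)]
    ATu = [sum(A[i][r] * u[i] for i in range(3)) for r in range(2)]
    det = ATA[0][0] * ATA[1][1] - ATA[0][1] * ATA[1][0]
    return [(ATA[1][1] * ATu[0] - ATA[0][1] * ATu[1]) / det,
            (ATA[0][0] * ATu[1] - ATA[1][0] * ATu[0]) / det]

delta = 1e-3
u_delta = [u_bar[0] + delta, u_bar[1] - delta, u_bar[2] + delta]  # not in R(A)
z = solve_normal(u_delta)
assert max(abs(a - b) for a, b in zip(z, z_bar)) <= 3 * delta     # stable recovery
print("z recovered up to O(delta):", z)
```

Because $\|A^{-1}\|$ is finite on the closed range, the reconstruction error stays of the same order as the data error; this is exactly the stability that fails for non-normally-solvable equations.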
2.2 Theorems on stability of the inverse mappings
In this section the structure of the correctness set is determined for the case when it does not cover the whole of $Z$. Any compactum in $Z$ can be taken as the correctness set $M \subset Z$. This statement is based on the following well-known lemma.
Lemma 2.2 Let $Z$ and $U$ be Banach spaces, and let the operator $A$ produce a continuous one-to-one mapping of a compactum $M$ onto $AM \subset U$. Then the inverse operator $A^{-1}$ is continuous on $AM$.
Let us consider some examples of compacta in Banach spaces.
Example 1. Any bounded closed set in a finite-dimensional space is a compactum. This example may seem trivial, but the approach using this property was widely used for solving ill-posed problems until the modern powerful regularization technique was created. To find the approximate solution of an ill-posed problem, the functions to be found are parametrized by a finite number of parameters in accordance with a priori information on the solution. If the range of the parameters is bounded, the correctness set introduced in such a way is a compactum in $Z$.
Example 2. Let $S$ be a sphere in a reflexive space $V$, and let $B$ be a completely continuous linear operator from $V$ to $Z$. Then we can take the image of the sphere produced by the mapping $B$ as the compactum $M$, i.e. $M = BS$.
Example 3. When the operator equations (2.1) are solved and the function to be found is $z(t)$, $t \in [a, b]$, some features of the solution are sometimes known a priori. We may know, for example, that $z(t)$ is bounded, or monotonic, convex, etc. Let us consider the set $Z_C$ of bounded nonincreasing functions: $z(t) \in Z_C$ if $0 \le z(t) \le C$ and $z(t_1) \ge z(t_2)$ for $t_1 \le t_2$, $t_1, t_2 \in [a, b]$. The set $Z_C$ is a compactum in $L_p$ ($p > 1$) (Goncharsky, 1987). Really, according to the selection theorem (Dunford et al., 1958), we can always select (from any sequence $z_1, \dots, z_n, \dots$) a subsequence $z_{n'}(t)$ that converges everywhere to some function $z(t) \in Z_C$. The latter statement, together with the uniform boundedness of this sequence, yields the convergence in $L_p$.
Example 4. It may be shown, in full analogy with Example 3, that the set $\tilde Z_C$ of upward-convex functions such that $0 \le z(t) \le C$ for any $t \in [a, b]$ is a compactum in $L_p$ ($p > 1$) (Goncharsky et al., 1979). Therefore, compacta may serve as examples of correctness sets for Eq. (2.1) with a continuous (not necessarily linear) operator.
For linear continuous operators, the correctness set $M$ may be extended to the algebraic sum $M = K + \mathcal{L}$, where $K$ is a compactum and $\mathcal{L}$ is a finite-dimensional subspace of the Banach space $Z$.
Theorem 2.2 (Ivanov et al., 1978). Let $A$ be a linear continuous one-to-one operator from $Z$ to $U$, where $Z$, $U$ are Banach spaces. If the set $M \subset Z$ can be represented as $M = K + \mathcal{L}$, $K \cap \mathcal{L} = \emptyset$, where $K$ is a compactum and $\mathcal{L}$ is a finite-dimensional subspace of $Z$, then the operator $A^{-1}$ is continuous on $AM \subset U$.
Proof. Since $A$ is a one-to-one mapping and $K \cap \mathcal{L} = \emptyset$, it follows that $AK \cap A\mathcal{L} = \emptyset$. Therefore the inverse operator $A_1^{-1}$ is defined on $AK$, and the operator $A_2^{-1}$ is defined on $A\mathcal{L}$, the operators being such that $A^{-1}u = A_1^{-1}v + A_2^{-1}w$, $u = v + w$, $v \in AK$, $w \in A\mathcal{L}$. The operator $A_2^{-1}$ is finite-dimensional and hence continuous. The continuity of $A_1^{-1}$ is proved by Lemma 2.2. The only thing left to complete the proof is to notice that the convergence $u_n \to u$ implies that $v_n \to v$, $w_n \to w$. This means that the operator $A^{-1}$ is continuous on $AM$.
Now we shall give some examples of correctness sets $M$ representable as the algebraic sum of a compactum and a finite-dimensional subspace.
Example 1. Let $M$ be the set of continuously differentiable functions $\varphi(x)$ on $[0, 1]$ such that $\int_0^1 [\varphi'(x)]^2\,dx \le 1$. It is easy to see that
$$|\varphi(x) - \varphi(0)| \le \int_0^x |\varphi'(s)|\,ds \le \left[\int_0^1 (\varphi'(x))^2\,dx\right]^{1/2} \le 1,$$
$$|\varphi(x_1) - \varphi(x_2)| \le \left[\int_0^1 (\varphi'(x))^2\,dx\right]^{1/2} |x_1 - x_2|^{1/2} \le |x_1 - x_2|^{1/2}.$$
Let us introduce the subset $K \subset M$ of functions $\tilde\varphi(x)$ such that $\tilde\varphi(0) = 0$. According to the Arzela theorem, the set $K$ is a compactum in $C[0, 1]$. The set $M$ consists of the functions $\varphi(x)$ representable as $\varphi = \tilde\varphi(x) + c$ and therefore can itself be represented as $M = K + \mathcal{L}$, where $\mathcal{L}$ is a one-dimensional subspace.
Example 2. Consider the set of functions from $L_p$ ($p > 1$) with variations not exceeding a given constant $c$. This set we denote by $V_c$. Let the set $\mathring{V}_c \subset V_c$ include the functions $\psi(x)$ from $V_c$ such that $\psi(0) = 0$. The set $\mathring{V}_c$ is a compactum in $L_p$ (Goncharsky et al., 1979b). The set $V_c$ is representable as $V_c = K + \mathcal{L}$, where $K = \mathring{V}_c$ and $\mathcal{L}$ is a one-dimensional space.
In a certain sense, the requirement that the correctness set be $M = K + \mathcal{L}$, where $K$ is a compactum and $\mathcal{L}$ is a finite-dimensional space, cannot be weakened.
Definition 2. The set $M \subset Z$ is boundedly compact if any bounded subset of $M$ is a compactum. It is clear that the set $M$ is boundedly compact if it is representable as $M = K + \mathcal{L}$, where $K$ is a compactum and $\mathcal{L}$ is a finite-dimensional space. For linear completely continuous operators, the set $M$ being boundedly compact is a necessary condition for the continuity of $A^{-1}$ on $AM$ (Ivanov et al., 1978).
Theorem 2.3 Let $M$ be a convex closed subset of the reflexive space $Z$, and let $A$ be a completely continuous linear one-to-one operator from $Z$ to $U$. For $A^{-1}$ to be continuous on $AM \subset U$, it is necessary that $M$ be boundedly compact.
Proof. Let the opposite be true, i.e. there exists a noncompact bounded sequence $z_n \in M$. This means that a subsequence $z_{n'}$ of it exists that does not converge strongly. Since $Z$ is reflexive, we can always assume that the subsequence $z_{n'} \to z^*$ in the weak sense as $n' \to \infty$. The set $M$ is convex and closed, therefore it is weakly closed, and $z^* \in M$. Since $A$ is a completely continuous operator, the sequence $u_{n'} = Az_{n'}$ converges strongly to $Az^* = u^*$ in $U$. We have got that $u_{n'} \to u^*$ and, because of the theorem conditions, $A^{-1}u_{n'} = z_{n'}$ also converges strongly. This contradicts the assumption that $z_{n'}$ does not converge; hence the theorem is proved.
2.3 Quasisolutions of the ill-posed problems
In the previous section we formulated certain theorems which ensure the continuity of the inverse operator $A^{-1}$ on the image of the correctness set $M$. These results allow us to construct an RA for the ill-posed problem (2.1) on the set $M$. However, the inverse operator $A^{-1}$ cannot itself be used as an RA, because
it is not defined throughout the whole of $U$ for conditionally well-posed problems. There exists a relatively simple but sufficiently general technique for conditionally well-posed problems; it is known as the quasisolution method. In a sense, it is based on extending the mapping $A^{-1}$ from the set $AM$ to the whole of $U$ in such a way that the resulting extension is continuous everywhere on $AM$.
Let the operator $A$ in Eq. (2.1) be a continuous operator from a Banach space $Z$ to a Banach space $U$, and let $M$ be the correctness set.
Definition 3 (Ivanov, 1966; Ivanov et al., 1978). A quasisolution of Eq. (2.1) on $M$ is an element $z_\delta^* \in M$ which minimizes the discrepancy:
$$\|Az_\delta^* - u_\delta\|_U = \min_{z \in M} \|Az - u_\delta\|_U.$$
Since A is continuous, the functional g(z) = ‖Az − u_δ‖_U is continuous as well. Assume at first that the correctness set M is a compactum in Z. Then g(z) attains its exact lower bound on M for any fixed element u_δ ∈ U. In particular, a quasisolution exists for any u_δ ∈ U such that ‖u_δ − u‖ ≤ δ. Let A : M → U be a one-to-one mapping which maps z̄ ∈ M onto u ∈ AM, and denote an arbitrary element of the set of quasisolutions by z*_δ. We have thus formulated a rule z*_δ = R_δ(u_δ), the operator R_δ(·) being defined on the whole space U. It is clear that

‖Az*_δ − u_δ‖_U ≤ ‖Az̄ − u_δ‖_U ≤ δ.

Since z*_δ and z̄ belong to M, and the operator A⁻¹ is continuous on AM ⊂ U, we finally get ‖z*_δ − z̄‖_Z → 0 as δ → 0, i.e. the mapping R_δ(u) is an RA for A⁻¹ on M. The operator R_δ(u) in fact depends only on its argument u and not on δ: R_δ(u) ≡ R(u). This
is in accordance with the results of Theorem 1.1. The conditions of that theorem imply that for an RA of the form R(·) to exist, it is necessary that A⁻¹ (defined on AM) can be continued to the whole space, the continuation being continuous on AM. The RA that we have just constructed plays the role of such a continuation.

The question of whether a quasisolution is unique is of certain interest. To answer it, one can use the well-known theorem on the uniqueness of the extremum point of a rigorously convex functional in a Banach space.

Definition 4. A functional g(z) defined on a convex set Q of a Banach
space Z is rigorously convex if for any λ ∈ (0,1) and any z_1, z_2 ∈ Q, z_1 ≠ z_2, the following inequality is true:

g(λz_1 + (1 − λ)z_2) < λg(z_1) + (1 − λ)g(z_2).    (2.2)

(If we replace the sign < by ≤ and take λ ∈ [0,1], we obtain the definition of a convex functional on Q ⊂ Z.)

Definition 5. A Banach space is rigorously convex (or rigorously normed) if the equality ‖x + y‖ = ‖x‖ + ‖y‖ is possible only for y = λx, where λ is a real number.

Note that even in a rigorously convex Banach space the functional ‖x‖ is not rigorously convex. In fact, if x_2 = αx_1, where α ≥ 0 is a real parameter, then for any λ ∈ (0,1)

‖λx_1 + (1 − λ)x_2‖ = λ‖x_1‖ + (1 − λ)‖x_2‖.

However, the functional ‖x‖² is rigorously convex.
Lemma 2.3 In a rigorously convex space the functional ‖x‖² is rigorously convex.

Proof. Suppose the opposite is true: there exist α* ∈ (0,1) and x, y (x ≠ y) such that

‖α*x + (1 − α*)y‖² = α*‖x‖² + (1 − α*)‖y‖².    (2.3)

Using the triangle inequality and Eq. (2.3), we come to

α*‖x‖² + (1 − α*)‖y‖² ≤ (α*‖x‖ + (1 − α*)‖y‖)².    (2.4)

Let us assume, for instance, that x ≠ 0 and x, y are not proportional. Since the space is rigorously convex, the inequality (2.4) is then strict:

α*‖x‖² + (1 − α*)‖y‖² < (α*‖x‖ + (1 − α*)‖y‖)².

Hence

α*(1 − α*)(‖x‖ − ‖y‖)² < 0.    (2.5)

However, α* ∈ (0,1) and, consequently, inequality (2.5) is contradictory. Therefore the assumption that x, y are not proportional is false, and x = λy, λ ≠ 0. In this case Eq. (2.3) leads to

[α*λ + (1 − α*)]² = α*λ² + (1 − α*);

it follows that λ = 1, i.e. x = y. This contradicts the initial assumption on x, y. The case x = 0 is also impossible because of the conditions α* ≠ 0, α* ≠ 1.
It is well known that the spaces l_p, L_p (for p > 1) are rigorously normed, while l_1, L_1, C[a,b] are not. Any Hilbert space is rigorously normed. For rigorously convex functionals in a Banach space the following theorem on attaining extremal points is true (Rockafellar, 1970):
Theorem 2.4 A rigorously convex continuous functional g(z) defined on a convex closed compact set M of a Banach space Z attains its exact lower bound at a unique point z* ∈ M.
The conditions on M may be weakened if additional restrictions are imposed on the space.
Theorem 2.5 (Rockafellar, 1970). A rigorously convex functional g(z) defined on a convex bounded closed set M of a reflexive Banach space attains its exact lower bound at a unique point z* ∈ M.
Now we may return to the problem of obtaining a quasisolution of Eq. (2.1). Obviously, the problem of searching for a quasisolution on the set M may be formulated as the problem of finding the element z* at which the functional ‖Az − u_δ‖²_U attains its exact lower bound. (The only difference between the latter statement and the definition of a quasisolution is that the functional ‖Az − u_δ‖_U is replaced by ‖Az − u_δ‖²_U.)

If the space U is rigorously normed, and A produces a one-to-one mapping of M onto AM ⊂ U, then the functional ‖Az − u_δ‖²_U is rigorously convex for any u_δ ∈ U. Hence, the following theorem is true:
Theorem 2.6 Let A be a linear continuous one-to-one operator, M be a convex compactum in Z, and U be a rigorously convex Banach space. Then a unique quasisolution of Eq. (2.1) exists for any u ∈ U, and it is a continuous function of u.
Proof. According to the theorem conditions, the functional g(z) = ‖Az − u‖²_U is rigorously convex for any u, and is defined on the closed convex compact set M ⊂ Z. The continuity of the functional g(z) follows immediately from the continuity of the operator A. Therefore Theorem 2.4 is applicable, and g(z) attains its exact lower bound on M at a unique point. Hence, a unique quasisolution exists for any u ∈ U.
The continuity of the quasisolution as a function of u is a trivial consequence of the set M being compact and of the continuity of the operator A⁻¹ on the set AM ⊂ U.
Remark. If the correctness set M is extended to M = K + L, where K is a compactum in Z and L is a finite-dimensional space, an analogous result may be proved under additional requirements imposed on the space U. All the results of Theorem 2.6 remain valid if U is rigorously convex and has the E-property (Efimov–Stechkin property), i.e. U is reflexive, and strong convergence u_n → u (n → ∞) follows from the weak convergence of u_n to u together with convergence of the norms ‖u_n‖ → ‖u‖.

To find approximations to quasisolutions on the correctness set M ⊂ Z, the well-known methods for optimizing functionals may be applied. These methods are best developed for the case of a rigorously convex functional g(z) = ‖Az − u_δ‖²_U (this is the case, for instance, when the operator A is linear and U is a Hilbert space) and M a convex compactum in Z. In this case the problem of finding a quasisolution is a problem of convex programming. With the use of various iterative optimization techniques (e.g. the conjugate gradient projection method, steepest descent, second-order methods, etc.) one can obtain a sequence z_n minimizing the functional g(z) on M. Since M is a compactum, the sequence z_n converges to the unique point of minimum of g(z) on M (provided the operator A is injective), i.e. to the quasisolution of Eq. (2.1). All these aspects belong to the classical, well-developed extremal methods and are widely covered in the literature (Goncharsky et al., 1979a). We shall not discuss such algorithms in detail here.
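As a concrete, purely illustrative finite-dimensional sketch of this convex-programming formulation, one may take M to be a box in Rⁿ (a simple convex compactum), A a matrix, and produce the minimizing sequence z_n by projected gradient descent; the names and the toy diagonal operator below are assumptions for illustration, not taken from the monograph.

```python
import numpy as np

def quasisolution(A, u_delta, lo, hi, steps=2000):
    """Minimize g(z) = ||A z - u_delta||^2 over the box M = {z : lo <= z <= hi}
    by projected gradient descent; clipping is the metric projection onto M."""
    z = np.clip(np.zeros(A.shape[1]), lo, hi)
    # Step 1/L, where L = 2 ||A||_2^2 is the Lipschitz constant of grad g.
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    for _ in range(steps):
        z = np.clip(z - step * 2.0 * A.T @ (A @ z - u_delta), lo, hi)
    return z

# Toy check: an injective diagonal operator with the exact solution inside the box.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
z_bar = np.array([0.5, 0.25])
z_star = quasisolution(A, A @ z_bar, np.zeros(2), np.ones(2))
```

For exact data u_δ = u the minimizer recovers z̄; for perturbed data it yields a quasisolution on M in the sense of Definition 3.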
By now, the use of additional information about the solution, in cases when this information is sufficient to convert the problem into a Tikhonov (conditionally) correct one, has made it possible to create efficient packages of numerical algorithms for solving a wide range of inverse problems in mathematical physics.
Since iterative algorithms applied to problems with domain restrictions usually do not solve the problem of minimizing a functional in a finite (known) number of iterations, the quasisolution can only be found approximately. Therefore, in practical numerical algorithms based on the concept of quasisolution, the quasisolution technique is used in a somewhat modified way. The essence of this modification can be explained as follows. Consider the ill-posed problem given by Eq. (2.1). Let Z, U be arbitrary Banach spaces, and let A produce a continuous one-to-one mapping of a compactum M ⊂ Z onto AM ⊂ U. Instead of the exact value of u = Az̄ we are given its approximation u_δ ∈ U such that ‖u_δ − u‖_U ≤ δ. We may use the following RA instead of the quasisolution technique: to the given pair (u_δ, δ) the RA puts into correspondence any element z_δ ∈ M such that ‖Az_δ − u_δ‖_U ≤ δ. Such an element obviously exists because ‖Az̄ − u_δ‖_U ≤ δ, z̄ ∈ M. It is clear that ‖Az_δ − Az̄‖_U ≤ 2δ. Since z̄, z_δ ∈ M, and M is a compactum in Z, z_δ → z̄ as δ → 0, i.e. the described algorithm is a regularizing one. It will be referred to as the δ-quasisolution technique.

The use of such an RA allows one to overcome the problems connected with the non-uniqueness of a quasisolution and with the impossibility of solving the minimization problem absolutely accurately. To find the approximate solution, the above-mentioned techniques of optimizing the functional g(z) = ‖Az − u_δ‖²_U are used, yielding a sequence z_n which minimizes g(z) on M. It is sufficient to continue the process of minimizing g(z) on M until the error level δ² is achieved. It must be mentioned, however, that, contrary to the quasisolution technique, such an RA uses more information about the approximation to the right-hand side of Eq. (2.1): the value of δ as well as u_δ ∈ U. Solving the problem
on a compactum is attractive in one more respect: it is possible not only to obtain the approximate solution z_δ ∈ M, but also to estimate the deviation of z_δ from z̄ in the metric of the space Z. This latter estimate is based on the characteristics of A and M.

The approximation error may be estimated with the use of the quasisolution technique or its modification described above. In fact, if Az̄ = u, ‖u − u_δ‖_U ≤ δ, and z_δ is selected from the set M ∩ {z : ‖Az − u_δ‖_U ≤ δ}, then

‖Az_δ − Az̄‖_U ≤ ‖Az_δ − u_δ‖_U + ‖u_δ − Az̄‖_U ≤ 2δ.

Therefore, using the notation introduced in Chap. 1,

‖z_δ − z̄‖_Z ≤ ω(2δ, A⁻¹, AM),    (2.6)

where ω(2δ, A⁻¹, AM) is the continuity module of the operator A⁻¹ on the set AM. Comparing (2.6) to inequality (1.14), we see that the described techniques are quasioptimal on AM if condition (1.21) is satisfied for the continuity module of A⁻¹ on AM.
The problem of evaluating the continuity module of inverse operators on various compacta is well investigated. The most detailed results are available for the case when the compactum M is the image of a sphere S = {v ∈ V : ‖v‖ ≤ c} of a reflexive space V, the mapping of the sphere S ⊂ V into the space Z being produced by a linear completely continuous operator B, i.e. M = BS. In this case condition (1.21) is obviously satisfied, and methods like the quasisolution technique are quasioptimal on AM.

So far we have considered the case of an exactly known operator A. Now let
in Eq. (2.1) both the right-hand side u and the operator A be given approximately, i.e. instead of A we deal with a family of continuous operators A_h such that

‖A_h z − Az‖_U ≤ ψ(h, ‖z‖).

Here ψ(h, y) is a function continuous with respect to both arguments at h ≥ 0, y ≥ 0, monotonically non-decreasing in y, non-negative, and satisfying the condition ψ(h, y) → 0 as h → 0 uniformly in y on a certain segment [0, c₀]. If the operators A, A_h are linear, then ‖A_h z − Az‖_U ≤ h‖z‖, i.e. ψ(h, ‖z‖) = h‖z‖.

Suppose it is known a priori that the solution z̄ of Eq. (2.1) belongs to a compactum M ⊂ Z, and the operator A produces a one-to-one mapping of M onto AM ⊂ U. Denote η = (δ, h) and define M_η as the set of vectors z ∈ M such that

‖A_h z − u_δ‖_U ≤ δ + ψ(h, ‖z‖).

The set M_η is not empty for any η > 0 because it contains the element z̄ ∈ M. Since M is a compactum in a normed space, there exists a constant c₀ such that

sup_{z ∈ M} ‖z‖ ≤ c₀.

Let z_η be an arbitrary element from M_η. Then

‖Az_η − Az̄‖_U ≤ ‖Az_η − A_h z_η‖_U + ‖A_h z_η − u_δ‖_U + ‖u_δ − Az̄‖_U ≤ 2δ + 2ψ(h, c₀).

Taking into account the properties of the function ψ(h, y) and the continuity of the operator A⁻¹ on AM, we obtain z_η → z̄ as η → 0. Hence, any element z_η ∈ M_η may be taken as the approximate solution. This means that on compacta the problem (2.1) with an approximately specified operator is solved in full analogy with the case when the operator is known exactly.
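In the linear case ψ(h, ‖z‖) = h‖z‖, the selection of an element of M_η can be sketched as follows in a toy finite-dimensional setting; the names, the diagonal operator, and its perturbation are illustrative assumptions.

```python
import numpy as np

def eta_quasisolution(A_h, u_delta, delta, h, lo, hi, max_steps=10000):
    """Accept any box-constrained z with ||A_h z - u_delta|| <= delta + h ||z||,
    i.e. an element of the set M_eta for eta = (delta, h), linear case
    psi(h, y) = h * y."""
    z = np.clip(np.zeros(A_h.shape[1]), lo, hi)
    step = 1.0 / (2.0 * np.linalg.norm(A_h, 2) ** 2)
    for _ in range(max_steps):
        r = A_h @ z - u_delta
        if np.linalg.norm(r) <= delta + h * np.linalg.norm(z):
            break
        z = np.clip(z - step * 2.0 * A_h.T @ r, lo, hi)
    return z

A = np.array([[1.0, 0.0], [0.0, 2.0]])
h = 0.01
A_h = A.copy(); A_h[0, 0] += h        # perturbed operator, ||A_h - A|| = h
z_bar = np.array([0.5, 0.25])
delta = 0.01
z_eta = eta_quasisolution(A_h, A @ z_bar, delta, h, np.zeros(2), np.ones(2))
```

The acceptance tolerance δ + h‖z‖ is exactly the defining inequality of M_η above, so any returned element converges to z̄ as η = (δ, h) → 0.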
An interesting case is when the a priori information is only sufficient to secure the compactness of M in a weak topology. It is well known, for example, that any sphere is a weak compactum in a reflexive space. Using the same technique, we can construct an approximate solution which converges to the exact one in a weak topology. In general, a weak topology is not metrizable on the whole space. However, using a special procedure to construct the approximations, it proves possible to obtain certain estimates of the approximation error. We shall now give the idea of such a procedure of constructing approximations.

Divide the segment [0,1] into N parts and set

e_iN(t) = N for t ∈ (i/N, (i+1)/N],  e_iN(t) = 0 for t ∉ (i/N, (i+1)/N].

The approximate solution (or, more exactly, its finite-dimensional analogue) z_N^δ(t) is sought in the form

z_N^δ(t) = Σ_i (u_δ, ψ_iN) e_iN(t),

where the functions ψ_iN(t) are obtained as the solutions of the following problems:

‖A*ψ_iN − e_iN‖ = min_{‖v‖ ≤ 1/√δ} ‖A*v − e_iN‖.
The following theorem is true:

Theorem 2.7 (Gaponenko, 1976). Let it be known a priori that |z̄(t)| ≤ R for any t ∈ [0,1]. Then for the finite-dimensional analogue z_N^δ(t) of the approximate solution and the finite-dimensional analogue z_N(t) of the exact solution the following error estimate is true:
Therefore the knowledge that the solution belongs to a weak compactum allows one not only to construct approximations to the solution in a weak topology, but also to give quantitative estimates of the approximation error for the finite-dimensional analogues of the solution. It must be understood, however, that for "essentially" ill-posed problems (when A⁻¹ is unbounded in its domain), the estimate obtained for any fixed δ, in general, tends to infinity as N → ∞!
2.4 Properties of δ-quasisolutions on the sets with special structure
The quasisolution technique and its modification (the δ-quasisolution technique) ensure strong convergence in Z of the approximations z_δ to the exact solution z̄ on the correctness set M. Sometimes more delicate results can be achieved with the use of compact imbedding. This is, as a rule, possible when some additional information about the exact solution is available. Some examples of such results are given in this section.

Consider the operator equation (2.1). Suppose it is known a priori that the exact solution z̄ of the ill-posed problem (2.1) belongs to the set Z↓_C ⊂ L_p (p > 1) of monotone non-increasing functions bounded by a constant C. Let also Z = L_p, and let the operator A produce a one-to-one mapping of Z↓_C onto AZ↓_C ⊂ U. As usual, denote the approximation to u by u_δ, ‖u_δ − u‖ ≤ δ. Introduce Z↓_C(δ) as the set of functions z(s) ∈ Z↓_C such that ‖Az − u_δ‖_U ≤ δ. Since Z↓_C is a compactum in L_p, any choice of z_δ from Z↓_C(δ) will lead to the convergence z_δ → z̄ in L_p as δ → 0.

However, if we are supplied with additional information about the exact solution, this result can be strengthened. For instance, if z̄(s) is continuous on [a, b], we may obtain uniform convergence of the approximations to z̄(s). The approximate solutions in this case still belong to Z↓_C, i.e. they may be discontinuous.
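Sets of bounded monotone grid functions are easy to work with numerically: the L2 projection of a vector onto the non-increasing functions bounded by [0, C] can be computed by the pool-adjacent-violators algorithm followed by clipping. The sketch below uses illustrative names and can serve as the projection step in a projected-descent search for an element of Z↓_C(δ); it is not an algorithm from the monograph.

```python
import numpy as np

def pava_increasing(y):
    """L2 projection of the vector y onto the cone of non-decreasing vectors
    (pool-adjacent-violators: merge each violating pair of blocks into its mean)."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:] = [m]; wts[-2:] = [w]
    return np.repeat(vals, wts)

def project_onto_ZC(y, C):
    """Projection onto {z : z non-increasing, 0 <= z <= C}: reverse, apply
    PAVA for the non-decreasing cone, reverse back, then clip to [0, C]
    (clipping preserves monotonicity)."""
    return np.clip(pava_increasing(y[::-1])[::-1], 0.0, C)

z_proj = project_onto_ZC(np.array([1.0, 3.0, 2.0]), 10.0)   # -> [2., 2., 2.]
```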
Theorem 2.8 Suppose z̄(s) ∈ C[a,b] ∩ Z↓_C and [γ, σ] is an arbitrary segment of the interval (a, b). Let also z_{δ_n} ∈ Z↓_C(δ_n) and δ_n → 0 as n → ∞. Then z_{δ_n}(s) → z̄(s) in C[γ, σ] as n → ∞.
Proof. At first we shall show that, as n → ∞, the sequence z_{δ_n}(s) converges to z̄(s) pointwise at every point s ∈ (a, b). Select a pointwise converging subsequence z_{δ_{n'}}(s) from z_{δ_n}(s); it converges to a certain function z̃(s) ∈ Z↓_C at every point s ∈ (a, b). The following estimate is valid:

‖Az̃ − Az̄‖_U ≤ ‖Az̃ − Az_{δ_{n'}}‖_U + ‖Az_{δ_{n'}} − u_{δ_{n'}}‖_U + ‖u_{δ_{n'}} − u‖_U ≤ ε_{n'} + 2δ_{n'},

where ε_{n'} → 0 as n' → ∞ due to the continuity of the operator A from L_p to U. Hence Az̃ = Az̄ and, since A is one-to-one, z̃ = z̄ in L_p. Since z̃(s) is monotone, and z̄(s) is both monotone and continuous, z̃(s) = z̄(s) at every s ∈ (a, b). Therefore z̃(s) = z̄(s) at every internal point of the segment [a, b]. Due to the fact that it is always possible to select a converging subsequence from z_{δ_n}(s), and that all the selected subsequences converge to the same element z̄(s), we obtain that the sequence z_{δ_n}(s) itself converges to z̄(s) at every point of the interval (a, b).
Now we are going to prove that the sequence z_{δ_n}(s) converges uniformly to z̄(s) on any closed segment [γ, σ] ⊂ (a, b). Fix some ε > 0. It follows from the uniform continuity of z̄(s) on the segment [γ, σ] that there exists δ(ε) > 0 such that for any t_1, t_2 ∈ [γ, σ] satisfying the condition |t_1 − t_2| < δ(ε) the inequality |z̄(t_1) − z̄(t_2)| < ε/4 is true. Divide the segment [γ, σ] by points γ = t_1 < t_2 < … < t_m = σ in such a way that |t_{i+1} − t_i| < δ(ε), i = 1, 2, …, m − 1. Select N(ε) so that for any n > N(ε) the inequality

|z_{δ_n}(t_i) − z̄(t_i)| < ε/4

is true at every point t_i, i = 1, 2, …, m. Then, because z̄(t) and z_{δ_n}(t) are monotone, we obtain

|z_{δ_n}(t) − z̄(t)| < ε

at every point t ∈ [γ, σ] when n > N(ε).

Remark 1. We stress once again that the elements of the set Z↓_C are not necessarily continuous functions. Nevertheless,

sup_{t ∈ [γ, σ]} |z_{δ_n}(t) − z̄(t)| → 0

as δ_n → 0.
Remark 2. It is clear that the set Z↓_C is not a compactum in C[a,b], i.e. the result of Theorem 2.8 cannot be obtained within the standard approach. This result was achieved due to the compact imbedding and the additional a priori information about the solution.

Remark 3. The results can easily be extended to the case when the exact solution z̄(s) is a piecewise continuous monotone function. In this case the convergence of the approximating sequence to the exact solution is uniform on any closed segment [γ, σ] which does not include the points a, b or the discontinuities of z̄(s) (Goncharsky, 1987).
Hence, if it is known a priori that the exact solution z̄(s) is a monotone bounded function, any element z_δ ∈ Z↓_C(δ) can be taken as the approximate solution. In this case not only convergence in L_p is ensured (the set Z↓_C is a compactum in L_p), but also, as shown above, uniform convergence to the exact solution.

A similar theorem can be proved for the case when the exact solution z̄(s) ∈ Ẑ_C, where Ẑ_C is the set of convex upwards nonnegative bounded functions: 0 ≤ z(s) ≤ C. Moreover, in this case it is possible to achieve not only the convergence of the approximations, but (in a certain sense) the convergence of their derivatives as well (Goncharsky et al., 1979a).

We shall provide one more example that also falls outside the standard scheme: the case when the exact solution is a convex (say, convex upwards) nonnegative function. Let the exact solution of the ill-posed problem (2.1) be a convex upwards
ill-posed problem (2.1) be a convex upwards
nonnegative function. The set of such functions we denote as Z and
the set of functions from Z satisfying condition IIAz - u611u :::;
6 as Z(6). Let Z6 ..
be an arbitrary element from Z(6n ). We shall show that if operator
A is a linear one-to-one mapping of Lp onto U, then the sequence of
functions Z6 ..
for 6n --+ 0 is uniformly bounded from above. This result means
that, when searching for the solution of the ill-posed problem
(2.1) on the set of convex functions, there is no need to know the
constant for the upper bound of the exact solution.
It is easy to prove the following statement.
Lemma 2.4 Let z(s) ∈ Z̃ and z(s*) = z((a + b)/2) ≤ 2. Then z(s) < 4 at any s ∈ [a, b].
Proof. Suppose the opposite is true. Then there exists a point s** ∈ [a, b] such that z(s**) > 4. Let, for the sake of definiteness, s** < s* = (a + b)/2 ≤ b. The value of z(s) at the point s* is greater than or equal to the corresponding ordinate of the straight line connecting the points {b, z(b)} and {s**, z(s**)}. Since the point s* divides the segment [s**, b] so that (b − s*)/(b − s**) ≥ 1/2, and z(b) ≥ 0, the ordinate of this line at s* exceeds z(s**)/2 > 2; hence z((a + b)/2) > 2. This contradicts the conditions of the Lemma and therefore proves it.

Lemma 2.4 can be formulated in other words:
Lemma 2.5 Let z(s) ∈ Z̃ and let there exist a point s** ∈ [a, b] such that z(s**) > 4. Then z(s*) = z((a + b)/2) > 2.
Suppose that the exact solution z̄(s) of Eq. (2.1) belongs to Z̃ and that z̄(s) ≤ 1 for any s ∈ [a, b].
Theorem 2.9 Let z_{δ_n} be an arbitrary element of Z̃(δ_n). Then there exists a constant c > 0 such that for any s ∈ [a, b] the condition z_{δ_n}(s) ≤ c is met (n = 1, 2, …).
Proof. Since the operator A is linear, the set Z̃(δ_n) is convex and, consequently, the set of values taken at any point s ∈ (a, b) by the functions z(s) ∈ Z̃(δ_n) is convex, including s = s* = (a + b)/2. Suppose the opposite of the Theorem statement is true. Then there exist a sequence of functions z_{δ_{n'}}(s) ∈ Z̃(δ_{n'}) and a sequence of points s_{n'} ∈ [a, b] such that z_{δ_{n'}}(s_{n'}) > n'. According to Lemma 2.5, z_{δ_{n'}}(s*) > 2 for n' > 4.

Note that z̄(s) ∈ Z̃(δ_{n'}) for any n', and z̄(s*) ≤ 1. Since the set of values of the functions z(s) ∈ Z̃(δ_{n'}) at the point s* is convex, for any n' there exists an element z'_{n'}(s) ∈ Z̃(δ_{n'}) such that z'_{n'}(s*) = 2. According to Lemma 2.4, all the functions z'_{n'}(s) are uniformly bounded and, by the Helly selection theorem, there exist a subsequence z'_{n''} and a function z̃(s) ∈ Z̃ such that at every point s ∈ [a, b] lim_{n''→∞} z'_{n''}(s) = z̃(s). Since the operator A is continuous, ‖Az̃ − Az̄‖_U = 0 and, consequently, z̃ = z̄. However, we have obt