Dimension Reduction and Discretization in Stochastic Problems by Regression Method

Reprint from Mathematical Models for Structural Reliability Analysis (eds.: F. Casciati, J.B. Roberts), CRC, Florida, 1996, pp 51-138.

Ove Ditlevsen
Technical University of Denmark


Contents

1.1 Introduction
1.2 Linear Regression
1.3 Normal distribution
1.4 Non-Gaussian distributions and linear regression
1.5 Marginally transformed Gaussian processes and fields
1.6 Discretized fields defined by linear regression on a finite set of field values
1.7 Discretization defined by linear regression on a finite set of linear functionals
1.8 Poisson Load Field Example
1.9 Stochastic finite element methods and reliability calculations
1.10 Classical versus statistical-stochastic interpolation formulated on the basis of the principle of maximum likelihood
1.11 Computational practicability of the statistical-stochastic interpolation method
1.12 Field modelling on the basis of measured noisy data
1.13 Discretization defined by linear regression on derivatives at a single point
1.14 Conditioning on crossing events
1.15 Slepian model vector processes
1.16 Applications of Slepian model processes in stochastic mechanics

1.1 Introduction

It is not the intention in the following to use a rigorous mathematical style of presentation, but rather to stick to a heuristic style that makes the text readable for mathematically motivated engineers and scientists with an appreciation for applications of probabilistic concepts in their fields of work. A basic knowledge of elementary probability theory will be assumed, including the definition of a vector of random variables and their joint distribution, expectations, variances, covariances, etc., the rules for operating with these concepts, and the generalisations to random processes and random fields. Nor will there be systematic references to the many brilliant mathematicians and statisticians who created these concepts and theories now belonging to the standard toolbox of probability and mathematical statistics. Predominantly the references will be to originators of applications related to structural reliability problems and stochastic mechanics problems.

Many different stochastic problems in engineering and in the sciences are defined in terms of random processes or random fields. In most of the problems these are uncountably infinite families of random variables: to each t in an ordered index set I (usually a subset of, or the entire, time axis, or a subset of coordinates that define points in some space) a random variable (or random vector, or even a more general random entity) X(t) is adjoined, and, in principle (according to a theorem of Kolmogorov [1, 2]), this family of random variables is completely defined by the set of joint probability distributions that correspond to all finite subsets of I, given that the set of probability distributions satisfies some obvious consistency conditions. In the following the word "field" is used as a common short terminology for "random process" and "random field", unless otherwise noted.

When it comes to the practical solution of the stochastic problems, it is only in exceptional cases possible to proceed without introducing simplifications based on more or less approximate reasoning. Several different types of simplifications may be applicable to a given problem, depending on the type of problem.

A frequently used simplification is to assume that the fields are related in some way to the class of Gaussian random variables, so that the fact that this class is closed with respect to linear operations can be utilized for obtaining the solution. Moreover, it can be utilized that the class of jointly distributed Gaussian random variables has the convenient property that the conditional expectation of any subset A of the Gaussian random variables given any subset B of the Gaussian random variables coincides with the linear regression of the subset A on the subset B of random variables. The advantage is thus that the conditional expectations can be calculated solely by algebraic operations on the expectations and the covariances of the total set of Gaussian random variables. This is basic knowledge given in most elementary probability courses. Due to its importance for the present subject, the concept of linear regression will be introduced specifically in the following section.

Another similar type of simplification exists in the case where random point fields enter the problems. Then it is common practice to assume that Poissonian properties are present, thus opening a catalogue of known results.

In order to be able to reach computational results it is generally necessary to introduce simplifications by which the infinite dimensional set of random variables of the field is replaced by another infinite set of random variables defined completely in terms of a finite set of representative random variables. This replacement is denoted as discretization of the field. Problem independent automatic discretization procedures are generally based on direct approximation of the field (most often defined as input to the stochastic problem). Automatic procedures have obvious advantages in routine work. However, the finite set of random variables can be chosen such that it is sufficiently representative for the solution to the problem, noting that it is often less important whether or not the original field is well approximated when judged by direct comparison of sample functions. These aspects will be discussed in connection with the introduction of the mathematical tools of field discretization.

The opposite problem is that of extending from a finite subset of the infinite set of random variables of a field about which nothing else is known except, perhaps, that it belongs to some given class of fields. The extension is intended to be to a field that is supposed to resemble the unknown field. This is the interpolation problem. It is obviously not solvable in general in the sense that it becomes possible to judge the goodness of the approximation by some well defined measure of error. A principle of simplicity may instead be taken as a way to choose between the possibly infinite number of fields that can be constructed as extensions.

The problem becomes still more ambiguous if only a single sample of values of the finite set of random variables is known. These values may even be given without any reference to a random variable model. They may be given as some measured, almost error free values of a deterministic but otherwise unknown function. The interpolation problem is nevertheless solvable by reference to the same principle of simplicity as applied in the statistical theory of maximum likelihood estimation. This means that the measured values may be treated as if they were values from a realization of a suitably chosen type of field, the distribution parameters of which become estimated from the sample. The obtained estimate of the conditional mean value variation over the index set, given that the measured values are reproduced by the conditional mean at the points of measurement, may then be taken as the interpolation function. This method of interpolation is often called kriging after the South African mining engineer Krige [3]-[6], who first applied the principle of statistical-stochastic interpolation for estimating the size and properties of mineral deposits. The philosophy of statistical-stochastic interpolation is discussed and interpreted in terms of principles of deterministic interpolation in Section 1.10.

A further complication is added to the interpolation problem when the field values are uncertain due to superimposed measuring uncertainty. To deal with this problem it is necessary to make assumptions about the field properties of the measuring uncertainty. Two different measuring error models are considered in Section 1.12. The one is the standard model of independently added random errors, and the other is a model where the error field has the same correlation structure as the unknown field itself. It is demonstrated for both models that the method of statistical-stochastic interpolation is well suited for the separation of the measuring uncertainty field from the object field. The importance of the principle of making independent double measurements of each field value is emphasized by this error analysis.

Certain types of problems in random vector processes can be analysed by discretization defined by linear regression on the derivatives at a single time point. In particular, this type of discretization is relevant in connection with the evaluation of the occurrence rate of events that may happen when a stationary Gaussian vector process crosses out of a given domain. Such investigations belong to the theory of so-called Slepian model processes. The linear regressions on derivatives are given in Section 1.13, and the problem of how to make unique conditioning on a crossing event of zero probability is discussed in Section 1.14. Being convinced that for the conditioning to be applicable to physical problems the crossing events must be defined as "horizontal window" crossings, the Slepian process models for the sample function behavior at a level upcrossing follow directly. Several examples of interesting applications in stochastic mechanics are given in the last section.

Except for Section 1.9 this text solely deals with dimension reductions and field discretizations based directly on the concept of linear regression. There are alternative methods based on truncations of series expansions of the given random field with respect to some infinite orthogonal basis of functions. Like the linear regressions, these truncations are linear functions (or functionals) on the field, and they are therefore closely related to specific linear regressions. Most often the expansions are based on the so-called Karhunen-Loeve theorem [7] by which a field of mean zero can be represented as an infinite series where the ith term is the ith normalized eigenfunction of the eigenvalue problem with the covariance function of the field as kernel. The coefficients are then uncorrelated random variables, the ith of variance equal to the ith eigenvalue. Since exact solutions are known only for a few simple cases of correlation functions, the different methods apply various discretization approximations to reduce the integral equation eigenvalue problem to a matrix eigenvalue problem [8]-[11]. Also, the random coefficients are usually taken to be Gaussian so that the expansion represents a Gaussian field. Some accuracy comparisons between different discretization methods are reported in the literature [12]. There can hardly be formulated a general statement of superiority as to which discretization method uses the smallest number of random variables to satisfy a given accuracy requirement.
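The reduction of the integral equation eigenvalue problem to a matrix eigenvalue problem can be sketched as follows (a minimal illustration in Python with NumPy; the exponential covariance function exp(-|s-t|/a) and all parameter values are assumptions made for the sketch, not taken from the text):

```python
import numpy as np

# Discretized Karhunen-Loeve expansion: the integral eigenvalue problem with
# the covariance function as kernel is replaced by a symmetric matrix
# eigenvalue problem on a grid. Covariance function and parameters assumed.
n, acorr = 200, 0.5
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
C = np.exp(-np.abs(t[:, None] - t[None, :]) / acorr)   # covariance on the grid

# Scaling by dt approximates the integral operator; eigenvalues lam and
# discretized eigenfunctions phi, sorted by decreasing eigenvalue.
lam, phi = np.linalg.eigh(C * dt)
lam, phi = lam[::-1], phi[:, ::-1]

# Truncate after m terms; the expansion coefficients are uncorrelated with
# variances lam. Fraction of the (unit) pointwise variance captured:
m = 10
C_m = (phi[:, :m] * lam[:m]) @ phi[:, :m].T / dt
var_ratio = float(np.mean(np.diag(C_m)))
```

The quality of the truncation can then be judged by how close `var_ratio` is to 1 for the given number of terms.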

1.2 Linear Regression

Consider a pair (X, Y) of random variables contained in some mathematical model of interest in a given engineering context. We might wish to simplify the random variable part of the model by reducing the dimension of the random vector (X, Y) from 2 to 1. Let us assume that Y is the less important of the two random variables for the considered problem. The simplest approximation, next to replacing Y by a constant, is to replace Y by an inhomogeneous linear function a + bX of X.

The coefficients a and b should obviously be chosen such that some error measure becomes minimized. It is reasonable to require that the error measure definition is chosen so that it is directly related to the solution of the model. However, if a problem independent procedure is preferable from an operational point of view, the error measure must be related solely to X and Y. A reasonable procedure is to determine a and b such that the mean square deviation E[(Y − a − bX)²] becomes as small as possible. The minimum is obtained for the values of a and b that satisfy the equations

∂/∂a E[(Y − a − bX)²] = −2E[Y − a − bX] = 0 (1.1)

∂/∂b E[(Y − a − bX)²] = −2E[(Y − a − bX)X] = 0 (1.2)

from which it follows that

E[Y] = a + bE[X] (1.3)

E[YX] = aE[X] + bE[X²] (1.4)

Since the variance Var[X ] and the covariance Cov[X ,Y ] are

Var[X] = E[X²] − E[X]² (1.5)

Cov[X,Y] = E[XY] − E[X]E[Y] (1.6)

respectively, (1.3) and (1.4) give

a = E[Y] − (Cov[X,Y]/Var[X]) E[X] (1.7)

b = Cov[X,Y]/Var[X] (1.8)

This linear approximation of Y in terms of X is called the linear regression of Y on X, and it is written as E[Y|X]. The coefficient b given by (1.8) is called the regression coefficient. The result is

E[Y|X] = E[Y] + (Cov[X,Y]/Var[X]) (X − E[X]) (1.9)

The error, Y − E [Y |X ], is called the residual, and it has the variance

Var[Y − E[Y|X]] = Var[Y] − Cov[X,Y]²/Var[X] = Var[Y](1 − ρ[X,Y]²) (1.10)

where

ρ[X,Y] = Cov[X,Y]/(D[X]D[Y]) (1.11)


is the correlation coefficient. It is important to note that

Cov[X ,Y − E [Y |X ]] = 0 (1.12)

The conditions E[Y − a − bX] = 0 and Cov[X, Y − a − bX] = 0 can be chosen as an alternative basis for defining the linear regression.
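The formulas (1.7)-(1.12) can be checked numerically; in the following minimal sketch (Python with NumPy) the joint distribution of (X, Y) is an arbitrary assumption made for the illustration:

```python
import numpy as np

# Numerical check of (1.7)-(1.12): regression coefficient and intercept from
# sample moments, residual variance (1.10), vanishing correlation (1.12).
# The joint distribution of (X, Y) below is an assumed example.
rng = np.random.default_rng(0)
X = rng.normal(1.0, 2.0, 100_000)
Y = 3.0 * X + rng.normal(0.0, 1.0, 100_000)   # Y linear in X plus noise

b = np.cov(X, Y)[0, 1] / np.var(X)      # regression coefficient (1.8)
a = Y.mean() - b * X.mean()             # intercept (1.7)
residual = Y - (a + b * X)              # residual Y - E[Y|X]

res_var = residual.var()                # approximates (1.10)
rho = np.corrcoef(X, Y)[0, 1]           # correlation coefficient (1.11)
```

Within sampling error, `res_var` agrees with Var[Y](1 − ρ²) and the residual is uncorrelated with X.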

How good the approximation is if the residual is neglected clearly depends on the size of ρ[X,Y] and of Var[Y]. For example,

Var[X +Y ] = Var[X ]+2Cov[X ,Y ]+Var[Y ] (1.13)

and

Var[X + E[Y|X]] = (1 + Cov[X,Y]/Var[X])² Var[X] = Var[X] + 2Cov[X,Y] + ρ[X,Y]² Var[Y] (1.14)

which deviates from (1.13) by the residual variance.

The concept of linear regression of Y on X is directly generalized to the linear regression E[Y|X] of an m-dimensional random vector Y = (Yi) on an n-dimensional random vector X = (Xj) as the best inhomogeneous linear approximation a + BX to Y in terms of X in the sense that a = (ai) and B = (bij) minimize the mean square deviation

E[(Y − a − BX)′(Y − a − BX)] = ∑_{i=1}^{m} E[(Yi − ai − ∑_{j=1}^{n} bij Xj)²] (1.15)

where the prime ′ attached to a matrix indicates transposition of the matrix. By minimizing each term on the right side we directly get

E[Yi] = ai + ∑_{j=1}^{n} bij E[Xj] (1.16)

in the same way as (1.3) follows from (1.1). Thus ai can be eliminated from the ith term of (1.15) so that it becomes

E[(Yi − ∑_{j=1}^{n} bij Xj)²] (1.17)

after renaming Xi − E[Xi] and Yi − E[Yi] as Xi and Yi, respectively. These random variables now have zero mean. Partial differentiation of (1.17) with respect to bik (using that ∂bij/∂bik = δjk, where δjk is Kronecker's delta) and setting to zero gives the equation

E[(Yi − ∑_{j=1}^{n} bij Xj) ∑_{j=1}^{n} δjk Xj] = E[Yi Xk] − ∑_{j=1}^{n} bij E[Xj Xk] = Cov[Yi, Xk] − ∑_{j=1}^{n} bij Cov[Xj, Xk] = 0 (1.18)


In matrix notation this equation reads

Cov[Y,X′] = BCov[X,X′] (1.19)

from which it follows that

B = Cov[Y,X′]Cov[X,X′]−1 (1.20)

given that the covariance matrix of X is regular. This is the generalization of the regression coefficient b in (1.8) to the regression coefficient matrix B of type (m,n) for the linear regression of Y on X.

The residual vector Y− E [Y|X] = Y−BX has the covariance matrix

Cov[Y−BX, Y′−X′B′] = Cov[Y,Y′] − Cov[Y,X′]B′ − B Cov[X,Y′] + B Cov[X,X′]B′
= Cov[Y,Y′] − Cov[Y,X′] Cov[X,X′]⁻¹ Cov[X,Y′] (1.21)

called the residual covariance matrix. Moreover,

Cov[Y−BX,X′] = Cov[Y,X′]−BCov[X,X′] = 0 (1.22)

that is, the residual vector Y−BX and X are uncorrelated, and

Cov[Y−BX,Y′] = Cov[Y,Y′]−BCov[X,Y′] (1.23)

is the residual covariance matrix.

It is seen from (1.22) that instead of requiring minimum of the mean square deviation (1.15) we may equivalently require that the residual Y − BX is uncorrelated with X and that

E [E [Y | X]] = E [Y] (1.24)

The linear regression

E [Y|X] = E [Y]+B(X−E [X]) (1.25)

has the important property of being linear in Y. In fact, let Ly (Y) = KY+k be any inhomogeneous linearmapping of Y into Ly (Y). Then it follows directly by substitution that

E [Ly (Y)|X] = Ly (E [Y|X]) (1.26)

a property that the linear regression has in common with the conditional expectation E [Y|X] with whichit coincides on the constant vectors.

Another important property shared with the conditional expectation E[Y|X] is that the linear regression E[Y|X] is invariant under a one-to-one inhomogeneous linear mapping X ↔ Lx(X) of the conditioning vector X:

E [Y|Lx (X)] = E [Y|X] (1.27)

If X consists of two subvectors X1 and X2 that are mutually uncorrelated, that is, Cov[X1, X2′] = 0, then obviously

E [Y | X]−E [Y] = (E [Y | X1]−E [Y])+ (E [Y | X2]−E [Y]) (1.28)


This means that the linear regression of Y on X1 (or X2) can be obtained from the linear regression of Y on X by removing all terms that contain elements of X2 (or X1). In particular, if the one-to-one inhomogeneous linear mapping X ↔ Z = Lx(X) in (1.27) is chosen such that Z has zero mean vector and the unit matrix as covariance matrix, then (1.28) applies to any division of Z into subvectors. Thus the relative importance of the terms of E[Y|Z] = BZ, B = Cov[Y,Z′], can be directly studied by comparing the residual covariance matrices Cov[Y,Y′] − BB′ and Cov[Y,Y′] − B1B1′, where B1 is the matrix obtained from B by removing those columns that correspond to the elements of Z whose importance is investigated. Such investigations are particularly useful for the purpose of reduction of the dimension of randomness in reliability and stochastic finite element calculations (Section 1.10).
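The matrix formulas (1.20)-(1.21) and the importance study just described can be sketched as follows (Python with NumPy; the covariance matrices are arbitrary assumptions, constructed so that the residual covariance matrix equals the unit matrix):

```python
import numpy as np

# Sketch of (1.20)-(1.21): regression coefficient matrix B, residual
# covariance, standardization Z = L^{-1} X, and the effect of removing
# columns of the standardized coefficient matrix. Covariances assumed.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
Cxx = A @ A.T                                       # regular Cov[X,X']
Cyx = rng.normal(size=(2, 5))                       # assumed Cov[Y,X']
Cyy = Cyx @ np.linalg.inv(Cxx) @ Cyx.T + np.eye(2)  # Cov[Y,Y']

B = Cyx @ np.linalg.inv(Cxx)                        # (1.20)
res_cov = Cyy - B @ Cxx @ B.T                       # residual covariance (1.21)

# Standardization: Cxx = L L' (Choleski), Z = L^{-1} X has unit covariance,
# and the standardized coefficient matrix is Cov[Y,Z'] = Cyx (L^{-1})'.
L = np.linalg.cholesky(Cxx)
Bz = Cyx @ np.linalg.inv(L).T
res_cov_z = Cyy - Bz @ Bz.T                         # same residual covariance

# Importance of the last two elements of Z: drop the corresponding columns.
B1 = Bz[:, :3]
res_cov_partial = Cyy - B1 @ B1.T                   # residual covariance grows
```

Comparing `res_cov_partial` with `res_cov_z` shows directly how much residual variance the removed elements of Z were accounting for.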

In this introduction to linear regression let us generalize further. Consider a pair of random processes(X (t ),Y (t )). By direct generalization the linear regression of process Y on process X over the interval[α,β] takes the form

E[Y(t)|X] = a(t) + ∫_α^β B(t,τ) X(τ) dτ (1.29)

where a(t ) and B(t , s) are functions that are determined from the condition

E [E [Y (t )|X ]] = E [Y (t )] (1.30)

that corresponds to (1.16), and the condition

Cov[Y (t )− E [Y (t )|X ], X (s)] = 0 (1.31)

that corresponds to (1.22). From (1.29) and (1.30) it follows that

E[Y(t)] = a(t) + ∫_α^β B(t,τ) E[X(τ)] dτ (1.32)

and (1.29) and (1.31) give the equation

Cov[Y(t), X(s)] = ∫_α^β B(t,τ) Cov[X(τ), X(s)] dτ (1.33)

In particular, if the process pair is stationary, we have covariance functions cYX and cX of one variable such that (1.33) in the case α = −∞, β = ∞ reduces to

cYX(s − t) = ∫_{−∞}^{∞} B(t,τ) cX(s − τ) dτ (1.34)

or, by substituting u = s − t , v = s −τ:

cYX(u) = ∫_{−∞}^{∞} B(s − u, s − v) cX(v) dv (1.35)

Since the left side is independent of s we may put s = u. Thus

cYX(u) = ∫_{−∞}^{∞} b(u − v) cX(v) dv (1.36)

where b(x) = B(0, x).


It follows from (1.33), or in the particular case from (1.36), that the determination of the linear regression of process Y on process X amounts to solving an integral equation, knowing the covariance functions Cov[X(s), X(t)] and Cov[X(s), Y(t)] of the process pair (X, Y).
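Discretizing the integral equation (1.33) on a grid reduces it to the matrix equation (1.19); a minimal sketch (Python with NumPy) in which the two covariance functions are assumed examples:

```python
import numpy as np

# Discretizing the integral equation (1.33) on a grid over [alpha, beta]
# turns it into the matrix equation Cov[Y,X'] = B Cov[X,X'] of (1.19).
# Both covariance functions below are assumed examples for the sketch.
n = 100
s = np.linspace(0.0, 1.0, n)
ds = s[1] - s[0]

cXmat = np.exp(-np.abs(s[:, None] - s[None, :]))       # Cov[X(tau), X(s)]
cYXmat = 0.5 * np.exp(-(s[:, None] - s[None, :])**2)   # Cov[Y(t), X(s)]

# cYX = B_ds cX, with B_ds[t, tau] approximating B(t, tau) dtau in (1.33)
B = cYXmat @ np.linalg.inv(cXmat * ds)
check = B @ (cXmat * ds)                               # reproduces cYXmat
```

The rows of `B` give the discretized regression kernel B(t, τ) of (1.29) at the grid points.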

Example. As an example consider a stationary process pair that is periodic with period 2π. Then (1.36) can be written as

cYX(u) = ∫_0^{2π} h(u − v) cX(v) dv (1.37)

where

h(x) = ∑_{i=−∞}^{∞} b(x + 2πi) (1.38)

By a substitution test the reader may show that the regression coefficient function is

h(x) = (1/π) ∑_{n=1}^{∞} (bn/an) sin(nx) (1.39)

and that the residual covariance function is

cY(u|X) = (1/2)c0 + ∑_{n=1}^{∞} [1 − bn²/(an cn)] cn cos(nu) (1.40)

where

an = (1/π) ∫_0^{2π} cX(x) cos(nx) dx, n = 0, 1, ... (1.41)

bn = (1/π) ∫_0^{2π} cYX(x) sin(nx) dx, n = 1, 2, ... (1.42)

cn = (1/π) ∫_0^{2π} cY(x) cos(nx) dx, n = 0, 1, ... (1.43)

are the Fourier coefficients of cX, cYX, and cY, respectively.

Considered as a function of n, the ratio |bn|/√(an cn) is the so-called coherence function for the pair (X, Y) of random processes, and it is bounded in value between 0 and 1. The coherence function can for each n be interpreted as the absolute value of the correlation coefficient between the nth Fourier components of X and Y. This can be seen by comparing (1.10) and (1.40).
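The substitution test can also be carried out numerically: with the kernel h(x) = (1/π) ∑ (bn/an) sin(nx) substituted into (1.37), the convolution must reproduce cYX. In the following sketch (Python with NumPy) the Fourier coefficient values are arbitrary assumptions:

```python
import numpy as np

# Numerical substitution test of the periodic example: cX(v) = 1 + sum a_n
# cos(nv), cYX(u) = sum b_n sin(nu), and the regression coefficient function
# h(x) = (1/pi) sum (b_n/a_n) sin(nx). Coefficient values are assumed.
a = {1: 0.8, 2: 0.3}                    # Fourier cosine coefficients of cX
b = {1: 0.4, 2: 0.2}                    # Fourier sine coefficients of cYX

def cX(x):
    return 1.0 + sum(a[n] * np.cos(n * x) for n in a)

def cYX(x):
    return sum(b[n] * np.sin(n * x) for n in b)

def h(x):                               # regression coefficient function
    return sum(b[n] / a[n] * np.sin(n * x) for n in b) / np.pi

# rectangle-rule quadrature of the periodic convolution (1.37)
N = 4096
v = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dv = 2.0 * np.pi / N
u = np.linspace(0.0, 2.0 * np.pi, 25)
conv = np.array([np.sum(h(ui - v) * cX(v)) * dv for ui in u])
err = np.max(np.abs(conv - cYX(u)))
```

Because the integrand is periodic and band-limited, the rectangle rule reproduces cYX essentially to machine precision.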

A technical application of the linear regression in this example concerns the modelling of random silo load fields in vertical circular cylindrical silos. Let X(u) and Y(u) be the horizontal wall shear stress and the wall normal stress, respectively, at the angular position u in a given horizontal plane. Assume that the random field (X(u), Y(u)) is homogeneous with respect to u with given covariance functions formulated such that the entire wall stress field is in global equilibrium. Then the linear regression E[Y(u)|X] defines a normal stress field that is in global equilibrium with the shear stresses set to zero everywhere. Thus the residual covariance function corresponds to a wall stress field that acts normal to the wall and is in global equilibrium. Such a wall stress field may act on a horizontally ideally smooth silo wall [13].


1.3 Normal distribution

The standardized normal (or Gaussian) density

ϕ(x) = (1/√(2π)) exp(−½x²), x ∈ R (1.44)

is directly generalized to the n-dimensional standardized normal (or Gaussian) density

fU1,...,Un(u1, ..., un) = ∏_{i=1}^{n} ϕ(ui) = (1/√(2π))ⁿ exp(−½r²), (u1, ..., un) ∈ Rⁿ (1.45)

where r² = u1² + ... + un². This density is rotationally symmetric with respect to the origin, and the covariance matrix of the random vector U = (U1, ..., Un) is the unit matrix. According to the definition, the random variables U1, ..., Un are mutually independent. The conditional density of any subvector of dimension m < n given the complementary subvector of dimension n − m is obviously the standardized m-dimensional normal density. Moreover, any random vector V = (V1, ..., Vn) obtained by an orthogonal mapping of (U1, ..., Un) has the n-dimensional standardized normal density.

For any regular linear mapping X = AU the random vector X (or the distribution of X) is said to be n-dimensional normal (or Gaussian) with expectation vector zero and regular covariance matrix Cov[X,X′] =AA′. The density function is

fX(x) = (1/√(2π))ⁿ (1/|det(A)|) exp(−½ x′Cov[X,X′]⁻¹x) (1.46)

where |det(A)| = √(det(Cov[X,X′])) (det(A) being the determinant of the square matrix A). Conversely, any X with the density function (1.46) can in an infinity of ways be written as a regular linear mapping X = AU of an n-dimensional standardized normal vector U. All that is needed is to determine A such that

AA′ = Cov[X,X′] (1.47)

It is required for the definition (1.46) of the n-dimensional normal density to make sense that det A ≠ 0, that is, that Cov[X,X′] is a regular matrix. In that case the n-dimensional normal distribution is characterized as being regular. However, a probability distribution can be obtained in Rⁿ by a limit passage where Cov[X,X′] approaches a singular matrix. Then the entire probability mass in the limit becomes situated on a subspace of Rⁿ the dimension of which is equal to the rank r of Cov[X,X′] (as obtained in the limit). Then with probability 1 exactly n − r of the random variables in X can be expressed as linear functions of the remaining r random variables. These r random variables jointly have a regular r-dimensional normal distribution. In any higher dimension than r the distribution is called singular normal.

Mathematical models of physical phenomena of engineering interest quite often contain nonlinear functions of random vectors. A way to make such models accessible to analytical solution methods is to replace them by linear models that in some sense are approximations to the nonlinear models. Phenomena that strongly depend on the nonlinear nature of the models are lost in this way, of course, but other properties such as the behavior of robust averages may be sufficiently well represented for engineering purposes by the approximating linear model.

In the sense of least mean square deviation the nonlinear function F (X) is best approximated by thelinear regression

E[F(X)|X] = E[F(X)] + Cov[F(X),X′] Cov[X,X′]⁻¹ (X − E[X]) (1.48)


It is seen that the calculation of the coefficients in this linear regression requires knowledge of the distribution of X. If X is Gaussian, it is convenient to use the representation X = AU + E[X] with U standardized Gaussian and A satisfying (1.47). Then F(X) = F(AU + E[X]) may be written as G(U), and according to (1.27) we have

E [F (X) | X] = E [G(U) | U] = E [G(U)]+E [G(U)U′]U (1.49)

The ith element of E [G(U)U′] becomes

E[G(U)Ui] = E[E[G(U)|Ui]Ui] = ∫_{−∞}^{∞} E[G(U)|ui] ui ϕ(ui) dui

= [−E[G(U)|ui] ϕ(ui)]_{−∞}^{∞} + ∫_{−∞}^{∞} ϕ(ui) (d/dui) E[G(U)|ui] dui

= ∫_{−∞}^{∞} E[∂G(U)/∂ui | ui] ϕ(ui) dui = E[∂G(U)/∂ui] (1.50)

assuming that E[G(U)|ui]ϕ(ui) → 0 for ui → ±∞, and that ∂G(u)/∂ui exists everywhere except on a set of probability zero. Thus we have the result

E [G(U)U] = E [gradG(U)] (1.51)

where gradG(u) is the gradient of the scalar field G(u). By use of the chain rule of partial differentiation it is easily seen that

gradG(U) = A′gradF(X) (1.52)

such that (1.49) becomes

E [F (X) | X] = E [F (X)]+E [gradF (X)]′(X−E [X]) (1.53)

By comparison with (1.48) it is seen that

Cov[F (X),X] = Cov[X,X′]E [gradF (X)] (1.54)

is valid for any nonsingular Gaussian vector X.
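The identity (1.53)-(1.54) can be checked by Monte Carlo simulation; in the following sketch (Python with NumPy) the nonlinear function F(x1, x2) = x1·x2 + x2³ and the covariance matrix are arbitrary assumptions:

```python
import numpy as np

# Monte Carlo check of (1.53)-(1.54): for a zero mean Gaussian vector X and
# the assumed nonlinear function F(x1,x2) = x1*x2 + x2^3, the covariances
# Cov[F(X), Xi] should equal the elements of Cov[X,X'] E[grad F(X)].
rng = np.random.default_rng(2)
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])                        # assumed covariance of X
X = rng.multivariate_normal([0.0, 0.0], C, size=400_000)
x1, x2 = X[:, 0], X[:, 1]

F = x1 * x2 + x2**3
grad = np.column_stack([x2, x1 + 3.0 * x2**2])    # gradient of F at the samples

lhs = np.array([np.cov(F, x1)[0, 1], np.cov(F, x2)[0, 1]])   # Cov[F(X), X]
rhs = C @ grad.mean(axis=0)                                  # Cov[X,X'] E[grad F]
```

For this particular choice the exact common value of both sides is (1.8, 3.0), which the simulation reproduces within sampling error.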

Example. In random vibration engineering it is often relevant to study an n degree of freedom damped mass system with nonlinear restoring forces subjected to Gaussian force process excitation. The matrix equation of motion given in terms of the response X then reads

MẌ + DẊ + F(X) = Y (1.55)

where M and D are mass and damping matrices, respectively, F(X) is the vector of nonlinear restoring forces, and Y is the Gaussian vector process of force excitation. In general it is difficult to solve (1.55) to obtain the probabilistic description of the response vector process X(t). However, if (1.55) is replaced by the linear differential equation

MẌ + DẊ + KX = Y (1.56)

where K is a suitably chosen stiffness matrix (that may be time dependent), then X becomes Gaussian. One type of so-called ("equivalent") stochastic linearization then assumes that X is Gaussian and on this basis replaces the nonlinear restoring force F(X) (of zero mean) by the linear regression of F(X) on X. Thus, according to (1.53), K is defined as

K = E[gradF(X)] = {E[∂Fi(X)/∂xj]}, i, j = 1, ..., n (1.57)

(i = row number, j = column number). The stiffness matrix is then determined iteratively by finding the parameters of the Gaussian distribution of X from the equations obtained from (1.56) with some initial guess of K substituted, next using these parameters to calculate a new K from (1.57), and thereafter proceeding iteratively in the same way until the changes become sufficiently small. This particular procedure is called stochastic linearization using Gaussian closure. How much truth there is in the word "equivalent" often used in connection with this technique can only be investigated for cases where exact solutions are known, or by comparisons with empirical results obtained by simulation [14].
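For a single degree of freedom the iteration can be sketched as follows (Python). The Duffing-type restoring force F(x) = k0x + εx³, the white noise excitation, and all parameter values are assumptions made for the illustration; the standard stationary variance πS0/(dk) of the linearized oscillator under white noise of constant two-sided spectral density S0 is used:

```python
import numpy as np

# Stochastic linearization with Gaussian closure for a scalar Duffing-type
# oscillator  m x'' + d x' + k0 x + eps x^3 = W(t),  W(t) white noise with
# two-sided spectral density S0. For the linearized oscillator the stationary
# response variance is pi*S0/(d*k); Gaussian closure gives the equivalent
# stiffness k = k0 + 3*eps*Var[X] from (1.57). Parameter values assumed.
m, d, k0, eps, S0 = 1.0, 0.2, 1.0, 0.5, 0.05

k = k0                                  # initial guess for the stiffness
for _ in range(200):
    var_x = np.pi * S0 / (d * k)        # stationary variance, linear system
    k_new = k0 + 3.0 * eps * var_x      # E[dF/dx] under the Gaussian assumption
    if abs(k_new - k) < 1e-12:
        break
    k = k_new
```

At convergence k solves the fixed point equation k² − k0·k − 3επS0/d = 0, i.e. the equivalent stiffness is stiffer than k0 because of the hardening cubic term.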

Now let us define a random vector X = (X1, ..., Xn) recursively by the linear equations

X1 = δ1U1
X2 = b21X1 + δ2U2
···
Xn = bn1X1 + ... + bn,n−1Xn−1 + δnUn (1.58)

where δ1, ...,δn > 0 and b21, ...,bn,n−1 are constant coefficients. By solution we get

X = (I−B)−1∆U (1.59)

where I is the unit matrix, ∆ = diag(δ1, ..., δn) is a diagonal matrix, and

B = [0 0 ··· 0 0; b21 0 ··· 0 0; ··· ; bn1 bn2 ··· bn,n−1 0] (1.60)

Clearly I−B is regular so that the solution (1.59) exists. Thus X has a normal distribution with expectationvector zero and covariance matrix

Cov[X,X′] = (I−B)−1∆[(I−B)−1∆]′ (1.61)

for which the inverse is

Cov[X,X′]−1 = [∆−1(I−B)]′∆−1(I−B) (1.62)

where ∆⁻¹(I−B) as well as (I−B)⁻¹∆ are lower triangular matrices, that is, all elements above the diagonal are zero.

If the covariance matrix Cov[X,X′] is given and is regular, a lower triangular matrix A can be uniquely determined by Choleski decomposition of Cov[X,X′] such that (1.47) is satisfied. Thus (I−B)⁻¹∆ and its inverse ∆⁻¹(I−B) can be determined uniquely by Choleski decomposition of Cov[X,X′] and Cov[X,X′]⁻¹, respectively, implying that any Gaussian random vector X of zero expectation and regular covariance matrix can be written uniquely as in (1.58).
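A small sketch of the Choleski construction (Python with NumPy; the covariance matrix is an assumed example):

```python
import numpy as np

# Choleski decomposition of a given covariance matrix yields the recursive
# representation (1.58): Cov[X,X'] = A A' with A lower triangular, X = A U,
# the residual standard deviations delta_i = A_ii, and, from (1.59),
# B = I - diag(delta) A^{-1}. The covariance matrix is an assumed example.
C = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])

A = np.linalg.cholesky(C)          # lower triangular with A A' = C, cf. (1.47)
delta = np.diag(A)                 # residual standard deviations delta_i
B = np.eye(3) - np.diag(delta) @ np.linalg.inv(A)   # strictly lower triangular
```

Here `B[1, 0]` reproduces the 1-dimensional regression coefficient Cov[X1,X2]/Var[X1], and `delta[1]**2` the residual variance of X2 given X1.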


Obviously the linear regression

E[Xi|X1, ..., Xi−1] = bi1X1 + ... + bi,i−1Xi−1 (1.63)

coincides with the conditional expectation E [Xi |X1, ..., Xi−1], and the residual variance

Var[Xi − E[Xi|X1, ..., Xi−1]] = δi² (1.64)

coincides with the conditional variance Var[Xi|X1, ..., Xi−1]. However, writing Xk = (X1, ..., Xk) we more generally have that

Ê[Xi|Xk] = E[Xi|Xk] (1.65)

(with Ê denoting the linear regression and E[·|·] the conditional expectation), and

Cov[Xi, Xj|Xk] = Cov[Xi − Ê[Xi|Xk], Xj − Ê[Xj|Xk]] (1.66)

for any i, j, k ∈ {1, ..., n}. The proofs of (1.65) and (1.66) are as follows.

The coincidence of the conditional expectation of Xi given Xk and the linear regression of Xi on Xk follows from the equations (1.58) using the one-to-one correspondence between (X1, ..., Xk) and (U1, ..., Uk). Thus we may replace the conditioning on Xk by conditioning on Uk = (U1, ..., Uk). Since

Ê[Ui|Uk] = E[Ui|Uk] = Ui for i ≤ k, 0 for i > k (1.67)

it follows that (1.58) gives the same equations for the conditional expectations and the linear regressions. The unique solution is given by (1.59) with U replaced by E[U|Uk].

Similarly the conditional covariances Cov[Xi, Xj|Xk] can be obtained from (1.58) by replacing Xi and Ui for i ∈ {1, ..., n} by Cov[Xi, Xj|Xk] and Cov[Ui, Xj|Uk], respectively, for each j ∈ {1, ..., n}. The solution is obtained from (1.59) as

Cov[X,X′|Xk ] = (I−B)−1∆Cov[U,X′|Uk ] (1.68)

where, according to (1.59),

Cov[U,X′|Uk ] = Cov[U,U′|Uk ]∆[(I−B)−1]′ (1.69)

and

Cov[Ui, Uj | Uk] = { 1 for i = j ∉ {1, ..., k}; 0 otherwise } (1.70)

It is seen that the conditional covariance does not depend on the value of Xk implying that

E [Cov[X,X′|Xk ]] = Cov[X,X′|Xk ] (1.71)

The residual covariance matrix is

Cov[X − Ê[X|Xk], X′ − Ê[X′|Xk]] = Cov[X − E[X|Xk], X′ − E[X′|Xk]]

= E[Cov[X, X′ | Xk]] + Cov[E[X − E[X|Xk] | Xk], E[X′ − E[X′|Xk] | Xk]]

= Cov[X, X′ | Xk] (1.72)

Page 15: Ditlevsen - MathematicalModels for Structural Reliability Analysis

Dimension Reduction and Discretization 15

which proves (1.66). Thus we have the important result that the conditional covariance matrix is identical to the residual covariance matrix.

For n = 2 and Var[X1] = Var[X2] = 1 we have

Cov[X,X′] = [ 1 ρ ; ρ 1 ] = [ 1 0 ; ρ √(1−ρ²) ] [ 1 ρ ; 0 √(1−ρ²) ] (1.73)

so that X1 = U1, X2 = ρX1 + δ2U2 where δ2 = √(1−ρ²). The joint density of (X1, X2) is

fX1,X2(x1, x2) = fX2(x2 | X1 = x1) fX1(x1)

= (1/δ2) ϕ((x2 − ρx1)/δ2) ϕ(x1) = (1/(2πδ2)) exp{−(1/(2δ2²))[(x2 − ρx1)² + (1−ρ²)x1²]}

= (1/(2π√(1−ρ²))) exp[−(1/(2(1−ρ²)))(x1² − 2ρx1x2 + x2²)] (1.74)

This density function is by standard notation denoted ϕ2(x1, x2; ρ).

For n = 3 and Var[X1] = Var[X2] = Var[X3] = 1 we have

Cov[X,X′] = [ 1 ρ12 ρ13 ; ρ12 1 ρ23 ; ρ13 ρ23 1 ] = A A′ (1.75)

with

A = [ 1 0 0 ; ρ12 δ2 0 ; ρ13 (ρ23 − ρ12ρ13)/√(1−ρ12²) δ3 ] (1.76)

where

δ2 = √(1−ρ12²) (1.77)

δ3 = √[(1 − ρ12² − ρ13² − ρ23² + 2ρ12ρ13ρ23)/(1−ρ12²)] (1.78)

so that

X1 = U1

X2 = ρ12U1 + δ2U2 = ρ12X1 + δ2U2

X3 = ρ13U1 + [(ρ23 − ρ12ρ13)/√(1−ρ12²)]U2 + δ3U3

= [(ρ13 − ρ12ρ23)/(1−ρ12²)]X1 + [(ρ23 − ρ12ρ13)/(1−ρ12²)]X2 + δ3U3 (1.79)
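The factorization (1.75)-(1.78) is easy to verify numerically; the correlation coefficients below are illustrative values only:

```python
import math

# Illustrative correlation coefficients (any values giving a positive
# definite correlation matrix will do).
r12, r13, r23 = 0.3, 0.5, 0.4

d2 = math.sqrt(1 - r12**2)                                  # (1.77)
d3 = math.sqrt((1 - r12**2 - r13**2 - r23**2
                + 2*r12*r13*r23) / (1 - r12**2))            # (1.78)

A = [[1.0, 0.0, 0.0],                                       # (1.76)
     [r12, d2, 0.0],
     [r13, (r23 - r12*r13) / d2, d3]]

# A A' must reproduce the correlation matrix of (1.75).
AAt = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)]
       for i in range(3)]
target = [[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]]
assert all(abs(AAt[i][j] - target[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```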


The joint density of (X1, X2, X3) is

fX1,X2,X3(x1, x2, x3) = fX3(x3 | X1 = x1, X2 = x2) fX2(x2 | X1 = x1) fX1(x1)

= (1/δ3) ϕ((x3 − b31x1 − b32x2)/δ3) (1/δ2) ϕ((x2 − b21x1)/δ2) ϕ(x1)

= (1/(2π))^(3/2) (1/(δ2δ3)) exp{−(1/(2(δ2δ3)²))[(1−ρ23²)x1² + (1−ρ13²)x2² + (1−ρ12²)x3²

− 2(ρ12 − ρ13ρ23)x1x2 − 2(ρ13 − ρ12ρ23)x1x3 − 2(ρ23 − ρ12ρ13)x2x3]} (1.80)

Finally, an n-dimensional vector X with expectation E[X] = µ is said to be normal or Gaussian, if Y = X − µ is Gaussian.

1.4 Non-Gaussian distributions and linear regression

In the previous section it is shown that the multi-dimensional normal distribution has the property that the conditional expectation of any normal vector Y given any normal vector X coincides with the linear regression of Y on X provided the joint distribution of (X,Y) is normal. Non-Gaussian distributions generally do not have this property. It is easy to see that if the conditional expectation

E[Y|X] = ∫Rn y fY(y|X) dy (1.81)

is linear in X, then E[Y|X] = Ê[Y|X]. This follows from the fact that the expectation of any random variable is equal to the value relative to which the mean square deviation of the random variable is smallest. However, the conditional covariance matrix Cov[Y,Y′|X] may vary with X and thus be different from the residual covariance matrix Cov[Y − Ê[Y|X], Y′ − Ê[Y′|X]].

Important classes of m-dimensional non-Gaussian distributions can be defined by suitable non-linear transformations of a normal vector X. The simplest and most often used type of non-linear transformation maps each element Xi of X into Yi = gi(Xi), where g1, ..., gm are non-linear increasing functions of one variable. This type of m-dimensional transformation may conveniently be denoted as an increasing marginal transformation.

Let g = (g1, g2) be an increasing marginal transformation of the normal vector (X,Y). Obviously the linear regression Ê[g2(Y) | g1(X)] is generally not simply related to the conditional expectation E[Y|X] = Ê[Y|X] or to the conditional expectation E[g2(Y) | g1(X)] (except, of course, if g is linear). However, it can be generally stated that g2(E[Y|X]) is the marginal median point of the conditional density of g2(Y) given g1(X) (or given X = g1−1[g1(X)]), that is, any given element of g2(Y) takes a value below or above the corresponding element value of g2(E[Y|X]) with probability 1/2. Generally the point g2(E[Y|X]) is simpler to calculate than the conditional expectation point E[g2(Y) | g1(X)].

Example In practice the most frequently used increasing marginal transformation is the exponential transformation gi(x) = e^x, i = 1, ..., n: Define logX ≡ (logX1, ..., logXn) as a normal vector. Then X is said to have a lognormal distribution. The relations between E[logX], Cov[logX, logX′] and E[X], Cov[X,X′] are

E[X] = { exp(E[logXi] + (1/2)Var[logXi]) } i=1,...,n (1.82)

E[logX] = { logE[Xi] − (1/2)log(1 + VXi²) } i=1,...,n (1.83)

Cov[X,X′] = { [exp(Cov[logXi, logXj]) − 1] E[Xi]E[Xj] } i,j=1,...,n (1.84)

Cov[logX, logX′] = { log(1 + Cov[Xi, Xj]/(E[Xi]E[Xj])) } i,j=1,...,n (1.85)

Let (X,Y) be a lognormal vector, X of dimension m, Y of dimension n. Then the conditional distribution of Y given X is n-dimensional lognormal. According to (1.82) to (1.84) the conditional expectation of Yi given X becomes

E[Yi | X] = E[Yi | logX]

= exp(E[logYi | logX] + (1/2)Var[logYi | logX])

= exp(E[logYi | logX] + (1/2)Var[logYi − Ê[logYi | logX]])

= exp{ E[logYi] + Cov[logYi, logX′] Cov[logX, logX′]−1 (logX − E[logX])

+ (1/2)(Var[logYi] − Cov[logYi, logX′] Cov[logX, logX′]−1 Cov[logX, logYi]) } (1.86)

Thus E[Yi | X] depends nonlinearly on X and is therefore different from Ê[Yi | X]. If the variance term is neglected we get the marginal median point exp(E[logYi | logX]).

The conditional covariance between Yi and Y j given X becomes

Cov[Yi, Yj | X] = Cov[Yi, Yj | logX] = { exp(Cov[logYi, logYj | logX]) − 1 } E[Yi | X] E[Yj | X] (1.87)

where Cov[logYi, logYj | logX] is equal to the covariance between the linear regression residuals logYi − Ê[logYi | logX] and logYj − Ê[logYj | logX], and therefore does not depend on X. However, the conditional expectation factors in (1.87) depend on X as shown in (1.86).

The linear regression Ê[Y|X] plays no particularly interesting role in the lognormal distribution except that Ê[Y|X] is the linear function of X that best approximates the conditional expectation E[Y|X] in the sense of minimizing the expected squared difference E[(E[Y|X] − Ê[Y|X])²].

1.5 Marginally transformed Gaussian processes and fields

As stated in the introduction section the word "field" will in the following be used as short for "random process" or "random field". A field X(t) is said to be Gaussian if the random vector corresponding to any finite subset t1, ..., tn of the index set I is a Gaussian vector. A Gaussian field is completely defined by the expectation or mean value function µ(t) ≡ E[X(t)] and the covariance function c(s,t) ≡ Cov[X(s), X(t)]. The last function must be nonnegative definite:

∀t1, ..., tn ∈ I : c(ti, tj) = c(tj, ti), and ∀x1, ..., xn ∈ R : Σ_{i=1}^{n} Σ_{j=1}^{n} c(ti, tj) xi xj ≥ 0 (1.88)


Given that I ⊂ Rq for some q, a field is said to be homogeneous (or stationary, if the word "field" stands for "random process") within I if the joint distribution of (X(t1), ..., X(tn)) is identical to the joint distribution of (X(t1+τ), ..., X(tn+τ)) for any t1, ..., tn ∈ I and any τ such that t1+τ, ..., tn+τ ∈ I.

A Gaussian field is homogeneous if and only if µ(t) is a constant and the covariance function c(s,t) is a function solely of the difference t − s. If this condition is satisfied for a non-Gaussian field the field is not necessarily homogeneous, but it is then said to be weakly homogeneous or homogeneous up to the second order moments.

Let Y(t) be a Gaussian field with zero mean value function E[Y(t)] ≡ 0, unit variance function Var[Y(t)] ≡ 1, and correlation function ρ(s,t) = ρ[Y(s), Y(t)]. Moreover, let g(x,t) be some function of x ∈ R and t ∈ I for which

∫−∞^∞ g(x,t) ϕ(x) dx ≡ 0, ∫−∞^∞ g(x,t)² ϕ(x) dx ≡ 1 (1.89)

and let µ(t ) and σ(t ) > 0 be given functions of t ∈ I . Then the field

X (t ) =µ(t )+ g [Y (t ), t ]σ(t ) (1.90)

has mean value function µ(t ), variance function σ(t )2, and correlation function

ρ[X(s), X(t)] = ∫−∞^∞ ∫−∞^∞ g(x,s) g(y,t) ϕ2[x, y; ρ(s,t)] dx dy (1.91)

The field X(t) is said to be obtained by a marginal transformation of the field Y(t), and it is Gaussian if g(x,t) is a linear function in x. According to (1.89) this linear function then must be g(x,t) = x, and (1.91) gives ρ[X(s), X(t)] = ρ[Y(s), Y(t)] ≡ ρ(s,t). If µ(t), σ(t) as well as the marginal transformation are independent of t ∈ I, and Y(t) is homogeneous, then X(t) is also homogeneous, and (1.91) simplifies to

r(t) = ∫−∞^∞ ∫−∞^∞ g(x) g(y) ϕ2[x, y; ρ(t)] dx dy (1.92)

where r(t−s) = ρ[X(s), X(t)] and ρ(t−s) = ρ[Y(s), Y(t)].

We may now quite naturally ask whether it is always possible to determine the correlation function ρ(t) of the Gaussian process such that a given nonnegative definite function r(t) is the correlation function for the homogeneous field X(t) = g[Y(t)]. The answer is negative. By the right side of (1.92) the set of nonnegative definite functions is in general mapped into a genuine subset of the set of nonnegative definite functions. In other words, it is not granted that the integral equation (1.92) for a given nonnegative definite function r(t) has a solution ρ(t) in the set of nonnegative definite functions. If a nonnegative definite solution exists, the field X(t) = g[Y(t)] is well defined with zero mean value, unit variance and given correlation function r(t). This type of homogeneous non-Gaussian field is called a zero mean, unit variance homogeneous Nataf field [15, 16].

Example Let I =R and

X (t ) = exp[a +bY (t )] (1.93)

such that for any given t ∈ R, X(t) has a lognormal distribution. The mean µ and the variance σ² are given by

µ = ∫−∞^∞ e^(a+bx) ϕ(x) dx = e^(a+b²/2) (1.94)


(σ/µ)² = (1/µ²) ∫−∞^∞ e^(2(a+bx)) ϕ(x) dx − 1 = e^(b²) − 1 (1.95)

consistent with (1.82) and (1.84). Thus

a = log(µ/√(1+V²)) (1.96)

b = √(log(1+V²)) (1.97)

where V =σ/µ is the coefficient of variation of X (t ). Then

g(x) = (1/V)[ e^(x√(log(1+V²))) / √(1+V²) − 1 ] (1.98)

Substitution into (1.92) leads to a solvable integral. We get

r(t) = (1/V²)[ (1+V²)^ρ(t) − 1 ] (1.99)

which by solution with respect to ρ(t ) gives

ρ(t) = log[1 + V²r(t)] / log(1+V²) (1.100)

consistent with (1.84) and (1.85). It can be shown by examples that there exist positive definite functions r(t) for which (1.100) gives functions ρ(t) that are not positive definite.
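The mapping (1.100) and its inverse (1.99) can be sketched pointwise; V = 1 is an illustrative coefficient of variation:

```python
import math

# (1.100): Gaussian correlation rho from the target lognormal correlation r.
def rho_from_r(r, V):
    return math.log(1 + V*V*r) / math.log(1 + V*V)

V = 1.0
for r in (-0.4, 0.0, 0.3, 0.9):
    rho = rho_from_r(r, V)
    r_back = ((1 + V*V)**rho - 1) / (V*V)      # (1.99) inverts (1.100)
    assert abs(r_back - r) < 1e-12

# Note that 1 + V^2 r(t) must be positive, so correlation values below
# -1/V^2 cannot even be inserted in (1.100); and even where the mapping is
# defined, checking that the resulting rho(t) is nonnegative definite
# requires a separate test, as discussed in the text.
```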

In scientific or engineering applications it is usually so that the field X(t) has a physical interpretation and that X(t) must satisfy some physical or geometrical conditions. For example, it may be that X(t) for physical reasons should be positive for all t. Often a model with X(t) being Gaussian is applicable in spite of the physical condition of nonnegativity, simply because X(t) may have so small a coefficient of variation that the probability of getting negative values of X(t) is small as compared to the calculated probability of any event that is relevant for the engineering application. However, for larger coefficients of variation of X(t) the Gaussian assumption may be in too gross conflict with the nonnegativity condition to be applicable. Then often the lognormal field or some other nonnegative marginal transformation of a Gaussian field is adopted as X(t).

If the model is obtained solely by fitting to data there will be no inconsistency in the covariance function modelling if all data are transformed inversely to data that are assumed to comply with a Gaussian field model. However, in some cases it may be so that X(t) satisfies some physical equation. For example, if X(t) models the normal pressure on the wall of a vertical cylindrical silo with horizontally ideally smooth wall, the horizontal equilibrium of the silo medium requires that X(t) satisfies three global equilibrium equations that are linear in X(t). These equations put restrictions on the choice of the covariance function of X(t) among the nonnegative definite functions. Therefore, starting the modelling by obeying these equilibrium conditions and thereafter assuming that X(t) is a homogeneous lognormal field, say, requires careful consideration of the nonnegative definiteness of ρ(t) in (1.100) when a modelling candidate for r(t) has been chosen. The nonlinear relation between r(t) and ρ(t) usually requires some corrective steps to be taken [17].


1.6 Discretized fields defined by linear regression on a finite set of field values

The linear regression of a field X(t) on a finite set Xn = (X(t1), ..., X(tn)) of random variables of the field with regular covariance matrix Cov[Xn, Xn′] is

Ê[X(t) | Xn] = µ(t) + Cov[X(t), Xn′] Cov[Xn, Xn′]−1 (Xn − µn) (1.101)

where µ(t) = E[X(t)] and µn = (µ(t1), ..., µ(tn)). The linear regression defines a field X(t | t1, ..., tn) ≡ Ê[X(t) | Xn] that may be said to have a dimension of randomness equal to n. Since

X (ti |t1, ..., tn) = X (ti ) (1.102)

the field X(t | t1, ..., tn) interpolates between the values X(t1), ..., X(tn) of the field X(t). The covariance functions of the row matrix Cov[X(t), Xn′] play the role of deterministic interpolation functions (shape functions). The mean value function is identical for the two fields. The covariance function is

Cov[Ê[X(s) | Xn], Ê[X(t) | Xn]] = Cov[X(s), Xn′] Cov[Xn, Xn′]−1 Cov[Xn, X(t)] (1.103)

which added to the residual covariance function, see (1.2), gives the covariance function Cov[X(s), X(t)] of the field X(t).

In numerical calculations with fields it is most often necessary to discretize the fields in the sense of replacing fields of infinite dimension of randomness by approximating fields of finite dimension of randomness. This replacement is called random field discretization. Which type of random field discretization is most effective and operationally convenient depends on the considered problem and the related error measure.

As mentioned in the introduction section the replacement of X(t) by Ê[X(t) | Xn] is sometimes called kriging. The error of the calculation output comes from neglecting the residual field X(t) − Ê[X(t) | Xn], an error that in some problems can be crudely evaluated at the output level by repeated calculations using different dimensions of randomness of the discretized field. Field discretizations different from the kriging method, but all based on linear regression in one form or another, will be treated in several of the following sections.
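A minimal numerical sketch of the kriging field (1.101), here with an assumed zero mean value function and an assumed exponential covariance function; the observation points and values are illustrative:

```python
import numpy as np

# Assumed covariance function c(s, t) of a zero-mean field.
def c(s, t):
    return np.exp(-np.abs(s - t))

t_obs = np.array([0.0, 1.0, 2.5])          # discretization points t1, ..., tn
x_obs = np.array([0.3, -0.5, 1.1])         # observed field values Xn

C = c(t_obs[:, None], t_obs[None, :])      # Cov[Xn, Xn'] (regular)
w = np.linalg.solve(C, x_obs)              # Cov[Xn, Xn']^{-1} Xn (mu = 0)

def regression_field(t):
    """E^[X(t) | Xn]: the linear regression (kriging) interpolator."""
    return c(t, t_obs) @ w

# The interpolation property (1.102): the regression reproduces the data.
for ti, xi in zip(t_obs, x_obs):
    assert abs(regression_field(ti) - xi) < 1e-12
```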

The linearity of Ê[X(t) | Xn] with respect to X(t) directly shows that if t is a parameter for which we can talk about differentiability or integrability of X(t), and X(t) has such differentiability or integrability properties, then

Ê[dX(t) | Xn] = dÊ[X(t) | Xn] (1.104)

and

Ê[∫Ω X(t) dt | Xn] = ∫Ω Ê[X(t) | Xn] dt (1.105)

where the integration is over any suitably regular set Ω ⊂ I.

1.7 Discretization defined by linear regression on a finite set of linear functionals

In stochastic mechanics applications of random fields the fields often appear as integrands in the solutions to the relevant equations. For example, several types of load intensities acting on a structure can conveniently be modelled as random fields. The internal stresses and the displacements caused by the acting load are functions of weighted integrals (i.e. linear functionals) of the load intensity over the structure. Another example concerns constitutive relations that contain spatially varying parameters modelled as outcomes of random fields. Macroscopic effects of such variation are generally determined by weighted integrals of the constitutive parameters over the structure. Numerical approximations to the solutions may therefore in both examples be improved with respect to accuracy if the field discretizations are made such that a selected set of relevant linear functionals is not affected by the discretization. Linear regression on the set of linear functionals serves this purpose [18].

Let F1, ..., Fn be n different linear functionals defined on the field X(t), and let X̂(t) be the field defined by the linear regression

X̂(t) ≡ Ê[X(t) | FX] (1.106)

where FX = (F1X, ..., FnX). The linearity of the linear regression ensures that the linear functionals are invariant to the replacement of X(t) by X̂(t):

FX̂ = Ê[FX | FX] = FX (1.107)

The linear regression reads:

Ê[X(t) | FX] = E[X(t)] + Cov[X(t), FX′] Cov[FX, FX′]−1 (FX − E[FX])

= E[X(t)] + Fv Cov[X(t), X(v)]′ (Fu Fv′ Cov[X(u), X(v)])−1 (FX − E[FX]) (1.108)

where the indices u and v indicate that F operates on the functions of u and v, respectively.

Next consider any linear functional G defined both on X(t) and X̂(t). Then the discretization error with respect to G is

GX − GX̂ = GX − Ê[GX | FX] (1.109)

that is, the discretization error is the residual corresponding to the linear regression of GX on FX. Thus the variance of the discretization error is

Var[GX − GX̂] = Var[GX] − Cov[GX, FX′] (Fu Fv′ Cov[X(u), X(v)])−1 Cov[FX, GX]

= GsGt { Cov[X(s), X(t)] − Fu Cov[X(s), X(u)]′ (Fu Fv′ Cov[X(u), X(v)])−1 Fv Cov[X(t), X(v)] } (1.110)

where the indices s and t indicate that G operates on the functions of s and t, respectively. Thus the residual variance is obtained by application of the bilinear functional GsGt to the residual covariance function corresponding to the linear regression of the field X on the vector of linear functionals FX.
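On a grid, the regression on linear functionals can be sketched with the functionals represented as quadrature weight rows; the covariance model and the two integral functionals below are illustrative assumptions:

```python
import numpy as np

# Field sampled on a grid with an assumed exponential covariance model.
n = 50
t = np.linspace(0.0, 1.0, n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]))   # Cov[X(u), X(v)] on the grid

w = np.full(n, 1.0/n)                            # quadrature weights
F = np.vstack([w,                                # F1 X ~ integral of X(t)
               w*t])                             # F2 X ~ integral of t X(t)

rng = np.random.default_rng(1)
X = np.linalg.cholesky(cov + 1e-10*np.eye(n)) @ rng.standard_normal(n)

# Linear regression (1.108) with zero mean value function:
K = F @ cov @ F.T                                # Fu Fv' Cov[X(u), X(v)]
Xhat = cov @ F.T @ np.linalg.solve(K, F @ X)

# Invariance (1.107): the functionals are unaffected by the discretization.
assert np.allclose(F @ Xhat, F @ X)
```

The invariance holds identically, since F (Cov F′ K⁻¹ F X) = K K⁻¹ F X = F X.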

Example Consider a beam resting on a linear elastic bed of stiffness S(x) at the point x, and subdivide the length of the beam into intervals [x0, x1], ..., ]xm−1, xm]. A displacement function u(x) in the form of a polynomial of nth degree generates a reaction load intensity field that over the interval ]xi−1, xi] has the following resulting vertical reaction and moment with respect to x = 0:

∫xi−1^xi u(ξ)S(ξ) dξ = Σ_{j=0}^{n} aj ∫xi−1^xi ξ^j S(ξ) dξ (1.111)


∫xi−1^xi ξu(ξ)S(ξ) dξ = Σ_{j=0}^{n} aj ∫xi−1^xi ξ^(j+1) S(ξ) dξ (1.112)

respectively. With S(x) being a field these reactions and moments are not affected by replacing S(x) by the discretized field Ŝ(x) defined as the linear regression of S(x) on the m(n+2) linear functionals

Fij S = ∫xi−1^xi ξ^j S(ξ) dξ; i = 1, ..., m; j = 0, ..., n+1 (1.113)

for all displacements that vary as a polynomial of at most nth degree (or just as a function that within each of the subintervals varies as a polynomial of at most nth degree).

Example Assume that the flexibility (compliance) of a linear elastic Euler-Bernoulli beam is modelled as a field C(x), and consider a straight beam element over the interval [−L, L] loaded externally from the neighbour elements at the end points and by a distributed load varying as a polynomial of (n−2)th degree along the beam axis and acting orthogonal to the axis. Thus the bending moment M(x) in the beam varies as an nth degree polynomial. It then follows from the principle of virtual work that the total angular rotation over [−L, L] and the displacement of the one end point orthogonal to the tangent at the other end point have the form

∫−L^L C(x)M(x) dx = Σ_{j=0}^{n} aj ∫−L^L x^j C(x) dx (1.114)

∫−L^L xC(x)M(x) dx = Σ_{j=0}^{n} aj ∫−L^L x^(j+1) C(x) dx (1.115)

respectively. Thus the macro flexibility properties of the beam element ranging over [−L, L] are invariant to the replacement of C(x) by Ĉ(x) defined as the linear regression of C(x) on the n+2 linear functionals

Fj C = ∫−L^L x^j C(x) dx; j = 0, ..., n+1 (1.116)

Example Consider a plane straight or curved beam with the beam axis completely defined in terms of the natural equation representation s, ϕ(s), where s is the arc length along the beam axis and ϕ(s) is the angle between the tangent at s and the tangent at the origin of s. The beam is subjected to the random load intensity fields p(s) and q(s) acting in the plane of the beam orthogonal and tangential to the beam axis, respectively, Fig. 1.


Figure 1. Top left: Load fields p(s) and q(s). Top right: Indirect loading on simply supported beams. Bottom left: Support forces from simple beams. Bottom right: Discretized replacement load fields p̂(s) and q̂(s) defined as the linear regressions of p(s) and q(s) on the support forces.

First let us assume that the beam structure is statically determinate. If we let the load fields act indirectly through a row of beams that are simply supported on the given beam, the internal forces in the beam are the same at the supports of the simply supported beams as in the directly loaded beam. Therefore a representation of the random load fields by the corresponding orthogonal and tangential random support forces of the simple beams is sufficient for the stochastic representation of the internal forces at these discretization points. However, if the beam is statically indeterminate the redundants are functionals of the directly acting load fields, so that the internal forces at the discretization points do not become error free if the directly acting load fields are replaced by the field of the concentrated support forces. This effect becomes more dominant for coarser discretization. An illustrative example is an Euler-Bernoulli beam shaped as a circular ring with a uniform load field acting orthogonal to the beam axis. This directly acting load field causes a normal force but no bending moments in the beam. However, any system of indirect loading will generate non-zero bending moments that are at their most extreme if the discretization is chosen so coarse as to be defined by only two diametrically opposite points, Fig. 2. Obviously this undesirable discretization effect is counteracted by reintroducing a directly acting load field that has some similarity with the given load field. This is achieved by using the linear regression of the given load field on the set of statically equivalent support forces as the replacement load field. Thus


the method of regression on linear functionals in combination with the principle of indirect loading is a rational tool for stochastic finite element discretization of random load fields on beam structures [19].

Figure 2. Left: Uniformly loaded circular ring carrying the load solely by tension. Right: The most coarse discretization of the load field into the support forces of two simple beams. The ring ovalizes due to bending. The linear regression p̂ = Ê[p | pD] = p reestablishes the uniform load intensity.

At a discretization point let X = X+ + X− and Y = Y+ + Y− be the orthogonal and tangential support forces, respectively, from the two adjacent simply supported imaginary beams assumed to carry the direct load fields over to the given beam structure. Let the origin of the arc length parameter s be at the discretization point. The support forces coming from the imaginary simply supported beam of span L+ on the positive side of the origin and from the imaginary simply supported beam of span L− on the negative side of the origin are X+, Y+ and X−, Y−, respectively. The four support forces are linear functionals of the load fields p(s) and q(s), and they depend solely on the natural equation representation s, ϕ(s) in the interval from −L− to L+. The corresponding 8 influence functions I+np(s), I+nq(s), I+tp(s), I+tq(s), I−np(s), I−nq(s), I−tp(s), I−tq(s) are derived from elementary static analysis.

The principle of indirect loading suggests that it may be sufficient to let the replacement load fields p̂(s) and q̂(s) be the linear regressions on the resulting support forces X = X+ + X− and Y = Y+ + Y− at all discretization points instead of being the linear regressions on the individual forces X+, X−, Y+, Y− at these points. Thereby the number of discretization variables is reduced to half at the expense of an increased residual variance. Clearly, in the limit where the load fields on the positive side of the origin are stochastically independent of the load fields on the negative side of the origin, p̂(s) and q̂(s) for s > 0 should depend only on X+, Y+ and not on X−, Y−, that is, if p̂(s) and q̂(s) are defined as the linear regressions on X and Y, then X− and Y− add irrelevant random contributions to the replacement fields to be applied for s > 0.

1.8 Poisson Load Field Example

The material of this section is mathematically very technical and it is not used subsequently. The section serves to illustrate that the method of linear regression on linear functionals is applicable even to replace a load field with "sample curves" of highly singular nature by a discretized almost statically equivalent load field with continuous sample curves [18]. The considered example is of a class for which the direct kriging method is not applicable. However, the example is perfectly relevant for investigations concerning load effects from load fields of intermittent type such as traffic load fields. The field is a homogeneous Poisson stream of single forces acting orthogonal to a straight line axis, Fig. 3. For simplicity the load field is taken to be extended along the entire axis from −∞ to ∞, even though the actual loaded beam has finite length. The mean number of Poisson points per length unit is λ, and the random forces Fi assigned to the sequence of Poisson points are assumed to be mutually independent, independent of the Poisson process, and all distributed as a given random variable F. According to the discretization principle of indirect loading explained in the last example of the previous section, we let the load field be applied to a row of imaginary simply supported beams of span L. For convenience we take L as length unit and choose the origin of the axis at the end point of an interval. Thus the Poisson process on a dimensionless s-axis has the intensity λL.

First we will consider the particular situation where a single force F is placed at random within the interval of the first three units. The nodal forces X1 and X2 at the abscissas 1 and 2 caused by the force F are X1 = F I1(U) and X2 = F I2(U), respectively, where I1(s) = s 1s∈[0,1] + (2−s) 1s∈[1,2] and I2(s) = (s−1) 1s∈[1,2] + (3−s) 1s∈[2,3] are the influence functions, and where U is a random variable which is uniformly distributed between 0 and 3. For a force placed outside the interval from 0 to 3, the nodal forces at 1 and 2 are zero. Clearly X1 and X2 are identically distributed with nth order moment

E[Xi^n] = E[F^n (U^n 1U∈[0,1] + (2−U)^n 1U∈[1,2])] = 2E[F^n] E[U^n 1U∈[0,1]] = (2/(3(n+1))) E[F^n] (1.117)

giving E[Xi] = (1/3)E[F], Var[Xi] = (2/9)E[F²] − (1/9)E[F]² = (2/9)Var[F] + (1/9)E[F]², and

E[X1X2] = E[F²(2−U)(U−1) 1U∈[1,2]] = E[F²(1−U)U 1U∈[0,1]] = (1/18)E[F²] (1.118)

so that the covariance between X1 and X2 becomes

Cov[X1, X2] = 1

18E [F 2]− 1

9E [F ]2 = 1

18(Var[F ]−E [F ]2) (1.119)

If N forces are placed independently and at random in the interval from 0 to 3, the points will be distributed exactly as the points in a realization of a homogeneous Poisson process given that there are N points within the interval. Conditional on N, the mean, variance, and covariance are obtained by applying the factor N on the above results. Using that E[N] = Var[N] for a Poisson distribution, unconditioning gives

E[Xi] = (1/3)E[F]E[N] (1.120)

Var[Xi] = E[Var[Xi | N]] + Var[E[Xi | N]]

= ((2/9)Var[F] + (1/9)E[F]²)E[N] + ((1/3)E[F])² Var[N] = (2/9)E[F²]E[N] (1.121)

Cov[X1, X2] = E[Cov[X1, X2 | N]] + Cov[E[X1 | N], E[X2 | N]]

= ((1/18)E[F²] − (1/9)E[F]²)E[N] + ((1/3)E[F])² Var[N] = (1/18)E[F²]E[N] (1.122)

where E[N] = 3λL. It follows from this that the homogeneous sequence of random nodal forces Xi, i = ..., −2, −1, 0, 1, 2, ... has the mean and the covariances

E [Xi ] = E [F ]λL (1.123)


Cov[Xi, Xj] = (1/6)(δi(j−1) + 4δij + δi(j+1)) λL E[F²] (1.124)

respectively, where δij is Kronecker's delta.
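The moments (1.123)-(1.124) can be checked by simulating the three-unit interval construction directly; λL and the exponential force distribution below are illustrative choices:

```python
import numpy as np

# The interval [0, 3] with nodes at abscissas 1 and 2 is the setting of the
# text; I1 and I2 are the hat influence functions given above.
rng = np.random.default_rng(2)
lamL, reps = 2.0, 50_000
EF, EF2 = 1.0, 2.0                       # F ~ Exp(1): E[F] = 1, E[F^2] = 2

X1 = np.zeros(reps)
X2 = np.zeros(reps)
for r in range(reps):
    N = rng.poisson(3 * lamL)            # number of forces in [0, 3]
    u = rng.uniform(0.0, 3.0, N)         # force positions
    f = rng.exponential(1.0, N)          # force magnitudes
    I1 = np.where(u < 1, u, np.clip(2 - u, 0.0, None))
    I2 = np.clip(np.where(u < 2, u - 1, 3 - u), 0.0, None)
    X1[r] = f @ I1
    X2[r] = f @ I2

assert abs(X1.mean() - lamL * EF) < 0.05               # (1.123)
assert abs(X1.var() - (2/3) * lamL * EF2) < 0.15       # (1.124), i = j
assert abs(np.cov(X1, X2)[0, 1] - (1/6) * lamL * EF2) < 0.15
```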

Figure 3. Poisson traffic load field discretized into an almost statically equivalent continuous and piecewise linear load field defined within any finite interval by a finite non-random number of random variables.

To obtain the linear regression p̂(s) of the Poisson load field p(s) on the sequence of nodal forces Xi, we need to invert the covariance matrix of infinite order, and to calculate the covariances Cov[Xi, p(s)]. Noting that

Σ_{j=−∞}^{∞} (δi(j−1) + 4δij + δi(j+1)) a^|k−j| = { a^(|k−i|−1)(a² + 4a + 1) for k ≠ i; 2a + 4 for k = i } (1.125)

it follows that this expression is proportional to δik only for a = −(√3+2) or a = √3−2. Since a^|i−j| is bounded as |i−j| → ∞ only for the last value of a, for which the constant of proportionality is 2√3, it follows that the inverse covariance matrix is the matrix of infinite order with the element in the ith row and jth column equal to

√3 (λL E[F²])−1 a^(−|i−j|) (1.126)

with a = (√3−2)−1 = −(√3+2).
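The inverse (1.126) can be checked on a large truncated section of the matrix, where the truncation effects are negligible far from the edges:

```python
import numpy as np

# Truncated section of the covariance matrix (1.124), with the scalar
# factor lamL*E[F^2] set to 1 for convenience.
n = 60
C = (4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0
Cinv = np.linalg.inv(C)

a = -(3.0**0.5 + 2.0)
i, j = n // 2, n // 2 + 3               # indices far from the truncation edge
assert abs(Cinv[i, j] - 3.0**0.5 * a**(-abs(i - j))) < 1e-8   # (1.126)
assert abs(Cinv[i, i] - 3.0**0.5) < 1e-8

# First few correlation coefficients a^(-|i-j|) quoted later in the text:
print([round(a**(-k), 3) for k in range(4)])   # [1.0, -0.268, 0.072, -0.019]
```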


The mean load intensity obviously is

E [p(s)] = E [Xi ] (1.127)

For the covariances Cov[Xi , p(s)] we first consider the case N = 1. Then for h > 0 and as h → 0:

E[Xi p(s)] = lim_{h↓0} E[F Ii(U) 1U∈[s,s+h] F/h] = E[F²] lim_{h↓0} (1/h) E[Ii(U) 1U∈[s,s+h]]

= E[F²] lim_{h↓0} (1/h) E[Ii(s) 1U∈[s,s+h]] = (1/3)E[F²] Ii(s) (1.128)

so that Cov[Xi, p(s) | N = 1] = (1/3)E[F²]Ii(s) − (1/9)E[F]². Then

Cov[Xi, p(s) | N] = (3E[F²]Ii(s) − E[F]²) N/9 (1.129)

Unconditioning with respect to N finally gives

E[p(s)] = E[E[p(s) | N]] = E[N] lim_{h↓0} E[(F/h) 1U∈[s,s+h]] = (1/3)E[F]E[N] = λL E[F] (1.130)

Cov[Xi, p(s)] = (1/9)(3E[F²]Ii(s) − E[F]²)E[N] + ((1/3)E[F])² Var[N]

= (1/3)E[N]E[F²]Ii(s) = λL E[F²] Ii(s) (1.131)

where

Ii(s) = (s − i + 1) 1s∈[i−1,i[ + (i + 1 − s) 1s∈[i,i+1] (1.132)

The linear regression p̂(s) of p(s) on the sequence of nodal forces Xi is calculated as follows:

(1/√3) p̂(s) = (1/√3) λL E[F] + Σ_{i=−∞}^{∞} Ii(s) Σ_{j=−∞}^{∞} a^(−|i−j|) [Xj − λL E[F]]

= Σ_{j=−∞}^{∞} Xj Σ_{i=−∞}^{∞} Ii(s) a^(−|i−j|) = Σ_{j=−∞}^{∞} Xj Σ_{k=−∞}^{∞} Ik+j(s) a^(−|k|) = Σ_{j=−∞}^{∞} Xj Σ_{k=[s]−j}^{[s]−j+1} I0(s−k−j) a^(−|k|)

= Σ_{j=−∞}^{∞} Xj (I0(s−[s]) a^(−|[s]−j|) + I0(s−[s]−1) a^(−|[s]−j+1|))

= Σ_{k=−∞}^{∞} (Xk+[s] I0(s−[s]) + Xk+[s]+1 I0(s−[s]−1)) a^(−|k|) (1.133)

in which [s] is the integer part of s. It is seen that p̂(s) is obtained by linear interpolation between the adjacent random variables Y[s], Y[s]+1 in the sequence

Yi = √3 Σ_{k=−∞}^{∞} Xi+k a^(−|k|) (1.134)

This is illustrated in Fig. 3. The mean and the covariances are

E[Yi] = √3 Σ_{k=−∞}^{∞} E[Xi+k] a^(−|k|) = E[Xi] = E[p(s)] (1.135)


Cov[Yi, Yj] = 3 Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} Cov[Xi+k, Xj+l] a^(−|k|−|l|)

= (1/2) λL E[F²] Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} (δ(i+k)(j+l−1) + 4δ(i+k)(j+l) + δ(i+k)(j+l+1)) a^(−|k|−|l|)

= (1/2) λL E[F²] Σ_{l=−∞}^{∞} (a^(−|l+j−i−1|−|l|) + 4a^(−|l+j−i|−|l|) + a^(−|l+j−i+1|−|l|)) = √3 λL E[F²] a^(−|i−j|) (1.136)

respectively. The exponential decay with |i−j| shows that the correlation structure is Markovian. The sequence of correlation coefficients a^(−|i−j|) has the specific values 1, √3−2 ≈ −0.268, 7−4√3 ≈ 0.072, 15√3−26 ≈ −0.019, 97−56√3 ≈ 0.005, ....

It follows from (1.136) that the intensity of the Poisson process, the properties of the load sequence, and even the subdivision into intervals have no influence on the correlation coefficients of the random sequence Yi that determines the approximately statically equivalent load intensity p̂(s) to the given Poisson load field p(s) by simple linear interpolation. It is noted that changing from the dimensionless abscissa s to the real geometrical abscissa implies that the dimensionless load intensity p̂(s) transforms to p̂(s)/L. Thus the mean load intensity becomes λE[F] and the variance of Yi/L becomes √3λE[F²]/L. Surprisingly, perhaps, the correlation coefficients remain invariant with respect to L.

The covariance function of p(s) follows from

$$E[p(s)p(t)\mid N=1] = \lim_{h\downarrow 0} E\!\left[\left(\frac{F}{h}\right)^2 1_{U\in[s,s+h]\cap[t,t+h]} \,\Big|\, N=1\right] = E[F^2]\lim_{h\downarrow 0}\frac{1}{h^2}\,E\!\left[1_{U\in[s,s+h]\cap[t,t+h]}\mid N=1\right] = E[F^2]\lim_{h\downarrow 0}\frac{1}{h^2}\,\frac{h}{3}\,I_0\!\left(\frac{s-t}{h}\right) = \frac{1}{3}E[F^2]\,\delta(s-t) \qquad (1.137)$$

so that

$$\mathrm{Cov}[p(s), p(t)\mid N] = N\left(E[p(s)p(t)\mid N=1] - E[p(s)\mid N=1]\,E[p(t)\mid N=1]\right) = N\left(\frac{1}{3}E[F^2]\,\delta(s-t) - \frac{1}{9}E[F]^2\right) \qquad (1.138)$$

and

$$\mathrm{Cov}[p(s), p(t)] = E\!\left[N\left(\frac{1}{3}E[F^2]\,\delta(s-t) - \frac{1}{9}E[F]^2\right)\right] + \mathrm{Var}\!\left[\frac{N E[F]}{3}\right] = \lambda L\,E[F^2]\,\delta(s-t) \qquad (1.139)$$

The residual covariance function is the difference between (1.139) and the function obtained from the linear regression by substituting the covariance Cov[X_i, p(t)] for X_i, that is, by replacing Y_i by

$$\sqrt{3}\sum_{k=-\infty}^{\infty}\mathrm{Cov}[X_{i+k}, p(t)]\,a^{-|k|} = \sqrt{3}\,\lambda L E[F^2]\sum_{k=-\infty}^{\infty} I_{i+k}(t)\,a^{-|k|} = \sqrt{3}\,\lambda L E[F^2]\sum_{k=[t]-i}^{[t]-i+1} I_0(t-i-k)\,a^{-|k|}$$
$$= \sqrt{3}\,\lambda L E[F^2]\left[I_0(t-[t])\,a^{-|[t]-i|} + I_0(t-[t]-1)\,a^{-|[t]-i+1|}\right] \qquad (1.140)$$

Linear interpolation with respect to s between the two values corresponding to i = [s] and i = [s]+1 then gives that, except for the factor λL E[F²], the residual covariance function is

$$C(s,t) = \delta(s-t) - \Big[\sqrt{3}\,(1-3\sigma-3\tau+6\sigma\tau) + 3\big((\tau-\sigma)\,1_{[s]<[t]} + (\sigma+\tau-2\sigma\tau)\,1_{[s]=[t]} + (\sigma-\tau)\,1_{[s]>[t]}\big)\Big]\,a^{-|[s]-[t]|} \qquad (1.141)$$


where σ = s − [s] and τ = t − [t] are the fractional parts of s and t, respectively.

Clearly the linear regression p̂(s) is not a good direct approximation to the Poisson load field p(s). However, the accuracy should only be judged after application of the linear functional that maps the load field into the relevant load effect. In fact, since

$$G_s G_t\,\delta(s-t) = G\,g(s) \qquad (1.142)$$

where g(s) is the influence function of the considered load effect, the delta function singularity becomes eliminated. Let g(s) be a function that can be defined by linear interpolation between its values at the integers. Obviously g(s) can then be expressed as a linear combination of the functions I_i(s), for which the residual variance is zero. Thus the residual variance is zero also for g(s), showing that the static equivalence is exact for any influence function of this particular type. To get a sufficiently general evaluation of the order of size of the residual variance, consider a piecewise linear influence function of the form g(s − l) for some l ∈ [0,1]. Then it is sufficient to consider the difference between g(s − l) and the piecewise linear function defined by the values of g(s − l) at the integers. This difference function is zero at the integers. In the interval [i−1, i] corresponding to i = [s] it is proportional to the function

$$\frac{\sigma}{l}\,1_{\sigma\in[0,l[} + \frac{1-\sigma}{1-l}\,1_{\sigma\in[l,1]} \qquad (1.143)$$

by a factor c_i. Applying (1.7), the residual variance becomes

$$\mathrm{Var}[G p - G \hat p] = \lambda L\,E[F^2]\,R(l;\,c_i) \qquad (1.144)$$

where

$$R(l;\,c_i) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} C(s,t)\left(1_{\sigma\in[0,l[}\,\frac{\sigma}{l} + 1_{\sigma\in[l,1]}\,\frac{1-\sigma}{1-l}\right)\left(1_{\tau\in[0,l[}\,\frac{\tau}{l} + 1_{\tau\in[l,1]}\,\frac{1-\tau}{1-l}\right)c_{[s]}\,c_{[t]}\;ds\,dt$$
$$= \frac{\sqrt{3} + 2(\sqrt{3}-1)\,l(1-l)}{12}\sum_{i=-\infty}^{\infty} c_i^2 + \frac{\sqrt{3} + 2\sqrt{3}\,l(1-l)}{6}\sum_{i=-\infty}^{\infty} c_i \sum_{j=1}^{\infty} c_{i+j}\,a^{-j} \qquad (1.145)$$

For given c_i, the residual variance is maximal for l = 0.5 and minimal for l = 0 or 1, the two coefficient factors in (1.145) taking the values 0.175, 0.433 and 0.144, 0.289, respectively. Thus the dominating influence on the residual variance comes from the sequence c_i. By and large the best accuracy is obtained by choosing the equidistant stochastic finite element division points as closely as possible to the points of non-differentiability of the actual influence function. Of course, this attempt tends to counteract the goal of having few discretization random variables.
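The closed-form numbers above can be checked mechanically. The following sketch (pure Python, purely illustrative) uses the identification a⁻¹ = √3 − 2 read off from the listed correlation coefficients, verifies the last equality of (1.136) for i = j by truncating the double sum, and reproduces the two coefficient factors appearing in (1.145):

```python
import math

R3 = math.sqrt(3.0)
r = R3 - 2.0  # a**(-1), read off from the listed correlation coefficients

# Correlation coefficients a^(-|i-j|) of the sequence Y_i
print([round(r ** k, 3) for k in range(5)])
# -> [1.0, -0.268, 0.072, -0.019, 0.005]

# Truncated check of the inner sum in (1.136) for i = j:
# (1/2) * sum_l ( a^(-|l-1|-|l|) + 4 a^(-2|l|) + a^(-|l+1|-|l|) ) = sqrt(3)
s = 0.5 * sum(
    r ** (abs(l - 1) + abs(l)) + 4 * r ** (2 * abs(l)) + r ** (abs(l + 1) + abs(l))
    for l in range(-40, 41)
)
print(round(s, 9))  # -> 1.732050808, i.e. sqrt(3)

# Coefficient factors in (1.145): maximal at l = 0.5, minimal at l = 0 or 1
def factors(l):
    return ((R3 + 2 * (R3 - 1) * l * (1 - l)) / 12,
            (R3 + 2 * R3 * l * (1 - l)) / 6)

print([round(v, 3) for v in factors(0.5)])  # -> [0.175, 0.433]
print([round(v, 3) for v in factors(0.0)])  # -> [0.144, 0.289]
```

The fast geometric decay of r^k is what makes the 40-term truncation of the doubly infinite sum more than sufficient here.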

1.9 Stochastic finite element methods and reliability calculations

The main features of finite element methods applied in structural engineering are assumed known to the reader. When random fields are included in the structural modelling and finite element methods are applied, the fields must be given a suitable representation on the finite element level. This leads to the notion of stochastic finite elements. Essentially the stochastic part of the finite element modelling consists of replacing the field of infinite dimensionality of randomness by a field defined by a finite set of random variables. Referring to the previous sections, there are several possibilities for such field discretizations, giving different levels of approximation error, of course. Seeking better approximations leads to increasing computational effort, even though it may not necessarily lead to an increasing conceptual complexity of the discretization method.

Based on the presentation in the previous sections the following field discretization methods can be listed [12]. The first four to six methods are listed, by and large, in order of increasing output approximation accuracy for the same dimensionality of randomness.

1. Midpoint method
The field is replaced by a field of random values that are constant within each element. These values are taken as the field values of the original field at a central point of each element [20]. Thus the replacement field depends on the element mesh and is discontinuous over the element boundaries. The joint probability distribution of the finite set of random variables is directly given by the definition of the original field.

2. Integral average method
Same as 1 except that the random values are taken as the average field value over each element [21]. The joint probability distribution of the finite set of random variables is unknown except for Gaussian fields.

3. Shape function method
Within each element the field is replaced by an interpolation between a finite set of field values using some suitable, but otherwise arbitrary, shape functions (spline functions) as interpolation functions [22]. The replacement field gets the same analytical properties as the chosen shape functions.

4. Direct linear regression method
This is an optimal version of 3 in the sense that the best shape functions (in the mean square sense) are used, thus removing the arbitrariness of the shape function choice. The field is replaced by the linear regression of the field on a finite set of field values [12]. The replacement field has the same analytical regularity properties as the mean and covariance function of the field. The mesh of points at which the field values are taken need not be related in any particular way to the finite element mesh of the mechanical model (as is also the case for the shape function method). The joint probability distribution of the set of random variables is directly given by the definition of the field. Except at the points of the mesh, the replacement field of a non-Gaussian field is generally not mean value correct. For a Gaussian field the replacement field is a mean value correct Gaussian field.

Sometimes this procedure is called kriging, a terminology that according to its origin [3]-[6] (Section 2.1) should rather be reserved solely for statistical-stochastic interpolation between measured values at different spatial points; this is treated in the next section.
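A minimal numerical sketch of the direct linear regression method 4 may help. The exponential covariance function and the four-point mesh below are illustrative assumptions, not taken from the text; the optimal shape functions at a point t are the components of the weight vector obtained by solving with the covariance matrix of the mesh values:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small dense n)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Zero-mean homogeneous field with covariance c(s,t) = exp(-|s-t|),
# observed at four mesh points (both choices purely illustrative).
cov = lambda s, t: math.exp(-abs(s - t))
nodes = [0.0, 1.0, 2.0, 3.0]
C = [[cov(xi, xj) for xj in nodes] for xi in nodes]

def weights(t):
    """Optimal shape functions at t: solution of C w = c(t, nodes)."""
    return solve(C, [cov(t, xj) for xj in nodes])

# At a mesh point the weight vector is a unit vector, so the replacement
# field reproduces the observed field value there exactly.
print(max(abs(wi - ei) for wi, ei in zip(weights(1.0), [0, 1, 0, 0])) < 1e-9)
# Between mesh points the weights define the mean-square-optimal interpolation.
print([round(wi, 4) for wi in weights(0.5)])
```

For this Markovian covariance only the two bracketing mesh points receive nonzero weight, which is a special property of the exponential kernel, not of the method.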

5. Method of marginally backtransformed linear regression on marginally transformed field values
The field is assumed to be a marginally transformed Gaussian field (translation field [23]), for example the lognormal field. The corresponding Gaussian field is replaced by the linear regression of the Gaussian field on a finite set of its field values. The marginal transformation is next applied to the linear regression to define the replacement field for the original field [12]. The joint probability distribution of the set of random variables is directly given by the definition of the field. Except at the points of the mesh the replacement field is not mean value correct. However, it reproduces the marginal median at any point.

6. Method of mean value correct backtransformed linear regression on marginally transformed field values
Same as 5 except that the linear regression of the Gaussian field is interpreted as a conditional mean and transformed back to the conditional mean of the original field on the finite set of random variables of the original field. For a lognormal field the relevant formulas are (1.82) to (1.87) and (1.93) to (1.100). More generally this method is applicable for a Nataf field [12] (Section 2.6).

7. Method of linear regression on linear functionals
The field is replaced by the linear regression of the field on a finite set of linear functionals defined on the field [18]. The linear functionals are chosen among linear functionals that are relevant for the solution of the actual problem. Thus the method is an extension of the method known from the literature as the weighted integral method [24, 25]. The joint probability distribution of the finite set of linear functionals is not known except for Gaussian fields. The replacement field is more regular with respect to analytical properties than the original field. This property makes the method applicable to fields for which none of the previously mentioned methods are applicable (Section 2.9).

8. Truncated series expansion methods
Starting from an exact expansion

$$X(t) = E[X(t)] + \sum_{n=1}^{\infty} V_n\,h_n(t) \quad \text{(mean square sense)} \qquad (1.146)$$

of the field with respect to a complete orthonormal set h_1(t), ..., h_n(t), ... of deterministic functions over t ∈ Ω, with random variable coefficients

$$V_n = \int_\Omega X(t)\,h_n(t)\,dt, \qquad n = 1, 2, \ldots \qquad (1.147)$$

the replacement field is obtained by a suitable approximation to a suitable truncation of the expansion [8, 9, 26]. In case the random coefficients are required to be uncorrelated, the unique Karhunen-Loeve expansion [7, 27] is obtained as the exact expansion, on which different approximation methods [8, 9] are applied as explained in Section 2.1. For non-Gaussian fields it is a difficult problem to determine the joint distribution of the random coefficients (1.147) of the expansion. For Gaussian fields the coefficients become Gaussian and mutually independent.

For any other complete orthonormal function system than that used in the Karhunen-Loeve expansion the random coefficients become correlated [26, 27]. However, by constructing a one-to-one linear transformation of any finite subset of the random coefficients into an uncorrelated set of random variables, it can be shown that the corresponding subsum of finitely many terms in the expansion can be a good approximation to a corresponding subsum of the Karhunen-Loeve expansion containing the same number of terms [26].

The finite expansion obtained by truncation of the Karhunen-Loeve expansion after the nth term is identical to the linear regression of the field on the n random coefficients of the finite expansion (see the text following (1.28)). This statement is generally not true for other expansions due to the correlation between the truncated sum and the remainder. Thus the accuracy can be improved by replacing the truncation sum by the linear regression of the field on the random coefficients of the finite expansion. In fact, since the random coefficients (1.147) are linear functionals on the field, the truncated series expansion method is a special case of the previous method 7 of linear regression on linear functionals.

Judging the accuracy on the output level of the investigated mechanical problem, the linear functionals of the Karhunen-Loeve expansion (for which the weight functions are the eigenfunctions related to the covariance function of the field), or of any other orthogonal expansion, may be inferior to linear functionals that are of direct relevance to the mechanical problem.
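A minimal instance may clarify the uncorrelated-coefficient property of method 8. For a two-point discretization the Karhunen-Loeve eigenpairs of the covariance matrix are known in closed form, so the sketch below (illustrative unit variances and correlation r, not values from the text) can verify directly that the expansion coefficients are uncorrelated with variances equal to the eigenvalues:

```python
import math

# Two-point discretization (X(t1), X(t2)) with unit variances and
# correlation r.  The symmetric 2x2 covariance matrix has the known
# eigenpairs (1 + r, (1,1)/sqrt(2)) and (1 - r, (1,-1)/sqrt(2)), so the
# KL coefficients V_m are the projections of X onto those directions.
r = 0.6
C = [[1.0, r], [r, 1.0]]
lam = [1.0 + r, 1.0 - r]                        # eigenvalues
phi = [[1 / math.sqrt(2), 1 / math.sqrt(2)],    # orthonormal eigenvectors
       [1 / math.sqrt(2), -1 / math.sqrt(2)]]

# Covariance of the coefficients V_m = phi_m . X:
#   Cov[V_m, V_n] = phi_m' C phi_n, which is diagonal with the eigenvalues
def quad(u, v):
    return sum(u[i] * C[i][j] * v[j] for i in range(2) for j in range(2))

print(round(quad(phi[0], phi[0]), 6))  # -> 1.6 (= 1 + r)
print(round(quad(phi[1], phi[1]), 6))  # -> 0.4 (= 1 - r)
print(round(quad(phi[0], phi[1]), 6))  # -> 0.0 (uncorrelated)
```

For n discretization points the same diagonalization of the n x n covariance matrix gives the discrete KL basis; only the closed-form eigenvectors are special to the 2x2 case.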

Any of the listed random field discretization methods may, with varying accuracy, be applied for the purpose of rendering a structural reliability analysis practicable. The replacement field for the original field X is in all the methods defined by a finite-dimensional random vector FX, where F is a linear vector functional defined on the field. In all the methods the possibility of further reduction of the dimension of randomness can be studied by applying a one-to-one inhomogeneous linear transformation of FX into a set of uncorrelated and normalized random variables, using the properties explained in the text after (1.28) [12, 26, 28].

For reliability analysis applications the mechanical finite element program is typically formulated such that it can compute approximate values of a critical vector functional defined on the set of all random variables of the problem. The adverse event is that this functional gets an outcome outside a given safe domain of functional values. This safe domain is specified on the basis of the given superior structural performance criteria. Thus the finite element program together with the given safe domain defines a limit state surface in the finite-dimensional space of the vector of all random variables of the problem.

For simplicity, and also sufficient for the generalisation of the following explanation, let the random vector FX contain all the randomness of the problem. To apply the generally available computer programs for computing approximations to the probability of the adverse event (e.g. programs based on first or second order reliability methods (FORM or SORM) as well as on simulation methods), all that is needed is to specify input information about the distribution properties of FX. If X is a Gaussian field then FX is a Gaussian vector, and besides specifying this information it is therefore only required that the mean vector and the covariance matrix of FX be specified. These first and second order moments can be calculated by simple numerical integration of weighted single and double integrals of the mean function and the covariance function of the random field X, respectively. However, if X is non-Gaussian, then more than the second moment characterisation of FX is needed. Generally it is then mandatory for obtaining calculational practicability to introduce suitable distribution approximations.

If the one-dimensional marginal distributions and the covariance matrix of FX are known, a possibility may be to use the Nataf distribution to approximate the distribution of FX. If it exists, this distribution is defined as in Section 2.5 by an increasing marginal transformation of a Gaussian distribution of the same dimension as FX. The transformation is constructed such that the covariance matrix and the one-dimensional distributions of FX are correctly reproduced by the transformation. However, in some problems it may not be practicable to determine the marginal distributions of FX, except possibly as approximations obtained by statistical fitting of standard distribution types to extensive samples of simulated outcomes of FX. Such a simulation procedure can be difficult in itself, because an accurate generation of outcomes of FX requires an accurate simulation of finely discretized realisations of the non-Gaussian field X.
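The marginal construction behind such a Nataf model can be sketched in a few lines. Each component is represented as T_i(U_i) = F_i^{-1}(Φ(U_i)), where U is a correlated standard Gaussian vector; the exponential marginal below is an arbitrary illustrative choice, not one taken from the text:

```python
import math

def Phi(u):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def T(u, rate=1.0):
    """Marginal Nataf map: inverse exponential CDF applied to Phi(u)."""
    return -math.log(1.0 - Phi(u)) / rate

# The transformation is increasing and carries fractiles to fractiles:
print(round(T(0.0), 6))                  # -> 0.693147, i.e. ln 2 (median to median)
print(round(T(1.2815515655446004), 4))   # -> 2.3026, i.e. -ln(0.1) (90% fractile)
```

The correlation structure of U is then calibrated, component pair by component pair, so that the transformed vector reproduces the prescribed covariance matrix of FX; that calibration step is omitted in this sketch.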

Since the assumption of having a Nataf distribution of FX is in general an approximation, it may be justified to make the further approximation of solely concentrating on obtaining the skewness and the kurtosis of the marginal distributions of FX. Knowing the first four moments of each of the one-dimensional marginal distributions, one may choose some standard type distributions that reproduce these four moments, and then adopt the Nataf distribution with these marginal standard type distributions. A simpler possibility is, perhaps, to use the moment information to replace the components of the random vector FX by third degree polynomials of correlated Gaussian variables (Winterstein approximations [29]) such that they reproduce the first four marginal moments and all the correlation coefficients correctly [30]-[32]. If no other methods are readily applicable for the given problem, the skewness and the kurtosis may always be estimated statistically from a sample of simulated outcomes of FX.


1.10 Classical versus statistical-stochastic interpolation formulated on the basis of the principle of maximum likelihood

In modern reliability analysis, in particular in geotechnical engineering or earthquake engineering, it can be necessary to make interpolations between measured values obtained at different spatial points and/or at different points in time [33, 34]. Following a paper of the author [35], this section first considers the classical problem of interpolation and the difficulty of error estimation in the classical theory except under conditions that are only rarely satisfied in practice. The formulation of a conditional random field model for interpolation is next introduced as a pragmatic alternative. However, it is attempted to present the statistical-stochastic interpolation method with a link to classical interpolation, because this may ease the appreciation of the method as a pragmatic interpolation method whether or not the data are generated by some physical process that behaves according to a random mechanism. Thus the method of statistical-stochastic interpolation is simply viewed as a rational model of the uncertain engineering guess on interpolated values. The basis is the idea that continuity and differentiability principles make the variation among the measured values representative of what should be expected with respect to the uncertainty of the interpolated values.

Finally it is shown that pragmatic principles of computational practicability of random field interpolation in large regularly organized tables lead to a very narrow class of applicable correlation functions for the interpolation fields.

The solution to the problem of interpolation in a table (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}) of values of the n times differentiable function y = f(x) is in classical mathematical analysis given by Lagrange in the form

$$f(x) = y_0 Q_0(x) + y_1 Q_1(x) + \ldots + y_{n-1}Q_{n-1}(x) + R_n(x) \qquad (1.148)$$

in which Q_i(x), i = 0, 1, ..., n−1, is the uniquely defined polynomial of (n−1)th degree that takes the value 1 for x = x_i and the value 0 for x = x_0, ..., x_{i-1}, x_{i+1}, ..., x_{n-1}. The Lagrangian remainder R_n(x) is

$$R_n(x) = (x-x_0)\cdot\ldots\cdot(x-x_{n-1})\,\frac{f^{(n)}(\zeta)}{n!} \qquad (1.149)$$

in which ζ is some number contained within the smallest interval I that contains x, x_0, ..., x_{n-1}. Interpolation to order n−1 then consists of using (1.148) with the remainder neglected. The error committed by the interpolation is bounded in absolute value by

$$|R_n(x)| \le \frac{|(x-x_0)\cdot\ldots\cdot(x-x_{n-1})|}{n!}\,\max_{\zeta\in I}\,|f^{(n)}(\zeta)| \qquad (1.150)$$

Thus the error evaluation in classical interpolation does not offer a model of the error in terms of a random variable, but only an upper bound that can be highly conservative. Moreover, the evaluation of the bound (1.150) requires that the nth derivative can be obtained. This, however, is in contrast to most situations in practice where interpolation is needed. Often only the table (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}) is given and the values of the function f(x) are unknown between these points. This is the typical situation when the table is obtained by point-by-point measurement of some physical quantity that varies with x in an unknown way. The table may also be the result of a lengthy computation. In principle f(x) may be computed for any needed value of x, but it may be more economical to accept even a considerable error in the evaluation of f(x) as obtained by interpolation between already computed values. The same holds in the case of values obtained by measurements. In some situations it may even be impossible to


measure intermediate values (as in the obvious case of x being the time, but sometimes also in cases where x is a spatial coordinate). With only the table of points given, the interpolation procedure is usually guided by a principle of simplicity that may be supported by physical and geometrical arguments. To appreciate the philosophy of the principle of simplicity behind practical interpolation it is only needed to note that there is an infinite set of interpolation functions passing through the given points. Besides satisfying differentiability requirements to some specified order, this set is obviously characterized by the property that the difference between any two functions of the set is a function with zero points at x_0, x_1, ..., x_{n-1}. In practice the choice of the interpolation function from the set is usually made with a view to choosing an interpolation function that in itself and in all its derivatives up to the specified order has as small a variability as possible. Even though this principle of simplicity of the interpolation may be given a precise mathematical definition, and it thereafter may be demonstrated that it leads to a unique choice of the interpolation function, it is still just an arbitrary principle that does not give any indication of the interpolation error.
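A compact numerical rendering of (1.148)-(1.150) may be useful; the nodes and the test function f(x) = x³ below are illustrative choices. Since f''' is constant here, the bound (1.150) is attained exactly:

```python
def lagrange_basis(xs, i, x):
    """Q_i(x): the degree n-1 polynomial that is 1 at x_i and 0 at the other nodes."""
    v = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            v *= (x - xj) / (xs[i] - xj)
    return v

def lagrange_interp(xs, ys, x):
    """f(x) approximated by (1.148) with the remainder R_n(x) neglected."""
    return sum(y * lagrange_basis(xs, i, x) for i, y in enumerate(ys))

# Interpolate f(x) = x^3 on three nodes (n = 3, quadratic interpolation)
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 8.0]
print(lagrange_interp(xs, ys, 1.0))   # -> 1.0: the table values are reproduced
print(lagrange_interp(xs, ys, 1.5))   # -> 3.75, versus the true value 1.5**3 = 3.375

# Error bound (1.150): |x(x-1)(x-2)| * max|f'''| / 3! = 0.375 at x = 1.5,
# and the actual error |3.75 - 3.375| = 0.375 attains it (f''' is constant).
```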

A solution to this error evaluation problem can be obtained by introducing an alternative but some-what similar principle of simplicity. It is based on the principles of statistical reasoning.

Let J(y_0, ..., y_{n-1}) be the set of interpolation functions and introduce a probability measure over the union

$$F = \bigcup_{z\in\mathbb{R}^n} J(z) \qquad (1.151)$$

in which z = (z_0, ..., z_{n-1}). If it is solely required that the applied interpolation function is n times continuously differentiable, then F is the class of n times continuously differentiable functions. Consider a probability measure over F and assume that it depends on a number of parameters θ_1, ..., θ_q. For each value set of θ_1, ..., θ_q the probability measure defines a random field over the range of values of x. Let the probability measure have such regularity properties that there is a probability density at the set J(z) for each z ∈ ℝⁿ in the sense of being the unique limit

$$\lim_{d\to 0}\;\frac{P\!\left[\,\bigcup_{\zeta\in N(z)} J(\zeta)\;\Big|\;\theta_1,\ldots,\theta_q\right]}{\mathrm{Vol}[N(z)]} \qquad (1.152)$$

where N(z) is an arbitrary neighborhood of z with volume Vol[N(z)], d is the diameter of N(z) according to some suitable diameter definition, and P[· | θ_1, ..., θ_q] is a suitably defined probability measure.

The principle of simplicity now concerns how to choose the values of the parameters θ_1, ..., θ_q that fix the probability measure on F. Instead of the deterministic principle of least variability it is reasonable to choose θ_1, ..., θ_q so that the probability density has its maximum for z = y = (y_0, y_1, ..., y_{n-1}), that is, at the actual set of interpolation functions. This is the well-known principle of maximum likelihood estimation in the theory of mathematical statistics. When the parameter values are chosen, for example according to the principle of maximum likelihood, the probability measure on F induces a probability measure on the relevant set of interpolation functions J(y) simply as the conditional probability measure given J(y), that is, given z = y. Thus a conditional random field is defined that possesses the desired property of expressing the interpolated value at x of the unknown function as a random variable. At the points of the given table (x_0, y_0), ..., (x_{n-1}, y_{n-1}) the random variable degenerates to be atomic, but at a point x different from the points of the table it gets a mean and a standard deviation


that depend on x. The detailed nature of this dependency is laid down in the mathematical structure of the probability measure introduced on F.

From an operational point of view the most attractive probability measure to choose is the Gaussian measure. Let the Gaussian measure on F have mean value function μ(x), x ∈ ℝ, and covariance function c(ξ,η), ξ, η ∈ ℝ. Then the conditional measure on J(y) is Gaussian with mean value function given by the linear regression (1.101), that is,

$$E[Y(x)\mid z=y] = \mu(x) + [c(x,x_0)\;\ldots\;c(x,x_{n-1})]\,\{c(x_i,x_j)\}^{-1}\begin{bmatrix} y_0-\mu(x_0)\\ \vdots\\ y_{n-1}-\mu(x_{n-1})\end{bmatrix} \qquad (1.153)$$

and covariance function given by the residual covariance function corresponding to the linear regression (1.2), that is,

$$\mathrm{Cov}[Y(\xi), Y(\eta)\mid z=y] = c(\xi,\eta) - [c(\xi,x_0)\;\ldots\;c(\xi,x_{n-1})]\,\{c(x_i,x_j)\}^{-1}\begin{bmatrix} c(\eta,x_0)\\ \vdots\\ c(\eta,x_{n-1})\end{bmatrix} \qquad (1.154)$$

The conditional mean (1.153), in particular, is written out explicitly in order to display an interpretation of the mean value function μ(x) and the function c(ξ,η) that relates them to usual deterministic interpolation practice. In fact, if (1.148) and (1.153) are compared, it is seen that (1.153) is obtained from (1.148) if the polynomial Q_i(x), i = 0, ..., n−1, is replaced by the linear combination

$$a_{0i}\,c(x,x_0) + a_{1i}\,c(x,x_1) + \ldots + a_{(n-1)i}\,c(x,x_{n-1}) \qquad (1.155)$$

where the coefficients a_{0i}, ..., a_{(n-1)i} are the elements of the ith column of the inverse of the covariance matrix {c(x_i,x_j)}. It is seen that this function, like Q_i(x), is 1 for x = x_i and 0 for x = x_0, ..., x_{i-1}, x_{i+1}, ..., x_{n-1}. Indeed, if (1.155) is identified with Q_i(x) for i = 0, ..., n−1, the functions c(x,x_0), ..., c(x,x_{n-1}) are uniquely determined by their values at x_0, x_1, ..., x_{n-1} as

$$c(x,x_i) = c(x_0,x_i)\,Q_0(x) + \ldots + c(x_{n-1},x_i)\,Q_{n-1}(x) \qquad (1.156)$$

As a function of x this is a valid covariance, that is, there exists a non-negative definite function c(ξ,η) such that (1.156) is obtained for (ξ,η) = (x, x_i). This is seen directly by computing the covariance function of the random field

$$Z_0 Q_0(x) + \ldots + Z_{n-1} Q_{n-1}(x) \qquad (1.157)$$

in which (Z_0, ..., Z_{n-1}) is a random vector with covariance matrix {c(x_i,x_j)}. The covariance function is

$$c(\xi,\eta) = [Q_0(\xi)\;\ldots\;Q_{n-1}(\xi)]\,\{c(x_i,x_j)\}\begin{bmatrix} Q_0(\eta)\\ \vdots\\ Q_{n-1}(\eta)\end{bmatrix} \qquad (1.158)$$

which gives (1.156) for ξ = x and η = x_i.


It follows from this that (1.148), except for R_n(x), is a special case of (1.153). The remainder becomes replaced by the term

$$\mu(x) - [c(x,x_0)\;\ldots\;c(x,x_{n-1})]\,\{c(x_i,x_j)\}^{-1}\begin{bmatrix} \mu(x_0)\\ \vdots\\ \mu(x_{n-1})\end{bmatrix} \qquad (1.159)$$

plus a Gaussian zero mean random field with covariance function given by (1.154). This interpretation of the mean value function μ(x) and the covariance function c(ξ,η) of the random field as essentially being interpolation functions makes it easier to appreciate the consequences of a specific mathematical form of the covariance function [36]. For example, if the general appearance of the values in the table suggests the choice of a homogeneous Gaussian field, the choice of a correlation function like exp[−α|ξ−η|] implies that the mean interpolation function given by (1.153) will be non-differentiable at x = x_0, ..., x_{n-1}, while a correlation function like exp[−β(ξ−η)²] will give a differentiable interpolation function also at x = x_0, ..., x_{n-1}. In fact, under Gaussianity the sample functions corresponding to the first correlation function will with probability 1 be continuous but not differentiable at any point, while the second correlation function corresponds to sample functions that are differentiable of any order.

As mentioned in Section 2.1 the procedure of doing statistical-stochastic interpolation is often calledkriging.
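The differentiability contrast between the two correlation functions can also be observed numerically. The sketch below (table values, unit parameters, and step size are illustrative assumptions) evaluates one-sided difference quotients of the conditional mean (1.153) at a table point; the exponential kernel produces a kink there while the squared-exponential kernel does not:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

nodes = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 0.0]       # illustrative table values, mu(x) = 0 assumed

def cond_mean(x, corr):
    """(1.153) with zero mean: c(x, nodes) {c(x_i,x_j)}^-1 y."""
    C = [[corr(a - b) for b in nodes] for a in nodes]
    w = solve(C, ys)
    return sum(wi * corr(x - xi) for wi, xi in zip(w, nodes))

def kink(corr, h=1e-4):
    """Difference of the one-sided difference quotients at the node x = 1."""
    left = (cond_mean(1.0, corr) - cond_mean(1.0 - h, corr)) / h
    right = (cond_mean(1.0 + h, corr) - cond_mean(1.0, corr)) / h
    return right - left

exp_corr = lambda d: math.exp(-abs(d))     # non-differentiable at the nodes
gauss_corr = lambda d: math.exp(-d * d)    # differentiable everywhere

print(round(kink(exp_corr), 3))      # -> -2.626, i.e. -2 cosh(1)/sinh(1)
print(abs(kink(gauss_corr)) < 1e-2)  # -> True: no kink for the analytic kernel
```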

Example. Consider the case where the points x_0, ..., x_{n-1} are equidistant with x_{i+1} − x_i = h. Let the random interpolation field on the line be a homogeneous Gaussian field with mean μ and covariance function

$$c(\xi,\eta) = \begin{cases}\left(1 - \dfrac{|\xi-\eta|}{h}\right)\sigma^2 & \text{for } |\xi-\eta|\le h\\[4pt] 0 & \text{otherwise}\end{cases} \qquad (1.160)$$

Then (1.153) gives

$$E[Y(x)\mid z=y] = \frac{1}{h}\left[(x_1-x)\,y_0 + (x-x_0)\,y_1\right] \qquad (1.161)$$

for x_0 ≤ x ≤ x_1. This is equivalent to linear interpolation in the mean. The covariance function (1.154) becomes

$$\mathrm{Cov}[Y(\xi), Y(\eta)\mid z=y] = \begin{cases}\left(\dfrac{\sigma}{h}\right)^2\left[h^2 - h|\xi-\eta| - (x_1-\xi)(x_1-\eta) - (x_0-\xi)(x_0-\eta)\right] & \text{for } |\xi-\eta|\le h\\[4pt] 0 & \text{otherwise}\end{cases} \qquad (1.162)$$

from which the standard deviation

$$D[Y(x)\mid z=y] = \frac{\sigma}{h}\sqrt{2(x-x_0)(x_1-x)} \qquad (1.163)$$

is obtained. It is interesting to compare this measure of the interpolation error with the Lagrangian remainder obtained from (1.149) for n = 2. It is seen that (1.163) varies like the square root of R_2(x), that is, it predicts larger errors close to x_0 or x_1 than R_2(x) does.
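The two formulas of the example are easy to verify numerically. For h = σ = 1 and nodes x_0 = 0, x_1 = 1 (with illustrative table values y_0 = 2, y_1 = 4), the covariance (1.160) gives c(x_0, x_1) = 0, so the covariance matrix is the identity and the regression weights are simply c(x, x_0) = 1 − x and c(x, x_1) = x:

```python
import math

y0, y1 = 2.0, 4.0   # illustrative table values

def cond_mean(x):
    """(1.161) for h = 1: plain linear interpolation."""
    return (1 - x) * y0 + x * y1

def cond_std(x):
    """(1.163) via the residual variance c(x,x) - w.w = 2 x (1 - x)."""
    resid_var = 1.0 - ((1 - x) ** 2 + x ** 2)
    return math.sqrt(resid_var)

print(cond_mean(0.25))               # -> 2.5
print(round(cond_std(0.25), 4))      # -> 0.6124, i.e. sqrt(2*0.25*0.75)
print(cond_std(0.0), cond_std(1.0))  # -> 0.0 0.0: atomic at the table points
```

The vanishing standard deviation at x_0 and x_1 is the degeneration to atomic random variables mentioned earlier, and the maximum at the midpoint mirrors the shape of the Lagrangian remainder.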


While the interpolation functions in the classical procedure are broken at the points of the table, implying that no way is indicated of reasonable extrapolation outside the range from x_0 to x_{n-1}, the homogeneous random field procedure represents a solution to both the interpolation and the extrapolation problem in terms of a set of sample functions defined everywhere. The points of the table only play the role that all the sample functions of the set pass through the points of the table. The homogeneity of the field implies that the variability of the points within the range from x_0 to x_{n-1} is reflected in the field description outside this range. Specifically we have for x ≤ x_0:

$$E[Y(x)\mid z=y] = \begin{cases}\mu & \text{for } x \le x_0-h\\[4pt] \dfrac{1}{h}\left[(x_0-x)\,\mu + (x-x_0+h)\,y_0\right] & \text{for } x_0-h < x \le x_0\end{cases} \qquad (1.164)$$

and similarly for x > x_{n-1}. The standard deviation is

$$D[Y(x)\mid z=y] = \begin{cases}\sigma & \text{for } x \le x_0-h\\[4pt] \dfrac{\sigma}{h}\sqrt{(x_0-x)(x-x_0+2h)} & \text{for } x_0-h < x \le x_0\end{cases} \qquad (1.165)$$

and similarly for x > x_{n-1}. Outside the interval from x_0 − h to x_{n-1} + h the field is simply given as the homogeneous field. The variation among y_0, ..., y_{n-1} determines the values of μ and σ² as the well-known maximum likelihood estimates

$$\bar y = \frac{1}{n}\sum_{i=0}^{n-1} y_i \qquad \text{and} \qquad s^2 = \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\bar y\right)^2 \qquad (1.166)$$

corresponding to a Gaussian sample of independent outcomes y0, ..., yn−1.

Example. The maximum likelihood principle leads to a specific choice of the distribution parameters θ_1, ..., θ_q, that is, it leaves no room for doubt about what values to choose. Such doubt can be included in the stochastic interpolation procedure by assigning not just the distribution family to the interpolation problem but also a joint probability distribution to θ = (θ_1, ..., θ_q). The mathematical technique then becomes exactly the same as in the Bayesian statistical method.

For example, the parameter vector (μ, σ) in the previous example may be considered as an outcome of the random vector (M, Σ). Assuming a non-informative prior of (M, log Σ) ∈ ℝ² (degenerate uniform density over ℝ²), the special random field model of the previous example leads to the well-known Bayesian standard results in Gaussian statistics. In the interpolation case the predictive distribution of

$$\frac{Y(x) - \dfrac{1}{h}\left[(x_1-x)\,y_0 + (x-x_0)\,y_1\right]}{\dfrac{s}{h}\sqrt{2(x-x_0)(x_1-x)}\,\sqrt{1-\dfrac{1}{n}}} \qquad (1.167)$$

for x ∈]x0, x1[ is the t-distribution with n −1 degrees of freedom.


1.11 Computational practicability of the statistical-stochastic interpolation method

Both for the maximum likelihood principle and the Bayesian principle of stochastic interpolation on thebasis of a Gaussian field assumption the governing mathematical function is the joint Gaussian density

\[
f_{\mathbf Y}(y_0,\dots,y_{n-1};\,\mu,\sigma,\mathbf P) \propto \frac{1}{\sigma^n\sqrt{\det\mathbf P}}\exp\!\left[-\frac{1}{2\sigma^2}\,(\mathbf y-\mu\mathbf e)'\mathbf P^{-1}(\mathbf y-\mu\mathbf e)\right] \tag{1.168}
\]

written here for the case of a homogeneous field with mean µ, standard deviation σ, and a correlation function that defines the correlation matrix P of the random vector (Y(x_0), ..., Y(x_{n−1})). The elements of the correlation matrix P contain the unknown parameters of the correlation function. As a function of µ, σ and the correlation parameters, the right side of (1.168) defines the likelihood function. Generally neither P⁻¹ nor the determinant det(P) can be expressed explicitly in terms of the correlation parameters. To maximize the likelihood with respect to the parameters by an iteration procedure, both P⁻¹ and det(P) must therefore be evaluated several times.

The Bayesian principle of stochastic interpolation has particular relevance for structural reliability evaluations of structures with carrying capacities that depend on the strength variation throughout a material body. For example, a direct foundation on saturated clay may fail due to undrained failure extended over a part of the clay body. The evaluation of the failure probability corresponding to a specific rupture figure requires a specifically weighted integration of the undrained shear strength across the rupture figure. If the undrained shear strength is measured only at a finite number of points in the clay body, interpolation and/or extrapolation is required in order to obtain the value of the integrand at any relevant point of the body. Irrespective of whether the maximum likelihood principle or the Bayesian principle is applied in order to properly take statistical uncertainty into account, P⁻¹ must be computed iteratively for different values of the correlation parameters. For example, for the Bayesian principle of choice this is the case when the reliability is evaluated by a first or second order reliability method (FORM or SORM) in a space of basic variables that among its dimensions includes the correlation parameters. In the search for the most central limit state point by some gradient method, say, the inverse P⁻¹ has to be computed several times for different parameter values.

From this discussion it follows that computational practicability puts limits on the order of P, or it requires that the mathematical structure of P be such that P can be inverted analytically, or such that P⁻¹ can be obtained in terms of matrices of considerably lower order than the order of P. A possibility of breaking down to lower order is present in the case of what is here called a factorized correlation structure:

Let P and Q = {q_{ij}}_{i,j=1,...,m} be correlation matrices. Then the matrix defined as

\[
[\mathbf Q]\otimes\mathbf P \equiv \begin{bmatrix} q_{11}\mathbf P & q_{12}\mathbf P & \cdots & q_{1m}\mathbf P \\ q_{21}\mathbf P & q_{22}\mathbf P & \cdots & q_{2m}\mathbf P \\ \vdots & & \ddots & \vdots \\ q_{m1}\mathbf P & q_{m2}\mathbf P & \cdots & q_{mm}\mathbf P \end{bmatrix} \tag{1.169}
\]

is a correlation matrix. The proof is as follows: Let x′ = [x′_1 ... x′_m] be an arbitrary vector of dimension nm, and let A be an orthogonal matrix such that A′PA = Λ = diag(λ_1, ..., λ_n), where λ_1, ..., λ_n are the nonnegative eigenvalues of P. Then

\[
\mathbf x'([\mathbf Q]\otimes\mathbf P)\mathbf x = \sum_{i=1}^{m}\sum_{j=1}^{m} q_{ij}\,\mathbf x_i'\mathbf P\mathbf x_j = \sum_{i=1}^{m}\sum_{j=1}^{m} q_{ij}\,\mathbf y_i'\boldsymbol\Lambda\mathbf y_j = \lambda_1\sum_{i=1}^{m}\sum_{j=1}^{m} q_{ij}\,y_{i1}y_{j1} + \dots + \lambda_n\sum_{i=1}^{m}\sum_{j=1}^{m} q_{ij}\,y_{in}y_{jn} \tag{1.170}
\]


in which y_i = A′x_i, i = 1, ..., m. Since Q is a correlation matrix, the right side of (1.170) is nonnegative. Thus it follows that [Q]⊗P is a correlation matrix.

It follows directly by the multiplication test that if P and Q are both regular, then

([Q]⊗P)−1 = [Q−1]⊗P−1 (1.171)

By application of simple row operations first diagonalizing P to D in all places in (1.169) to obtain

\[
\begin{bmatrix} q_{11}\mathbf D & q_{12}\mathbf D & \cdots \\ q_{21}\mathbf D & q_{22}\mathbf D & \cdots \\ \vdots & & \ddots \end{bmatrix} \tag{1.172}
\]

and next diagonalizing (1.172) by exactly those row operations that diagonalize Q, it follows that

\[
\det([\mathbf Q]\otimes\mathbf P) = [\det\mathbf Q]^{\,n}\,[\det\mathbf P]^{\,m} \tag{1.173}
\]

(m = order of Q, n = order of P.)

Assume that the table of points corresponds to points in the plane arranged in a square mesh of equidistant points in both directions with mesh width L. Let there be k points in the first direction and l points in the second direction, in total giving kl points with given values of the otherwise unknown function. The field random variables Y_{11}, Y_{21}, ..., Y_{k1}, Y_{12}, Y_{22}, ..., Y_{k2}, ..., Y_{1l}, Y_{2l}, ..., Y_{kl} corresponding to the points (i, j) of the mesh are collected in a vector of dimension kl in the order as indicated. Then it is easily shown that the correlation matrix of order kl for an arbitrary choice of L has the factorized structure as in (1.169) if and only if the random field is correlation homogeneous and the correlation function of the field can be written as the product of a correlation function solely of the coordinate along the first mesh direction and a correlation function solely of the coordinate along the second direction. In the class of such product correlation functions the correlation functions that correspond to isotropic fields are uniquely given as

\[
\exp[-\beta r^2] \tag{1.174}
\]

where r is the distance between the two points at which the random variables are considered and β is a free positive parameter. Thus the assumption of isotropy of the interpolation field and the requirement of computational practicability for large kl make it almost a “must” to adopt the correlation function defined by (1.174). Writing (1.174) for r = L as the correlation coefficient κ, the correlation matrix of the vector of Y-values becomes [S₁]⊗S₂ where

\[
\mathbf S_i = \begin{bmatrix} 1 & \kappa & \kappa^4 & \cdots & \kappa^{(\nu-1)^2} \\ \kappa & 1 & \kappa & \cdots & \kappa^{(\nu-2)^2} \\ \vdots & & \ddots & & \vdots \end{bmatrix} \tag{1.175}
\]

with ν = k for i = 1 and ν = l for i = 2. These considerations generalize directly to 3 dimensions for a spatial equidistant mesh of klm points. Then the correlation matrix becomes [[S₁]⊗S₂]⊗S₃ with each of the matrices S₁, S₂, S₃ of the form (1.175) in case of spatial isotropy.
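The payoff of the factorized structure, namely the identities (1.171) and (1.173), is easy to check numerically; a small sketch in which the matrix sizes and entries are illustrative choices (the text's [Q]⊗P is exactly numpy's kron(Q, P)):

```python
import numpy as np

# Two small correlation matrices: P of order n = 3, Q of order m = 2.
P = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])
Q = np.array([[1.0, 0.3],
              [0.3, 1.0]])

# The block matrix [Q] (x) P of (1.169) is the Kronecker product kron(Q, P).
QP = np.kron(Q, P)

# (1.171): ([Q] (x) P)^{-1} = [Q^{-1}] (x) [P^{-1}]
lhs = np.linalg.inv(QP)
rhs = np.kron(np.linalg.inv(Q), np.linalg.inv(P))
print(np.allclose(lhs, rhs))  # True

# (1.173): det([Q] (x) P) = det(Q)^n * det(P)^m
n, m = P.shape[0], Q.shape[0]
print(np.isclose(np.linalg.det(QP),
                 np.linalg.det(Q) ** n * np.linalg.det(P) ** m))  # True
```

Thus only the low-order factors ever need to be inverted, which is the point of the factorization.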

For a soil body it may be reasonable to have isotropy in any horizontal plane but not necessarily in the 3-dimensional space. Then S₃ may be any other correlation matrix obtained from the correlation function in the vertical direction. When soil strength measurements are made, for example by the so-called CPT method (Cone Penetration Test), the distance h between measurement points is much smaller in the vertical direction than the mesh width L in the horizontal direction. Then S₃ can be of computationally impracticably high order. However, there is at least one case of such a matrix S corresponding to a homogeneous field for which S⁻¹ is known explicitly. This case is

\[
\mathbf S = \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^m \\ \rho & 1 & \rho & \cdots & \rho^{m-1} \\ \vdots & & \ddots & & \vdots \end{bmatrix}, \qquad \mathbf S^{-1} = \frac{1}{1-\rho^2}\begin{bmatrix} 1 & -\rho & 0 & \cdots & 0 \\ -\rho & 1+\rho^2 & -\rho & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & -\rho & 1 \end{bmatrix} \tag{1.176}
\]

which corresponds to a homogeneous Markov field on the line with ρ = exp[−αh] where α is a free posi-tive parameter.
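The explicit tridiagonal inverse in (1.176) is easy to verify numerically; a sketch with illustrative values of ρ and m (the function names are ours, not the text's):

```python
import numpy as np

def markov_corr(m, rho):
    """Correlation matrix S of (1.176): S[i, j] = rho^{|i - j|}, order m + 1."""
    idx = np.arange(m + 1)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def markov_corr_inv(m, rho):
    """Explicit tridiagonal inverse of S given in (1.176)."""
    Sinv = np.zeros((m + 1, m + 1))
    np.fill_diagonal(Sinv, 1.0 + rho ** 2)
    Sinv[0, 0] = Sinv[m, m] = 1.0            # corner elements are 1, not 1 + rho^2
    off = np.arange(m)
    Sinv[off, off + 1] = Sinv[off + 1, off] = -rho
    return Sinv / (1.0 - rho ** 2)

rho, m = 0.7, 5
S = markov_corr(m, rho)
print(np.allclose(markov_corr_inv(m, rho), np.linalg.inv(S)))  # True
```

The explicit form means that for the vertical Markov direction no numerical inversion at all is needed, whatever the order of S₃.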

This 3-dimensional homogeneous Gaussian field interpolation model with (1.174) defining the horizontal correlation properties and with Markov properties in the vertical direction has been used in a reliability study of the anchor blocks for the suspension bridge across the eastern channel of Storebælt in Denmark [37]. The size of the interpolation problem in that investigation was k = 6, l = 10, m = 150, giving a table of 9000 points. Without the unique structuring of the random field correlation given here, the likelihood analysis would have been totally impracticable.

1.12 Field modelling on the basis of measured noisy data

In engineering practice measured values of fields are often obtained by use of less than perfect measuring techniques, either because no better technique is available or for economic reasons. Often the simplest probabilistic measuring error model is applicable for the purpose of including the influence of this error type in the interpolation problem. Referring to Section 2.11, the table of measured values (x_0, y_0), ..., (x_{n−1}, y_{n−1}) should rather be (x_0, y_0+z_0), ..., (x_{n−1}, y_{n−1}+z_{n−1}), where z_0, ..., z_{n−1} are measuring errors. On the basis of an analysis of the measuring method it is often reasonable to assume that the error values z_0, ..., z_{n−1}, independently of y_0, ..., y_{n−1}, can be considered as n mutually independent realizations of a random variable Z of zero mean, or, more precisely, that the vector (z_0, ..., z_{n−1}) can be considered as a realization of a random vector (Z_0, ..., Z_{n−1}), where Z_0, ..., Z_{n−1} are n mutually independent and identically distributed random variables of zero mean and variance ε². Clearly the deterministic interpolation method is not suited as a tool for solving this interpolation problem. However, the field method is directly applicable by assuming that y_0, ..., y_{n−1} are the values at x_0, ..., x_{n−1} of the field Y(x) with the pragmatically chosen covariance function Cov[Y(ξ),Y(η)] = c(ξ,η). For simplicity assuming that both the field Y(x) and the error vector (Z_0, ..., Z_{n−1}) are Gaussian and, as indicated above, that the field and the error vector are mutually independent, it follows that the maximum likelihood principle operates on the Gaussian density (likelihood function) with the covariance matrix

C+ε2I (1.177)

where C = {c(x_i, x_j)} and I is the unit matrix.

Unfortunately the necessary computational efforts may increase considerably hereby, in particular for large tables. If the covariance matrix C can be inverted explicitly as exemplified in Section 2.12, it may be convenient to apply the quotient series expansion

\[
(\mathbf C+\varepsilon^2\mathbf I)^{-1} = \mathbf C^{-1}\left[\mathbf I+\varepsilon^2\mathbf C^{-1}\right]^{-1} = \mathbf C^{-1}\left[\mathbf I-\varepsilon^2\mathbf C^{-1}+\varepsilon^4\mathbf C^{-2}-\varepsilon^6\mathbf C^{-3}+\dots\right] \tag{1.178}
\]


cut off at some power of ε², depending on the size of ε² as compared to the largest diagonal element of C. It can be shown that the series (1.178) is convergent if ε² < 1/∥C⁻¹∥, where the matrix norm is defined as the maximal value of the row sums in the corresponding matrix of absolute values.
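A minimal sketch of the truncated expansion (1.178), with an illustrative 2×2 covariance matrix; the convergence check uses exactly the row-sum norm named above:

```python
import numpy as np

def noisy_inverse(C_inv, eps2, terms=6):
    """Truncated quotient series (1.178) for (C + eps^2 I)^{-1}, given C^{-1}."""
    n = C_inv.shape[0]
    result = np.zeros_like(C_inv)
    power = np.eye(n)                       # (-eps^2 C^{-1})^0
    for _ in range(terms):
        result += power
        power = power @ (-eps2 * C_inv)     # next power of the series
    return C_inv @ result                   # C^{-1}[I - eps^2 C^{-1} + eps^4 C^{-2} - ...]

C = np.array([[1.0, 0.4],
              [0.4, 1.0]])
C_inv = np.linalg.inv(C)
eps2 = 0.05

# Convergence condition: eps^2 < 1 / ||C^{-1}||, row-sum norm of absolute values.
assert eps2 < 1.0 / np.linalg.norm(C_inv, np.inf)

approx = noisy_inverse(C_inv, eps2, terms=8)
exact = np.linalg.inv(C + eps2 * np.eye(2))
print(np.allclose(approx, exact, atol=1e-8))  # True
```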

In case a non-zero value of ε² is obtained, the conditional mean (1.153) used as interpolation expression is replaced by the conditional mean of Y(x) given Y(x_i)+Z_i for i = 0, ..., n−1:

\[
E[Y(x)\mid Y(x_i)+Z_i = y_i+z_i,\ i=0,\dots,n-1] = \mu(x) + [c(x,x_0)\ \dots\ c(x,x_{n-1})]\,(\mathbf C+\varepsilon^2\mathbf I)^{-1}\begin{bmatrix} y_0+z_0-\mu(x_0) \\ \vdots \\ y_{n-1}+z_{n-1}-\mu(x_{n-1}) \end{bmatrix} \tag{1.179}
\]

with the corresponding residual covariance function given by (1.154) after replacement of {c(x_i, x_j)} by C+ε²I. It follows by substituting the expansion (1.178) into (1.154) that

\[
\left\{\operatorname{Cov}[Y(x_i),Y(x_j)\mid Y(x_i)+Z_i=y_i+z_i,\ i=0,\dots,n-1]\right\}_{i,j=0,\dots,n-1} = \varepsilon^2\mathbf I-\varepsilon^4\mathbf C^{-1}+\varepsilon^6\mathbf C^{-2}-\dots \tag{1.180}
\]

Thus there is not only a residual variance at the measuring points caused by the random measuring error but also a correlation between the residuals at the different measuring points. The correlation coefficients are of the same order of size as the elements of the matrix ε²C⁻¹.
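In modern terminology, the interpolation (1.179) is Gaussian-process regression with a noise term; a minimal numerical sketch with an assumed covariance function c(ξ, η) = exp(−(ξ−η)²), zero mean µ(x) = 0, and purely illustrative data:

```python
import numpy as np

def interpolate(x, x_obs, y_obs, eps2):
    """Conditional mean (1.179) with mu = 0 and c(xi, xj) = exp(-(xi - xj)^2)."""
    C = np.exp(-np.subtract.outer(x_obs, x_obs) ** 2)    # covariance matrix C
    c_x = np.exp(-np.subtract.outer(x, x_obs) ** 2)      # row of c(x, x_i) per x
    K = C + eps2 * np.eye(len(x_obs))                    # C + eps^2 I of (1.177)
    return c_x @ np.linalg.solve(K, y_obs)               # E[Y(x) | noisy data]

x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([0.1, 0.9, 0.2])
x = np.linspace(0.0, 2.0, 5)
print(interpolate(x, x_obs, y_obs, eps2=0.01).shape)  # (5,)
```

Note that with ε² > 0 the interpolant no longer passes exactly through the measured values, in agreement with the residual variance ε² at the measuring points stated by (1.180).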

The conceptually simple measuring error model explained above has recently been successfully used for the interpretation of and interpolation between measured pressures at a series of points on the wall of a circular concrete silo for grain storage [36]. However, this model is not applicable in all situations. In particular, if the field Y(x) has such properties that the random variables Y(x_0), ..., Y(x_{n−1}) are only weakly dependent or mutually independent, then the maximum likelihood principle will not be able to separate the variability of the field Y(x) from the variability induced by the measuring uncertainty. In fact, the condition for the success of the simple model is that a considerable dependency is present between the field values Y(x_0), ..., Y(x_{n−1}). The method is also restricted by the assumption of independent measuring errors.

Some measuring situations are such that the only way to deal rationally with the measuring uncertainty is by use of a pairing method. The simplest form of such a method is explained in the following example [38] that solely concerns random variables. An adapted application to a field measuring problem is treated after the example.

Example A sample is drawn from some unknown population Ω (object population) and each element of the sample is characterized by a measured value. The measurement procedure is assumed to be less than perfect: on each measured value it introduces an error drawn from some unknown population M₁. Without knowing anything about population M₁ it is clearly not possible, on the basis of the obtained sample of values, to infer anything about the properties of population Ω. However, the situation is different if each element of the sample from Ω is also characterized by a measured value obtained by use of another independent measuring method with error population M₂. It is shown in the following that if both measuring methods, besides being independent, are such that the two mean errors for a given object are independent of the error-free value that should be assigned to the object, then it is possible to estimate the variances of each of the value populations corresponding to Ω, M₁, M₂ on the basis of the sample of pairs of measured values. In order to obtain estimates of the mean values it is necessary to assume that at least one of the measuring methods gives unbiased measurements in the sense that the mean error is zero.


Let a random variable Z be defined on the object population Ω and let it be measured by the two independent measuring methods giving the values X₁ and X₂, respectively. Then the measuring errors are Y₁ and Y₂ and

Z = X1 +Y1 = X2 +Y2 (1.181)

Obviously Cov[Y₁, Y₂ | Z] = 0, implying that Cov[X₁, X₂ | Z] = Cov[Z−Y₁, Z−Y₂ | Z] = 0. Under the assumption that the conditional means E[Y₁ | Z] and E[Y₂ | Z] do not depend on Z, it then follows from the standard formula (total representation theorem [39])

\[
\operatorname{Cov}[X_1,X_2] = E\left[\operatorname{Cov}[X_1,X_2\mid Z]\right] + \operatorname{Cov}\left[E[X_1\mid Z],E[X_2\mid Z]\right] = E[0] + \operatorname{Cov}[Z,Z] \tag{1.182}
\]

that

Cov[X1, X2] = Var[Z ] (1.183)

Thus the variance of the “true” quantity Z can be estimated by estimating the covariance between the results of the two measuring methods. Since

\[
\operatorname{Cov}[X_1,Y_1] = E\left[\operatorname{Cov}[X_1,Y_1\mid Z]\right] = E\left[\operatorname{Cov}[X_1,Z-X_1\mid Z]\right] = E\left[-\operatorname{Var}[X_1\mid Z]\right] = -\operatorname{Var}[X_1]+\operatorname{Var}\left[E[X_1\mid Z]\right] = -\operatorname{Var}[X_1]+\operatorname{Var}\left[E[Z-Y_1\mid Z]\right] = -\operatorname{Var}[X_1]+\operatorname{Var}[Z] \tag{1.184}
\]

it follows from Var[Z ] = Var[X1]+Var[Y1]+2Cov[X1,Y1] that

Var[Z ] = Var[X1]−Var[Y1] (1.185)

Thus the variance of the measuring error Y1 by use of (1.183) is obtained as

Var[Y1] = Var[X1]−Cov[X1, X2] (1.186)

By symmetry, Var[Y2] is obtained by interchanging X1 and X2.
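A Monte Carlo illustration of the pairing identities (1.183) and (1.186); all distributions below are illustrative choices, not from the text (here the errors are fully independent of Z, which is stronger than the mean-independence assumed above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Z = rng.normal(10.0, 2.0, n)          # "true" values, Var[Z] = 4
Y1 = rng.normal(0.0, 0.5, n)          # errors of method 1, Var[Y1] = 0.25
Y2 = rng.normal(0.0, 1.0, n)          # errors of method 2, Var[Y2] = 1.0
X1, X2 = Z - Y1, Z - Y2               # measured values, so Z = X1 + Y1 = X2 + Y2

var_Z_est = np.cov(X1, X2)[0, 1]      # (1.183): Cov[X1, X2] = Var[Z]  (about 4)
var_Y1_est = np.var(X1) - var_Z_est   # (1.186): Var[Y1] = Var[X1] - Cov[X1, X2]
print(var_Z_est, var_Y1_est)          # approximately 4 and 0.25
```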

Under certain conditions the principles of the pairing method explained in the example can be adapted to destructive testing measurements. Assume that some material property varies in space as a random field. By the process of measuring the property at a point, the material is changed irreversibly or even destroyed within a certain neighbourhood of the point. Therefore the property cannot be measured again by some physical measuring device applied at the same point. However, if the first measurement method is applied for one set of points and the second measurement method is applied for another set of points, these two sets of measurements can in a certain sense be paired by use of a stochastic interpolation procedure based on the random field model of the property variation. At each point of measurement by the first method a measured value is paired with a probability distribution derived by stochastic interpolation between the measured values obtained by the second method. By suitable assumptions about the joint probabilistic structure of the measurement error fields and the “true” property field it is then possible to pull out the “true” field from the measured field. These assumptions are generalizations of the simple independence type assumptions made in the elementary pairing method described in the example.


In the following the objects are values of a realization of a field of undrained shear strengths for a saturated clay deposit [38]. The two measuring methods are the CPT method (Cone Penetration Test) and the vane test method, respectively, applied at two different sets of points of which one, S₁, is more dense than the other, S₂. The pairing is made by stochastic interpolation to the points of S₂ between the measured values at the points of S₁. Instead of being a number, at least one of the elements in the pair is a probability distribution obtained by the interpolation. Also there will be stochastic dependence between the different pairs.

Let

Z = X+Y (1.187)

be a vector of logarithms of cone tip resistances obtained by ideally perfect CPT measurements. The vector X corresponds to the imperfectly measured CPT values while Y contains the measuring error corrections. The CPT values refer to the points at which the vane tests are made. The CPT values are therefore not obtained directly but are results of suitable interpolations between physically measured CPT values at other points. Thus the interpolation results are considered as imperfect measurements that can be paired with the physical measurements obtained by the vane test. The logarithms of the vane test measurements are contained in the vector X_v and the corresponding measuring error corrections in the vector Y_v.

The logarithmic shear strength field is assumed to be homogeneous. It is further necessary to make some reasonable assumptions about the measuring error corrections Y and Y_v. Striving for simplicity of the analysis, the following assumptions are made:

1. The conditional expectations E [Y | Z] and E [Yv | Z] are independent of Z.

2. The linear regression of Z_i on X depends solely on X_i with the same regression coefficient for all i = 1, ..., n, where Z_1, ..., Z_n and X_1, ..., X_n are the elements of Z and X, respectively.

3. The triple (Z, Y, Y_v) is jointly Gaussian with no correlation between Y and Y_v and with the elements of Y_v being uncorrelated with common standard deviation γ.

4. Z can be expressed as

Z = Xv +Yv + ce (1.188)

where c is some constant and e′ = [1...1].

The last assumption is based on the fact that elasto-plastic mechanics predicts that there is almost proportionality between the undrained shear strength as measured by the vane test and the cone tip resistance in the CPT test when applied to an ideal saturated clay.

Since Cov[Z, Y′ | Z] = 0, it follows by use of assumption 1 in the total representation theorem (1.182) that

\[
\operatorname{Cov}[\mathbf Z,\mathbf Y'] = \operatorname{Cov}\left[E[\mathbf Z\mid\mathbf Z],E[\mathbf Y\mid\mathbf Z]'\right] = \mathbf 0 \tag{1.189}
\]

implying that

Cov[X,Y′] =−Cov[Y,Y′] (1.190)


Substituting this into

Cov[Z,Z′] = Cov[X+Y, (X+Y)′] = Cov[X,X′]+Cov[Y,X′]+Cov[X,Y′]+Cov[Y,Y′] (1.191)

gives

Cov[Z,Z′] = Cov[X,X′]−Cov[Y,Y′] (1.192)

By using assumption 2 in the general expression for the linear regression

E [Z | X] = E [Z]+Cov[Z,X′]Cov[X,X′]−1(X−E [X]) (1.193)

it follows that

Cov[Z,X′]Cov[X,X′]−1 = aI (1.194)

for some constant a. Substituting Z = X+Y and using (1.190) then give the equation

I−Cov[Y,Y′]Cov[X,X′]−1 = aI (1.195)

from which

Cov[Y,Y′] = (1−a)Cov[X,X′] (1.196)

Since the diagonals on both sides must be non-negative, it follows that a ≤ 1 and from (1.192) that a ≥ 0. Setting a = (δ/σ)², where σ² is the common variance of the elements in X, we see from (1.192) and (1.196) that

\[
\operatorname{Cov}[\mathbf Z,\mathbf Z'] = \delta^2\mathbf P_X, \qquad \delta^2 \le \sigma^2 \tag{1.197}
\]

where P_X is the correlation matrix of X.

Assumption 3 applied to (1.188) shows that X_v is Gaussian with mean and covariance matrix

\[
E[\mathbf X_v] = E[\mathbf Z] - E[\mathbf Y_v] - c\,\mathbf e \tag{1.198}
\]
\[
\operatorname{Cov}[\mathbf X_v,\mathbf X_v'] = \operatorname{Cov}[\mathbf Z,\mathbf Z'] + \operatorname{Cov}[\mathbf Y_v,\mathbf Y_v'] \tag{1.199}
\]

Now let ζ be a Gaussian vector which together with X, Y, X_v, Y_v is jointly Gaussian. This vector ζ is a vector of measurements that contains information about X given through the conditional means and covariances E[X | ζ] and Cov[X, X′ | ζ]. Since the two measuring methods are independent, the information contained in ζ and obtained solely by the CPT method carries no information about the outcome of the error correction vector Y_v related to the vane test. Therefore all covariances between elements of Y_v and elements of ζ are zero. From the joint Gaussianity and Cov[Z, Y_v′] = 0, cf. (1.189), it then follows by use of the linear regression method that

\[
\operatorname{Cov}[\mathbf Z,\mathbf Y_v'\mid\boldsymbol\zeta] = \mathbf 0 \tag{1.200}
\]

Moreover, it follows that

\[
\operatorname{Cov}[\mathbf Y_v,\mathbf Y_v'\mid\boldsymbol\zeta] = \operatorname{Cov}[\mathbf Y_v,\mathbf Y_v'] \tag{1.201}
\]


Using the information contained in ζ, the mean vector E[Z] in (1.198) and the covariance matrix Cov[Z, Z′] in (1.199) should therefore simply be replaced by the corresponding conditional quantities E[Z | ζ] and Cov[Z, Z′ | ζ], respectively. To determine these conditional means and covariances we first determine E[Z | X] and Cov[Z, Z′ | X] from the linear regression of Z on X. According to assumption 2 we have that

E [Z | X] =λ2X+µ(1−λ2)e, λ= δ/σ (1.202)

and the residual covariance matrix

\[
\operatorname{Cov}[\mathbf Z,\mathbf Z'\mid\mathbf X] = \operatorname{Cov}[\mathbf Z,\mathbf Z'] - \operatorname{Cov}[\mathbf Z,\mathbf X']\operatorname{Cov}[\mathbf X,\mathbf X']^{-1}\operatorname{Cov}[\mathbf X,\mathbf Z'] = \operatorname{Cov}[\mathbf Z,\mathbf Z'] - \lambda^2\operatorname{Cov}[\mathbf Z,\mathbf Z'] = \delta^2(1-\lambda^2)\mathbf P_X \tag{1.203}
\]

where µ is the common mean of the elements of X (setting E[Y] = 0). The total representation theorem next gives the results

E [Z | ζ] =λ2E [X | ζ]+µ(1−λ2)e (1.204)

Cov[Z,Z′ | ζ] =λ2(1−λ2)Cov[X,X′]+λ4Cov[X,X′ | ζ] (1.205)

Thus (1.198) and (1.199) are replaced by

E [Xv ] = E [Z | ζ]− c e (1.206)

Cov[Xv ,X′v ] = Cov[Z,Z′ | ζ]+γ2I (1.207)

setting E[Y_v] = 0. (If E[Y] and E[Y_v] are not zero, their contributions may be included in the constant c in (1.188).)

All the unknown parameters are obtained as follows. First the field that represents the measurements X is modelled as a homogeneous Gaussian field using a specific pragmatically chosen covariance function family. The parameters µ, σ, as well as the correlation parameters that determine the correlation matrix P_X, are obtained as explained in Section 2.11 by the maximum likelihood principle or the Bayesian method, if necessary. The likelihood function for the remaining parameters c, γ, and λ can then be formulated by use of (1.204) - (1.207) and the Gaussianity of X_v.

Let xv be the observation of Xv and write

ξ(c,λ) = xv −E [Xv ] (1.208)

\[
\mathbf R(\gamma,\lambda) = \operatorname{Cov}[\mathbf X_v,\mathbf X_v']^{-1} \tag{1.209}
\]

Then the Gaussian density of Xv computed at xv is

\[
f_{\mathbf X_v}(\mathbf x_v) \propto \sqrt{\det[\mathbf R(\gamma,\lambda)]}\,\exp\!\left[-\tfrac{1}{2}\,\boldsymbol\xi(c,\lambda)'\,\mathbf R(\gamma,\lambda)\,\boldsymbol\xi(c,\lambda)\right], \qquad c\in\mathbb R,\ \gamma\in\mathbb R_+,\ \lambda\in[0,1] \tag{1.210}
\]

(∝ means “proportional to”). The right side of (1.210) defines the likelihood function L(c, γ, λ; x_v) of c, γ, and λ. Let (C, Γ, Λ) be the set of Bayesian random variables corresponding to (c, γ, λ), and adopt the non-informative prior for which (C, log Γ, log(Γ²+σ²Λ²)) has a diffuse prior over R × {(log γ, log(γ²+σ²λ²)) | γ ∈ R₊, 0 ≤ λ ≤ 1}. Then the prior density of (C, Γ, Λ) is proportional to λ/[γ(γ²+σ²λ²)] and we get the posterior density f_{C,Γ,Λ}(c, γ, λ | x_v) by multiplying this prior with the likelihood function.

Before the likelihood function (1.210) can be used to infer about the parameters c, γ, and λ there is a further step to be taken, because the set of vane test undrained shear strength observations in practice often will be imperfect in the sense that some of the test results are reported not by their values but by the information that the values are larger than the measuring capacity of the applied vane (censored data). This is expressed by saying that each of the random variables representing the vane test measurements is clipped (or censored) at a given value x_{0i}, i = 1, ..., n. Thus the sample is given as x₁ = x₀₁, x₂ = x₀₂, ..., x_r = x₀ᵣ, x_{r+1} < x_{0,r+1}, ..., x_n < x_{0n}, where the sample has been ordered such that the first r vane test shear strengths are larger than the respective measuring capacities while the remaining n − r tests are “well-behaved”. For this clipped sampling case the likelihood function is obtained by integrating the joint density of X_v in (1.210) with respect to x_i from x_{0i} to ∞ for i = 1, ..., r. Thus the likelihood function becomes

\[
L(c,\gamma,\lambda;\mathbf x_v) \propto \sqrt{\det[\mathbf R(\gamma,\lambda)]}\int_{x_{01}}^{\infty}\!\!\cdots\int_{x_{0r}}^{\infty}\exp\!\left[-\tfrac{1}{2}\,\boldsymbol\xi(c,\lambda;\mathbf x)'\,\mathbf R(\gamma,\lambda)\,\boldsymbol\xi(c,\lambda;\mathbf x)\right]dx_1\cdots dx_r \tag{1.211}
\]

The numerical studies of the posterior density of (C, Γ, Λ) obtained from (1.211) after multiplication by the prior density must in practice be based on Monte Carlo integration except if r = 1 or 2.

For given values of all the parameters, the formulas (1.204) and (1.205) define the interpolated “true” field Z(t), “cleaned” for measuring uncertainty, as a Gaussian field with conditional mean value function

E [Z (t) | ζ] =λ2E [X (t) | ζ]+µ(1−λ2) (1.212)

and conditional covariance function

Cov[Z (t1), Z (t2) | ζ] =λ2(1−λ2)Cov[X (t1), X (t2)]+λ4Cov[X (t1), X (t2) | ζ] (1.213)

For the purpose of making reliability analysis with respect to plastic collapse, this interpolation technique has actually been applied to obtain a random field model for the undrained shear strengths of the clay under the Anchor Block West of the Great Belt Bridge in Denmark, conditioned on given CPT and vane test results measured in situ [37]. The field model for X(t) was chosen as mentioned at the end of the previous section.

1.13 Discretization defined by linear regression on derivatives at a single point

In certain problems, in particular when t is time so that X(t) is a random process, it is useful to make a discretization in terms of X(t) and its first n derivatives dX(t)/dt, ..., dⁿX(t)/dtⁿ at a given point t = t₀. It is sufficient to let t₀ = 0. More generally we will consider the linear regression of a random vector process X(t) = (X₁(t), ..., X_m(t)) on X₁(0), X_1^{(1)}(0), ..., X_1^{(n)}(0), where X_1^{(j)}(0) = dʲX₁(0)/dtʲ. For reasons of continuity we have

\[
E[\mathbf X(t)\mid X_1(0),\dots,X_1^{(n)}(0)] = \lim_{h\to 0} E[\mathbf X(t)\mid X_1(0),X_1(h),\dots,X_1(nh)] \tag{1.214}
\]

because there is a one-to-one linear mapping between the vector of difference quotients Q up to order n corresponding to the time step h and the vector Z = (X₁(0), X₁(h), ..., X₁(nh)); that is, there is a regular difference quotient operator matrix ∆ so that Q = ∆Z → [D X₁(s)]_{s=0} as h → 0, with D = (1, ∂/∂s, ..., ∂ⁿ/∂sⁿ). It follows from (1.101) that

\[
E[X_k(t)\mid\mathbf Z] = E[X_k(t)] + \operatorname{Cov}[X_k(t),\mathbf Z']\operatorname{Cov}[\mathbf Z,\mathbf Z']^{-1}(\mathbf Z-E[\mathbf Z]) = E[X_k(t)] + \operatorname{Cov}[X_k(t),\mathbf Z']\boldsymbol\Delta'\left(\boldsymbol\Delta\operatorname{Cov}[\mathbf Z,\mathbf Z']\boldsymbol\Delta'\right)^{-1}\left(\boldsymbol\Delta\mathbf Z-E[\boldsymbol\Delta\mathbf Z]\right) \tag{1.215}
\]


so that as h → 0:

\[
E[X_k(t)\mid X_1(0),\dots,X_1^{(n)}(0)] = E[X_k(t)] + \left\{\frac{\partial^i}{\partial s^i}\operatorname{Cov}[X_k(t),X_1(s)]\right\}'_{s=0,\ i=0,\dots,n}\left\{\frac{\partial^{i+j}}{\partial u^i\,\partial v^j}\operatorname{Cov}[X_1(u),X_1(v)]\right\}^{-1}_{u=v=0,\ i,j=0,\dots,n}\left\{X_1^{(i)}(0)-E[X_1^{(i)}(0)]\right\}_{i=0,\dots,n} \tag{1.216}
\]

It is seen that the linear regression of X(t) on X₁(0), ..., X_1^{(n)}(0) is determined by the mixed partial derivatives of the covariance function of X₁(t) at (s, t) = (0, 0) and the partial derivatives of Cov[X(t), X₁(s)] at s = 0.

The vector process X(t ) is identical to the process

\[
E[\mathbf X(t)\mid X_1(0),\dots,X_1^{(n)}(0)] + \mathbf R(t) \tag{1.217}
\]

where the linear regression term for X₁(t) dominates in the vicinity of t = 0 while the residual vector process R(t) approaches X(t) − E[X(t)] as the distance from t = 0 increases, given that the covariance functions of X(t) all approach zero as |t| → ∞.

It follows from (1.217) that R₁(0) and the first n derivatives R_1^{(1)}(0), ..., R_1^{(n)}(0) of the first component of the residual vector process R(t) are all zero. Therefore the linear regression of X_k(t) on X₁(0), ..., X_1^{(n)}(0) gives a good approximation for k = 1. Whether approximations that are sufficiently good for the considered applications are obtained also for k ≠ 1 depends on the cross-correlations. Clearly, if the cross-correlations are small or zero, the information contained in given values of X₁(0), ..., X_1^{(n)}(0) has small or no predictive effect on X_k(t) for k ≠ 1.

The case where X(t) is a stationary vector process has important applications treated in later sections. For a stationary X(t) the covariance function matrix depends solely on the time difference s − t:

\[
\operatorname{Cov}[\mathbf X(t),\mathbf X(s)'] = \left\{c_{kl}(s-t)\right\}_{k,l=1,\dots,m} \tag{1.218}
\]

The linear regression (1.216) consequently specializes to

\[
E[X_k(t)\mid X_1(0),\dots,X_1^{(n)}(0)] = \left\{(-1)^i c_{1k}^{(i)}(t)\right\}'_{i=0,\dots,n}\left\{(-1)^j c_{11}^{(i+j)}(0)\right\}^{-1}_{i,j=0,\dots,n}\left\{X_1^{(i)}(0)\right\}_{i=0,\dots,n} \tag{1.219}
\]

in which without loss of generality X(t) is assumed to have the zero vector as expectation. Since c_{k1}(−t) = c_{1k}(t) we have c_{1k}^{(i)}(t) = (−1)^i c_{k1}^{(i)}(−t). This has been used to obtain the first row matrix in (1.219). Moreover, since c₁₁(t) = c₁₁(−t) we have c_{11}^{(i+j)}(0) = 0 for i + j odd.

For n = 0, 1, 2 it is seen that (1.219) reads (using the standard dot notation for the derivatives of X₁(t)):

\[
E[X_k(t)\mid X_1(0)] = c_{1k}(t)\lambda_0^{-1}X_1(0) \tag{1.220}
\]
\[
E[X_k(t)\mid X_1(0),\dot X_1(0)] = c_{1k}(t)\lambda_0^{-1}X_1(0) - c'_{1k}(t)\lambda_2^{-1}\dot X_1(0) \tag{1.221}
\]
\[
E[X_k(t)\mid X_1(0),\dot X_1(0),\ddot X_1(0)] = \left[c_{1k}(t)\ \ {-c'_{1k}(t)}\ \ c''_{1k}(t)\right]\begin{bmatrix} \lambda_0 & 0 & -\lambda_2 \\ 0 & \lambda_2 & 0 \\ -\lambda_2 & 0 & \lambda_4 \end{bmatrix}^{-1}\begin{bmatrix} X_1(0) \\ \dot X_1(0) \\ \ddot X_1(0) \end{bmatrix}
= (\lambda_0\lambda_4-\lambda_2^2)^{-1}\left\{c_{1k}(t)\left[\lambda_4 X_1(0)+\lambda_2\ddot X_1(0)\right] + c''_{1k}(t)\left[\lambda_2 X_1(0)+\lambda_0\ddot X_1(0)\right]\right\} - \lambda_2^{-1}c'_{1k}(t)\dot X_1(0) \tag{1.222}
\]


where λ₀ = c₁₁(0), λ₂ = −c″₁₁(0), λ₄ = c^{(4)}₁₁(0) are the spectral moments

\[
\lambda_n = \int_0^\infty \omega^n\,dG(\omega) \tag{1.223}
\]

for n = 0, 2, 4, corresponding to the spectral representation

\[
c_{11}(t) = \int_0^\infty \cos\omega t\,dG(\omega) \tag{1.224}
\]

of the covariance function of the process X₁(t). It is implicitly assumed above that the considered derivatives exist. The spectral moment λ_{2n} is the variance of the nth derivative process X_1^{(n)}(t) (which follows in the same way as (1.216) follows from (1.215), noting that ∆Cov[Z, Z′]∆′ = Cov[∆Z, (∆Z)′] → {Cov[dⁱX₁(s)/dsⁱ, dʲX₁(t)/dtʲ]}_{(s,t)=(0,0)} as h → 0).

Even though λ₄ = ∞ implies that the second derivative process Ẍ₁(t) does not exist in an elementary sense (it may exist, though, e.g. in the sense of white noise), it is seen that

\[
E[X_k(t)\mid X_1(0),\dot X_1(0),\ddot X_1(0)] \to E[X_k(t)\mid X_1(0),\dot X_1(0)] \tag{1.225}
\]

as λ₄ → ∞ (λ₀, λ₂ bounded). Similarly

\[
E[X_k(t)\mid X_1(0),\dot X_1(0)] \to E[X_k(t)\mid X_1(0)] \tag{1.226}
\]

as λ₂ → ∞ (λ₀ bounded).

The (k, l)-element of the residual covariance function matrix is obtained as the covariance Cov[X_k(t), X_l(s)] minus the covariance between the linear regressions of X_k(t) and X_l(s) on X₁(0), ..., X_1^{(n)}(0), which in turn is obtained by replacing X_1^{(i)}(0) in (1.219) by the column vector {(−1)^j c_{1l}^{(j)}(s)}_{j=0,...,n}.

For n = 0, 1, 2 we have the residual covariance expressions:

\[
n=0:\quad c_{kl}(s-t) - c_{1k}(t)c_{1l}(s)\lambda_0^{-1} \tag{1.227}
\]
\[
n=1:\quad c_{kl}(s-t) - c_{1k}(t)c_{1l}(s)\lambda_0^{-1} - c'_{1k}(t)c'_{1l}(s)\lambda_2^{-1} \tag{1.228}
\]
\[
n=2:\quad c_{kl}(s-t) - (\lambda_0\lambda_4-\lambda_2^2)^{-1}\left\{\lambda_2\left[c_{1k}(t)c''_{1l}(s)+c''_{1k}(t)c_{1l}(s)\right] + \lambda_4 c_{1k}(t)c_{1l}(s) + \lambda_0 c''_{1k}(t)c''_{1l}(s)\right\} - \lambda_2^{-1}c'_{1k}(t)c'_{1l}(s) \tag{1.229}
\]

It is seen that the residual covariance for n = 1 is the limit of the residual covariance for n = 2 as λ₄ → ∞ (λ₀, λ₂ bounded), and similarly for n = 0 and n = 1 as λ₂ → ∞ (λ₀ bounded).
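The closed-form coefficients in (1.222) can be checked against a direct matrix inversion; a sketch for a single stationary process (k = 1) with an assumed covariance function c₁₁(t) = exp(−t²/2), an illustrative choice for which λ₀ = 1, λ₂ = −c″₁₁(0) = 1, λ₄ = c⁗₁₁(0) = 3:

```python
import numpy as np

def c(t):  return np.exp(-t ** 2 / 2)             # assumed c11(t)
def c1(t): return -t * np.exp(-t ** 2 / 2)        # c11'(t)
def c2(t): return (t ** 2 - 1) * np.exp(-t ** 2 / 2)  # c11''(t)

lam0, lam2, lam4 = 1.0, 1.0, 3.0
t = 0.8

# Direct regression coefficients: row vector times inverse of the matrix in (1.222).
row = np.array([c(t), -c1(t), c2(t)])
M = np.array([[lam0, 0.0, -lam2],
              [0.0, lam2, 0.0],
              [-lam2, 0.0, lam4]])
direct = row @ np.linalg.inv(M)

# Closed-form coefficients of X1(0), X1'(0), X1''(0) read off from (1.222).
d = lam0 * lam4 - lam2 ** 2
closed = np.array([(lam4 * c(t) + lam2 * c2(t)) / d,
                   -c1(t) / lam2,
                   (lam2 * c(t) + lam0 * c2(t)) / d])
print(np.allclose(direct, closed))  # True
```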

1.14 Conditioning on crossing events

Let X(t) be an m-dimensional Gaussian vector process and let Y(t) be an s-dimensional (s ≤ m) subvector of X(t). Then the conditional vector process X(t) given Y(0) is Gaussian with the linear regression E[X(t) | Y(0)] as vectorial mean value function and the corresponding residual covariance function as covariance function. Of course, by unconditioning with respect to the Gaussian distribution of Y(0), the set of finite dimensional distributions of the Gaussian vector process X(t) is regained according to the total probability rule:

\[
f_{\mathbf X(t_1),\dots,\mathbf X(t_q)}(\mathbf x_1,\dots,\mathbf x_q) = \int_{\mathbb R^s} f_{\mathbf X(t_1),\dots,\mathbf X(t_q)}(\mathbf x_1,\dots,\mathbf x_q\mid\mathbf Y(0)=\mathbf y)\,f_{\mathbf Y(0)}(\mathbf y)\,d\mathbf y \tag{1.230}
\]


for any q > 0 and t₁, ..., t_q ∈ R. However, other vector processes than X(t) can be defined on the basis of (1.230) by assigning some other distribution to Y(0) than the original Gaussian distribution. Clearly this type of modelling can be considered for any Gaussian field with conditioning on any set of random variables, but as will be shown in the following, such modelling has particular relevance for Gaussian vector processes.

Equivalently to using (1.230) as the basis for defining a vector process V(t) different from X(t) we may define V(t) directly as

V(t ) = E [X(t ) | Y(0)]+R(t ) (1.231)

where R(t) is the Gaussian residual vector process. Thus V(t) becomes modelled as R(t) added, under the assumption of independence, to the given inhomogeneous linear function E[X(t) | Y(0)] of the random vector Y(0), which may have any applicationally relevant probability distribution assigned to it.

Several interesting problems can be analysed by use of (1.231) in connection with phenomena that depend on the event that the trajectory of X(t) crosses out through the hyperplane orthogonal to the x1-axis at the distance u from the origin. With sufficient generality consider crossings that happen at time t = 0, and assume that the process X1(t) possesses differentiable sample functions. Then the crossing event C can be defined as C = {X1(0) = u, Ẋ1(0) > 0}. Obviously C has probability zero so that conditioning on C requires special care in order to ensure uniqueness of the obtained conditional probabilities. In particular it is necessary to have an applicationally relevant definition of the conditional distributions of the vector process X(t) given the event C.

The uniqueness problem can best be illustrated by a study of two alternative (among several) definitions of the conditional probability density of Ẋ1(0) given C. For shortness of notation omit the index and write X for X1. Instead of considering the sample curve upcrossings of level u with slope z in the time interval [0, dt] it is sufficient to consider the upcrossings with slope z of the tangents to the trajectories at t = 0 as dt → 0 [2]. It is seen from Fig. 4 (top) that these tangents are those that cross through the x-axis within the interval [u − z dt, u]. Thus the probability (asymptotically as dt → 0) of having a sample curve of any positive slope at t = 0 crossing through level u within the time interval [0, dt] is equal to the probability content of the angular domain shown to the right in Fig. 4 (top) calculated by integration of the joint density of X = X(0) and Z = Ẋ(0) over the domain. Since the area element is z dz dt this probability becomes

κh(0) dt = [∫_0^∞ f_{X,Z}(u, z) z dz] dt   (1.232)

and the conditional density of Z given the crossing event C =Ch obviously becomes

f_Z(z | Ch) = (1/κh(0)) z f_{X,Z}(u, z)   (1.233)

Alternatively to counting the number of crossings of level u in the time interval [0, dt] we may count the number of crossings with slope z through the interval [u − dx, u] on the x-axis. Surely, as dx → 0 these sample curves cross level u in the time interval [0, dx/z] that approaches [0, 0]. However, the probability (asymptotically as dx → 0) of having a sample curve of any positive slope at t = 0 crossing through the x-axis within the interval [u − dx, u] becomes, Fig. 4 (bottom),

κv(0) dx = [∫_0^∞ f_{X,Z}(u, z) dz] dx   (1.234)


and the conditional density of Z given the crossing event C = Cv becomes

f_Z(z | Cv) = (1/κv(0)) f_{X,Z}(u, z)   (1.235)

Thus different results are obtained depending on how the crossing event C of zero probability is interpreted as the limit of non-zero probability events. The indices h and v on C refer to "horizontal window" and "vertical window", respectively. For applications that refer to the number of crossings per time unit the following considerations show that it is the horizontal window crossings that are of interest.

Figure 4. Definition of upcrossings of level u through horizontal window (top) and vertical window (bottom).

Even though the event of having an upcrossing at a specified time point has probability zero, upcrossings of level u occur with non-zero probability at some time points within any time interval of non-zero length. We may count these upcrossings of level u in the time interval [0, T] by dividing the interval into n disjoint intervals of length T/n and assigning the indicator random variable

1_i = 1 if there is an upcrossing in the interval ]((i−1)/n)T, (i/n)T], and 1_i = 0 otherwise   (1.236)


Then

(∑_{i=1}^n 1_i)_{n ∈ {1,2,...}} ↑ N(T)   (1.237)

is an increasing sequence of random variables with limit equal to the random number of upcrossings N(T) in the interval [0, T]. Since according to (1.232) (shifting 0 to t) we obviously have

E[∑_{i=1}^n 1_i] = ∑_{i=1}^n E[1_i] → ∫_0^T κh(t) dt   (1.238)

as n →∞, it follows that the mean number of upcrossings of level u is

E[N(T)] = ∫_0^T [∫_0^∞ f_{X(t),Ẋ(t)}(u, z) z dz] dt   (1.239)

The integrand in (1.239)

ν⁺(t, u) = ∫_0^∞ f_{X(t),Ẋ(t)}(u, z) z dz   (1.240)

can thus be interpreted as the mean rate of upcrossings of level u at time t. Formula (1.240) is the celebrated Rice formula [40], derived here by simple heuristic arguments.

Under the assumption that X(t) is Gaussian, the joint distribution of X(t) and Ẋ(t) is Gaussian, both of zero mean if the process X(t) has zero mean. Let λ0, λ1, λ2 denote, see (1.13),

λ0 = Var[X(t)]   (1.241)

λ1 = Cov[X(t), Ẋ(t)] = ∂/∂v Cov[X(t), X(v)]|_{v=t}   (1.242)

λ2 = Var[Ẋ(t)] = ∂²/(∂u ∂v) Cov[X(u), X(v)]|_{u=v=t}   (1.243)

Then

E[Ẋ(t) | X(t)] = (λ1/λ0) X(t)   (1.244)

Var[Ẋ(t) | X(t)] = λ2 (1 − λ1²/(λ0λ2))   (1.245)

calculated as the linear regression of Ẋ(t) on X(t) and the corresponding residual variance, respectively. Thus the joint density of X = X(t) and Z = Ẋ(t) becomes

f_{X,Z}(x, z) = (1/√(λ0λ2 − λ1²)) ϕ(x/√λ0) ϕ((z − xλ1/λ0)/√(λ2 − λ1²/λ0))   (1.246)

so that the mean upcrossing rate of level u at time t according to (1.240) becomes

ν⁺(t, u) = (√(λ0λ2 − λ1²)/λ0) ϕ(u/√λ0) ψ((λ1/√(λ0λ2 − λ1²)) (u/√λ0))   (1.247)


in which ψ(·) is the function

ψ(x) =ϕ(x)+xΦ(x) (1.248)

When X (t ) is stationary we have λ1 = 0 and (1.247) reduces to

ν⁺(t, u) = √(λ2/(2πλ0)) ϕ(u/√λ0)   (1.249)

independent of t. The density (1.233) follows from (1.246) and (1.247) as

f_Z(z | Ch) = (λ0/(λ0λ2 − λ1²)) z ϕ((z − uλ1/λ0)/√(λ2 − λ1²/λ0)) / ψ((λ1/√(λ0λ2 − λ1²)) (u/√λ0)),  z > 0   (1.250)

For λ1 = 0 this is the so-called Rayleigh density

f_Z(z | Ch) = (z/λ2) exp[−(1/2)(z/√λ2)²],  z > 0   (1.251)

The formulas (1.232)–(1.240) are valid also for non-Gaussian processes with such smoothly varying sample curves that exclude the occurrence of everywhere dense crossing points in any time interval and also exclude events of having sample function tangents in common with the level u, these being events of probability zero [2]. Under these assumptions it follows from the step from (1.238) to (1.239) that the asymptotic probability κh(t)dt of having an upcrossing of level u in the time interval [t, t + dt] equals the mean number of upcrossings ν⁺(t, u)dt in [t, t + dt] as dt → 0.
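The stationary rate (1.249) lends itself to a direct numerical check. The sketch below (an illustration added here, not part of the original text; the flat spectrum on an arbitrary frequency band is a made-up choice) simulates an approximately Gaussian stationary process as a sum of many random-phase cosines and counts the upcrossings of a level u on a fine time grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-phase harmonic model: X(t) = sum_j a_j cos(w_j t + p_j).
# With many components the process is approximately Gaussian (CLT).
n_comp = 200
w = rng.uniform(0.5, 2.0, n_comp)            # component frequencies (arbitrary band)
a = np.full(n_comp, np.sqrt(2.0 / n_comp))   # equal amplitudes giving lam0 = 1
p = rng.uniform(0.0, 2.0 * np.pi, n_comp)

lam0 = np.sum(a**2) / 2.0                    # spectral moment λ0 = Var[X]
lam2 = np.sum(a**2 * w**2) / 2.0             # spectral moment λ2 = Var[X']

u = 1.0
T, dt = 20000.0, 0.02
t = np.arange(0.0, T, dt)
x = np.zeros_like(t)
for j in range(n_comp):                      # accumulate componentwise to save memory
    x += a[j] * np.cos(w[j] * t + p[j])

# Empirical upcrossing rate: sign changes of x - u from below to above
n_up = np.count_nonzero((x[:-1] < u) & (x[1:] >= u))
rate_emp = n_up / T

# Rice's stationary rate (1.249): ν⁺ = √(λ2/λ0)/(2π) · exp(−u²/(2λ0))
rate_rice = np.sqrt(lam2 / lam0) / (2.0 * np.pi) * np.exp(-u**2 / (2.0 * lam0))
print(rate_emp, rate_rice)
```

For a spectrum and record length of this kind the empirical rate should agree with (1.249) to within a few percent; shrinking dt or lengthening T tightens the match.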

If not all upcrossings in [0, T] are counted but only those for which the slopes of the sample curve at the crossing points are in the interval [z, z + dz], the right side of (1.238) becomes replaced by

∫_0^T [f_{X(t),Ẋ(t)}(u, z) z dz] dt   (1.252)

and is thus the mean number of crossings of this kind. The ratio of (1.252) and E[N(T)] given by (1.239) is seen to approach f_Z(z | Ch) dz as T → 0. Thus the conditional probability that Z = Ẋ(0) ∈ [z, z + dz], given that there is a horizontal window upcrossing of level u at time t = 0, is identical to the ratio between the mean rate at t = 0 of upcrossings with slope in the interval [z, z + dz] and the mean rate at t = 0 of all upcrossings (asymptotically as dz → 0).

For stationary and ergodic processes it is possible to give a useful interpretation of the conditional probability of any event given the event Ch of a horizontal window upcrossing of level u. In fact, for X(t) stationary the ratio of (1.252) and (1.239) becomes f_Z(z | Ch) dz independent of T. Under the assumption of ergodicity (that is, ensemble averaging can be replaced by time averaging) the ratio of the number of crossings with given properties (like the sample function at the upcrossing having slope in the interval [z, z + dz]) to the total number of crossings within [0, T] is an estimator of the conditional probability of having the property given Ch. By use of the law of large numbers it can be shown that this estimator converges with probability 1 to the conditional probability as T → ∞ [41].

Returning now to (1.231), an example of an application is to let X2(t) be the derivative process Ẋ1(t) of the Gaussian process X1(t) and define the vector process

V(t) = E[X(t) | X1(0) = u, Ẋ1(0) = Z] + R(t)   (1.253)

where Z is a random variable with the density (1.250), and the linear regression is given by (1.13) for n = 1 or, for X(t) stationary, by (1.221). This vector process V(t) gives detailed information about the properties of the sample trajectories of X(t) in the vicinity of an upcrossing (hereafter omitting reference to horizontal window) of level u at t = 0. This is particularly useful when X(t) is stationary and ergodic because then the description is valid without prespecifying the time point of the upcrossing. Moreover, the description can be empirically verified by sampling from the same trajectory of X(t) at each upcrossing point.

The topics explained on a heuristic level in this section were first studied by Kac and Slepian [42, 43]. Lindgren has made extensive use of Kac's and Slepian's results in studies of all sorts of both purely mathematical problems and applications related to crossings of levels and to local extremes in Gaussian processes. Lindgren introduced the name "Slepian model (vector) process" for a process defined like V(t) in (1.253) [44]. In this and the following sections only the case of conditioning on level crossings of a scalar process is considered. However, the Slepian model process concept may be generalized to conditioning on outcrossings of a vector process through the boundary of some suitably regular domain [45] and even to make sense for random fields [46, 47].

1.15 Slepian model vector processes

Let X(t) be a stationary Gaussian vector process of zero mean vector, and let Y(t) be a subvector of X(t). Moreover, let two of the elements in X(t) be a process X(t) and its derivative process Ẋ(t), and let none of these two processes be elements of Y(t). Slightly more general than in (1.253) we may then define the Slepian model vector process

V(t) = E[X(t) | X(0) = u, Ẋ(0) = Z, Y(0) = Y] + R(t) = a(t)u + b(t)Z + C(t)Y + R(t)   (1.254)

associated to upcrossings of level u by the scalar process X(t). The random variable Z represents the slope of the sample curve of X(t) at an arbitrary upcrossing of level u, and the probability density of Z (interpreted in the sense of long run sampling as explained in the previous section) is the Rayleigh density (1.251).

By marking the upcrossings not only by the observed value of Z but also by the value of Y at the upcrossings, it is seen exactly as in the previous section that the long run probability density of (Z, Y) is given by

f_{Z,Y}(z, y | Ch) = f_Z(z | Ch) f_Y(y | X(0) = u, Ẋ(0) = z, Ch)   (1.255)

in which the first factor is the Rayleigh density (1.251) and the second factor is independent of the crossing event and is the normal density of mean

E[Y | X(0) = u, Ẋ(0) = z] = (Cov[Y, X(0)]/Var[X(0)]) u + (Cov[Y, Ẋ(0)]/Var[Ẋ(0)]) z   (1.256)


and covariance matrix

Cov[Y, Y′ | X(0), Ẋ(0)]

= Cov[Y, Y′] − Cov[Y, X(0)]Cov[X(0), Y′]/Var[X(0)] − Cov[Y, Ẋ(0)]Cov[Ẋ(0), Y′]/Var[Ẋ(0)]   (1.257)

using that Cov[X(0), Ẋ(0)] = 0 because X(t) is stationary. The process R(t) is non-stationary Gaussian and independent of (Z, Y), and it has the same finite dimensional distributions as the residual process corresponding to the linear regression of X(t) on X(0), Ẋ(0), Y(0).

The coefficients a(t ),b(t ),C(t ) are submatrices of the regression coefficient matrix

B(t) = [a(t) b(t) C(t)]

= Cov[X(t), [X(0) Ẋ(0) Y(0)′]] ×
⎡ Var[X(0)]        0                Cov[X(0), Y(0)′] ⎤ −1
⎢ 0                Var[Ẋ(0)]        Cov[Ẋ(0), Y(0)′] ⎥
⎣ Cov[Y(0), X(0)]  Cov[Y(0), Ẋ(0)]  Cov[Y(0), Y(0)′] ⎦      (1.258)

and the covariance function matrix of R(t ) is

Cov[R(s), R(t)′] = Cov[X(s), X(t)′] − B(s) Cov[(X(0), Ẋ(0), Y(0)′)′, X(t)′]   (1.259)
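As a numerical sanity check of (1.258)–(1.259), consider a deliberately simple scalar case (the correlation function and lag are hypothetical choices for illustration): a stationary process with r(τ) = exp(−τ²/2), conditioned on (X(0), Ẋ(0), Y(0)) with Y(0) = X(d). The residual variance must then vanish at t = 0 and t = d, where X(t) itself is in the conditioning vector, and approach Var[X] far from the conditioning points.

```python
import numpy as np

r  = lambda tau: np.exp(-tau**2 / 2.0)          # correlation function (illustrative choice)
rp = lambda tau: -tau * np.exp(-tau**2 / 2.0)   # its derivative r'
d = 0.7                                         # lag of the extra conditioning value Y(0) = X(d)

# Covariance matrix of the conditioning vector (X(0), X'(0), X(d)):
# Var[X] = r(0) = 1, Var[X'] = -r''(0) = 1, Cov[X(0), X'(0)] = r'(0) = 0,
# Cov[X'(0), X(d)] = -r'(d).
Sigma = np.array([
    [1.0,    0.0,    r(d)],
    [0.0,    1.0,   -rp(d)],
    [r(d),  -rp(d),  1.0],
])

def regression(t):
    # Cov[X(t), (X(0), X'(0), X(d))], cf. (1.258); residual variance cf. (1.259)
    c = np.array([r(t), -rp(t), r(t - d)])
    B = np.linalg.solve(Sigma, c)               # regression coefficient row of B(t)
    return B, 1.0 - c @ B

_, v0 = regression(0.0)
_, vd = regression(d)
_, vfar = regression(10.0)
print(v0, vd, vfar)
```

The three residual variances come out as ≈0, ≈0 and ≈1, as the interpolation interpretation of the linear regression requires.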

Example The local minimum values of a sample function of a scalar process Y(t) are the values of Y(t) at the upcrossings of the level u = 0 by the derivative process X(t) = Ẏ(t). Assume that Y(t) is Gaussian, E[Y(t)] = 0, and that Y(t) has the four times differentiable covariance function Cov[Y(u), Y(v)] = r(v − u). Then the value at t = 0 of the Slepian model process for Y(t) corresponding to level u = 0 becomes Y0(0) = a(0)·0 + b(0)Z + R(0). With λ0 = Var[Y(t)] = r(0), λ2 = Var[Ẏ(t)] = −r″(0), λ4 = Var[Ÿ(t)] = r⁗(0) we find according to (1.15) that

b(0) = (1/λ4) Cov[Y(0), Ÿ(0)] = −λ2/λ4   (1.260)

and

Var[R(0)] = λ0 − λ2²/λ4   (1.261)

Thus the local minimum value of the normalized process Y (t )/√λ0 can be written as

Y0(0)/√λ0 = −(λ2/√(λ0λ4)) W + √(1 − λ2²/(λ0λ4)) U   (1.262)

where W is a standard Rayleigh variable, i.e. f_W(x) = x exp[−(1/2)x²], x > 0, and U is a standard Gaussian variable, i.e. f_U(x) = ϕ(x), and where W and U are mutually independent. By changing − to +, the analogous expression for the local maximum values is obtained. The well-known formula of Rice [40] for the long run density of the local maximum values follows by a convolution calculation to obtain the density of βW + √(1 − β²)U, β = λ2/√(λ0λ4). This probabilistic proof was proposed by J. de Maré [48].
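The convolution just mentioned is easy to bypass numerically: sampling the mixture βW + √(1−β²)U directly reproduces the long run distribution of the local maxima. A short Monte Carlo sketch (the value of β is an arbitrary illustrative choice) checks the first two moments, which follow from E[W] = √(π/2) and Var[W] = 2 − π/2:

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 0.8                                       # regularity factor (illustrative value)
eps = np.sqrt(1.0 - beta**2)                     # spectral width parameter

n = 400_000
W = np.sqrt(-2.0 * np.log(1.0 - rng.random(n)))  # standard Rayleigh: F(w) = 1 - exp(-w²/2)
U = rng.standard_normal(n)                       # standard Gaussian, independent of W
M = beta * W + eps * U                           # normalized local maxima, cf. (1.262) with + sign

mean_theory = beta * np.sqrt(np.pi / 2.0)        # E[M] = β E[W]
var_theory = 1.0 + beta**2 * (1.0 - np.pi / 2.0) # Var[M] = β²(2 − π/2) + (1 − β²)
print(M.mean(), mean_theory, M.var(), var_theory)
```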


The factor

β = λ2/√(λ0λ4)   (1.263)

is called the regularity factor of Y(t), and √(1 − β²) is called the spectral width parameter of Y(t). For β → 1 the process Y(t) is called a "narrow band" process, and the local maximal values of Y(t)/√λ0 have asymptotically standard Rayleigh distribution. For β → 0 the process is called a "wide band" process, and the local maximal values of Y(t)/√λ0 have asymptotically standard normal distribution, that is, the same distribution as Y(t). It is seen that if λ2 has a finite limit as λ4 → ∞, then β → 0. As we shall see later this gives rise to a paradox in the theory of random vibration.

Example In the previous example consider the linear regression of Y(t) on Ẏ(0), Ÿ(0), and Y(0) followed by conditioning on an upcrossing of level u = 0 at t = 0 by the derivative process Ẏ(t). Then the probabilistic behavior of the sample functions in the vicinities of the local minima is described by the Slepian model

Y0(t) = a(t)·0 + b(t)Z + c(t)Y + R(t)   (1.264)

where the coefficient functions are obtained from (1.255) as

[a(t) b(t) c(t)] = [−r′(t) r″(t) r(t)] ⎡ λ2   0    0  ⎤ −1
                                       ⎢ 0    λ4  −λ2 ⎥
                                       ⎣ 0   −λ2   λ0 ⎦      (1.265)

giving

b(t) = (1/(λ0λ4 − λ2²)) [λ2 r(t) + λ0 r″(t)]   (1.266)

c(t) = (1/(λ0λ4 − λ2²)) [λ4 r(t) + λ2 r″(t)]   (1.267)

These functions also follow from (1.13) after suitable interpretation of the symbols. Let T be the time to the first maximum of b(t)Z + c(t)Y after t = 0 and let

H = b(T )Z + [c(T )−1]Y (1.268)

Neglecting the contribution from the residual process R(t), the random variable T is thus the time distance between an arbitrary local minimum and the following local maximum, and the random variable H is the increment from the minimum value to the maximum value. It can be shown that if there is no positive t for which r′(t) = r‴(t) = 0, then there is a one-to-one mapping between (Z, Y) and (T, H) that allows (Z, Y) to be explicitly expressed by (T, H) in terms of the functions b(t) and c(t) [49]. This makes it possible to calculate the probability density of (T, H) from the density of (Z, Y) as given by (1.255). The result seems to be the only known empirically verified closed form approximation to the joint density of wave length and amplitude that uses the entire covariance function r(t) of the Gaussian process Y(t) [49, 50].

Example If Y is put to a given value y in (1.264) the probability density of Z becomes conditional on Y = y. Thus we have from (1.255) that

f_Z(z | Y = y, Ch) ∝ f_{Z,Y}(z, y | Ch) ∝ z ϕ(z/√λ4) ϕ((y + zλ2/λ4)/√(λ0 − λ2²/λ4)) ∝ z ϕ((z + yλ2/λ0)/√(λ4 − λ2²/λ0)),  z > 0   (1.269)

The obtained Slepian model

Y0(t ) = b(t )Z + c(t )y +R(t ) (1.270)

is useful for simulating a sequence of consecutive local extremes of Y(t). Starting by simulating a local minimum y from (1.262), an outcome z of Z is generated from the population defined by (1.269). The time τ to the first maximum of b(t)z + c(t)y is calculated and an outcome r of the Gaussian residual R(t) for t = τ is generated. Then b(τ)z + c(τ)y + r may be taken as an approximation to the next maximum after the minimum y. Due to the symmetry of the Gaussian process Y(t) the same procedure applies, after obvious modification, to simulate the next minimum after the obtained maximum, etc. An explicit expression for the density of the random time T from the minimum of given value y to the next maximum can be obtained [48]. Thus the outcome T = τ may be directly generated. The described simulation procedure is of interest in fatigue life studies assuming that the essential effect of random process stressing is determined by the sequence of local extremes rather than the detailed stress path between these extremes.
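A minimal sketch of one step of this procedure, assuming the illustrative covariance function r(τ) = exp(−τ²/2) (so λ0 = 1, λ2 = 1, λ4 = 3); the grid-based inverse-CDF sampling of (1.269) and the crude grid search for the first maximum are implementation shortcuts, not part of the original procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

lam0, lam2, lam4 = 1.0, 1.0, 3.0                 # spectral moments of r(τ) = exp(−τ²/2)
r   = lambda t: np.exp(-t**2 / 2.0)
rp  = lambda t: -t * np.exp(-t**2 / 2.0)
rpp = lambda t: (t**2 - 1.0) * np.exp(-t**2 / 2.0)

det = lam0 * lam4 - lam2**2
b = lambda t: (lam2 * r(t) + lam0 * rpp(t)) / det    # (1.266)
c = lambda t: (lam4 * r(t) + lam2 * rpp(t)) / det    # (1.267)

Sigma = np.array([[lam2, 0.0, 0.0],                  # covariance matrix in (1.265)
                  [0.0, lam4, -lam2],
                  [0.0, -lam2, lam0]])

def res_sd(t):
    cv = np.array([-rp(t), rpp(t), r(t)])
    return np.sqrt(max(lam0 - cv @ np.linalg.solve(Sigma, cv), 0.0))

# 1) local minimum y from (1.262)
beta = lam2 / np.sqrt(lam0 * lam4)
W = np.sqrt(-2.0 * np.log(1.0 - rng.random()))
y = np.sqrt(lam0) * (-beta * W + np.sqrt(1.0 - beta**2) * rng.standard_normal())

# 2) slope outcome z from (1.269): f(z) ∝ z ϕ((z + yλ2/λ0)/√(λ4 − λ2²/λ0)), z > 0
zg = np.linspace(1e-4, 12.0, 4000)
s = np.sqrt(lam4 - lam2**2 / lam0)
pdf = zg * np.exp(-0.5 * ((zg + y * lam2 / lam0) / s)**2)
cdf = np.cumsum(pdf); cdf /= cdf[-1]
z = np.interp(rng.random(), cdf, zg)             # inverse-CDF sampling on a grid

# 3) time τ of the first maximum of the mean path b(t)z + c(t)y
tg = np.linspace(1e-3, 8.0, 8000)
m = b(tg) * z + c(tg) * y
tau = tg[np.argmax(m)]

# 4) next local maximum ≈ mean path at τ plus a residual draw
next_max = m.max() + res_sd(tau) * rng.standard_normal()
print(y, tau, next_max)
```

Note that b(0) = 0 and c(0) = 1 with vanishing residual variance at t = 0, so the model reproduces the simulated minimum exactly at the conditioning point.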

Example Warnings of a near future upcrossing of a critical level u by the process X(t) may be given on the basis of the behavior of the sample trajectory of Y(t) observed up to the present time. An alarm policy can be determined on the basis of the Slepian model vector process for the behavior of Y(t) given that X has an upcrossing of the critical level u at a given time. On this basis there is a theory of optimal alarm policy [45, 51].

It is seen from these examples and the examples of the next chapter that Slepian model vector processes are well suited for insight-giving reasoning concerning sample function behavior at level upcrossings. The reasoning is often approximate, in particular with respect to the way the residual process is treated, or simply because the residual process in some applications is completely neglected. Therefore it is important to be able to verify the results by other methods or by simulation. Simple procedures for simulating sample functions of Slepian model processes for both stationary and non-stationary Gaussian processes are described in the literature [52, 53]. Simulations corresponding to the non-stationary case may serve to test the sensitivity of the results to departure from the stationarity assumption underlying most reported applications of the Slepian model process concept.

1.16 Applications of Slepian model processes in stochastic mechanics

In stochastic mechanics there are several interesting problems that can be successfully analysed by Slepian model processes. In this chapter some examples concerning random vibrations of elastic and elasto-plastic oscillators will be treated in detail. Before this some other useful examples of applications will be summarized.

Example Of relevance for the design conditions for vehicles aimed at travelling on rough roads there have been made probabilistic studies of the jumps and bumps of an idealized vehicle consisting of a single wheel of zero mass connected through a linear spring and a viscous damper to a concentrated mass [54]. Since the wheel can only be in compressive contact with the random road surface it is a critical crossing event when the wheel loses contact and the mass starts a free flight until the wheel bumps into the road again. Modelling the road surface as a Gaussian field over which the one-wheel vehicle travels with constant speed, the Slepian model vector process is the obvious tool for such a study.

Example Close approximations to the distribution of the duration of a wide band Gaussian process visit to an interval can be obtained by use of Slepian model processes [55]. This problem is equivalent to the problem of calculating the distribution of the time to first passage out of the interval. Its relevance in structural reliability is therefore obvious. A specific application concerns fatigue accumulation in elastic bars that may vibrate strongly within disjoint random time intervals due to the vortex shedding effect of the natural wind. Between these intervals only weak vibrations occur. This is an effect of the turbulent nature of the natural wind and the so-called "lock-in" phenomenon originating from the coupling between the movements of the bar and the surrounding velocity field. As a simplified engineering model it may be assumed that damped harmonic vibrations start up when the velocity in front of the bar crosses into some specified interval determined by a natural frequency of the bar and the mechanical damping. Further it may be assumed that the vibrations damp out when the velocity crosses out of the interval again. For this model it is of interest to be able to determine the distribution of the duration of an arbitrary visit of the wind velocity process to the critical interval. By use of the distribution and the mean number of incrossings to the critical interval it is possible to calculate the expected accumulated damage during a given time under the adoption of a linear accumulation law such as the Palmgren-Miner rule [56, 57].

Example The level crossing behavior of envelope processes associated to narrow band random vibration response processes is of interest for the study of the long run distribution of the distances between the clumps of response process exceedances outside critical response levels.

Envelope excursions above a level u may take place without there being any upcrossings of the response process during the time of these excursions. Such envelope excursions are said to be empty. Otherwise they are said to be qualified [58]. For an ergodic narrow-band Gaussian process the distances between the clumps of response process exceedances are approximately the same as the distances between the qualified envelope excursions for any reasonably useful envelope definition. Thus it is relevant to estimate the long run fraction of qualified envelope excursions among all the envelope excursions [58].

Let X(t) be an ergodic narrow-band Gaussian process and let X̂(t) be its Hilbert transform, simply defined by making a phase shift of π/2 in the spectral representation of X(t). Then a convenient envelope process is defined as the norm √(X(t)² + X̂(t)²) of the vector process (X(t), X̂(t)) [2]. The excursions of the envelope above level u obviously become coincident with the excursions of the vector process outside the circle of radius u and centre at the origin (the level circle). Among these excursions the qualified excursions are those for which X(t) > u. Since the probabilistic structure of the vector process is invariant to a rotation of the co-ordinate system, it follows that the points of crossing out of the level circle are uniformly distributed on the circle. Therefore the long run fraction of empty excursions can be approximately estimated by calculating the probability that a randomly chosen tangent to the level circle does not have points in common with the trajectory of the linear regression part of the Slepian model vector process of (X(t), X̂(t)) corresponding to an outcrossing of the level circle taking place at the point (u, 0) at time t = 0.

For a given outcrossing velocity vector this linear regression trajectory is a fixed curve that in polar coordinates can be conveniently approximated by the second degree Taylor expansion with respect to t of the radius vector r(t) and of the polar angle θ(t). By use of this approximation it can be shown that the excursion trajectory asymptotically as λ1²/(λ0λ2) → 1 (λn = nth spectral moment of X(t), (1.223)) can be approximated by a simple translation of the level circle to fit tangentially to the outcrossing velocity vector. In this asymptotic limit it is thus a simple geometric consideration to obtain the aforementioned probability conditional on the outcrossing velocity vector. By conditioning considerations analogous to those in Section 1.14 it can be seen that the velocity vector (Ẋ(0), Ẋ̂(0)) conditional on the outcrossing taking place through the time window [0, dt] can be expressed by

Ẋ(0) = W √(λ2 − λ1²)   (1.271)

Ẋ̂(0) = U √(λ2 − λ1²) + λ1u   (1.272)

where W is a standard Rayleigh variable and U is a standard normal variable. Finally unconditioning leads to the long run fraction of qualified envelope excursions above level u (α = 1/2), or outside the interval [−u, u] (α = 1), to be approximately [59, 60]

r = 1 − 2∫_0^u ϕ(η) [1 − α√(2π) [2Φ(γπ(u² − η²)/u) − 1] γπ(u² − η²)/u] dη   (1.273)

in which γ = √((λ0λ2/λ1²)[(λ0λ2/λ1²) − 1]).

Of relevance for random vibrations of strain hardening elasto-plastic oscillators these Slepian model process applications can be extended to excursions outside unsymmetric intervals [−u_l, u_u], considering the possibility of having empty excursions simultaneously at both ends of the interval or only at the one or the other end of the interval [60].

Example The equation of motion of a linear single degree of freedom oscillator can be written in dimensionless form as

Ẍ(τ) + 2αẊ(τ) + (1 + α²)X(τ) = N(τ)   (1.274)

such that the stationary dimensionless response process X(τ) has zero mean, unit standard deviation, and correlation function

r(τ) = e^{−α|τ|}(cos τ + α sin|τ|)   (1.275)

under stationary white noise excitation N(τ) of intensity 4α(1 + α²), that is, Cov[N(τ1), N(τ2)] = 4α(1 + α²)δ(τ1 − τ2), where δ(·) is Dirac's delta function. (Standard formulation: ẍ + 2ζω0ẋ + ω0²x = f(t)/m, x = displacement, t = time, m = mass, ζ = damping ratio (0 ≤ ζ ≤ 1), ω0 = natural frequency for ζ = 0, f(t) = stationary white noise of intensity πS. Dimensionless variables: τ = ω0√(1 − ζ²) t, X(τ) = x(t)/σ, σ² = Var[x(t)] = πS/(4ζω0³m²) for x(t) stationary, α = ζ/√(1 − ζ²), N(τ) = f(t)/[(1 − ζ²)mω0²σ].)


The spectral increment of the response process X(τ) is obtained directly by operating on the equation of motion (1.274) or by Fourier transformation of r(τ) given by (1.275) as

dG(ω) = dF(ω)/{[ω² − (1 − α²)]² + 4α²}   (1.276)

where dF(ω) is the spectral increment of the excitation process N(τ), that is,

dF(ω) = (4/π)α(1 + α²) dω   (1.277)

It follows from (1.275) (or from (1.276) using (1.223)) that λ0 = r(0) = 1, λ2 = −r″(0) = 1 + α², and λn = ∞ for n ≥ 3.
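These moment statements can be verified directly from (1.275) by difference quotients; the divergence of λ4 shows up in a fourth difference quotient of r that grows without bound as the step size shrinks (a numerical illustration added here; the value of α is arbitrary):

```python
import numpy as np

alpha = 0.1                                  # damping parameter (arbitrary illustrative value)
r = lambda t: np.exp(-alpha * np.abs(t)) * (np.cos(t) + alpha * np.sin(np.abs(t)))

h = 1e-4
lam0 = r(0.0)                                # = 1
lam2 = -(r(h) - 2.0 * r(0.0) + r(-h)) / h**2 # −r''(0) ≈ 1 + α²

def fourth_diff(h):
    # central fourth difference quotient; it stays bounded only if r is four
    # times differentiable at 0 -- here it diverges like 1/h, so λ4 = ∞
    pts = np.array([-2.0 * h, -h, 0.0, h, 2.0 * h])
    wts = np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    return float(np.dot(wts, r(pts)) / h**4)

print(lam0, lam2, fourth_diff(1e-2), fourth_diff(1e-3))
```

The fourth difference quotient grows roughly tenfold each time h is reduced by a factor of ten, reflecting the |τ|³ term in the expansion of r at τ = 0.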

According to (1.254) and (1.15) (or (1.221)) the Slepian model process Xu(τ) for the response X(τ) in the vicinity of an upcrossing of the level u at τ = 0 becomes

Xu(τ) = r(τ)u − r′(τ)Z/λ2 + R1(τ) = [(cos τ + α sin|τ|)u + Z sin τ]e^{−α|τ|} + R1(τ)   (1.278)

Substitution of (1.275) in (1.228) leads to

Cov[R1(τ1), R1(τ2)] = e^{−α|τ1−τ2|}[cos(τ1 − τ2) + α sin(|τ1 − τ2|)]
− e^{−α(|τ1|+|τ2|)}[(1 + α²)cos(τ1 − τ2) + α sin(|τ1| + |τ2|) − α² cos(|τ1| + |τ2|)]   (1.279)
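The expression (1.279) can be cross-checked numerically against the defining regression residual, i.e. r(τ2 − τ1) − r(τ1)r(τ2) − r′(τ1)r′(τ2)/λ2 from (1.228) with λ0 = 1 (a verification sketch added here for illustration; the value of α is arbitrary):

```python
import numpy as np

alpha = 0.2                                  # arbitrary illustrative damping value
lam2 = 1.0 + alpha**2
r  = lambda t: np.exp(-alpha * np.abs(t)) * (np.cos(t) + alpha * np.sin(np.abs(t)))
# r'(t) = -(1+α²) e^{-α|t|} sin|t| · sign(t)
rp = lambda t: -np.sign(t) * (1.0 + alpha**2) * np.exp(-alpha * np.abs(t)) * np.sin(np.abs(t))

def res_cov_regression(t1, t2):
    # residual covariance of the regression on X(0), X'(0), cf. (1.228) with λ0 = 1
    return r(t2 - t1) - r(t1) * r(t2) - rp(t1) * rp(t2) / lam2

def res_cov_1279(t1, t2):
    # closed form (1.279)
    a1, a2 = abs(t1), abs(t2)
    return (np.exp(-alpha * abs(t1 - t2)) * (np.cos(t1 - t2) + alpha * np.sin(abs(t1 - t2)))
            - np.exp(-alpha * (a1 + a2)) * ((1.0 + alpha**2) * np.cos(t1 - t2)
                                            + alpha * np.sin(a1 + a2)
                                            - alpha**2 * np.cos(a1 + a2)))

print(res_cov_regression(0.7, 2.1), res_cov_1279(0.7, 2.1))
```

The two evaluations agree to machine precision for positive time pairs.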

By letting λ4 → ∞ the Slepian model process (1.264) can be used herein to obtain the Slepian model process Xmin(τ) for the response X(τ) in the vicinity of a local minimum at τ = 0. Since b(τ) → 0, (1.266), and c(τ) → r(τ), (1.267), for λ4 → ∞, we get the limit Slepian model process

Xmin(τ) = X(0)r(τ) + R2(τ)   (1.280)

that, since Z disappears from the Slepian model for λ4 → ∞, also is the Slepian model Xmax(τ) for the response X(τ) in the vicinity of a local maximum at τ = 0. The absence of Z in the limit λ4 → ∞ is related to the fact that the derivative process Ẋ(τ) is not differentiable. The covariance function of R2(τ) is identical to that of R1(τ) given by (1.16).

From the first example in the previous section we have the paradoxical result that the local maximal values and the local minimal values have normal distribution identical to the one-dimensional distribution of X(τ) itself. This result is obtained in spite of the fact that the sample functions of X(τ) for small damping (α << 1) appear to have narrow band nature such as described by the Slepian models (1.278) or (1.280) (and as it is also seen from the spectrum (1.276)). Nevertheless, since β → 0 for λ4 → ∞, the process is categorized as a wide band process.

This paradox can be cleared up by the following consideration. According to the normality of Xmax, negative values of Xmax are just as probable as positive values. Assume that −y < 0 is given as such a local maximum value. Then the Slepian model (1.280) with X(0) = −y shows that the expectation of Xmax(τ) has a local minimum for τ = 0, that is, it curves upwards while the realization of the process itself curves downwards in the vicinity of this point. Since the residual standard deviation as obtained from (1.16) for |τ| < π,

D[R2(τ)] = √(1 − e^{−2α|τ|})   (1.281)


is small for α << 1, it follows that there is a large probability that there are minima next to the maximum at −y at almost the same value level. Thus the local maxima and minima correspond to small buckles on the sample function. If these buckles are imagined to be smoothed out, the sample function by and large behaves as the trajectory of an oscillatory motion between crests and troughs with period 2π and slowly varying amplitude and phase.

The following argument shows that this amplitude of the smoothed realization has a distribution that asymptotically for α → 0 is a Rayleigh distribution. Consider the Slepian model process Xu(τ) defined by (1.278) for any level u > 0. For simplicity of the argument assume that α is sufficiently small to be neglected, at least for τ varying from the upcrossing of level u at τ = 0 until the mean of Xu(τ) reaches its maximum. Moreover, assume that the residual process R1(τ) can be neglected in this time interval. Then (1.278) becomes

Xu(τ) = u cos τ + Z sin τ,  τ ≥ 0   (1.282)

for which the first local maximum is at the smallest value of τ for which tan τ = Z/u, and the corresponding maximum value is

M = u√(1 + (Z/u)²)   (1.283)

This value is denoted as a crest value. Since Z has a standard Rayleigh distribution asymptotically for α → 0 we have

P(M > x | M > u) = P(Z > √(x² − u²)) = e^{−(1/2)x²}/e^{−(1/2)u²}   (1.284)

which shows that the conditional distribution of M given M > u is a standard Rayleigh distribution truncated at level u.
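A two-line Monte Carlo check of (1.283)–(1.284) added here for illustration (the level u and the test point x are arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(4)
u, x = 1.0, 1.5
Z = np.sqrt(-2.0 * np.log(1.0 - rng.random(200_000)))  # standard Rayleigh slopes
M = u * np.sqrt(1.0 + (Z / u)**2)                      # crest values, cf. (1.283)

p_emp = np.mean(M > x)                                 # note M >= u holds for every sample
p_theory = np.exp(-0.5 * x**2) / np.exp(-0.5 * u**2)   # (1.284)
print(p_emp, p_theory)
```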

The fact that the mean function of the Slepian model process Xmax(τ) is symmetric in τ and that the residual standard deviation is small in the vicinity of τ = 0 shows that the small buckles must predominantly be located at the troughs and the crests of the sample curve.

Consistency between the normal distribution of the local maxima and the Rayleigh distribution of the crest values is then only achieved if the random numbers of local maxima or local minima (buckles) at a crest or a trough are relatively distributed in the proportion 1/x, where x is the amplitude.

Example Assume that the linear oscillator considered in the previous example is made nonlinear by theintroduction of symmetric elasticity limits at dimensionless elastic displacement levels u > 0 and−u, andthat the yielding beyond these limits is ideal plastic. (In terms of the variables of the standard formulationwe have u = 2x∗mω0

√ζω0/

pπS, where x∗ is the elastic displacement limit of the physical oscillator). At

any time τmeasuring the response relative to the current value of the accumulated plastic displacementwe then have the dimensionless equation of motion for the symmetric elasto-plastic oscillator (EPO) asthe following modification of (1.274):

Ẍ(τ) + (1 + α²)u = N(τ) for X(τ) > u (1.285)

Ẍ(τ) + 2αẊ(τ) + (1 + α²)X(τ) = N(τ) for −u ≤ X(τ) ≤ u (1.286)

Ẍ(τ) − (1 + α²)u = N(τ) for X(τ) < −u (1.287)

The viscous damping is not considered to be important in the plastic domain, and for simplicity it is neglected in (1.285). The oscillator with (1.274) as equation of motion is denoted as the associated linear oscillator (ALO) to the EPO.
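A minimal time-stepping sketch of (1.285)–(1.287) can make the regime switching concrete. Everything below besides the three equations of motion is an assumption made here, not part of the original text: the semi-implicit Euler–Maruyama scheme, the step size, the return-mapping bookkeeping of plastic flow, and the white-noise intensity (chosen so that the ALO response has unit variance, consistent with M being standard Rayleigh).

```python
import math
import random

def simulate_epo(u, alpha=0.05, dt=0.005, T=2000.0, seed=2):
    """Crude Euler-Maruyama integration of the EPO equations (1.285)-(1.287).

    X is the displacement relative to the accumulated plastic displacement;
    excursions beyond +/-u are booked as plastic flow (ideal plasticity).
    The white-noise intensity is an assumption chosen so the associated
    linear oscillator (ALO) response has unit variance.
    """
    rng = random.Random(seed)
    omega2 = 1.0 + alpha ** 2
    sig_w = math.sqrt(4.0 * alpha * omega2)  # assumed intensity for unit ALO variance
    x, v = 0.0, 0.0
    plastic_abs = 0.0                        # accumulated absolute plastic increments
    xs = []
    for _ in range(int(T / dt)):
        if x >= u and v > 0.0:               # yielding upward, (1.285): no damping
            a = -omega2 * u
        elif x <= -u and v < 0.0:            # yielding downward, (1.287)
            a = omega2 * u
        else:                                # elastic domain, (1.286)
            a = -2.0 * alpha * v - omega2 * x
        v += a * dt + sig_w * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += v * dt
        if x > u:                            # displacement beyond the yield limit is plastic
            plastic_abs += x - u
            x = u
        elif x < -u:
            plastic_abs += -u - x
            x = -u
        xs.append(x)
    return xs, plastic_abs
```

With u around 2 the simulated response stays mostly elastic, with occasional clumps of yielding, which is the qualitative behavior described in the text below.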


If the elasticity limit u is large as compared to 1, the stationary response of the EPO will most of the time behave quite similarly to the stationary response of the ALO, that is, almost as a stationary Gaussian process of zero mean and covariance function (1.275). However, once in a while the EPO response will cross the elasticity limit u. This results in a transient behavior of bouncing between the upper and the lower plastic domain a random number of times before the response returns for a shorter or longer time to the elastic domain.

Since this shorter or longer time of elastic vibrations must start with zero velocity either at the displacement −u or at the displacement u relative to the current plastic displacement, it behaves like a non-stationary response of the ALO as long as it stays within the elasticity limits. More precisely this can be expressed as follows: given that the EPO response is within the elasticity limits in a time interval of length at least T after the start of the interval, the probabilistic structure of the EPO response in the interval up to T is identical to the conditional probabilistic structure of the ALO response with the same initial conditions, given that the ALO response is within the elasticity limits for a longer time than T.

The result of the bouncing of the EPO response between the upper and the lower plastic domain is a net plastic displacement increment

±(∆1 − ∆2 + ∆3 − ... + (−1)^N ∆N+1) (1.288)

where ∆i is the absolute value of the displacement increment from the i-th visit in the same clump to the plastic domain. The sign +/− is used if the first visit is to the upper/lower plastic domain. The first displacement increment ∆1 in the clump has, as a random variable, another distribution than the following mutually independent and identically distributed random variables ∆2, ∆3, .... This follows from the fact that ∆2, ∆3, ... all come from an initial start with zero velocity at the elasticity limit, which is not the case for ∆1. The number of terms in the sum (1.288) is determined by a random variable N that has the geometric distribution

P(N = n) = (1 − p)p^n, n = 0, 1, ... (1.289)

in which p is the probability that the EPO response moves directly from zero velocity at the elasticity limit u to the opposite plastic domain beyond −u. Thus N is the random "waiting time" until a plastic displacement clump is completed.
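To fix ideas about the clump bookkeeping behind (1.289), each additional direct passage to the opposite plastic domain occurs with probability p, so N is geometric with E[N] = p/(1 − p) and P(N = 0) = 1 − p. A small illustrative sketch (the value p = 0.3 is an arbitrary choice made here, not from the original):

```python
import random

def sample_clump_visits(p, rng):
    """Sample N in (1.289): the number of direct passages to the opposite
    plastic domain before the clump is completed."""
    n = 0
    while rng.random() < p:  # each further visit occurs with probability p
        n += 1
    return n

rng = random.Random(3)
p = 0.3
samples = [sample_clump_visits(p, rng) for _ in range(100_000)]
mean_n = sum(samples) / len(samples)         # should approach E[N] = p/(1-p)
frac_zero = samples.count(0) / len(samples)  # should approach P(N=0) = 1-p
```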

In the following it will be shown that Slepian model process considerations applied to the ALO response can be used to obtain closed form approximate expressions for the distributions of ∆i for i = 1 and i > 1, and also to obtain the probability p in equation (1.289). Closed form approximations to the distribution of the accumulated net plastic displacement increment (1.288) and the accumulated absolute displacement increment

∆1 +∆2 + ...+∆N+1 (1.290)

from a clump can be obtained on the basis of these distributions [61].

The Slepian process model Xmax(τ) defined in (1.280) gives the conditional probability

P[X(−π) > −u | X(0) = z] = P[R2(−π) − µz > −u] = Φ((u − µz)/σ) (1.291)

in which

µ = e^(−απ), σ = √(1 − µ²) (1.292)


according to (1.275) and (1.281). Identifying X(0) with the standard Rayleigh variable M and applying (1.291) in Bayes' formula then gives the conditional density

fM(z | X(−π) > −u) ∝ P[X(−π) > −u | X(0) = z] fM(z) = Φ((u − µz)/σ) z e^(−z²/2), z > 0 (1.293)

This density function is directly interpreted as the density function of the crest values sampled along a realization of the stationary ALO response, but censored to include only those crest values that half a period π before the crest are accompanied by a value of the realization above level −u. Taking this censoring rule, supplemented with lower truncation at level u, as a characterization of the beginning of an outcrossing clump that starts out at level u, (1.293) gives for z > u the shape of the density function of the first crest value in a clump, given that the clump starts at the upper level u.
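Since the censoring factor Φ((u − µz)/σ) in (1.293) is bounded by one, the density can be sampled by rejection with a standard Rayleigh proposal (density z e^(−z²/2)) and acceptance probability equal to that factor. A sketch (the rejection scheme and the parameter values in the usage are choices made here, not from the original):

```python
import math
import random

def std_normal_cdf(t):
    """Standard normal distribution function Phi."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def sample_first_crest(u, alpha, rng):
    """Rejection sampling from (1.293): standard Rayleigh proposal,
    acceptance probability Phi((u - mu*z)/sigma) with mu, sigma from (1.292)."""
    mu = math.exp(-alpha * math.pi)
    sigma = math.sqrt(1.0 - mu * mu)
    while True:
        # standard Rayleigh proposal by inverse transform
        z = math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        if rng.random() < std_normal_cdf((u - mu * z) / sigma):
            return z
```

Because the acceptance probability decreases in z, the censored crest values are stochastically smaller than unconditioned Rayleigh crest values.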

The density function (1.293) is for z > u also approximately the density function of the first crest value in the first clump of crest values above level u of the transient response that corresponds to the initial condition of zero velocity and displacement equal to −u or u with probability 1/2 on each possibility. This is a consequence of having slowly varying amplitude over the period 2π, because then the last crest/trough value in a clump of crest/trough values above/below level u/−u is not much different from the initial value u/−u. Thus the density function (1.293) can be applied even for small values of the yield limit to determine a good approximation to the distribution of the excess energy (M² − u²)/2 of the ALO available for the EPO to be dissipated into plastic work after the first yield limit crossing in a clump of crossings. Assuming that the plastic work u∆1 is equal to the excess energy, the plastic displacement becomes

∆1 = (M² − u²)/(2u) (1.294)

For white noise excitation the energy balance assumption (1.294) is a close approximation to what is obtained by solving the part of the differential equation (1.285) that corresponds to movement in the plastic domain, with the proper initial conditions (matching conditions between ALO and EPO) at yield limit impact [62]. Then from (1.293) and (1.294) we get the approximating density function

f∆1(x) = fM(√(u² + 2ux)) u/√(u² + 2ux) ∝ Φ[(u − µ√(u² + 2ux))/σ] e^(−ux), x > 0 (1.295)

The normalizing constant to be multiplied on the right side of (1.295) is obtained by integration by parts to be u/[1 − (1 + µ)Φ(−γu)], in which

γ = (1 − µ)/σ = √((1 − µ)/(1 + µ)), µ = (1 − γ²)/(1 + γ²) (1.296)

The distribution of the plastic displacement that results from a start of the EPO with zero velocity at the level −u, given that it passes through the yield level at u in the first cycle, is obtained by changing the conditioning in (1.293) from X(−π) > −u to X(−π) = −u. This simply implies that (1.293) is replaced by

fM(z | X(−π) = −u) ∝ ϕ((u − µz)/σ) z e^(−z²/2) ∝ z ϕ((z − µu)/σ), z > 0 (1.297)


which in the same way as in (1.295) gives the common density function of ∆2, ∆3, ... as

f∆2(x) ∝ ϕ[(√(u² + 2ux) − µu)/σ], x > 0 (1.298)

The normalizing constant to be multiplied on the right hand side of equation (1.298) is u/[σ²ϕ(γu) + σµuΦ(−γu)].

The probability p(u) that exceedance of level u actually occurs within the first period is directly obtained from the complementary distribution function of M given X(−π) = −u. According to (1.297) we have

1 − FM[x | X(−π) = −u] = ∫_x^∞ z ϕ((z − µu)/σ) dz / ∫_0^∞ z ϕ((z − µu)/σ) dz
= [σϕ((x − µu)/σ) + µuΦ(−(x − µu)/σ)] / [σϕ(µu/σ) + µuΦ(µu/σ)] (1.299)

which for x = u gives

p(u) = [2γϕ(γu) + (1 − γ²)uΦ(−γu)] / [2γϕ((1 − γ²)u/(2γ)) + (1 − γ²)uΦ((1 − γ²)u/(2γ))] (1.300)

It is seen that p(u) → Φ(−γu) asymptotically as u → ∞.

These Slepian model based results have been successfully verified to be very good approximations to the empirical distributions obtained by simulation of the response of the EPO [61]. Moreover, there is a very convincing approximate numerical coincidence of the density (1.298) of ∆2 with a density of a completely different analytical appearance obtained by solving a diffusion equation (Fokker-Planck equation) for the slowly varying amplitude formulated by the method of stochastic averaging [63].


Bibliography

[1] Kolmogorov, A.N., Grundbegriffe der Wahrscheinlichkeitsrechnung, Erg. Mat. 2(3), 1933.

[2] Cramer, H., and Leadbetter, M.R., Stationary and Related Stochastic Processes, Wiley, New York, 1967.

[3] Journel, A.B. and Huijbregts, C.J., Mining Geostatistics, Academic Press, New York, 1978.

[4] Krige, D.G., Two-dimensional Weighted Moving Averaging Trend Surfaces for Ore Evaluation. Proc. Symposium on Mathematical Statistics and Computer Applications for Ore Evaluation, Johannesburg, S.A., 13-38, 1966.

[5] Matheron, G., Principles of Geostatistics, Economic Geology, Vol. 58, pp. 1246-1266, 1962.

[6] Matheron, G., Estimating and Choosing (transl. from French by A.M. Hasofer), Springer Verlag, Berlin-Heidelberg, 1989.

[7] Loeve, M., Probability Theory, Van Nostrand, Princeton, N.J., 3rd ed., 1963.

[8] Lawrence, M., Basis random variables in finite element analysis. Int. J. of Numerical Methods in Engrg., 24(10), 1849-1863.

[9] Spanos, P.D., and Ghanem, R.G., Stochastic finite element expansion for random media. Journal of Engineering Mechanics, ASCE, 115(5), 1989, 1035-1053.

[10] Ghanem, R., and Spanos, P.D., Spectral stochastic finite-element formulation for reliability analysis. Journal of Engineering Mechanics, ASCE, 117(10), 1991, 2351-2372.

[11] Ghanem, R.G., and Spanos, P.D., Stochastic finite elements: a spectral approach, Springer Verlag, New York, N.Y., 1991.

[12] Li, C.-C. and Der Kiureghian, Optimal discretization of random fields. Journal of Engineering Mechanics, ASCE, 119, 1993, 1136-1154.

[13] Ditlevsen, O., and Munch-Andersen, J., Empirical stochastic silo load model. Part 1: Correlation theory. Journal of Engineering Mechanics, ASCE, 121(9), 1995, 973-980.

[14] Roberts, J.B., and Spanos, P.D., Random vibration and statistical linearization, Wiley, Chichester, UK, 1990.

[15] Sondhi, M.M., Random processes with specified spectral density and first-order probability density. The Bell System Technical Journal, 62(3), 1983, 679-701.

65


[16] Der Kiureghian, A., Liu, P.-L., Structural reliability under incomplete probability information. Journal of Engineering Mechanics, ASCE, 112, 1986, 857-865.

[17] Ditlevsen, O., Christensen, C., and Randrup-Thomsen, S., Reliability of silo ring under lognormal stochastic pressure using stochastic interpolation. Proc. of IUTAM Symposium: Probabilistic Structural Mechanics: Advances in Structural Reliability Methods, San Antonio, Texas, June 1993 (eds.: P.D. Spanos and Y.-T. Wu), Springer Verlag, 1994, 134-162.

[18] Ditlevsen, O., Discretization of random fields in beam theory. Proc. of ICASP7: Applications of Statistics and Probability. Civil Engineering Reliability and Risk Analysis (eds.: Lemaire, M., Favre, J.-L., and Mebarki, A.). Balkema, Rotterdam, 1995, 983-990.

[19] Ditlevsen, O., Christensen, C., and Randrup-Thomsen, S., Empirical stochastic silo load model. Part 3: Reliability applications. Journal of Engineering Mechanics, ASCE, 121(9), 1995, 987-993.

[20] Der Kiureghian, A. and Ke, J-B., The stochastic finite element method in structural reliability. Probabilistic Engineering Mechanics, 3(2), 1988, 83-91.

[21] Vanmarcke, E.H., and Grigoriu, M., Stochastic finite element analysis of simple beams. Journal of Engineering Mechanics, ASCE, 109(5), 1983, 1203-1214.

[22] Liu, W.K., Mani, A., and Belytschko, T., Random field finite elements. Int. J. Numerical Methods in Engrg., 23(10), 1986, 1831-1845.

[23] Grigoriu, M., Crossing of non-Gaussian translation processes. Journal of Engineering Mechanics, ASCE, 110(4), 1984, 610-620.

[24] Takada, T., Weighted integral method in multi-dimensional stochastic finite element analysis. Probabilistic Engineering Mechanics, 5(4), 1990, 158-166.

[25] Deodatis, G. and Shinozuka, M., Weighted integral method I: Stochastic stiffness matrix. Journal of Engineering Mechanics, ASCE, 117, 1991, 1851-1864.

[26] Zhang, J., and Ellingwood, B., Orthogonal series expansion of random fields in first-order reliability analysis. Journal of Engineering Mechanics, ASCE, 120(12), 1994, 2660-2677.

[27] Wong, E., Stochastic Processes in Information and Dynamical Systems, McGraw-Hill, New York, 1971.

[28] Bucher, C.G., and Brenner, C.E., Stochastic response of uncertain systems. Archive of Appl. Mech., 62, 1992, 507-516.

[29] Winterstein, S.R., Nonlinear vibration models for extremes and fatigue. Journal of Engineering Mechanics, ASCE, 114, 1988, 1772-1790.

[30] Ditlevsen, O., and Madsen, H.O., Structural Reliability Methods, Wiley, 1996.

[31] Ditlevsen, O., Mohr, G., and Hoffmeyer, P., Integration of non-Gaussian fields. Probabilistic Engineering Mechanics, 1995.

[32] Mohr, G., and Ditlevsen, O., Partial summations of stationary sequences of non-Gaussian random variables. Probabilistic Engineering Mechanics, 1995.


[33] Keaveny, J.M., Nadim, F. and Lacasse, S., Autocorrelation Functions for Offshore Geotechnical Data, in Structural Safety & Reliability, Vol. 1 (eds.: Ang, A.H.-S., Shinozuka, M. and Schuëller, G.I.), 263-270, Proceedings of ICOSSAR'89, the 5th International Conference on Structural Safety and Reliability, San Francisco, August 7-11, 1989. ASCE, New York, 1990.

[34] Kameda, H. and Morikawa, H., An interpolating stochastic process for simulation of conditional random fields. Probabilistic Engineering Mechanics, 7(4), 1992, 243-254.

[35] Ditlevsen, O., Random field interpolation between point by point measured properties. Computational Stochastic Mechanics (P.D. Spanos, C.A. Brebbia, eds.), Computational Mechanics Publications, Southampton, Boston, Elsevier Applied Science, London, New York, 1991, 801-812.

[36] Munch-Andersen, J., Ditlevsen, O., Christensen, C., Randrup-Thomsen, S., and Hoffmeyer, P., Empirical stochastic silo load model. Part 2: Data analysis. Journal of Engineering Mechanics, ASCE, 121(9), 1995, 981-986.

[37] Ditlevsen, O. and Gluver, H., Parameter estimation and statistical uncertainty in random field representations of soil strength. Proc. CERRA-ICASP6: Sixth international conference on applications of statistics and probability in civil engineering (L. Esteva, S.E. Ruiz, eds.), Institute of Engineering, UNAM, México City, 691-704, 1991.

[38] Ditlevsen, O., Measuring uncertainty correction by pairing. Proceedings of ICOSSAR'93 - The 6th International Conference on Structural Safety & Reliability (eds.: Schuëller, Shinozuka & Yao), Balkema, Rotterdam, 1994, pp. 1977-1983.

[39] Ditlevsen, O., Uncertainty Modeling. McGraw-Hill Inc, New York, 1981.

[40] Rice, S.O., Mathematical Analysis of Random Noise. Bell System Technical Journal, 23, 1944, 282-332, and 24, 1945, 46-156. Reprinted in Selected Papers on Noise and Stochastic Processes (ed.: N. Wax), Dover, New York, 1954.

[41] Leadbetter, M.R., Lindgren, G., and Rootzén, H., Extremes and Related Properties of Random Sequences and Processes, Springer-Verlag, New York, 1983.

[42] Kac, M., and Slepian, D., Large Excursions of Gaussian Processes. Annals of Mathematical Statistics, 30, 1959, 1215-1228.

[43] Slepian, D., On the Zeros of Gaussian Noise, in Time Series Analysis (ed.: M. Rosenblatt), Wiley, New York, 1962, 143-149.

[44] Lindgren, G., Functional Limits of Empirical Distributions in Crossing Theory. Stochastic Process. Appl., 5, 1977, 143-149.

[45] Lindgren, G., Use and Structure of Slepian Model Processes for Prediction and Detection in Crossing and Extreme Value Theory. CODEN: LUNFD6/(NFMS-3079)/1-30/(1983), Univ. of Lund, Dept. of Math. Stat., Box 725, S-220 07 Lund, Sweden, 1983.

[46] Lindgren, G., Local Maxima of Gaussian Fields. Ark. Mat., 10, 1972, 195-218.

[47] Wilson, R.J., and Adler, R.J., The Structure of Gaussian Fields Near a Level Crossing. Adv. Appl. Probability, 14, 1982, 543-565.


[48] Lindgren, G., On the Use of Effect Spectrum to Determine a Fatigue Life Amplitude Spectrum. ITM-Symposium on Stochastic Mechanics, Lund, 1983, CODEN: LUTFD2/(TFMS-3031)/1-169/(1983),Univ. of Lund, Dept. of Math. Stat., Box 725, S-220 07 Lund, Sweden, 1983.

[49] Lindgren, G. and Rychlik, I., Wave Characteristic Distributions for Gaussian Waves - Wave-length, Amplitude and Steepness. Ocean Engng., 9, 1982, 411-432.

[50] Rychlik, I., Regression approximations of wave length and amplitude distributions. Advances in Applied Probability, 19, 1987, 396-430.

[51] Lindgren, G., Model Processes in Nonlinear Prediction with Application to Detection and Alarm.Ann. Probab. 8, 1980, 775-792.

[52] Hasofer, A.M., Slepian Process of a Non-stationary Process. ASCE Probabilistic Mechanics and Structural and Geotechnical Reliability (ed.: Y.K. Lin), 1992, 296-299.

[53] Hasofer, A.M., Representation for the Slepian process with applications. Journal of Sound and Vibration, 124(3), 1988, 435-441.

[54] Lindgren, G., Jumps and Bumps on Random Roads. Journal of Sound and Vibration, 78, 1981, 383-395.

[55] Ditlevsen, O., Duration of Visit to Critical Set by Gaussian Process. Probabilistic Engineering Mechanics, 1(2), 1986, 82-92.

[56] Ditlevsen, O., Non-Gaussian Vortex Induced Aeroelastic Vibrations under Gaussian Wind. A Slepian Model Approach to “Lock In”. ASCE Probabilistic Mechanics and Structural and Geotechnical Reliability (ed.: Y.K. Lin), 1992, 292-295.

[57] Christensen, C.F., and Ditlevsen, O., Fatigue damage from random vibration pulse process of tubular structural elements subject to wind. Third International Conference on Stochastic Structural Dynamics, San Juan, Puerto Rico, Jan. 1995.

[58] Vanmarcke, E.H., On the distribution of the first-passage time for normal stationary random processes. American Society of Mechanical Engineers, Journal of Applied Mechanics, 42, 1975, 215-220.

[59] Ditlevsen, O., and Lindgren, G., Empty envelope excursions in stationary Gaussian processes. Journal of Sound and Vibration, 112, 1988, 571-587.

[60] Ditlevsen, O., Qualified envelope excursions out of an interval for stationary narrow-band Gaussian processes. Journal of Sound and Vibration, 173(3), 1994, 309-327.

[61] Ditlevsen, O., and Bognár, L., Plastic displacement distributions of the Gaussian white noise excited elasto-plastic oscillator. Probabilistic Engineering Mechanics, 8, 1993, 209-231.

[62] Ditlevsen, O., Elasto-plastic oscillator with Gaussian excitation. Journal of Engineering Mechanics, ASCE, 112(4), 1986, 386-406.

[63] Roberts, J.B., The yielding behaviour of a randomly excited elasto-plastic structure. Journal of Sound and Vibration, 72, 1980, 71-85.


Related papers on random fields by the author et al, published after 1996:

[64] Ditlevsen, O., Distributions of extremes of random fields over arbitrary domains with application to concrete rupture stresses. Probabilistic Engineering Mechanics, 19(4), 2004, 373-384.

[65] Franchin, P., Ditlevsen, O., and Der Kiureghian, A., Model correction factor method for reliability problems involving integrals of non-Gaussian random fields. Probabilistic Engineering Mechanics, 17, 2002, 109-122.

[66] Ditlevsen, O., Tarp-Johansen, N.J., and Denver, H., Bayesian Soil Assessments Combining Prior with Posterior Censored Samples. Computers and Geotechnics, 26(3-4), 2000, 187-198.

[67] Ditlevsen, O. and Tarp-Johansen, N.J., Choice of input fields in stochastic finite elements. Probabilistic Engineering Mechanics, 14, 1999, 63-72.

[68] Hasofer, A.M., Ditlevsen, O., and Tarp-Johansen, N.J., Positive random fields for modeling material stiffness and compliance. ICOSSAR'97, Kyoto, Japan, Nov. 24-28, 1997. Structural Safety and Reliability (eds.: N. Shiraishi, M. Shinozuka, Y.K. Wen), Balkema, Rotterdam, 1998, 723-730.

Related papers on elasto-plastic oscillators (Slepian model processes) by the author et al, published after 1996:

[69] Lazarov, B., and Ditlevsen, O., Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories. Probabilistic Engineering Mechanics, 20, 2005, 251-262.

[70] Lazarov, B., and Ditlevsen, O., Simulation by Slepian method of plastic displacements of Gaussian process excited multistory shear frame. Probabilistic Engineering Mechanics, 19(1-2), 2004, 113-126.

[71] Ditlevsen, O., and Lazarov, B., Slepian simulation of plastic displacement distributions for shear frame excited by filtered Gaussian white noise ground motion. In Applications of Statistics and Probability in Civil Engineering (eds.: Armen Der Kiureghian, Samer Madanat, Juan M. Pestana), Millpress, Rotterdam, pp. 259-266. Proc. of ICASP9, July 6-9, 2003, San Francisco, USA.

[72] Tarp-Johansen, N.J., and Ditlevsen, O., Time between plastic displacements of elasto-plastic oscillators subject to Gaussian white noise. Probabilistic Engineering Mechanics, 16(4), 2001, 373-380.

[73] Ditlevsen, O., Randrup-Thomsen, S.R., and Tarp-Johansen, N.J., Slepian approach to a Gaussian excited elasto-plastic frame including geometric nonlinearity. Nonlinear Dynamics, 24, 2001, 53-69.

[74] Ditlevsen, O., and Tarp-Johansen, N.J., Slepian modeling as a computational method in random vibration analysis of hysteretic structures. Third International Conference on Computational Stochastic Mechanics, Island of Santorini, Greece, June 14-17, 1998. In: Computational Stochastic Mechanics (ed.: P.D. Spanos), A.A. Balkema, Rotterdam, 1999, 67-78.

[75] Randrup-Thomsen, S. and Ditlevsen, O., Experiments with elasto-plastic oscillator. Probabilistic Engineering Mechanics, 14, 1999, 161-167.

[76] Ditlevsen, O., and Tarp-Johansen, N.J., White noise excited non-ideal elasto-plastic oscillator. Acta Mechanica, 125, 1997, 31-48.


Index

n-dimensional normal, 11

alarm policy, 56
associated linear oscillator, 60

Bayesian method, 45
Bayesian statistical method, 37
buckles, 60
bumps, 57

censored data, 46
Choleski decomposition, 13
classical interpolation, 33
clipped sampling case, 46
clumps of response process exceedances, 57
coherence function, 10
computational practicability, 38
conditional expectation, 8
Cone Penetration Test, 39, 43
correlation coefficient, 7
covariance function, 17
CPT method, 39, 43
crests, 60

density, 11
density function of the crest values, 62
design conditions for vehicles, 57
destructive testing measurements, 42
Direct linear regression, 30
duration of a wide band Gaussian process, 57

empty excursions, 58
Envelope excursions, 57
envelope processes, 57
ergodicity, 52
exponential transformation, 16

factorized correlation structure, 38
fatigue accumulation, 57
fatigue life studies, 56

first passage, 57
Fokker-Planck equation, 63
foundation on saturated clay, 38

Gaussian, 11
Gaussian closure, 13
Gaussian density, 45
geometric distribution, 61
gradient, 12

Hilbert transform, 57
homogeneous, 18
homogeneous Markov field, 40
homogeneous Poisson stream, 25
horizontal window crossings, 50

increasing marginal transformation, 16
Integral average method, 30
interpolation, 33
interpolation functions, 20, 36
isotropic fields, 39

J. de Maré, 54
joint density of wave length and amplitude, 55
joint Gaussian density, 38

Kac, 53
Karhunen-Loeve expansion, 31
Kolmogorov, 4
Krige, 5
kriging, 20, 30

Lagrange, 33
Lagrangian remainder, 33
likelihood function, 38, 45
Lindgren, 53
linear functionals, 21
linear interpolation in the mean, 36
linear regression, 6
linear single degree of freedom oscillator, 58
local maxima, 60


local maximum, 55, 59
local maximum values, 54
local minimum, 59
local minimum values, 54
lock-in phenomenon, 57
lognormal distribution, 16, 18
long run density, 54
long run fraction of qualified envelope excursion, 58
long run sampling, 53

marginal median point, 16
marginal transformation, 18
marginally backtransformed linear regression, 30
Markovian, 28
maximum likelihood estimation, 34
maximum likelihood principle, 45
mean rate of upcrossings, 51
mean value correct backtransformed linear regression, 30
mean value function, 17
measuring error model, 40
Midpoint method, 30
minima, 60

narrow band process, 55
Nataf field, 18, 31
noisy data, 40
non-Gaussian distributions, 16
non-informative prior, 37, 45
nonlinear functions, 11
nonnegative definite, 17

pairing method, 41
Palmgren-Miner rule, 57
paradox, 55
Poisson load field, 26
principle of simplicity, 34

qualified envelope excursions, 57
quotient series expansion, 40

random process, 46
random vibration, 12, 58
Rayleigh density, 52
Rayleigh distribution, 60
regression coefficient, 6
regression coefficient function, 10
regression coefficient matrix, 8
regularity factor, 55
residual, 6

residual covariance function, 10
residual covariance matrix, 8
residual vector process, 47
Rice, 54
Rice formula, 51

saturated clay deposit, 43
Shape function method, 30
shape functions, 20
singular normal, 11
Slepian, 53
Slepian model vector process, 53
soil body, 39
spectral moments, 48
spectral representation, 48
spectral width parameter, 55
standard Rayleigh variable, 54
standardized normal, 11
stationary, 18, 47
stochastic averaging, 63
stochastic interpolation, 33
stochastic linearization, 12
symmetric elasto-plastic oscillator, 60

troughs, 60

undrained failure, 38
undrained shear strength, 43

vane test method, 43
vortex shedding effect of the natural wind, 57

weakly homogeneous, 18
weighted integrals, 21
wide band process, 55, 59
Winterstein approximations, 32