
Stanford Exploration Project, Report 113, July 8, 2003, pages 231–246

Multiple realizations and data variance: Successes and failures

Robert G. Clapp1

ABSTRACT

Geophysical inversion usually produces a single solution to a given problem. Often it is desirable to have some error bounds on our estimate. We can produce a range of models by first realizing that the single-solution approach produces the minimum energy, minimum variance solution. By adding appropriately scaled random noise to our residual vector we change the minimum energy solution. Multiple random vectors produce multiple new estimates for our model. These various solutions can be used to assess error in our model parameters. This methodology relies strongly on having a decorrelated residual vector and, previously, was used primarily on the model styling portion of our inversion problem because it came closer to honoring the decorrelation requirement. With an appropriate description of the noise covariance, multiple realizations can be estimated. Examples of perturbing the data fitting portion of the standard inversion are shown on 2-D deconvolution and 1-D velocity estimation problems. Results indicate that the methodology has potential but is not yet well enough understood to be generally applied.

INTRODUCTION

Risk assessment is a key component of any business decision. Geostatistics has recognized this need and has introduced methods such as simulation to attempt to assess uncertainty in estimates of earth properties (Isaaks and Srivastava, 1989). The problem is that geostatistical methods are generally concerned with local, rather than global, solutions to problems and therefore cannot be easily applied to the global inversion problems that are common in geophysics.

In previous works (Clapp, 2000, 2001a,b), I showed how we can modify standard geophysical inverse techniques by adding random noise to the model styling goal to obtain multiple realizations. In Clapp (2002) and Chen and Clapp (2002), these multiple realizations were used to produce a series of equiprobable velocity models. The velocity models were used in a series of migrations, and the effect on Amplitude vs. Angle (AVA) was analyzed. Previous papers on the subject concentrated on modifying the model styling goal and only briefly mentioned that it should be feasible to apply the same methodology to the data fitting goal.

In this paper, I show that the data fitting goal can also be changed to produce equiprobable

1 email: [email protected]



232 R. Clapp SEP–113

results. I show that it is a much more difficult problem than modifying our model styling goal because of the difficulty of building a realistic noise covariance operator. I begin by reviewing the methodology of multiple realizations with operators. I introduce a simple inverse filtering example to illustrate that for simple cases we can create pseudo-datasets with a realistic noise distribution. These datasets can be used as input to an inversion process to get some preliminary boundaries on model errors. I conclude by showing an example of creating multiple reasonable interval velocity models from a single velocity scan.

REVIEW

Inverse problems obtain an estimate of a model m, given some data d and an operator L relating the two. We can write our estimate of the model as minimizing the objective function in a least-squares sense,

f(m) = ‖d − Lm‖². (1)

We can think of this same minimization in terms of fitting goals as

0 ≈ r = d−Lm, (2)

where r is a residual vector.
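For concreteness, fitting goals (1)-(2) can be sketched numerically. The forward operator, sizes, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy version of 0 ~ r = d - Lm: build a random forward operator L,
# generate data d from a known model, and minimize f(m) = ||d - Lm||^2.
rng = np.random.default_rng(0)
L = rng.normal(size=(100, 10))                  # hypothetical forward operator
m_true = rng.normal(size=10)                    # model we hope to recover
d = L @ m_true + 0.01 * rng.normal(size=100)    # data with weak noise

m_est, *_ = np.linalg.lstsq(L, d, rcond=None)   # least-squares estimate
r = d - L @ m_est                               # residual vector
```

With a well-conditioned operator and weak noise, the recovered model is close to the true one and the residual is small.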

Bayesian theory tells us (Tarantola, 1987) that the convergence rate and the final quality of the model improve the closer r is to being Independent and Identically Distributed (IID). If we include the inverse noise covariance N in our inversion, our residual becomes IID,

0 ≈ r = N(d−Lm). (3)

A regularized inversion problem can be thought of as a more complicated version of (3), with an expanded data vector and an additional covariance operator,

0 ≈ rd = Nnoise(d−Lm)

0 ≈ rm = εNmodel(0− Im). (4)

In this new formulation

rd is the residual from the data fitting goal,

rm is the residual from the model styling goal,

Nnoise is the inverse noise covariance,

Nmodel is the inverse model covariance,

I is the identity matrix, and

ε is a scalar that balances the fitting goals against each other.



Normally we think of Nmodel as the regularization operator A. Simple linear algebra leads to a more standard set of fitting goals:

0 ≈ rd = Nnoise(d−Lm)

0 ≈ rm = εAm. (5)

The problem with this approach is that we never know the true inverse noise or model covariance and therefore are only capable of applying approximate forms of these matrices.
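As a sketch of fitting goals (5), the two goals can be stacked into one tall least-squares system. The row-selector forward operator, the first-difference roughener standing in for A, and the value of ε are all illustrative choices.

```python
import numpy as np

# Stack the data fitting and model styling goals of (5) into one system:
#   [  L  ] m  ~  [ d ]
#   [ e*A ]       [ 0 ]
rng = np.random.default_rng(1)
n = 50
rows = rng.choice(n, 30, replace=False)
L = np.eye(n)[rows]                        # toy forward operator: keeps 30 of 50 points
m_true = np.sin(np.linspace(0, 3 * np.pi, n))
d = L @ m_true

A = np.diff(np.eye(n), axis=0)             # first-difference roughener as a stand-in for A
eps = 0.1
G = np.vstack([L, eps * A])
rhs = np.concatenate([d, np.zeros(n - 1)])
m_est, *_ = np.linalg.lstsq(G, rhs, rcond=None)
```

With a small ε the data are honored nearly exactly while the roughener fills in the unknown points smoothly.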

Model Variability

What we choose for A can have a significant effect on our model estimate. In theory, A should be a matrix of size nm by nm (where nm is the number of model elements). In practice, people use numerous approximations for A. Some of the more common are:

• a Laplacian or some type of symmetric operator,

• a stationary Prediction Error Filter (Claerbout, 1999),

• a steering filter (Clapp, 2001a), or

• a non-stationary PEF (NSPEF) (Crawley, 2000).

The first option assumes that our model is smooth and stationary. The second option still assumes stationarity, but allows for much more sophisticated covariance descriptions. The third option allows for non-stationarity but is only valid when a model has a single dip at each location, and it requires some a priori knowledge about the model. The last is the best representation of the inverse model covariance matrix. Unfortunately, it requires a field with the same properties as the model in order to be constructed. In addition, it faces stability problems (Rickett, 2001).

At least the first three methods, and possibly all four (depending on the field we use and the filter description we choose to estimate the NSPEF), are limited to describing second-order statistics. In addition, we normally try to describe the inverse covariance through a filter with only a few coefficients. In terms of the covariance matrix, we are putting non-zero coefficients along only a few diagonals. Finally, all of these approaches assume that the main diagonal of the inverse covariance matrix is constant.

The problems with these approximations are demonstrated in Figure 1. The left panel shows measurements of the sea depth for a day. A PEF is estimated at the known locations and used as A with the fitting goals

0 ≈ rnoise = d − Jm
0 ≈ rmodel = εAm, (6)

where d is shown in the left panel of Figure 1 and J is a selector matrix (1 at known locations, 0 at unknown). The center panel of Figure 1 shows the estimated model. Note how the estimated



Figure 1: The left panel shows the result of recording the sea depth for one day. The center panel shows the interpolation result using a filter estimated from the known portion of the data and then solving (6). Note the difference in variance between the known and unknown portions of the model. The right panel is the filter response of the PEF used for interpolation. bob5-seabeam [ER,M]

portion of the model doesn't have the right texture. The variance of the model is not the same as the variance of the data. This is due to the combination of two factors.

• Our covariance description has a limited range. The right panel of Figure 1 shows the result of applying the inverse PEF (or more correctly (A−1)′A−1) to a spike in the center of the model. Note how the response tends towards zero.

• Our inverse problem will give us a minimum energy solution, which means it is going to fill the residual vectors with numbers as small as possible.

In previous papers (Clapp, 2000, 2001a), I showed how we can change what the minimum energy solution will be by introducing an initial condition to our residual vector filled with random numbers. In terms of our fitting goals (6), we replace the zero vector of our model styling fitting goal with a standard normal noise vector n, scaled by some scalar σm,

0 ≈ rnoise = d − Jm
σm n ≈ rmodel = εAm. (7)

For the special case of missing data problems, where L is simply a masking operator J delineating known and unknown points, Claerbout (1999) showed how σm can be approximated by first estimating the model through the fitting goals in (6), then by solving

σm = (1′ J rmodel²) / (1′ J 1), (8)

where 1 is a vector composed of 1s. Figure 2 shows the model using three different n vectors. Note how the variance of the model is now similar to the variance of the data. A good check of this is whether the original recording path can be seen.

Using equation (8) to estimate σm assumes that our covariance description is correct. When our assumptions of stationarity are incorrect, or the filter shape is insufficient to correctly describe the covariance, this approach will be ineffective.
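The perturbation of fitting goals (7) can be sketched as follows. The selector J, the first-difference roughener standing in for A, and a fixed σm (rather than one estimated via equation (8)) are all illustrative assumptions.

```python
import numpy as np

# Fitting goals (7): replace the zero vector of the model styling goal
# with scaled random noise to generate multiple realizations.
rng = np.random.default_rng(2)
n = 60
idx = rng.choice(n, 20, replace=False)
J = np.eye(n)[idx]                       # selector: 1 at known locations
m_ref = np.cos(np.linspace(0, 2 * np.pi, n))
d = J @ m_ref
A = np.diff(np.eye(n), axis=0)           # toy roughener standing in for A
eps = 0.5
G = np.vstack([J, eps * A])

def realization(sigma_m):
    noise = sigma_m * rng.normal(size=A.shape[0])   # sigma_m * n replaces the zeros
    rhs = np.concatenate([d, noise])
    m, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return m

models = np.array([realization(0.2) for _ in range(10)])
known = np.zeros(n, dtype=bool)
known[idx] = True
# Realizations stay pinned near the data and vary most away from it.
```

The spread across realizations is smaller at the known locations, mirroring the behavior seen in Figure 2.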



Figure 2: Three realizations of the interpolation problem using fitting goals (7). bob5-seabeam2 [ER,M]

Data Variability

As fitting goals (4) indicate, our fitting goals are in many ways symmetric. As a result, we should be able to replace the zero vector of our data fitting goal with another scaled random vector. We now have a second scaling factor σd and a new set of fitting goals:

σd η ≈ rd = Nnoise(d − Lm)
σm η ≈ rm = εAm. (9)

This new set of fitting goals offers the opportunity to see how uncertainty in our data estimates affects uncertainty in our model estimate. In the next section I discuss some of the difficulties in making this formulation practical.
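A minimal sketch of fitting goals (9) perturbs both residual vectors with scaled random vectors and collects the resulting models. The identity forward operator, the first-difference A, Nnoise taken as the identity, and the σ values are all illustrative assumptions.

```python
import numpy as np

# Fitting goals (9): add scaled random vectors to BOTH the data fitting
# and the model styling goals, then look at the spread of the models.
rng = np.random.default_rng(5)
n = 40
d = np.sin(np.linspace(0, np.pi, n))
A = np.diff(np.eye(n), axis=0)
eps, sigma_d, sigma_m = 0.3, 0.05, 0.05
G = np.vstack([np.eye(n), eps * A])        # identity plays the role of L

def realization():
    rhs = np.concatenate([d + sigma_d * rng.normal(size=n),
                          sigma_m * rng.normal(size=n - 1)])
    m, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return m

models = np.array([realization() for _ in range(20)])
spread = models.std(axis=0)                # pointwise model uncertainty estimate
```

The pointwise standard deviation across realizations gives a crude error bar on the model, which is the goal of the formulation.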

PROBLEMS

Getting fitting goals (9) to work correctly requires a good handle on several elements that are not necessarily easy, or even possible, to achieve. To demonstrate these problems, I set up a simple deblurring inversion problem. The left panel of Figure 3 shows the model m that we are going to attempt to invert for. The right panel of Figure 3 shows the data d, the model blurred by a simple known filter. If we add random noise to the problem (left panel of Figure 4), we quickly get an unreasonable model estimate (right panel of Figure 4). If we add an isotropic regularizer, our estimate improves substantially (Figure 5). We speed up the convergence of our problem by preconditioning with a symmetric operator A−1 and start with

0 ≈ rd = Nnoise(d−LA−1p)

0 ≈ rm = εp (10)

as our fitting goals.
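A 1-D analogue of the behavior in Figures 4 and 5 can be sketched as follows: blur a spiky model with a known filter, add noise, and compare the naive inverse with an isotropic (identity) regularized solve. The filter, sizes, noise level, and ε are illustrative choices, not from the paper's 2-D example.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
m_true = np.zeros(n)
m_true[20], m_true[50] = 1.0, -1.0

# Blur operator: convolution with a short triangle filter.
f = [0.25, 0.5, 0.25]
L = np.zeros((n, n))
for i in range(n):
    for k, c in enumerate(f):
        j = i + k - 1
        if 0 <= j < n:
            L[i, j] = c

d = L @ m_true + 0.02 * rng.normal(size=n)

# Naive inverse: noise is amplified near the filter's spectral null.
m_naive = np.linalg.solve(L, d)

# Isotropic (identity) regularization tames the amplification.
eps = 0.1
G = np.vstack([L, eps * np.eye(n)])
m_reg, *_ = np.linalg.lstsq(G, np.concatenate([d, np.zeros(n)]), rcond=None)
```

Even very weak noise blows up in the naive inverse because the triangle filter nearly annihilates the highest wavenumbers, which is the same mechanism behind the unreasonable estimate in Figure 4.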



Figure 3: The left panel is the model that we are going to attempt to invert for. The right panel is the data, a blurred version of the model. bob5-unblur1 [ER,M]

Figure 4: The left panel is the data with Gaussian random noise added. The right panel is the resulting model estimate. bob5-rand [ER,M]

Figure 5: The model estimated from the data shown in the left panel of Figure 4 using an isotropic regularization operator. bob5-rand2 [ER]

Page 7: Multiple realizations and data variance: Successes and ...sep · some preliminary boundaries on model errors. I conclude by showing an example of creating multiple reasonable interval


IID Residuals

A more difficult, but at times more realistic, challenge is when our noise is not Gaussian. The left panel of Figure 6 shows smoothed Gaussian noise, of approximately the same amplitude as the random noise used above, that we will add to our data (right panel of Figure 3). The resulting model estimate (right panel of Figure 6) shows a clear imprint of the noise pattern. If we look at the rd vector (left panel of Figure 7), we see that we have disobeyed both of the IID requirements. The residual is definitely correlated, and if we look at the histogram of the residuals (right panel of Figure 7), the identically distributed requirement looks suspect.

Figure 6: The left panel shows correlated noise that we are going to add to our data. The right panel shows the resulting inversion. Note how the imprint of the noise is visible in the model. bob5-iid [ER,M]

Figure 7: The left panel shows the residual (rd) associated with the model in Figure 6. The right panel is a histogram of the residual. Note the structure in the residual and its deviation from Gaussian. bob5-iid2 [ER,M]

Up to this point we have ignored the inverse noise covariance matrix N. Claerbout (1999) and Guitton (2000) suggested using a PEF for N, estimated from the residual. If we do this we



get an improved result (left panel of Figure 8) and a more decorrelated residual (right panel of Figure 8). Remember that N should be the inverse of the noise spectrum. Estimating it from the residual does not necessarily give us the same information. Since this is a synthetic example, we have another option: we can estimate N directly from our noise estimate. Figure 9 shows the result of the inversion using this filter and the rd vector associated with the inversion. The resulting estimate is generally of higher quality.

Figure 8: The left panel shows the inversion result using a PEF estimated from the residual as N. The right panel shows the resulting rd. Note the improved result compared to Figure 7. bob5-iid3 [ER,M]

Figure 9: The left panel shows the inversion result using a PEF estimated from the known noise as N. The right panel shows the resulting rd. Note the improved result compared to Figure 7. Compared to Figure 8 we see an improvement in image clarity at some expense of boosting the noise. bob5-iid4 [ER,M]
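In 1-D, the idea of estimating a PEF from the residual and using it as N reduces to fitting prediction coefficients by least squares and checking that the prediction error is whiter than the input. The two-coefficient filter and the moving-average "residual" below are illustrative stand-ins.

```python
import numpy as np

# Build a correlated "residual" by smoothing white noise.
rng = np.random.default_rng(4)
w = rng.normal(size=2000)
r = np.convolve(w, np.ones(5) / 5, mode='valid')

# Fit a 3-term PEF (1, -a1, -a2) by least squares:
# r[t] ~ a1*r[t-1] + a2*r[t-2]
X = np.column_stack([r[1:-1], r[:-2]])
y = r[2:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

e = y - X @ a                      # prediction error: the PEF output

def lag1_corr(x):
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))
# lag-1 correlation drops sharply after applying the PEF
```

The sharp drop in lag-1 correlation is the 1-D version of the decorrelated residual in the right panel of Figure 8.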

Types of noise

Looking at the residual in Figures 7-9 demonstrates another problem with satisfying the requirements for multiple realizations to work correctly. There are two different structures that



we see in the residual. The first is the low spatial frequency structure associated with the noise that we added to the original data. This is the actual noise in our problem and what we hope to exploit when doing multiple realizations.

The second type of error has a higher frequency component; the outlines of the objects in the photo are an example of this type of noise. This is a more disturbing component of the residual. It is actually caused by our problem formulation. Our regularization operator A makes an assumption of model smoothness that isn't valid everywhere. Our data and data fitting operator L indicate that the model should not be smooth at these locations. These two conflicting pieces of information result in additional structure in our residual. If we estimate our noise covariance operator directly from our data residual, the resulting operator will attempt to represent both types of noise, and we will introduce additional unwanted structure into our noise when doing multiple realizations.

Inverse noise covariance

If we apply fitting goals (9) in an attempt to create multiple models with about the same level and type of noise, we obtain the models seen in Figure 10. These models have about the right amplitude of noise but have a higher frequency noise component than our synthetic test. The reason for this discrepancy is that the noise filter does not effectively describe the noise spectrum in our synthetic. Figure 11 shows the spectrum of the noise added in our synthetic test (the noise seen in the left panel of Figure 6). Note how the spectrum outside a very small band is zero. Emulating this behavior with a filter is nearly impossible and would require a prohibitively large filter. As a result, we are limited in what type of noise we can describe by the practicality of building a filter that accurately captures its spectrum.

Variance

In the above example, the assumption that the variance of our noise is spatially invariant is correct. In a more general problem, this will not be the case. As a result, a more appropriate formulation for our noise covariance is

N = NcNv. (11)

In this formulation, Nc is a description of the relation between points. It will take the form of a PEF or one of the other operators described above. The second operator Nv will attempt to normalize the variance of the various data errors to the same level and should be equal to

Nv(i) = 1 / V(i), (12)

where V(i) is the variance at a given data location i.



Figure 10: Four realizations using the fitting goals (9). Note how the amplitude of the noise is consistent with our synthetic experiment, but the spectrum shows more energy at large wavenumbers. bob5-inc [ER,M]

Figure 11: The spectrum of the noise added in the synthetic problem shown in Figures 6-9. bob5-noise [ER]



ε and σ

A final problem is choosing appropriate values for σ and ε. Often we have a good idea of what our data variance is, but what we are actually adding random noise to is the output space of our inverse noise covariance operator. How the variance in the data space translates to a variance in that output space is far from obvious. In addition, if our σ value differs from the variance of solving our estimation without noise added to the residual, we will need to modify our value of ε to achieve the same balance between our data fitting and model styling goals.

1-D SUPER DIX

A relatively simple, but more realistic, example is estimating interval velocities vint from RMS velocities vrms. Clapp et al. (1998) did this by taking advantage of the linear relation between vrms² and vint². We can keep our interval velocities relatively smooth by adding a roughening operator D. The fitting goals then become

0 ≈ rn = T vrms² − C vint²
0 ≈ rm = εD vint², (13)

where C is causal integration and T is the result of causal integration applied to a vector of ones.
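Fitting goals (13) can be sketched numerically, assuming T is a diagonal of travel times and C a causal integration, consistent with the Dix relation t·vrms² = ∫ vint² dτ. The velocity profile, sampling, and ε below are illustrative.

```python
import numpy as np

n = 40
dt = 0.1
C = dt * np.tril(np.ones((n, n)))      # causal integration
T = np.diag(C @ np.ones(n))            # causal integration of a vector of ones
t = np.diag(T)

vint = 1.5 + 0.02 * np.arange(n)       # smooth interval velocity (km/s), made up
vrms2 = (C @ vint**2) / t              # exact RMS^2 from the forward relation

D = np.diff(np.eye(n), axis=0)         # roughening operator
eps = 0.01
G = np.vstack([C, eps * D])
rhs = np.concatenate([T @ vrms2, np.zeros(n - 1)])
vint2_est, *_ = np.linalg.lstsq(G, rhs, rcond=None)
vint_est = np.sqrt(np.maximum(vint2_est, 0))
```

With noise-free data and a light roughening penalty, the inverted interval velocities closely match the true profile; the interesting behavior appears once the data are noisy and unevenly reliable, as discussed next.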

Figure 12 shows the result of applying this procedure to a simple CMP gather. The left panel shows the initial CMP gather, and the center panel shows the stack power of various vrms values. We use the maximum within a reasonable fairway (the solid lines overlaying the stack power scan) as our data (dashed lines). The right panel of Figure 12 shows our auto-picked vrms (solid line), our inverted vint (dashed line), and our interval velocity converted back to RMS velocity (dotted line).

Fitting goals (13) again assume a constant variance in our data. This assumption is incorrect in this case for two very obvious reasons. First, the T operator applied to our data means that late times are given a much larger weight in our inversion. A solution to this problem is to introduce a weighting operator D1, which is simply 1/T. A second error in the assumption of constant variance is that we know that not all our data (vrms measurements) are of the same quality. The center panel of Figure 12 shows that there are areas with no significant reflectors. In addition, there are areas where our stack power results show an obvious maximum at a given vrms value and other areas where the maximum is much less clear. To take both of these phenomena into account, I calculated a weighted variance within the fairway shown in the center panel of Figure 12,

v(i) = [ Σ_{j=b(i)}^{e(i)} (v(j) − vmax(i))² s(i, j)⁴ ] / [ Σ_{j=b(i)}^{e(i)} s(i, j)⁴ ], (14)

where



Figure 12: The left panel shows the initial CMP gather. The center panel shows the stack power of various vrms values, overlaid by the fairway (solid lines) used for automatically picking RMS values (dashed lines). The right panel shows our auto-picked vrms (solid line), our inverted vint (dashed line), and our interval velocity converted back to RMS velocity (dotted line). bob5-dix1 [ER,M]

b(i) is the beginning sample of the fairway at a given sample i,

e(i) is the ending sample of the fairway at a given sample i,

v(j) is the vrms at a given stack power location,

vmax(i) is the velocity associated with the maximum stack power value (our data), and

s(i, j) is the semblance value at time sample i and some vrms value j.
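Equation (14) can be sketched for a single time sample; the velocity trials and semblance values below are made up, with a sharp and a wide stack-power blob for comparison.

```python
import numpy as np

v = np.linspace(1.4, 1.8, 9)           # vrms trials across the fairway
s_sharp = np.array([0.1, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.2, 0.1])
s_wide  = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.4])

def weighted_var(s):
    vmax = v[np.argmax(s)]             # auto-picked velocity at this sample
    w = s**4                           # semblance weighting from (14)
    return float(np.sum(w * (v - vmax)**2) / np.sum(w))

var_sharp = weighted_var(s_sharp)
var_wide = weighted_var(s_wide)
# a sharper stack-power blob gives a smaller picking variance
```

This reproduces the behavior described below: sharp blobs yield small variance, wide blobs large variance.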

The left panel of Figure 13 shows our stack power scan overlaid by vrms (dashed line) and vrms ± √v (dashed lines). Note how at areas with a sharp stack power blob the variance is small, while where the stack power blob is wide, and we have little coherent energy, the variance is large. We can now estimate a new interval velocity model using

0 ≈ rn = N(T vrms² − C vint²)
0 ≈ rm = εD vint², (15)

where N is VD1. The right panel of Figure 13 shows our data vrms, vint, and Cvint.



Figure 13: The left panel shows our stack power scan overlaid by vrms (dashed line) and vrms ± √v (dashed lines). The right panel shows our data vrms, vint, and Cvint. bob5-dix2 [ER,M]

Multiple vint

Now we have almost all of the pieces to create multiple realistic interval velocity models from our vrms measurements. The left panel of Figure 14 shows the residual after estimating the model using (15). We can see that there is a low frequency component to the residual. We can estimate a PEF from this residual and re-solve (15) using

N = HVD1, (16)

where H is convolution with our PEF calculated from the residual. The resulting residual (right panel of Figure 14) is now white.

Figure 14: The left panel is the residual without using a whitening filter. The right panel is the residual using a whitening filter. Note how the residual now shows only minimal structure. bob5-dix3 [ER,M]

We can now use our multiple realization machinery to produce multiple interval velocity models that have approximately the correct covariance. The right panel of Figure 15 shows 80



interval velocity realizations. The left panel shows those realizations converted back to vrms. Note how the vrms functions stay within the variance bounds. In addition, as we expected, we see larger variance where our bounds are wider. If we look at the mean and variance of our interval and RMS estimates (Figure 16), we get another interesting result: areas of large variance are not always correlated.

Figure 15: The right panel shows 80 interval velocity realizations. The left panel shows those interval velocities converted back to vrms, overlaying the stack power for the CMP gather. bob5-dix4 [ER,M]

Figure 16: The left panel shows the mean of the vrms (solid line) and vint (dashed line) of 80 realizations overlaying the stack power. The right panel shows the variance of vrms and vint for the 80 realizations. bob5-dix5 [ER,M]

CONCLUSIONS

In order to see the effect of data uncertainty upon model uncertainty, we must have a good understanding of the properties of our noise and the accuracy of our modeling and regularization



operators. If we are able to make reasonable estimates of these properties, we can produce multiple, equi-probable estimates. This methodology shows great promise in fields like velocity analysis, where we understand the errors in our data but their complex interaction with the model makes inferring model uncertainty difficult. Preliminary work using a simple RMS-to-interval velocity estimation shows promise.

REFERENCES

Chen, W., and Clapp, R., 2002, Exploring the relationship between uncertainty of AVA attributes and rock information: SEP–112, 259–268.

Claerbout, J., 1999, Geophysical estimation by example: Environmental soundings image enhancement: Stanford Exploration Project, http://sepwww.stanford.edu/sep/prof/.

Clapp, R. G., Sava, P., and Claerbout, J. F., 1998, Interval velocity estimation with a null-space: SEP–97, 147–156.

Clapp, R., 2000, Multiple realizations using standard inversion techniques: SEP–105, 67–78.

Clapp, R. G., 2001a, Geologically constrained migration velocity analysis: Ph.D. thesis, Stanford University.

Clapp, R. G., 2001b, Multiple realizations: Model variance and data uncertainty: SEP–108, 147–158.

Clapp, R. G., 2002, Effect of velocity uncertainty on amplitude information: SEP–111, 255–269.

Crawley, S., 2000, Seismic trace interpolation with nonstationary prediction-error filters: Ph.D. thesis, Stanford University.

Guitton, A., 2000, Coherent noise attenuation using Inverse Problems and Prediction Error Filters: SEP–105, 27–48.

Isaaks, E. H., and Srivastava, R. M., 1989, An Introduction to Applied Geostatistics: Oxford University Press.

Rickett, J., 2001, Spectral factorization of wavefields and wave operators: Ph.D. thesis, Stanford University.

Tarantola, A., 1987, Inverse Problem Theory: Elsevier Science Publisher.