


One, none, hundred thousand tomographic images

Aldo Vesnaver 1 and Gualtiero Böhm 2

1 Saudi Aramco, Dhahran, Saudi Arabia; 2 OGS, Trieste, Italy

Seismic tomography imposes a cruel destiny on the geophysicist: he seeks just one image, i.e. THE structure of the Earth, but missing or contradictory data may lead him to get none. Should he finally succeed, in some way, he gets a crowd of different images: at least one for each Earth model that his mind and his computer can draw. Plus, of course, those drawn by his peers…

An Italian writer of the past century (the Nobel laureate L. Pirandello) described likewise the image of a person: just one, apparently; but none really exists, because a hundred thousand other individuals would hold a hundred thousand different images of him. He wrote a novel entitled “One, None, Hundred Thousand”: it looks like tomography!

Squared infinity?

Actually, the geophysicist’s life is even harder. Let us suppose that, in some way, he convinces himself and all his colleagues that a given Earth model is adequate for the case at hand: for example, a few layers, or a number of regular bricks. Even so, he is quite far from getting a reliable, unique Earth image: in most cases, infinitely many different parameter sets (such as the velocity field, or the layer shapes) will satisfy the experimental data equally well, for that single particular model he chose. Since the possible Earth models are also infinite, we may say (roughly speaking) that a squared infinity of tomographic images exists for the given data.

The information provided by infinite (and possibly contradicting) solutions has a practical value close to zero. Drilling engineers require just one precise location for placing a well and trying to extract oil and gas. Thus, geophysicists often choose just one of the acceptable images, and rarely attach to each parameter its uncertainty. In this way, there is a double (or squared) personal bias: first, the choice of the Earth model; second, the choice of a particular solution, or the use of arbitrary parameters (such as damping factors) that heavily condition the final image. In other words, one often compensates for the squared infinity of possible solutions with a squared personal bias.

Figure 1. Different models may produce identical ray segments for a given recording geometry (left); however, many rays and some smoothness constraint may reduce this multiplicity and suggest simpler model domains (right).


According to the geophysicist Fabio Rocca, “the Earth is not cooperative”: the image we choose for the Earth may later be “refused” by the experimental evidence. For example, some smoothness or simplicity principle, which we may like for characterizing our model, may be totally absent from the actual geology of our area. Thus, both drilling engineers and seismic tomographers can get bitter surprises!

Does any principle exist to guide us within this Babylon of tomographic images?

A few principles

A principle is a general hypothesis (sometimes undemonstrated) that we adopt as a guideline for our research work. For example, the basic principle of modern science is that any theory must be validated by real experiments. Our restrictive interpretation of this viewpoint for seismic tomography is that our Earth model should depend on the experimental data only. Otherwise, we should admit that our personal opinions condition the actual Earth structure… If we decide that our tomographic solutions must depend on the data only, we are led to adopt two consequent principles for the actual inversion: that is, we have to remove the two types of ambiguity mentioned in the previous section.

First, let us consider the best-known one. Seismic inversion converges iteratively towards an acceptable solution, starting from an initial hypothesis chosen by the tomographer. For a given Earth model, one can prove mathematically that the null space exists if and only if the solution depends on the choice of the initial model. Therefore, if our solution changes when we change the initial hypothesis, our problem is ill-posed: our Earth model is not adequate for the available ray paths.

Second, let us look at a simpler, but often neglected aspect. In seismic tomography, we project a few measurements (the traveltimes) along one-dimensional lines (the rays) to extrapolate two- or three-dimensional properties of the Earth. Mathematically, this approach can be sound only when infinitely many rays are available. Since this never happens, we should not be surprised by what we may call “geometrical ambiguities”. Figure 1 (left) shows a few ray paths across two different models, with a straight or a bent boundary, which fit the same input data perfectly. Thus, we have no reason to prefer either of them. We remark that they are not contaminated by the null space at all. At most, if a very large number of ray paths cross a model domain (Figure 1, right), we can reduce the multiplicity of its acceptable shapes, especially if we impose some simplicity or smoothness criterion on the boundary, e.g., a piece-wise linear or a spline function.

Some tough geophysicist might argue that some ambiguity can be removed by fully exploiting the seismic information, i.e., by full waveform inversion or by diffraction tomography. We think that this may be helpful in many cases, of course, but it cannot really resolve all ambiguities. First, the limited bandwidth of seismic signals puts severe limits on the spatial frequencies we can recover for the Earth structure. Second, a proper modeling of waveforms in viscoelastic, elastic, or even acoustic media requires a density of Earth parameters that can hardly be supported by the available data.

We can summarize the above ideas in the two following principles:

1. The tomographic image should not change when perturbing the initial velocity model.
2. The tomographic image should not change when perturbing the model structure.

Of course, the value of any principle lies in providing us clear practical rules for our daily activity. Let us see if this happens for seismic tomography.

Figure 2. A very simple 2D model (a): two possible equivalent grids (b, c), and a non-equivalent one (d).

Perturbing the initial velocities

Let us concentrate first on the velocity field only, keeping the domains (or voxels) fixed in a blocky Earth model. The simplest choice for an initial velocity field is a constant, i.e., a homogeneous background medium. Such a choice seems quite neutral: we do not assume any geological anomalies! Actually, it is not so…
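Before looking at the experiments, it may help to make the first principle concrete. For straight rays on a fixed grid, the forward problem is linear, t = A s, where s collects the slowness of each cell; the null space of A is exactly what lets the solution drift with the initial model. The sketch below (a toy illustration of ours, not the authors’ code; the cross-well-like geometry, grid size and function names are our own choices) builds an approximate ray-path matrix and counts its null-space dimension:

```python
import numpy as np

def ray_matrix(sources, receivers, nx, ny, extent, n_samples=2000):
    """Approximate path-length matrix A for straight rays on a regular grid:
    A[i, j] is the length of ray i inside cell j, estimated by dense point
    sampling (a Siddon-type tracer would give the exact cell crossings)."""
    x0, x1, y0, y1 = extent
    A = np.zeros((len(sources), nx * ny))
    for i, (s, r) in enumerate(zip(sources, receivers)):
        pts = np.linspace(s, r, n_samples)                   # points along the ray
        seg = np.linalg.norm(np.subtract(r, s)) / n_samples  # length per sample
        ix = np.clip(((pts[:, 0] - x0) / (x1 - x0) * nx).astype(int), 0, nx - 1)
        iy = np.clip(((pts[:, 1] - y0) / (y1 - y0) * ny).astype(int), 0, ny - 1)
        np.add.at(A[i], iy * nx + ix, seg)                   # accumulate per cell
    return A

# Cross-well-like toy geometry: sources on the left wall, receivers on the right.
z = np.linspace(0.1, 0.9, 8)
src = [(0.0, zi) for zi in z for _ in z]
rec = [(1.0, zj) for _ in z for zj in z]
A = ray_matrix(src, rec, nx=6, ny=6, extent=(0.0, 1.0, 0.0, 1.0))

# Null-space dimension: number of (near-)zero singular values of A.
sv = np.linalg.svd(A, compute_uv=False)
null_dim = A.shape[1] - int(np.sum(sv > 1e-8 * sv[0]))
print(f"{A.shape[0]} rays, {A.shape[1]} cells, null-space dimension {null_dim}")
```

Counting the near-zero singular values before inverting tells us, for a given grid and recording geometry, how many independent velocity patterns are invisible to the data.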

Figure 3. For the same ray path distribution but assuming different initial velocity fields (upper row), we get very different solutions (lower row).



Figure 2a shows a very simple model composed of three homogeneous layers with different velocities. Two other grids (one regular, one irregular) may fit the true model perfectly, while the last one (Figure 2d) will never represent it properly. Since we do not know the Earth structure, this last case is the most likely to occur. Let us suppose we can measure the traveltimes along the ray paths indicated by arrows in Figure 3, assuming different initial models. If the model is discretized by horizontal domains (left column), we get a perfect reconstruction. In contrast, using vertical domains, we get two different solutions, and both are wrong!

Figure 4 (left) shows a somewhat more geological model, still composed of homogeneous blocks in 2D. The layer interfaces are both horizontal and dipping, and a low-velocity anomaly in the central part simulates a hydrocarbon reservoir. As a recording geometry, we simulated a cross-well survey, with 20 sources at the left side, placed at a 150 m depth interval, and 20 receivers at the opposite side, with the same depth interval (Figure 4, right). For simplicity, we computed the traveltimes by neglecting the ray bending due to Snell’s law, since it introduces further inversion problems that we are not discussing here. We adopted the SIRT method as the inversion algorithm, without any constraints or smoothness conditions.

Figure 5 shows the dependence of the tomographic images, when using a coarse or a fine regular grid (left and right columns), on different homogeneous initial models (one per row). We see immediately that the coarse grid produces similar results in all cases, unlike the finer one, which depends heavily on the initial hypothesis. The finer model is dominated by the null space, and its “higher resolution images” are worthless… Figure 6 displays (top row) the null space energy for the coarse and fine grids, which is an accurate estimate of the local ambiguities due to domains possibly ill-constrained by the available ray paths.
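As a rough illustration of the inversion step, a minimal unconstrained SIRT-style update can be sketched as follows (the toy matrix, sizes and names are our own; a real application would build A from traced rays):

```python
import numpy as np

def sirt(A, t, s0, n_iter=500):
    """Unconstrained SIRT for traveltime tomography, t = A s (s = slowness).
    Residuals are back-projected along the rays, weighted by the ray lengths
    (row sums of A) and the total ray length crossing each cell (column sums),
    with no damping or smoothness terms."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    row[row == 0] = 1.0   # avoid division by zero for empty rays
    col[col == 0] = 1.0   # ... and for cells crossed by no ray
    s = np.asarray(s0, dtype=float).copy()
    for _ in range(n_iter):
        resid = t - A @ s                  # traveltime residuals per ray
        s += (A.T @ (resid / row)) / col   # normalized back-projection
    return s

# Toy test: recover a known slowness field from exact traveltimes.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 10))   # stand-in for a ray-path matrix
s_true = rng.uniform(0.3, 0.7, size=10)    # true slowness (s/km)
t = A @ s_true
s_est = sirt(A, t, s0=np.full(10, 0.5))    # homogeneous initial model
print("traveltime misfit:", np.linalg.norm(A @ s_est - t))
```

Running the same function from different homogeneous initial models `s0`, and comparing the resulting images, is precisely the perturbation test advocated in the text.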
Figure 4. A 2D model composed of homogeneous blocks (left) and the ray paths traced simulating a cross-well survey (right). The white numbers indicate the velocity (km/s) in each block.

We notice that the largest uncertainties occur in the lower part of the model, where there are fewer ray paths with a poorer angular coverage. Being based on the singular value decomposition of the tomographic matrix, this indicator is rigorous, but computationally expensive. A quick-and-dirty alternative emerges by comparing Figures 5 and 6 (top): where the tomographic images do not change as a function of the initial hypothesis, the null space is minimal (green color). Thus, we averaged the relative absolute differences among the different images, getting a reliability indicator (Figure 6, bottom) comparable to the null space energy, but much faster to compute.

Perturbing the initial velocity model can point out whether the null space is contaminating our tomographic images or not. What about perturbing the tomographic model itself?
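Both indicators can be sketched compactly. Below, `null_space_energy` follows the SVD-based definition (the energy of the right singular vectors associated with near-zero singular values), while `reliability_map` is one possible reading of the quick alternative: the mean absolute pairwise difference among images obtained from different initial models, normalized by the local mean. All names and the normalization are our own choices, not the authors’:

```python
import numpy as np

def null_space_energy(A, tol=1e-8):
    """Per-cell null-space energy from the SVD of the ray-path matrix A:
    squared components of the right singular vectors whose singular values
    are (near-)zero. High values flag poorly constrained cells."""
    _, sv, Vt = np.linalg.svd(A, full_matrices=True)
    null = np.ones(Vt.shape[0], dtype=bool)
    null[:len(sv)] = sv <= tol * sv[0]     # near-zero singular values
    return (Vt[null] ** 2).sum(axis=0)

def reliability_map(solutions):
    """Cheap alternative: mean absolute pairwise difference among images
    from different initial models, relative to the local mean value.
    Low values flag cells that do not depend on the initial hypothesis."""
    S = np.stack(solutions)
    n = len(S)
    d = sum(np.abs(S[i] - S[j]) for i in range(n) for j in range(i + 1, n))
    d = d / (n * (n - 1) / 2)
    return d / np.maximum(np.abs(S).mean(axis=0), 1e-12)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])    # two rays, three cells; cell 2 never hit
print(null_space_energy(A))        # energy concentrates on cell 2

imgs = [np.array([2.0, 3.0, 4.0]),
        np.array([2.0, 2.5, 5.0]),
        np.array([2.0, 3.5, 3.0])]
print(reliability_map(imgs))       # cell 0 is stable, cell 2 is not
```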

Figure 5. Tomographic images using a coarse 15×15 regular grid (left column) and a fine 30×30 regular grid (right column). The initial model is a homogeneous medium with different velocities: 1500 m/s (top row), 3000 m/s (central row) and 4500 m/s (bottom row).


Perturbing the model structure

When looking at an Earth outcrop (or just at a well log), we rarely encounter any regular structure. Even when major layer interfaces are detectable, they never match a regular spatial grid composed of identical cubes or parallelepipeds. It is clear, then, that we are heavily forcing the Earth into such a regular cage when choosing these grids for our tomographic models. Perturbing the grid gives us a chance that one of the (irregular) alternatives matches the actual Earth better (Figure 7). However, since we do not know the Earth interior at all, how can we choose among the many new different images we get?

Figure 8 displays a number of tomographic images obtained by perturbing the vertices of the regular coarse and fine grids, respectively, which we saw above. As the initial velocity model, we used a constant value of 1500 m/s in all these cases (and later too). We notice something similar to what we observed in Figure 5: the coarse images are quite stable with respect to the grid perturbation, while the finer ones are so to a much lesser extent. This fact suggests that grid perturbation is another tool for investigating the inversion ambiguities (although a less simple one than perturbing the initial velocity model).

The model perturbation can be much more general than just shifting regularly spaced vertices a bit: we must perturb the number and shape of the voxels too! In principle, a lengthy and very expensive approach such as simulated annealing could be a way of optimizing the grid; however, we found an iterative procedure (not discussed here) for adapting the grid according to the available data, i.e., the ray distribution, the null space energy and the local velocity gradients.

Figure 6. Null space energy of the coarse grid (a) and the fine grid (b); average absolute difference among the tomographic images in the columns of Figure 5, for the coarse (c) and fine (d) grids.

Figure 7. A grid can be perturbed by moving the pixel vertices by random shifts in the x and y directions, up to a chosen percentage threshold ∆.
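The random vertex shifts of Figure 7 can be sketched as follows (our own illustrative implementation; keeping the boundary vertices fixed is an assumption of ours, so that the model outline is preserved):

```python
import numpy as np

def perturb_grid(xv, yv, delta, rng=None):
    """Shift the grid vertices by random amounts in x and y, up to a fraction
    `delta` of the smallest cell size (the threshold called Delta in Figure 7).
    Boundary vertices are pinned so the model outline does not change."""
    rng = np.random.default_rng() if rng is None else rng
    X, Y = np.meshgrid(xv, yv)
    dx, dy = np.min(np.diff(xv)), np.min(np.diff(yv))
    sx = rng.uniform(-delta, delta, X.shape) * dx
    sy = rng.uniform(-delta, delta, Y.shape) * dy
    for s in (sx, sy):        # pin all boundary vertices
        s[0, :] = s[-1, :] = 0.0
        s[:, 0] = s[:, -1] = 0.0
    return X + sx, Y + sy

# One perturbed realization of a regular 3 km x 3 km grid, with Delta = 20%.
xv = np.linspace(0.0, 3.0, 16)
yv = np.linspace(0.0, 3.0, 16)
Xp, Yp = perturb_grid(xv, yv, 0.2, rng=np.random.default_rng(1))
```

Repeating the inversion on several such realizations, and checking how much the images change, is the stability test proposed in the text.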

Figure 9 (left) shows the image obtained: although the number of voxels is lower than in the fine regular grid, the image quality is superior. When perturbing this adapted irregular grid in a similar way as above, we get images very similar to the unperturbed one. Thus, the tomographic result seems only loosely dependent on the adapted grid, unlike the situation with the fine grid, where the null space dominates.

Figure 8. Tomographic images obtained by perturbing a coarse grid (left column) and a fine grid (right column), with random shifts of the vertex coordinates within different thresholds: 10% (top row), 20% (central row), and 50% (bottom row).


The final solution

In the dark mediaeval centuries, women suspected of witchcraft had to resist tremendous tortures and survival tests, including the stretching of their bodies, before being recognized as innocent. Perturbing and stretching tomographic models, as we propose here, looks like those ancient checks. However, these tests guarantee that the devil (the null space) is not contaminating our images too much, and that they represent the real Earth fairly well. Of course, we will never get THE image, nor even an erroneous one, for the reasons discussed above; we can get at most a number of quite similar images, which depend on the data only (or mostly), with a minimum of personal bias.

A possibly better way of approaching THE image is averaging them into one. Figure 10 displays the averaged images obtained from the perturbed grids in Figures 8 and 9. We see that the coarse grid provides an improved image with respect to its component images; in fact, the perturbation and averaging process increases the actual degrees of freedom and relaxes the limited set of shapes available for representing the model complexities. The finer grid gets less benefit from the average: it does not cure its ill-posedness, nor does it allow fitting the rock interfaces much better. Conversely, the adapted grid gets a little further improvement: thus, although we will never get THE image even in this way, we can get a better one.

Conclusions

When looking at the Earth through a piece of glass, a geophysicist will see smooth elongated forms or a mosaic, depending on whether the glass is gently bent or faceted. Photographers know that THE perfect lens does not exist: nevertheless, they often produce remarkably realistic pictures by properly exploiting their cameras, even by trial and error. They select the appropriate zoom, and then they move it forwards and backwards, focusing on their target by choosing the focal distance at which the image sharpness is stationary.

Seismic tomographers can follow a similar strategy. First, they should avoid grids that produce poor or ambiguous images, i.e., images that depend on the initial model; second, they should evaluate the quality of such a grid by perturbing it and checking the stability of the resulting images. If these do not change much, such images (possibly averaged) provide a reasonable estimate of the Earth structure.

Figure 9. Tomographic image obtained by adapting the grid according to the null space energy and velocity gradient (top), and by perturbing it with different thresholds: 10% (top row), 20% (central row), and 50% (bottom row).

Figure 10. Tomographic image obtained by averaging the perturbed grids in Figures 8 and 9: regular coarse (top), regular fine (center) and adapted grid (bottom).