
Developments in Geophysical Exploration Methods



DEVELOPMENTS IN GEOPHYSICAL EXPLORATION METHODS-2


THE DEVELOPMENTS SERIES

Developments in many fields of science and technology occur at such a pace that frequently there is a long delay before information about them becomes available and usually it is inconveniently scattered among several journals.

Developments Series books overcome these disadvantages by bringing together within one cover papers dealing with the latest trends and developments in a specific field of study and publishing them within six months of their being written.

Many subjects are covered by the series including food science and technology, polymer science, civil and public health engineering, pressure vessels, composite materials, concrete, building science, petroleum technology, geology, etc.

Information on other titles in the series will gladly be sent on application to the publisher.


DEVELOPMENTS IN GEOPHYSICAL EXPLORATION METHODS-2

Edited by

A. A. FITCH

Consultant, Formerly of Seismograph Service (England) Limited, Keston, Kent, UK

APPLIED SCIENCE PUBLISHERS LTD LONDON


APPLIED SCIENCE PUBLISHERS LTD RIPPLE ROAD, BARKING, ESSEX, ENGLAND

British Library Cataloguing in Publication Data

Developments in geophysical exploration methods. (The developments series; 2)
1. Prospecting: Geophysical methods
I. Fitch, A. A.
622'.15 TN269

ISBN-13: 978-94-009-8107-2 e-ISBN-13: 978-94-009-8105-8 DOI: 10.1007/978-94-009-8105-8

WITH 2 TABLES AND 120 ILLUSTRATIONS

© APPLIED SCIENCE PUBLISHERS LTD 1981 Softcover reprint of the hardcover 1st edition 1981

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publishers, Applied Science Publishers Ltd,

Ripple Road, Barking, Essex, England


PREFACE

One facet of development in this field is that the methods of gathering and processing geophysical data, and displaying results, lead to presentations which are more and more comprehensible geologically. Expressed in another way, the work of the interpreter becomes progressively less onerous.

The contributions in this collection of original papers illustrate this direction of development, especially in seismic prospecting. If one could carry out to perfection the steps of spiking deconvolution, migration and time-depth conversion, then the seismic section would be as significant geologically as a cliff-face, and as easy to understand. Perhaps this is not yet achieved, but it remains an objective, brought closer by work such as that described by the authors.

The editor offers his best thanks to the contributors, busy geophysicists who have written with erudition on this range of subjects of current interest.

A. A. FITCH



CONTENTS

Preface v

List of Contributors ix

1. Determination of Static Corrections 1 A. W. ROGERS

2. Vibroseis Processing 37 P. KIRK

3. The l1 Norm in Seismic Data Processing 53 H. L. TAYLOR

4. Predictive Deconvolution 77 E. A. ROBINSON

5. Exploration for Geothermal Energy 107 G. V. KELLER

6. Migration 151 P. HOOD

Index 231



LIST OF CONTRIBUTORS

P. HOOD

Geophysicist, Geophysics Research Branch, The British Petroleum Co. Ltd, Britannic House, Moor Lane, London EC2Y 9BU, UK.

G. V. KELLER

Professor of Geophysics, Colorado School of Mines, President, Group Seven, Inc., Irongate 11 Executive Plaza, Suite 100, 777 South Wadsworth Boulevard, Lakewood, Colorado 80226, USA.

P. KIRK

Supervisor, Data Processing Division, Seismograph Service (England) Ltd, Holwood, Westerham Road, Keston, Kent BR2 6HD, UK.

E. A. ROBINSON

Consultant, 100 Autumn Lane, Lincoln, Massachusetts 01773, USA.

A. W. ROGERS

Supervisor, Data Processing Division, Seismograph Service (England) Ltd, Holwood, Westerham Road, Keston, Kent BR2 6HD, UK.

H. L. TAYLOR

Geophysical Consultant, P.O. Box 354, Richardson, Texas 75080, USA.



Chapter 1

DETERMINATION OF STATIC CORRECTIONS

ADRIENNE W. ROGERS

Seismograph Service (England) Ltd, Kent, UK

SUMMARY

Methods of determining static corrections have evolved from the times when statics could be determined easily from production records. The widespread use of surface sources with large source and receiver arrays, and also of crooked line recording, has made these determinations less straightforward, often necessitating separate weathering surveys such as LVL or up-hole surveys. Another aspect is the development of high-resolution work, needing extremely accurate static corrections. An automated method for determining these is described.

The choice of a processing datum is important, both for high-resolution work and for cases where shallow events on a section are important.

However good the automatic residual static programs are, the best results are obtained when the original field statics are as accurate as possible. A recent factor in the use of automatic statics is the cross-dip introduced by crooked line recording.

A set of examples shows some of the problems encountered in the use of automatic statics, including low-frequency static variations.

1. INTRODUCTION

The determination of accurate static corrections is becoming increasingly important at the same time as recording methods are making it more difficult for these to be determined from ordinary production data. There was a tendency to assume that the processing centre with its automatic


residual static programs could make up for any deficiencies or inaccuracies in the field static corrections. Latterly, however, it has been realised that more attention needs to be paid to obtaining the best possible field static corrections as a starting point for the automatic static programs.

The increasing use of crooked line shooting techniques, where lines are recorded along roads and tracks, and the increasing use of surface sources such as Vibroseis®, with its long source and receiver patterns, have brought problems both in the determination of field statics and in the use of residual static programs. High-resolution recording also brings a need for greater accuracy.

2. THE WEATHERED LAYER AND THE PURPOSE OF THE STATIC CORRECTION

The purpose of the static corrections is to remove the effects of elevation changes and of the near-surface layer, and to relate the subsurface events to a datum. This is so that the shape of a reflected event on a section is not distorted by the presence of low velocity near surface material. A deep flat reflector, for example, might apparently follow the shape of the surface elevations if the static corrections were not applied. The application of static corrections to data simulates the placing of both source and receiver on the datum at points vertically below (or above) their actual positions, and where the weathered layer does not exist.

Thus in Fig. 1(A) the source static is the travel time from the source to datum through the weathering and partly through the consolidated layer, and similarly for the receiver static. In Fig. 1(B), with a datum at the surface, both source and receiver are already placed at datum, but the weathered layer has to be 'replaced' by an equivalent thickness of material at elevation velocity. Thus the source static correction would be -dw/Vw + dw/Ve,

assuming a vertical travel path through the weathering. The weathered layer is usually defined as the near-surface unconsolidated layer, and this does not always coincide with any geological subdivision. This layer is identified by its low velocity, of the order of 300-600 m s⁻¹. The base of this layer may be flat or may follow the surface elevation, or it may coincide with the water table. It will certainly have variations in thickness, caused for example by old river beds. Geological maps are useful for identifying areas where weathering variations occur, but of course, geological maps are not always available.
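The two datum conventions can be sketched in code. This is a minimal illustration, with all function and variable names invented here, vertical ray paths assumed, and one-way times throughout:

```python
# Sketch of the two datum conventions of Fig. 1; names are illustrative,
# not from the text. Vertical ray paths are assumed.

def static_to_buried_datum(elev, d_w, v_w, v_e, d_s=0.0):
    """Fig. 1(A): datum below the base of weathering. One-way time from
    a source at depth d_s (a receiver has d_s = 0) down to datum: the
    remaining weathering at v_w, then consolidated rock at v_e.
    elev is station elevation above datum, d_w weathering thickness."""
    remaining_weathering = max(d_w - d_s, 0.0)
    return remaining_weathering / v_w + (elev - d_s - remaining_weathering) / v_e

def surface_datum_correction(d_w, v_w, v_e):
    """Fig. 1(B): datum at the surface. The weathered layer is
    'replaced' by material at elevation velocity: -d_w/v_w + d_w/v_e."""
    return -d_w / v_w + d_w / v_e
```

For a surface station 30 m above a buried datum with 10 m of 500 m/s weathering over 2000 m/s rock, the first function gives 10/500 + 20/2000 = 30 ms.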

® Trademark of Continental Oil Company.


FIG. 1. Two positions of datum which are commonly used. [Diagrams A and B: source and receiver positions relative to the datum, the base of weathering, and the velocities Vw and Ve.]

Under the weathered layer there may be layers of intermediate velocity before the consolidated layer whose velocity is defined as 'elevation velocity'. Due to the low velocity of the weathered layer, any changes in this layer, for example a thickening of 5 m in a weathering of velocity 300 m s⁻¹,

would, if not corrected, cause an apparent anomaly of 50 m in a deeper event of 3000 m s⁻¹ velocity. An anomaly of this kind, if present at several surface stations, would be magnified on any inner traces where this anomaly occurred in both the source and the receiver static. Also the effect would be present on any other traces where the anomaly occurred in either the source or the receiver static. Thus for surface source records of 48 traces, the anomaly at only one station would affect 48 separate CDPs as the spread moved across the affected station, and when the source position coincided with the station a whole record would be affected.
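The magnification quoted above follows directly from the velocity ratio; a worked check:

```python
# Worked version of the example above: a 5 m thickening of 300 m/s
# weathering, left uncorrected, maps into an apparent 50 m anomaly on
# a 3000 m/s event.
thickening = 5.0      # extra weathering thickness, m
v_weathering = 300.0  # weathering velocity, m/s
v_event = 3000.0      # velocity at the deeper event, m/s

extra_time = thickening / v_weathering    # extra one-way delay, ~16.7 ms
apparent_anomaly = extra_time * v_event   # apparent depth change, m
print(round(apparent_anomaly))            # 50
```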


Static corrections may be derived from production recording, using up-hole times on dynamite data, and first breaks. Where there are no up-hole times, as on surface source recording such as Vibroseis, thumper, etc., and the first breaks may also be poor, special surveys have to be recorded for the purpose of determining static corrections. These may take the form of LVL surveys or up-hole surveys.

3. LVL SURVEYS

3.1. Recording Methods

LVL records are short-interval refraction shots designed to record data from the low-velocity layers. A typical layout for a 12-station cable for shallow weathering is shown in Fig. 2. The spread is recorded with the source at either end. The closer spacing at the ends of the spread is used in order to record the weathering velocity. The geophones are single production phones, undamped.

FIG. 2. Example of an LVL spread for shallow weathering. [Diagram: 12 geophones, with 5 m intervals at the ends of the spread and 10 m intervals in the centre.]

The source may be a small dynamite charge or a hammer blow on to a metal plate. For this latter method a time break is recorded, either by hammering immediately next to the first geophone or by using an inertia switch on the hammer placed as near as possible to the head of the hammer. This means that the source can be placed 5 m from the first geophone, thus giving an extra 5 m interval to help record the weathering velocity.

As an example of more sophisticated equipment, a Nimbus 12-trace summer can be used to add successive blows of the hammer to improve the signal-to-noise ratio. This equipment also includes the facility to keep certain traces unaltered after a number of blows while continuing to add to the other traces. Thus the inner traces would probably be frozen after three or four blows, while the outer traces might need many more according to the type of surface. In a recent survey it was found that chalk needed about 8 blows, clay about 15, sand 20, while alluvium and road embankments took about 50 to 60 blows to obtain reasonable records.
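The benefit of summing blows can be illustrated numerically. This is a generic vertical-stack sketch, not a model of the Nimbus unit; the wavelet and noise level are invented:

```python
# Illustration of why summing repeated hammer blows helps: the wavelet
# adds coherently while random noise grows only as sqrt(N), so N blows
# improve signal-to-noise by roughly sqrt(N).
import random

random.seed(1)
wavelet = [0.0, 0.5, 1.0, -2.0, 1.5, -0.5, 0.25, 0.0, -0.25, 0.0, 0.0, 0.0]

def record_one_blow(noise=2.0):
    """One hammer blow: the wavelet plus random noise at each sample."""
    return [w + random.gauss(0.0, noise) for w in wavelet]

def stack_blows(n):
    """Average n repeated blows sample by sample."""
    blows = [record_one_blow() for _ in range(n)]
    return [sum(samples) / n for samples in zip(*blows)]

def rms_misfit(trace):
    """RMS residual between a (stacked) trace and the clean wavelet."""
    return (sum((a - b) ** 2 for a, b in zip(trace, wavelet)) / len(wavelet)) ** 0.5

# 50 blows (the alluvium case above) leave far less residual noise
# than a single blow.
assert rms_misfit(stack_blows(50)) < rms_misfit(stack_blows(1))
```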

Even with the summing of records, it is essential to avoid heavy traffic noise or power lines and telephone cables. The spread should also be as straight and flat as possible.

With a cable spacing as shown in Fig. 2 it may be necessary to record with


50 or 100 m offset in order to obtain first breaks at true elevation velocity. Where there is deep weathering, and perhaps a layer of intermediate velocity, a longer cable with 24 traces and wider trace spacing is required. A typical example would be a cable with trace spacings in metres of 3, 3, 5, 5, 5, 10, 20, 30, 30, 30, 30, 30, 30, 30, 30, 30, 20, 10, 5, 5, 5, 3, 3.

The spacing is of course adjusted to suit the weathering depths in the area. An LVL for a high-resolution survey might have a spacing of ½, ½, 1, 2, 2, 3, 4, 4, 5, 6, 7, 7, 10, 10, 20 etc.

Wherever possible, reference should be made to geological maps to determine the type of near-surface material and where changes occur, in order to position the LVLs so that good control of weathering depths and velocities can be obtained.

The frequency of recording LVLs should be sufficient to give adequate control over changes in weathering depths and velocities. In practice, the frequency would probably be determined by the time available for such surveys.

3.2. The Picking and Computation of LVLs

The first breaks should be picked as consistently as possible. This is not always easy, as the character of the first breaks changes across the records and also with the geology of the surface material.

In most cases there will not be a simple change from a low weathering velocity to the elevation velocity. An intermediate layer is very often recorded, and on the short 12-trace LVLs the elevation velocity may not be recorded at all. In this case offset shots should be taken, as mentioned above, until the elevation velocity is obtained. This could be difficult with a weak energy source but an intercept time for this layer would give control on the depth of the intermediate layer. The value of the elevation velocity can be checked against that which is derived from plotting the first breaks on the production records.

The spreads should be recorded from both ends to allow for dipping refractors.

If the dip is small, the refractor velocity is approximately equal to the arithmetic mean of the velocities measured up- and down-dip.

Figure 3 shows the travel paths in relation to the velocities plotted from a refraction record.

For the case where there is only one weathered layer of depth do and velocity Vo, and V1 is the sub-weathering velocity,

d_0 = \frac{T_1 V_0 V_1}{2(V_1^2 - V_0^2)^{1/2}} + \frac{d_s}{2}


FIG. 3. Time-distance plot and travel paths from a refraction shot.

where T1 is the intercept time of the velocity V1 and ds is the depth of shot, which is of course set to zero for a surface source.
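As a sketch, the one-layer formula can be evaluated directly (the function name is illustrative, and units are seconds and metres):

```python
# Sketch of the one-layer weathering-depth computation:
#   d0 = T1*V0*V1 / (2*sqrt(V1**2 - V0**2)) + ds/2
# with T1 the intercept time of the V1 branch and ds the shot depth
# (zero for a surface source).
import math

def weathering_depth(t1, v0, v1, d_s=0.0):
    return t1 * v0 * v1 / (2.0 * math.sqrt(v1 ** 2 - v0 ** 2)) + d_s / 2.0
```

For example, a 20 ms intercept with V0 = 400 m/s and V1 = 2000 m/s gives roughly 4 m of weathering.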

For the case of two-layer weathering, where V1 is now either the velocity of another weathered layer or the intermediate velocity seen on the LVL plots, and the thickness is d1, then

d_1 = \frac{V_1}{2\left[1 - (V_1^2/V_2^2)\right]^{1/2}} \left[ T_2 - \frac{2 d_0}{V_0} \left(1 - \frac{V_0^2}{V_2^2}\right)^{1/2} \right]
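The two-layer case can be sketched the same way. The printed equation is poorly reproduced, so the expression below is the standard intercept-time form, a reconstruction rather than a transcription:

```python
# Two-layer weathering sketch (standard intercept-time form, assumed):
# strip the weathered layer's contribution from the intercept time T2
# of the V2 branch, then convert what remains to thickness at the
# intermediate velocity V1.
import math

def intermediate_thickness(t2, v0, v1, v2, d0):
    t_weathered = (2.0 * d0 / v0) * math.sqrt(1.0 - (v0 / v2) ** 2)
    return (t2 - t_weathered) * v1 / (2.0 * math.sqrt(1.0 - (v1 / v2) ** 2))
```

Forward-modelling an intercept time from known thicknesses and inverting it recovers d1, which is a useful self-check on the algebra.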

Figure 4 is an example of a plot obtained from LVL records where an intermediate velocity is recorded.

FIG. 4. Time-distance plot from an LVL survey.

3.3. Other Types of LVL

Vibroseis has been used as an energy source for LVLs with one stationary vibrator. The breaks are a little more difficult to pick than those from a hammer or dynamite source, particularly on near traces. Vibroseis can be useful for offset LVLs or where insufficient energy is obtained from other sources due to traffic noise or the geology of the area.

A technique used in Libya with weight dropping as a source has been described by J. F. Thompson.¹ In this method, fixed refraction geophones at each end of the spread recorded first arrivals from the drop points at 10 m intervals between them. Thus the same results were obtained as those obtained by using fixed shots and a spread of geophones.

4. UP-HOLE SURVEYS

The most direct method of determining weathering depth and velocity is by the use of special up-hole surveys, where a deep hole is drilled for that purpose. Where the rock formation drilled is hard, a cable with geophones at intervals along it can be lowered into the hole and a shot taken near the top to record times at a number of different levels. In softer formations the hole would be liable to collapse and the cable would be lost. In this case charges are detonated in the hole at different depths starting from the bottom, and the travel times are recorded using geophones at the top of the hole.


FIG. 5. Up-hole plot showing gradual increase in velocity. [Time-depth plot against the drill log: earthy clay, blue clay and limestone, with velocities of 440, 1400 and 2010 m/s.]

When the travel times are plotted and compared with the drill log for the hole, much useful information can be obtained, and without any calculations! Figures 5 and 6 show plots obtained from up-hole surveys and their accompanying drill logs. Figure 5 shows a gradual increase in velocity over the shallow section giving no clear depth of weathering, though this would probably be considered to be about 10 feet. Figure 6 shows a more conventional base of weathering and weathering velocity. Both plots show an intermediate velocity layer.

FIG. 6. Up-hole plot showing clear definition of base of weathering. [Time-depth plot against the drill log: sandy clay, blue clay, limestone and clay.]

The limitations of up-hole surveys are in the cost and the fact that they cannot give continuous weathering control. The frequency of the surveys will be limited by the cost of the surveys and the amount of time available for them, but they can give very useful control points where the intermediate statics are being determined from production data.


5. METHODS FOR PARTICULAR SURFACE CONDITIONS

In addition to the standard methods of static determination, methods have been developed for particular kinds of surface conditions in different parts of the world. Sand dunes are a good example of this. When stations have to be laid across sand dunes, the method usually used for corrections is to assume that the firm ground on either side can be interpolated underneath the dune, and the whole of the elevation difference between that level and the surface of the sand dune is corrected for at sand velocity. It can happen that the sand is consolidated at the base of the dune, and this is corrected for on a trial-and-error basis by assuming that the solid base of the dune has an increase in elevation towards the centre of the dune. The amount of this increase is determined by whatever gives the best results when the data across the dune are stacked.

A method developed for the weathering problems in Western Canada has been described by Gendzwill.² Here the problem is glacial drift, where the weathering velocity is variable but the sub-weathering uniform. First break times are used although there are no direct arrivals recorded through the weathered layer. This problem was also described earlier by Patterson.³

Another kind of weathering problem occurs when permafrost is present, as this has a higher velocity than the material underneath it. This means that waves entering the permafrost are refracted away from the vertical.⁴

6. STATIC CORRECTIONS FROM PRODUCTION RECORDS

6.1. Up-hole Method

This simple method is still used on dynamite surveys where a deep shot hole can give a valid up-hole time. The assumption is that the shots are below the base of the weathering. When shots are taken at every station, as is very common, this method affords good control of the weathering. The geophone correction from Fig. 7 is thus

(Eg - ds)/Ve + ts

where ds and ts are the shot depth and up-hole time of the shot taken at the geophone station and Eg is the elevation above datum. The shot correction at that same point is simply (Eg - ds)/Ve, where the elevation velocity Ve is measured from the first break plots.
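These two corrections can be sketched as follows (function names are illustrative; Eg, ds, ts and Ve are as defined above):

```python
# Sketch of the Section 6.1 bookkeeping. e_g is station elevation above
# datum (m), d_s and t_s the shot depth (m) and up-hole time (s) at
# that station, v_e the elevation velocity (m/s).

def geophone_correction(e_g, d_s, t_s, v_e):
    """(Eg - ds)/Ve + ts: elevation-velocity path from shot depth to
    datum, plus the measured up-hole time through the weathering."""
    return (e_g - d_s) / v_e + t_s

def shot_correction(e_g, d_s, v_e):
    """(Eg - ds)/Ve: the shot is already below the weathering, so no
    up-hole time is needed."""
    return (e_g - d_s) / v_e
```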


FIG. 7. Up-hole method from production records with shots taken at every station.

6.2. Plus-Minus Method

This method is adapted from Hagedoorn's refraction interpretation method,⁵ and assumes the shots to be immediately below the weathered layer.

The sum of the two times recorded at a geophone station from shots at A and B (Fig. 8) is given by

(X - x)/Ve + 2tw = Ta + Tb

where X is the distance between A and B, x is the geophone pattern length and tw is the time in the weathered layer.

A plot of Ta - Tb against distance from A gives a line of slope 2/Ve. This

FIG. 8. Plus-minus static method.


FIG. 9. First breaks of a crooked line Vibroseis record.

method has the advantage that weathering depths can be obtained at each station, and with multiple coverage a number of results for the same stations can be obtained. A disadvantage is that the weathering velocity has to be determined from the first break plots and might not therefore be very reliable, particularly as most spreads are offset from the shot by at least one or two stations.
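The plus and minus computations can be sketched as follows. Names are illustrative; the 'plus' time Ta + Tb yields the weathering time tw once the refractor travel time (X - x)/Ve is removed, and a fit of the 'minus' time Ta - Tb against distance has slope 2/Ve:

```python
# Sketch of the plus-minus bookkeeping (names illustrative).

def weathering_time(t_a, t_b, big_x, x, v_e):
    """tw from (X - x)/Ve + 2*tw = Ta + Tb (times in s, distances in m)."""
    return 0.5 * (t_a + t_b - (big_x - x) / v_e)

def elevation_velocity_from_minus(dists, minus_times):
    """Ve = 2/slope of the minus-time line (least-squares slope)."""
    n = float(len(dists))
    mx, my = sum(dists) / n, sum(minus_times) / n
    slope = (sum((d - mx) * (t - my) for d, t in zip(dists, minus_times))
             / sum((d - mx) ** 2 for d in dists))
    return 2.0 / slope
```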

6.3. First Break Plots

The above methods are only valid for dynamite data with reasonably deep shots. As mentioned previously, the use of surface sources with large source and geophone patterns has brought its own problems. The patterns are designed to attenuate energy travelling horizontally, i.e. ground roll, and as the first breaks refracted along the base of the weathering are also travelling horizontally these too are attenuated.

FIG. 10. First breaks of a high-resolution record.

Figure 9 shows first breaks from a Vibroseis source and crooked line recording. While a few of the first breaks could be used, the majority are not at all clear, and certainly do not lie on a straight line.

Some use can be made of this kind of first break however, as a check on


the elevation velocity derived from the LVL surveys which should accompany such recordings. First breaks can be plotted from records on straight parts of the line, and wherever possible the exact source-receiver distances could be obtained from the processing centre, as these distances have to be derived as one of the first steps in processing the data. This is not always possible, of course, because of the distance of the centre from the field recording, or because static corrections are often needed urgently and any such delay could not be tolerated.

Another development in recording methods has been the high-resolution recording used for the surveys for coal and minerals. At the other extreme from the wavy-line Vibroseis first breaks, these records, with a station interval of say 10 m, are more like LVL records and much useful information can be obtained from them. Figure 10 shows the kind of first breaks obtained from high-resolution shooting, and the time scale shows the accuracy with which they can be picked.

7. HIGH-RESOLUTION STATIC CORRECTIONS

In his paper 'Seismic profiling for coal on land',⁶ Anton Ziolkowski points out the importance of accurate static corrections in the National Coal Board surveys carried out to detect faulting in the coal measures and associated formations. The effect of static errors is to introduce apparent faulting, and these errors are magnified when the shooting geometry is scaled down for high-resolution work. As already shown, a static error at one station affects a number of CDPs, giving a smearing effect which makes it look more like a fault and less like a static jump, until inner traces or single-cover data are examined.

Another feature of high-resolution work is that with the high frequencies recorded, a static shift of say 2 ms, instead of being a small fraction of a wavelet, could be nearly a quarter of the length of the wavelet if 100 Hz is recorded. Such an error would lead to a serious distortion and much smearing of a reflection would take place. When larger static errors are present, of the order of more than half the length of the wavelet, these can cause events to align on the wrong cycle when stacked and cycle skipping can occur. Also errors of this order cannot so easily be rectified using automatic residual static programs, unless there are good lower-frequency events elsewhere on the section. Therefore in high-resolution recording there is an even greater need for accuracy in the initial static corrections.
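The proportions quoted above can be checked directly. This assumes the static shift is expressed as a time shift in milliseconds:

```python
# At 100 Hz the wavelet period is 10 ms, so a 2 ms static shift is a
# fifth of a cycle (nearly a quarter), and anything beyond half a cycle
# (5 ms here) risks alignment on the wrong cycle when stacking.
def shift_as_fraction_of_cycle(static_ms, freq_hz):
    period_ms = 1000.0 / freq_hz
    return static_ms / period_ms

print(shift_as_fraction_of_cycle(2.0, 100.0))  # 0.2
```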


8. AN AUTOMATED METHOD FOR DETERMINING RESIDUAL WEATHERING CORRECTIONS

With the need for very accurate static corrections for use in National Coal Board surveys, an automated method using first break plots was developed at the instigation of, and in conjunction with, the National Coal Board.

Firstly, the utmost care was taken to obtain field statics that were as accurate as possible. It was ensured that the shot was below the weathering. This was achieved by having an LVL survey ahead of the recording. To minimise any errors in elevation velocity, a datum as near as possible to the level of the shots was used. Corrections for the geophone stations were computed using up-hole times.

These statics (both shot and geophone) were then applied to the records which were plotted on a large scale as in Fig. 10. This record was recorded with a long offset, so that the first breaks were refracted arrivals. If the field corrections that were applied had been perfect, the first breaks would lie in a straight line, assuming the refractor to be level or to have a uniform dip. In fact it can be seen that these first breaks do not lie in a straight line and therefore variations in weathering occur across this record.

A composite first break plot is made from all the records along a line, using the following method. The playout of the records is placed on the pressure-sensitive pad on the digitiser as shown in Fig. 11. This is linked up to a Hewlett-Packard programmable desk-top calculator, and a printer.

The lowest and highest station numbers for the line are entered on the calculator, enabling axes to be set up on the printer.

The surface station number for the first trace on the first record is requested by the calculator. The record is then positioned by pricking out the top two corners of the record and also a point at a fixed time, e.g. 300 ms. This enables the calculator to set up coordinates for the record. The first break time for each trace is then pricked out, using a ball-point pen or similar pointer.

FIG. 11. High-resolution records ready for plotting on the pressure-sensitive pad of the digitiser.

FIG. 12. Composite first break plot.

After one record is completed, the next is positioned and so on. Provision is made for dead or distorted traces to be omitted. A plot as in Fig. 12 is thus obtained with very little of the tediousness involved in timing and plotting first breaks by hand. Consecutive records have been plotted using a solid line, broken line and dotted line for clarity. It can be clearly seen from this plot how anomalies align vertically over certain surface geophone stations.

The next stage is the editing of unreliable picks. The calculator lists the difference in first break times between each pair of geophone stations for all records using those stations, and gives the opportunity to edit out any unreliable values. For example, for all the records using a certain pair of geophone stations, the values in milliseconds might be 7, 6, 7, 8, 7, 6, 8, 8, 8, 7, 6, 8, 5, 7; in this case the 5 might be edited out. In general, however, little editing is required.
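A hedged sketch of the editing step: flag picks that sit well away from the median of the repeated values for a station pair. The tolerance is an assumption, not from the text:

```python
# Flag time-difference picks far from the median of the repeated
# values for a station pair (the 5 in the text's example).
import statistics

def edit_picks(values_ms, tolerance_ms=1.5):
    med = statistics.median(values_ms)
    return [v for v in values_ms if abs(v - med) <= tolerance_ms]

picks = [7, 6, 7, 8, 7, 6, 8, 8, 8, 7, 6, 8, 5, 7]
print(edit_picks(picks))  # the 5 from the text's example is edited out
```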


FIG. 13. Plot of averaged time differences with filter curve.

Next a continuous plot of the time differences along the refractor is produced. The average of the time differences for each pair of geophone stations is computed by the calculator. In order to reduce the slope of the line to around zero, a constant (the average of all the time differences) is subtracted from the time difference for each station. The values for each station are now plotted, starting at an arbitrary time value and plotting each value relative to the preceding one. These are plotted against geophone station numbers, as shown by the dotted line in Fig. 13.

A zero phase low-pass spatial filter is applied to the plot to obtain a well-fitting curve. (The sum of the filter coefficients must equal unity to ensure that no d.c. shift occurs.) This filtered curve is shown by the solid line in Fig. 13. A filter length of 21 stations was used in this case, which smooths the plot


well enough to retain only long-wavelength variations of the order of half the filter length or more. The variations of the time difference plot from the filtered curve give the short-period residual statics to be applied at each geophone station. These are listed by the calculator.
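The steps above can be sketched in code. A plain boxcar (moving-average) filter is assumed here; the text does not specify the filter shape:

```python
# Sketch of the residual-static computation: build the continuous
# refractor profile from demeaned station-to-station time differences,
# smooth it with a zero-phase moving average whose coefficients sum to
# unity, and take residuals = profile - smoothed curve.

def residual_statics(time_diffs_ms, filter_len=21):
    mean_diff = sum(time_diffs_ms) / len(time_diffs_ms)
    profile, running = [], 0.0
    for d in time_diffs_ms:          # each value relative to the last
        running += d - mean_diff
        profile.append(running)
    half = filter_len // 2
    residuals = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        window = profile[lo:hi]      # window shrinks at the line ends
        smoothed = sum(window) / len(window)   # coefficients sum to 1
        residuals.append(profile[i] - smoothed)
    return residuals
```

A uniform set of time differences gives a flat profile and therefore zero residual statics, which is a quick sanity check.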

If the refractor was flat or had a constant dip, and a constant velocity, the continuous refractor plot would approximate to a straight line. This is of course not usually the case, and the use of a filter to obtain the best fitting line gives an added flexibility to this method.

In theory, as deep a refractor as possible should be selected for making a continuous refractor plot, as the travel path through the weathering is more nearly vertical and nearer to the travel path of a reflection. In practice, however, the first breaks are usually used.

Care must be taken to use the same refractor if possible all along the line. If the refractor fades out, another may be used but a good overlap is needed to ensure continuity of the residual statics obtained.

In addition to changing from one refractor to another it is also possible to use records shot in the reverse direction where necessary, when these have been recorded to fill gaps in coverage. These must be treated as a separate plot and there should be enough overlap allowed to match the residual statics with the main plot.

9. SHOT POINT RESIDUAL STATICS

The method of determining the residual statics to be applied to the shot point corrections is less automated. A complete set of NMO corrected single-cover sections is produced, i.e. 12 sections for a 12-fold stack. These have the field statics applied to them, and also the residual geophone statics previously determined. This will help to check the residual geophone statics. A stacked section is now produced, using field statics, residual geophone statics and also automatic statics in order to obtain the best possible stack at this stage. The single-cover sections are then matched against the stack so that the time shift of each record compared with the stack can be measured, thus giving the residual shot statics. These are then applied to the data, which are restacked and processed through automatic residual statics.
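One way to automate the measurement of each record's time shift against the stack is cross-correlation. The text describes the matching itself without specifying a method, so this is an illustrative sketch, with the lag search range an assumption:

```python
# Take the residual shot static as the lag (in samples) at which a
# single-cover record best cross-correlates with the stacked section.

def residual_shot_static(trace, stack, max_lag=10):
    def corr(lag):
        return sum(trace[i] * stack[i + lag]
                   for i in range(len(trace))
                   if 0 <= i + lag < len(stack))
    return max(range(-max_lag, max_lag + 1), key=corr)
```

Applied to a record that is a delayed copy of the stack, the function returns minus the delay, i.e. the shift needed to line the record back up.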

This method cannot fully correct for long-period statics; some attempt can be made by using a long filter operator, but there is no filtering of the shot statics. Problems of long-period static variations would have to be handled using automatic static programs.


DETERMINATION OF STATIC CORRECTIONS 19

FIG. 14. Before and after application of residual weathering corrections.

Figure 14 shows part of a section before and after the application of the residual weathering corrections determined using this method. A good improvement has been obtained, although the section is still not perfect. There is a change of datum between the two sections, reducing the total static correction applied to the second section.

10. CHOICE OF DATUM

The choice of datum for a prospect area is important. Ideally it should be chosen so that the static corrections are as small as possible, thus minimising the errors. This is not always possible as sometimes a fixed level such as mean sea level has to be used for tying in data to previous work in an area. If large static corrections have to be applied, any errors in elevation velocity now become important, whereas previously only errors in weathering depths and velocities have been considered important. Distortion of a reflector can occur if there are incorrect variations in elevation velocity along the line, and also if there are large elevation changes along the line coupled with an incorrect elevation velocity. In these cases, the shape of a reflector could appear quite different from its real shape, and may for instance appear to follow the shape of the surface elevations. Another effect of large static corrections, if applied before normal move-out, is the distortion of near-surface velocities. Also if the processing system is such that events above zero time are lost, then shallow data can disappear altogether when a large negative static correction is applied. All these effects, however, can be mitigated with the careful use of static applications.

Let us take for example the case where the data have to be corrected eventually to a deep regional datum to tie in with previous work. If the shallow part of the section is not too vital, then all the data could be corrected to a datum level below the deepest shot depth on the survey, using the elevation velocities given. It is very probable that the elevation velocities determined near the surface by the usual methods do not refer to all the material between the temporary datum and the regional datum, and that there is a change of velocity with depth. A separate assessment can then be made of a suitable velocity to be used for correcting from the temporary to the regional datum. If the object is to tie in to previous work, probably the best method is to compare new and old lines at intersections and calculate a constant or smoothly varying correction velocity to use for this final datum shift, thus minimising distortion to the structures on the new data.

Where very shallow data and velocities are important, but a fairly large shift still needs to be applied, a different method can be used which makes use of an intermediate surface 'floating' datum. Firstly, all traces are corrected to the floating datum, which can be for example the mean elevation of the common depth point. This correction takes the weathering depth into account and replaces it with the same depth of material at elevation velocity, as in Fig. 1(B). At this stage the velocity analysis is carried out, velocities are determined with reference to this surface datum and are applied to the data. This avoids any distortion of the near-surface velocities which occurs if a large static correction is applied before NMO, and avoids distortion of the hyperbolic shape of the near-surface reflections prior to NMO.

The only disadvantage of running velocity analyses at this stage is that if there are rapid elevation changes, and any form of constant velocity stacking method is used, then the reflectors can appear discontinuous and it is less easy to pick accurate velocities.

After NMO and muting have been applied, the data can be shifted if required by a constant positive bulk static. This shift should be at least equal to the remaining static correction to be applied. This is done so as not to lose any shallow events which might otherwise be shifted above zero time and thus lost. The remaining static correction is then applied, to correct from the surface datum to the final datum using elevation velocity.

This method solves two problems: that of obtaining near-surface velocities that are not distorted by the application of statics; and the retention of shallow events which could be lost above zero time. The problem remains, however, that the elevation velocity must still be accurate and can cause distortion if in error.

In the interest of keeping the static corrections as small as possible to minimise errors, it is possible to use a sloping datum or contoured datum. This will minimise effects of lateral changes in elevation velocity. It will however make interpretation more difficult, and assumptions about velocities will still have to be made when the structures are finally converted from time to depth.

A contoured datum has to be carefully chosen; it must be smoothly sloping; and depth variations must be very small over a spread length.

11. AUTOMATIC RESIDUAL STATIC CORRECTIONS

As stated previously, automatic residual static programs are not a substitute for having accurately calculated static corrections.

For any two unstacked traces that go to make up a section, the static difference between them is made up of:

(1) residual source static;
(2) residual receiver static;
(3) residual NMO;
(4) structure.

Probably all automatic static methods are based on cross-correlation. These various sources of residual static can be eliminated or determined by the choice of data to cross-correlate. If traces within a CDP are cross-correlated, then the structure element can be ignored.

The residual NMO can be eliminated by cross-correlating common offset traces, i.e. traces from adjacent CDPs which have the same source-to-receiver distances.

Similarly, the residual source and receiver statics can be determined by cross-correlating pairs of traces which have adjacent source positions and common surface positions, or adjacent receiver stations and common source positions. With the high folds of stack currently in use, many such cross-correlations can be obtained for the determination of surface consistent residual statics.

Thus residual static programs can use cross-correlation of common CDP traces, common offset, common surface positions or any combination of these, to solve equations for the unknown residual statics.

12. RESIDUAL NMO

The NMO should of course be as accurate as possible before the application of automatic statics. However, if the static problem is sufficiently bad to affect the picking of velocities, then it is useful to determine residual statics using the first attempts at picking velocities, apply these residuals to the input data and then run more velocity analyses. This should enable more accurate velocities to be obtained for input to the final automatic static programs.

13. STRUCTURE

The structure element can be removed from surface consistent residual statics by the use of a filter which is similar to that used in the residual weathering correction method described earlier (see Fig. 13). If the filter is too short, it will follow the scatter of residual statics too closely and any long-period static variations, i.e. those lasting over a number of stations, will remain on the section. If the filter is too long, however, the filter operator will depart too much from the scatter of points and genuine structure can be removed. A certain amount of interpretation is thus needed as to what is genuine structure and what is a long-period static. However, if both the continuity of the section and the smoothness improve when a longer operator is used, then long-period statics are probably present.

Structures such as faults can be smeared on a section if use is made of a 'model' trace that consists of a number of stacked traces. Such a trace (usually a weighted combination of stacked traces) is sometimes used for cross-correlating with individual traces in a CDP. This method can be very useful where the signal-to-noise ratio is poor, but should be used with care.

14. SURFACE CONSISTENCY

It is generally acknowledged that a good automatic residual static method should be surface-consistent. Thus the residual static for a trace is made up of a residual source static and a residual receiver static, apart from residual NMO and structure. The residual source static is the same for all traces in one particular record and the residual receiver static is the same for all traces recorded at that receiver, regardless of source position.

This is a good approximation to the truth. In fact, the angle of the travel path through the near surface to a receiver will vary, depending on the distance and direction of the source and also on the depth of the event being recorded. If there are rapid variations in weathering depth or velocity, there could be a noticeable difference in travel path to a receiver from source positions on either side of that receiver. In practice, however, surface consistent residual static programs give good results. With the high folds of stack that are used many cross-correlations can be obtained, giving many equations that can be solved for the residual statics using iterative procedures.
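The iterative solution of the surface-consistent equations can be illustrated with a toy decomposition. This is a sketch, not the production algorithm: it assumes each measured shift is simply the sum of a source term and a receiver term, and alternates mean updates in a Gauss-Seidel style. Note that the decomposition is only determined up to a constant that can be traded between the source and receiver terms.

```python
import numpy as np

def decompose(picks, n_src, n_rec, n_iter=20):
    """Estimate surface-consistent source statics s and receiver statics r
    from measured shifts, assuming picks[(i, j)] ~ s[i] + r[j]."""
    s = np.zeros(n_src)
    r = np.zeros(n_rec)
    for _ in range(n_iter):        # alternate mean updates until stable
        for i in range(n_src):
            obs = [t - r[j] for (si, j), t in picks.items() if si == i]
            if obs:
                s[i] = np.mean(obs)
        for j in range(n_rec):
            obs = [t - s[i] for (i, rj), t in picks.items() if rj == j]
            if obs:
                r[j] = np.mean(obs)
    return s, r

# Synthetic shifts built from known statics; the solver recovers them
# up to a constant shared between the source and receiver terms.
true_s, true_r = [4.0, -2.0], [1.0, 0.0, -3.0]
picks = {(i, j): true_s[i] + true_r[j] for i in range(2) for j in range(3)}
s, r = decompose(picks, 2, 3)
```

The sums s[i] + r[j] reproduce the observed shifts even though the individual terms carry an arbitrary constant.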

15. SIGNAL-TO-NOISE RATIO

In common with many programs, automatic static programs work best on data with a good signal-to-noise ratio.

Poor signal-to-noise gives rise to poor cross-correlations. Many programs have automatic editing of cross-correlations on the basis of the shape of the cross-correlation or the distance of the peak from the zero position; and a maximum value for the static can usually be set into a program by the user. This value is based on the frequency of the data as well as the quality. The cross-correlations that contribute to a summed cross-correlation for a pair of surface stations can be automatically edited, but it is useful for the summed cross-correlations to be printed out for further manual editing by the user.
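A toy version of such automatic editing might look like the following. The exact criteria vary between programs; the `min_ratio` peak-prominence test here is a crude stand-in for the more sophisticated shape tests described above, and the numbers are invented.

```python
import numpy as np

def accept_correlation(corr, max_shift, min_ratio=1.5):
    """Crude automatic edit of one cross-correlation: reject it if the
    peak lies further from zero lag than the user's maximum static, or
    if the peak does not stand out from the rest of the correlation.
    corr holds values at lags -L..+L (odd length)."""
    centre = len(corr) // 2
    peak = int(np.argmax(np.abs(corr)))
    if abs(peak - centre) > max_shift:
        return False               # implied static exceeds the maximum
    rest = np.delete(np.abs(corr), peak)
    return bool(np.abs(corr[peak]) >= min_ratio * np.max(rest))

clean = accept_correlation(np.array([0.1, 0.2, 1.0, 0.2, 0.1]), max_shift=1)
shifted_too_far = accept_correlation(np.array([1.0, 0.1, 0.1, 0.1, 0.1]),
                                     max_shift=1)
```

Correlations that fail such tests would simply be left out of the summed cross-correlation for that station pair.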

16. CROOKED LINE PROCESSING

With the advent of crooked line recording, more problems are posed for the data processor as well as in the determination of field statics. Crooked line recording affects the production of field static corrections because the first breaks on a record are no longer in a straight line for use in the determination of the elevation velocity. However, this can be remedied by obtaining the true source-receiver distances from the data processing centre.


[FIG. 15: plan of the subsurface scatter of points, with surface stations marked; graphic not reproduced.]


[FIG. 16: allocation of subsurface points to CDP positions along the processing line; graphic not reproduced.]


FIG. 17. Statics computed using two-layer weathering and variable elevation velocity.


28 ADRIENNE W. ROGERS

In the processing of these data, the first stage is to obtain a plan of the true subsurface scatter of points (Fig. 15) by using all the grid coordinates of the source and receiver positions. Also marked on this plot are the surface stations. The next stage is the selection of a processing line through the scatter of subsurface points. This can be done either by the computer program or manually.

Taking CDP positions along this line at the correct CDP interval, the individual subsurface points are allocated to the nearest CDP position using certain criteria. For example, in Fig. 15 the surface station interval is 50 m; thus the CDP interval is 25 m. The subsurface points are allocated as indicated in Fig. 16; the radius of the arcs of the circles forming the outer limit of the catchment area for each CDP is known as the 'half bin width'. For a surface station interval of 50 m, a half bin width of 200 m is commonly used.
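The allocation just described can be sketched as a nearest-CDP search with a distance cut-off. The midpoint coordinates below are invented; only the 25 m CDP interval and the 200 m half bin width come from the text.

```python
import math

def assign_cdp(midpoints, cdp_line, half_bin_width):
    """Allocate each subsurface midpoint (x, y) to the nearest CDP position
    on the processing line, discarding any midpoint that falls outside the
    half bin width. Returns {CDP index: [midpoint indices]}."""
    bins = {}
    for k, (mx, my) in enumerate(midpoints):
        dists = [math.hypot(mx - cx, my - cy) for cx, cy in cdp_line]
        best = min(range(len(cdp_line)), key=lambda i: dists[i])
        if dists[best] <= half_bin_width:
            bins.setdefault(best, []).append(k)
    return bins

# CDPs every 25 m along a straight line (50 m surface station interval),
# with the commonly used 200 m half bin width; midpoints are invented.
cdps = [(25.0 * i, 0.0) for i in range(10)]
mids = [(60.0, 30.0), (130.0, 190.0), (130.0, 260.0)]
bins = assign_cdp(mids, cdps, 200.0)
```

In this example the third midpoint lies just outside the 200 m catchment arc and is left out of the stack, exactly the behaviour the half bin width is meant to control.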

It can be seen, then, that in one CDP, by using a 200-m half bin width, the subsurface positions can be up to 400 m apart, or equal to 16 CDP intervals. Thus if cross-dip is present there could be a considerable difference in the time of an event on different traces in a CDP due to cross-dip alone. This would appear as a residual static. These 'statics' would not be constant with depth unless the cross-dip was constant with depth, nor would they be surface consistent.

If the dip is not too severe or variable, most automatic static programs should be able to handle this problem. With a large lateral scatter and cross-dip, the question of three-dimensional processing arises. In this case a number of parallel CDP lines can be chosen, using a narrow half bin width to avoid stacking together data with cross-dip. A number of cross-dip lines can also be selected. In deciding upon a half bin width, a balance has to be made between a sufficiently small width to exclude too much cross-dip and a sufficiently large width so as not to reduce the fold of stack too much and so degrade the section.

17. EXAMPLES OF AUTOMATIC STATIC PROBLEMS

The section shown in Fig. 17 shows many of the problems that can occur in automatic residual static determinations. The recording is crooked line, Vibroseis, with a 50-m surface station interval. The surface and subsurface plots of part of this line are shown in Figs. 15 and 16. The field statics were determined from a shallow refraction LVL survey, taken at 1-km intervals. The weathering depths were computed using the intercept time formula.


FIG. 18. Statics computed using constant elevation velocity and no weathering.


FIG. 19. (a) Cross-dip line at CDP 222. (b) Cross-dip line at CDP 249.

As can be seen from the elevation profile on Fig. 17, there was a weathered layer, and an intermediate layer which was not always present. The elevation velocity varied from 1950 to 2500 m s⁻¹, the weathering velocity from 300 to 700 m s⁻¹ and the intermediate velocity from 900 to 1630 m s⁻¹. During the application of the statics by the computer, the lateral velocity changes were interpolated to avoid sudden changes in static corrections. This can be seen between CDPs 150 and 175, where there is an elevation velocity change from 2500 to 1950 m s⁻¹.

The first test made was to see whether or not static problems were being introduced by the variable weathering depths and velocities. Accordingly, the section was stacked using a constant elevation velocity and no weathering corrections. This is shown in Fig. 18, and the section still has static problems.


FIG. 20. Crooked line half bin width reduced to 100 m to reduce cross-dip.


FIG. 21. Surface consistent automatic statics, short operator.


FIG. 22. Surface consistent automatic statics, medium operator.


FIG. 23. Surface consistent automatic statics, long operator.


Although this line does not have a great lateral scatter of subsurface points, the question of cross-dip was investigated. Two cross-dip lines were processed at CDPs where the scatter was greatest (Figs. 16 and 19). Some cross-dip can be seen on the cross-line at CDP 222, and it varies with time.

The section was also restacked, reducing the half bin width from 200 m to 100 m to lessen the effect of cross-dip (Fig. 20). This showed little difference from Fig. 17, as there was little cross-dip, and few subsurface points at a distance greater than 100 m from the CDP line. A section with severe cross-dip could however be improved in this way.

A surface-consistent automatic static program was applied to the line. Figure 21 shows the effect using a short filter operator of only eight stations. The continuity of the line is somewhat improved, but the events still have the low-frequency ripple on them. Next, a medium length operator of 16 stations was used (Fig. 22). There is further improvement in continuity but there are still some ripples on the data, particularly at CDPs 200-250. Finally a long operator of 32 stations was used (Fig. 23). The continuity is still further improved and the ripple effect has gone, while leaving the structure, which in fact can be seen more clearly than in Figs. 17 or 21. A further lengthening of the operator might remove this structure.

It might be argued that perhaps there should be ripples on these events, and that we are removing genuine small structures; but the fact that the continuity and general quality of the data were still improving with the use of a 32-station operator would suggest that the ripples are just a static effect caused by the very variable near-surface conditions.

18. ACKNOWLEDGEMENTS

The author wishes to thank the Directors of Seismograph Service (England) Limited for permission to publish this contribution; her colleagues there who have assisted by supplying examples, reports, etc.; the National Coal Board for permission to describe the residual weathering correction method and to publish Figs. 10-14; and finally Shell, BP and Todd for permission to publish Figs. 9 and 15-23. The opinions and conclusions reached are the author's own.

REFERENCES

1. THOMPSON, J. F., A technique for solving the low-velocity layer problem, Geophysics, 28, pp. 869-76, 1963.


2. GENDZWILL, D. J., A method of weathering corrections, Geophys. Prospecting, 26, pp. 525-37, 1978.

3. PATTERSON, A. R., Datum corrections in glacial drift, Geophysics, 29, pp. 957-67, 1964.

4. TURHAN TANER, M., KOEHLER, F. and ALHILALI, K. A., Estimation and correction of near-surface time anomalies, Geophysics, 39, pp. 441-63, 1974.

5. HAGEDOORN, J. G., The plus-minus method of interpreting seismic refraction sections, Geophys. Prospecting, 7, pp. 158-82, 1959.

6. ZIOLKOWSKI, A., Seismic profiling for coal on land, Developments in geophysical exploration methods: I, p. 271, Applied Science Publishers, London, 1979.

BIBLIOGRAPHY

BALACHANDRAN, K., Determination of weathering thickness by a seismic P-S delay technique, Geophysics, 40, pp. 1073-5, 1975.

BOOKER, A. H., LINVILLE, A. F. and WASON, C. B., Long wavelength static estimation, Geophysics, 41, pp. 939-59, 1976.

DISHER, D. A. and NAQUIN, P. J., Statistical automatic statics analysis, Geophysics, 35, pp. 574-85, 1970.

GEYER, R. L., Vibroseis refraction weathering techniques, Geophysics, 38, pp. 285-93, 1973.

HILEMAN, J. A., EMBREE, P. and PFLUEGEN, J. C., Automated static corrections, Geophys. Prospecting, 16, pp. 326-58, 1968.

KIRKHAM, D. J. and POGGIAGLIOLMI, E., Long period static determination by inverse filtering, Geophys. Prospecting, 24, pp. 737-55, 1976.

LARNER, K. L., GIBSON, B., CHAMBERS, R. and WIGGINS, R. A., Simultaneous estimation of residual statics and cross-dip time corrections, Geophysics, 44, pp. 1175-92, 1979.

SAGHY, G. and ZELEI, A., Advanced method for self-adaptive estimation of residual static corrections, Geophys. Prospecting, 23, pp. 259-74, 1975.

WIGGINS, R. A., LARNER, K. L. and WISECUP, R. D., Residual static analysis as a general linear inverse problem, Geophysics, 41, pp. 922-38, 1976.

ZIOLKOWSKI, A. and LERWILL, W. E., A simple approach to high resolution seismic profiling for coal, Geophys. Prospecting, 27, pp. 360-93, 1979.


Chapter 2

VIBROSEIS† PROCESSING

P. KIRK

Seismograph Service (England) Ltd, Kent, UK

SUMMARY

The Vibroseis system of recording seismic data employs a fundamentally different approach to methods using an impulse source such as dynamite. Instead of attempting to input all the energy in one short instant in time, each frequency is vibrated independently with the frequency of vibration being gradually changed until the entire frequency range that it is desired to input has been vibrated. Because of this difference in recording technique, Vibroseis data needs to be processed in a different manner compared with other types of seismic data, in order that it may be presented in an interpretable form. The purpose of this chapter is to outline aspects of data processing which are peculiar to Vibroseis data with particular emphasis on recent developments in this area.

1. INTRODUCTION

The Vibroseis system of seismic exploration¹ has been widely used in many areas of the world for the past two decades. As the Vibroseis source, unlike dynamite, is non-destructive due to its limited peak force, it has been widely used in densely populated areas by working along public highways. In the past this has meant restricting the hours of recording and also recording straight sections of seismic profile with 'fish-tail' intersections at each bend in the road. Overcoming these restrictions has led to developments in data

† The term Vibroseis, which is widely used throughout this chapter, is a registered trademark of Continental Oil Company.

37


processing methods in order to combat the high levels of traffic noise which are encountered and also to process seismic profiles which were not recorded along straight lines.

The first section of this chapter will deal with aspects of processing which are common to all lines recorded using the Vibroseis method, whilst subsequent sections will deal with the problem of traffic noise and the processing of crooked lines.

For economic reasons, a great deal of Vibroseis data processing is accomplished by field recording instruments. The processes performed in this way include noise suppression or rejection techniques, summation (or vertical stacking) and cross-correlation with the pilot signal (sweep). It is a tribute to the designers of these instruments that such sophisticated data processing techniques can be incorporated into real-time systems.

1.1. Conventional Vibroseis Processing

The standard Vibroseis technique as used in desert areas consists of vibrating a swept-frequency sinusoid lasting typically 14-28 s. A number of such sweeps are recorded at each vibrator point, typically 8 or 16. The sweep itself will be a constant-amplitude sinusoid that changes frequency linearly with time from a low frequency to a high frequency (up-sweep) or from a high frequency to a low frequency (down-sweep).

The sweep signal is usually simultaneously generated by several (two to four) truck-mounted vibrator units, which are controlled by radio signals transmitted from the recording truck. This enables more energy to be put into the ground during the recording time available. The spacing of the vibrator units and their move up between sweeps determines a source pattern which can be designed to attenuate the surface waves (or ground roll), which originate from the vibrators. The wavelengths of these surface waves are determined from a noise record obtained by recording the response of a single vibrator into a spread of single detectors. Having done this, the length and spacing of the source and detector arrays are chosen to give optimum attenuation of the surface waves. The number of elements in the source array is given by the number of vibrators used multiplied by the number of sweeps recorded at each vibrator point, and the spacing is determined by the distance between the vibrators and their move up between each sweep. Thus, by using a surface source such as Vibroseis, we can easily employ large source arrays, which would be too costly to drill if we were using dynamite.
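The attenuation delivered by such a uniform source array can be estimated from the classic line-array response. This is a sketch under assumed numbers: the 5 m element spacing and 50 m ground-roll wavelength below are hypothetical illustrations, not values from the text.

```python
import math

def array_response(n_elem, spacing, wavelength):
    """Normalised response of a uniform line array of n_elem sources,
    spaced `spacing` apart, to a horizontally travelling wave of the
    given wavelength: the classic |sin(Nx) / (N sin x)| pattern."""
    x = math.pi * spacing / wavelength
    if abs(math.sin(x)) < 1e-12:   # wave much longer than the spacing
        return 1.0
    return abs(math.sin(n_elem * x) / (n_elem * math.sin(x)))

# e.g. 4 vibrators x 8 sweeps = 32 source elements at 5 m spacing,
# against 50 m ground roll (hypothetical numbers):
resp = array_response(32, 5.0, 50.0)
```

For these numbers the 50 m surface wave is attenuated to a few per cent of its single-source amplitude, while very long wavelengths (and vertically travelling reflections) pass essentially unattenuated.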

The first stage in processing Vibroseis data is simply to sum all the individual records which were recorded from the same vibrator point. In addition to attenuating surface waves, this process will also increase the signal level with respect to random noise.

FIG. 1. Typical sweep autocorrelogram (Klauder wavelet).

Since the usual Vibroseis signal is many times longer than the interval between reflections, it is not possible to distinguish individual reflections on the record and another process is required to compress the signal to a relatively narrow wavelet or pulse. This is achieved by cross-correlating the geophone output with the input signal (sweep). The process of cross-correlation involves cross-multiplying all elements of the two arrays and summing all the products to give the first sample of the output correlogram. The sweep array is then shifted down one sample and the process repeated to give the next sample and so on. Since the geophone output can be represented as the earth response convolved with the input sweep, the output correlogram can be represented as the earth response convolved with the input sweep cross-correlated with itself (an autocorrelation). Thus, for each reflection coefficient in the earth response we can substitute a wavelet which is the autocorrelogram of the sweep modified in phase and amplitude by the transmission characteristics of the two-way path to the reflector. The autocorrelogram of the sweep is often referred to as the Klauder wavelet and its characteristics are determined by the basic specifications of the sweep itself. An autocorrelogram of a typical sweep of frequencies 13-75 Hz is illustrated in Fig. 1.
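The compression of the long sweep into a Klauder wavelet can be demonstrated numerically. A minimal sketch: a linear 13-75 Hz sweep (the band used in Fig. 1) is delayed to simulate a single reflector at 1 s, and cross-correlation with the pilot sweep collapses it to a wavelet whose peak sits at the reflection time. The sweep length and sample interval are arbitrary choices.

```python
import numpy as np

dt = 0.002                       # 2 ms sample interval
T = 6.0                          # 6 s sweep (arbitrary length)
t = np.arange(0.0, T, dt)
f0, f1 = 13.0, 75.0              # linear up-sweep, 13-75 Hz
sweep = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

# Geophone output for a single reflector at 1.0 s: the sweep, delayed.
delay = int(round(1.0 / dt))
trace = np.zeros(len(t) + delay)
trace[delay:delay + len(sweep)] += sweep

# Cross-correlating with the pilot sweep collapses the long signal to a
# compact (Klauder) wavelet centred at the reflection time.
corr = np.correlate(trace, sweep, mode="valid")
peak = int(np.argmax(corr))      # sample index of the reflection
```

The correlation peak lands at sample 500, i.e. exactly the 1.0 s reflection time, even though the raw trace shows nothing but a 6 s oscillation.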

Since an autocorrelogram is always zero phase, the wavelet is symmetrical about the central peak. The half-width a of the central peak is equal to one-half the period of the middle frequency (44 Hz); thus a sweep with a higher mid-frequency will give a narrower Klauder wavelet and finer resolution on the seismic section. The larger the bandwidth of the sweep, the closer the wavelet will approximate to a spike. A good measure of the 'spikiness' of the wavelet is provided by the ratio of the amplitudes of the central peak and the first trough. This ratio is found to be equal to the square root of the ratio of the terminal frequencies in the sweep; thus for a sweep with a bandwidth of two octaves the ratio would be 2:1. The sidelobes of the wavelet show a high-frequency ripple superimposed on a low-frequency wave and the periods of these two waves will be found to be equal to the periods of the terminal frequencies in the sweep. These sidelobes are in fact caused by the rectangular or 'box car' envelope of the sweep and can be considerably reduced by applying a cosine taper to each end of the sweep envelope. Such a taper should not be too long since this would result in a broadening of the wavelet and a reduction in the signal-to-noise ratio improvement obtained in the cross-correlation process.
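The effect of the cosine taper on the correlation sidelobes can be checked numerically. A sketch assuming a 4 s linear 13-75 Hz sweep and a 0.25 s taper (both arbitrary choices): the far sidelobes of the tapered sweep's autocorrelogram should come out well below those of the untapered ('box car') sweep.

```python
import numpy as np

dt, T = 0.001, 4.0
t = np.arange(0.0, T, dt)
f0, f1 = 13.0, 75.0
sweep = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

def cosine_taper(n_samp, length):
    """Envelope with a cosine ramp of `length` samples at each end."""
    env = np.ones(n_samp)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(length) / length))
    env[:length] = ramp
    env[-length:] = ramp[::-1]
    return env

def far_sidelobe(sig, min_lag):
    """Largest autocorrelogram sidelobe beyond min_lag samples,
    relative to the zero-lag value."""
    ac = np.correlate(sig, sig, mode="full")
    centre = len(sig) - 1
    ac = ac / ac[centre]
    return float(np.max(np.abs(ac[centre + min_lag:])))

plain = far_sidelobe(sweep, 500)                                # lags > 0.5 s
tapered = far_sidelobe(sweep * cosine_taper(len(t), 250), 500)  # 0.25 s taper
```

Lengthening the taper suppresses the sidelobes further, at the cost of the main-lobe broadening and signal-to-noise loss mentioned above.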

The recorded geophone output from any source is normally subject to phase changes caused by the recording instrument filters. If these phase changes are not removed they will distort the shape of the wavelet and give rise to timing errors in the data. With Vibroseis data it is a simple matter to avoid this distortion; the pilot sweep is passed through the same filters as the recorded data. Since the cross-correlation process measures the difference in phase between the sweep and the data, no phase distortion occurs. The geophones themselves also introduce frequency-dependent phase shifts into the data. This distortion can be removed in a similar way by passing the sweep through a 'black box' which duplicates the phase response of the type of geophone used.

Once they have been cross-correlated, Vibroseis data can be processed in much the same manner as data recorded using impulse sources. However, some care is needed when determining the parameters used in deconvolution. The purpose of deconvolution is to compensate for the changes in wavelet shape due to absorption and filtering effects caused by reverberation between closely spaced beds. The earth acts as a minimum phase filter, both for direct transmission losses and for multiple reflections. Deconvolution attempts to restore the delayed energy to the original input wavelet, which is assumed to be a single sample spike. This is a good assumption for impulse sources such as dynamite, where the input wavelet can be regarded as having minimum phase characteristics. However, as we have seen, the equivalent input wavelet for a Vibroseis source is the Klauder wavelet, which has zero phase characteristics (i.e. it has sidelobes symmetrical about zero time). If we could transmit a sweep with a bandwidth wider than that of the earth, the resultant cross-correlated Vibroseis data would be minimum phase. In practice this cannot be achieved, particularly at the lower end of the spectrum, due to mechanical limitations of the vibrators. Thus cross-correlated Vibroseis data are not minimum phase, and as a result deconvolution using a minimum phase spiking operator will not work effectively (in fact it will tend to accentuate the sidelobes on the left side of the wavelet). There are two methods of overcoming this difficulty. The first is to use predictive deconvolution with a gapped operator. With this approach, reverberations with a period of less than a specified predictive time are not deconvolved. If this predictive time is specified so as to include the most significant part of the wavelet (usually up to the second zero crossing), then the wavelet will be untouched by the deconvolution process and reverberations with a period longer than the wavelet will be attenuated. The second approach is to convert the data to minimum phase before deconvolution by convolving the data with an operator which is the minimum phase equivalent of the Klauder wavelet (sweep autocorrelogram). The minimum phase equivalent of the Klauder wavelet is a wavelet which has an identical amplitude spectrum to the Klauder wavelet but which has a phase spectrum which is the Hilbert transform of the logarithm of its amplitude spectrum. A 'spiking' deconvolution can now be applied to the data but it will be necessary to filter the data afterwards to remove noise with frequencies outside the sweep bandwidth, which will be introduced.
The advantage of this method is that complete spectrum whitening can be attempted if it is thought desirable, which it usually is, since the data have lost some high-frequency content, resulting in broadening and distortion of the pulse shape compared with the ideal Klauder wavelet. A disadvantage is that the data will be minimum phase, and the minimum phase wavelet has a longer tail than its equivalent zero phase wavelet, which might obscure later reflections on the seismic section. If the bandwidth of the sweep is broad (say three or more octaves), the Klauder wavelet will be so similar to a spike that the difference between the two approaches will be minimal.

2. NOISE REDUCTION TECHNIQUES

Unwanted signals which can produce damaging effects upon the desired seismic reflection signals can be split into two categories: source-generated noise and non-source-generated noise.


42 P. KIRK

Source-generated noise includes surface waves (ground roll) and air waves. Surface waves are generally attenuated by the use of an appropriate source pattern, as previously mentioned, although they can also be attenuated by the use of filters, both frequency and spatial (2D filtering). Air waves are not normally a problem on Vibroseis records, since the vibrators are well coupled to the ground and practically all the energy is transmitted downwards or along the surface.

Non-source-generated noise includes power transmission line pick-up, ambient noise of a random nature (wind and rain noise) and sporadic noise, which is mainly caused by vehicular traffic. Power transmission line pick-up is normally attenuated by means of a frequency notch filter (50 Hz in the UK), either in the recording instruments or in the data processing sequence. This is not entirely satisfactory, as the notch filter introduces sidelobes which distort the pulse shape. Other ambient noise is relatively low-level and random, and is heavily discriminated against in the cross-correlation process. This leaves high-amplitude sporadic noise caused by traffic, which is by far the most damaging since it is typically 30–60 dB above the reflected signal levels. Further discussion will therefore concentrate on methods of alleviating the effects of traffic noise. These fall into two categories: noise suppression, whereby noise is reduced in amplitude; and noise rejection, whereby portions of traces containing high-amplitude noise are totally eliminated from the summation process.

2.1. Noise Rejection

This method can be applied in hardware summing systems attached to the recording instruments, or in computer processing of unsummed raw data which have been recorded with instantaneous floating-point gain. With such data, a gain sample accompanies each data sample, and the gain is adjusted to give maximum significance to the data. The gain is the amount, in 6 dB steps, by which the data have to be boosted. Thus high-amplitude signals have a low amplifier gain associated with them. The noise rejection process detects the low gains associated with high-amplitude noise, and the noise-contaminated data are rejected, or zeroed out, from the summation.

The first systems to employ this method used gain references for each data channel which were manually set by the operator; incoming data were compared with the references and rejected if necessary. However, this system relied upon the operator to make optimum settings for changing field conditions, and did not accommodate the time-varying nature of the seismic signals. As a result of these shortcomings, automatic noise rejection systems were developed.


VIBROSEIS PROCESSING 43

Automatic systems operate as follows. Firstly, the reference gains for each data channel are established, either by a special initialisation record or by the first sweep of the day. During this record, the minimum input gains (associated with the highest-value samples) during successive time windows are detected and stored. The window length is an optional parameter, but 256 ms is a typical value. During successive sweeps the reference gains are

[Figure: flow chart and example plot of the automatic rejection logic. Stored reference gain values (dB) are held for successive 256 ms windows of record time; each incoming gain minimum either updates the stored reference by ±6 dB for the next record or, if too low, causes the data to be rejected from the sum.]

FIG. 2. Automatic noise rejection system operation.

updated by ± 6 dB if the incoming gain minimum is not equal to the stored value. A dead channel detector must also operate to detect the continuous high gains associated with a dead channel. If a channel is found to be dead, the reference gains associated with it are not updated for that sweep.

Incoming data are analysed over successive time windows and compared with the reference gains for those windows. If an incoming gain is less than the reference gain minus a delta value (typically 12 dB) then that data sample and all successive samples for a specified period (say 120 ms) are rejected. The reject time is started again if a later signal has a gain which is too low during the rejection interval. An illustrative example of the process appears in Fig. 2.
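The rejection logic just described can be sketched as follows (my own minimal Python, with illustrative parameter values rather than any particular recording system's implementation); a gain more than `delta` dB below the stored reference starts a fixed-length reject interval, which is restarted if another low gain arrives before it expires:

```python
import numpy as np

def reject_noise(data, gains, ref_gains, win_len, delta=12, reject_len=60):
    """Zero out samples whose gain word falls more than `delta` dB below
    the stored reference gain for the corresponding time window."""
    out = np.array(data, dtype=float)
    reject_until = 0
    for i in range(len(out)):
        w = min(i // win_len, len(ref_gains) - 1)
        if gains[i] < ref_gains[w] - delta:
            # (re)start the reject interval, e.g. 120 ms = 60 samples at 2 ms
            reject_until = i + reject_len
        if i < reject_until:
            out[i] = 0.0
    return out

data = np.ones(256)
gains = np.full(256, 48)
gains[100] = 24                      # a burst of traffic noise (low gain)
clean = reject_noise(data, gains, ref_gains=[48, 48], win_len=128)
assert clean[100] == 0.0 and clean[159] == 0.0 and clean[160] == 1.0
```

A taper at each end of the reject interval, as discussed below, would replace the hard zeroing used here.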

Noise rejection techniques as described above have proved very effective, but they are not without shortcomings. The main drawback is that the data must be judged as either noisy or quiet, and accordingly rejected or accepted. There is no facility for suppressing slightly noisy data less severely than very noisy data. This is important when operating in areas with continuous heavy traffic noise, since in these conditions the amplitude of the noise varies enormously depending upon the types of vehicles and their speeds. Often in these circumstances there will be a stationary line of vehicles (with engines running) on one side of the road and a moving line of mixed traffic on the other. A test used to demonstrate the efficiency of this technique was to drive two vehicles along the spread from opposite directions while recording 16 sweeps. It is not difficult to visualise a noise rejection process working well in such a test, since when a vehicle is not passing a recording station conditions will be quiet and, with a spread of nearly 5 km, quiet conditions will prevail most of the time. Thus, after the noisy parts of the records are eliminated, there will be plenty of relatively noise-free data going to make up the final summed record. This will not be the case where very noisy conditions such as those mentioned previously exist, since either too much data will be rejected or the data which are accepted will still be contaminated with varying degrees of noise.

A second shortcoming is that amplitude fluctuations will be introduced down the length of the summed record by the rejection process. These fluctuations could affect the reflection pulse shapes, since time is related to frequency on an uncorrelated Vibroseis record. There are, however, methods of overcoming this. Firstly, a taper-in and a taper-out can be added to the rejection interval to prevent abrupt discontinuities, and secondly the final summed traces can be compensated in amplitude according to the number of live traces which went to make up the sum at any particular time down the record. Naturally this shortcoming becomes less serious as the number of records per sum is increased.

2.2. Noise Suppression

The earliest form of noise suppression was incorporated in instruments with fixed-gain amplifiers and fixed-point summing. In these systems, high-amplitude noise was clipped by the saturation of the analogue-to-digital converter, and an improvement in the signal-to-noise ratio was thus obtained. This crude method of noise suppression suffered from several limitations. It was dependent upon the operator for the setting of optimum amplifier gains, and in any case the clipped noise was still of a higher amplitude than the data. Furthermore, the clipping resulted in undesirable square waveforms.

With the advent of floating-point recording, either instantaneous floating-point or fully floating-point (where each sample is recorded in exponent and mantissa form), a different approach had to be found. One effective method of noise suppression is time-variant equalisation, or AGC (automatic gain control). This involves splitting each trace into time windows and finding the absolute mean of all the samples in each window. A scaler is then computed as a constant divided by the absolute mean, and is stored at the centre of the appropriate window in a gain trace. The remaining samples of the gain trace are computed by linear interpolation, and the seismic trace is then normalised by cross-multiplication with the gain trace. Since the mean amplitude of the noise is brought down to the same level as the data, a large improvement in the signal-to-noise ratio is effected. However, with this technique there is no preservation of relative signal amplitudes, either on a trace-to-trace basis or with time, and since time is related to frequency on an uncorrelated Vibroseis record, there is distortion of the original frequency content of the data. These drawbacks may not prove too serious if signal continuity is of primary importance, but there is a noise suppression technique which preserves the original signal amplitudes: namely diversity stacking.2

The diversity stack algorithm first involves scaling the data before summation in exactly the same manner as described for the time-variant equalisation process, with the exception that the total energy within each gate is used to compute the scaling factors rather than the absolute mean. The total energy within each gate is equal to the sum of the squares of each sample in that gate. A gain trace is computed and applied to the data in the manner described above. Now, as the scaled records are summed, their associated gain traces are also summed and the final stack is then normalised by dividing the stack by the summed gain traces. Thus the original amplitude variations down the record are restored. The process is illustrated by the example in Fig. 3.
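The algorithm above can be sketched in a few lines (Python; the window length and the linear interpolation of the gain trace follow the description in the text, but the details are illustrative assumptions):

```python
import numpy as np

def diversity_stack(records, win=64, eps=1e-12):
    """Scale each record window-by-window by the inverse of its total energy,
    sum the scaled records and their gain traces, then renormalise the stack
    by the summed gains so that original amplitude variations are restored."""
    n = len(records[0])
    centres = np.arange(win // 2, n, win)
    stack = np.zeros(n)
    gain_sum = np.zeros(n)
    for rec in records:
        rec = np.asarray(rec, dtype=float)
        energy = np.array([np.sum(rec[c - win // 2:c + win // 2] ** 2)
                           for c in centres])
        # gain trace: inverse energy at window centres, linearly interpolated
        gain = np.interp(np.arange(n), centres, 1.0 / (energy + eps))
        stack += rec * gain
        gain_sum += gain
    return stack / gain_sum
```

Two identical noise-free records stack back to the common signal, while a record carrying a high-amplitude noise burst receives a proportionally small weight in the affected windows.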

It can be shown mathematically that, in order to optimise the signal-to-noise ratio, a given record should be scaled in proportion to the signal amplitude and in inverse proportion to the square of the noise amplitude. This is because the signal is in phase from record to record and sums coherently, while random noise sums incoherently. The diversity stack algorithm uses this fact by assuming that the signal amplitude will remain constant from record to record within a sum group, and that the relative total power (record by record) within a window will be approximately equal to the relative noise power. The latter assumption will be good if the noise amplitude is appreciably higher than the signal amplitude. A close approximation to an optimum stack is thus obtained.

There is a further sophistication which is incorporated into some diversity stack processes. This is a facility to reject very quiet records which

[Figure: worked numerical example of the diversity stack. Two records, each divided into three windows of signal and noise amplitudes, are scaled window by window by the inverse of their total energy before summation; the scaled stack is then renormalised by the inverse of the sum of the scaling factors. For window 2 the resulting signal-to-noise ratio is 0·04 for a simple vertical stack and 52·04 for the diversity stack. Note that the scaling factors are derived from total energy, and that the variation in signal amplitude between window 1 and window 3 is preserved.]

FIG. 3. Diversity stack operation.

contain little or no signal, and which would otherwise be blown up by the diversity stack process, thereby destroying the stack. Such records are unlikely to occur; one might be caused by a vibrator failing to start, but in that case the record would probably be repeated. Nevertheless it is a useful safeguard. It works by comparing the energy in each record with a reference energy level and rejecting the record if it falls short by a certain ratio (say 18 dB). The reference energy level, or gain history, is continually updated throughout the day to take account of changing field conditions, and is automatically determined from a weighted average of the energy levels of the previous records, with more weight being given to the most recent records.
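One way to realise such a running reference is an exponentially weighted average, which naturally gives more weight to the most recent records; this sketch is my own formulation, with an assumed weighting constant alongside the 18 dB rejection threshold:

```python
import math

def update_reference(ref_energy, record_energy, weight=0.25):
    """Weighted running average of record energies; recent records count most."""
    return (1.0 - weight) * ref_energy + weight * record_energy

def is_too_quiet(record_energy, ref_energy, threshold_db=18.0):
    """Reject a record whose energy falls short of the reference by the threshold."""
    return 10.0 * math.log10(record_energy / ref_energy) < -threshold_db

ref = 1000.0
assert is_too_quiet(10.0, ref)        # 20 dB down: rejected
assert not is_too_quiet(500.0, ref)   # 3 dB down: accepted
ref = update_reference(ref, 500.0)    # reference moves towards the latest record
```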

The diversity stack process is superior to the methods previously discussed, both because it gives the closest approximation to an optimum stack and because it preserves the amplitude variations of the incoming records. It does, however, suffer from the drawback that it will not work well if the number of records in the stack is low. To illustrate this point, consider the extreme case of only one record per vibrator point. The diversity stack will leave the record totally unchanged, whereas noise rejection and time-variant equalisation will reject or suppress the very noisy parts of the record, albeit at the cost of distorting the frequency content of the cross-correlated record. With more records per vibrator point the chances of this happening naturally decrease.

2.3. The Effect of Sweep Length on Noise

The signal-to-noise ratio of Vibroseis records is further improved by the cross-correlation process, and the resultant improvement is dependent on the length of the input sweep. Landrum3 developed the following relationship for the improvement of signal-to-noise ratio when the noise is of a random nature:

S/N improvement = [T(fn2 - fn1)]^1/2

and

S/N improvement = 20 log10{[T(fn2 - fn1)]^1/2} dB

where T = input sweep length in seconds, f1, f2 = start and end frequencies of the sweep in Hz, and fn2 - fn1 = bandwidth of the noise in Hz. For 'white' noise, fn1 and fn2 will be the lower and upper limits of the recording system: probably the low-cut and anti-alias filters.

All noise with frequencies outside the sweep bandwidth will not correlate at all with the sweep, and so will be completely removed. If we ignore the signal-to-noise improvement obtained by removing frequencies outside the sweep bandwidth (since we would not process such frequencies anyway), the previous equation becomes

S/N improvement = 20 log10[(TΔ)^1/2] dB

where Δ = input sweep bandwidth = f2 - f1. Inserting some typical figures into this equation, for a 10–60 Hz sweep the S/N improvement would be 24 dB for a 5 s sweep and 30 dB for a 20 s sweep. Such improvements would be sufficient to eliminate ambient random noise caused by wind, rain, animal movement and normal ground unrest, but not noise caused by heavy vehicles: hence the need for the methods of noise reduction mentioned earlier. It is also important to realise that such figures do not take into account the loss of signal due to earth filtering, especially at the high end of the spectrum. The sweep bandwidth could be doubled, but this would not improve the S/N ratio if little or no signal is recovered at the higher frequencies.
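The quoted figures follow directly from the simplified relation (a short Python check; the function name is mine):

```python
import math

def sn_improvement_db(T, f1, f2):
    """S/N improvement = 20 log10[(T * bandwidth)^(1/2)] dB
    for random noise confined to the sweep band."""
    return 20.0 * math.log10(math.sqrt(T * (f2 - f1)))

# 10-60 Hz sweep: about 24 dB for 5 s and 30 dB for 20 s, as quoted above.
print(round(sn_improvement_db(5.0, 10.0, 60.0)))    # 24
print(round(sn_improvement_db(20.0, 10.0, 60.0)))   # 30
```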

Frequency analyses of traffic noise show that it is not truly random but is very band-limited, almost monochromatic. However, it is not possible to predict the frequency a particular vehicle will emit, since this depends upon the type of vehicle and its engine speed. Unfortunately the noise frequencies generally lie between 10 and 40 Hz: right in the middle of the useful seismic frequency range. The length of time during which a vehicle affects a particular channel is also important, and for traffic moving at normal speeds this tends to vary between 5 and 10 s. Given these facts, we can see that there is a probability of a vehicle passing a particular recording station whilst the vibrators are not vibrating at the noise frequencies which the vehicle is emitting, and furthermore that this probability is directly related to the sweep length. When this occurs, the vehicle noise will not correlate with the recorded sweep and thus will not appear on the final record. In this respect the cross-correlation process acts as a powerful time-variant filter. The process may be illustrated with a plot of time versus frequency (Fig. 4).

In Fig. 4 it can be seen that vehicle 1 passed the recording station whilst frequencies of around fn1 were being vibrated and recorded. As a result, the output correlated trace will be contaminated with noise of frequency fn1. However, when vehicle 2 passed the recording station, the frequencies being vibrated and recorded were much higher than frequency fn2. As a result, the noise from vehicle 2 arrived too late on the trace to correlate with the sweep, and did not appear on the output correlated trace.

The sweep length which can actually be recorded is limited by two factors. The first is the number of data samples which can be handled by the computer which performs the cross-correlation process. This is likely to be

[Figure: plot of frequency (Hz) against time, showing the linear sweep of length T, the listening time, and the output record length (cross-correlation lag). Noise from vehicle 1, at frequency fn1, arrives while that frequency is being swept and correlates onto the output; noise from vehicle 2, at frequency fn2, arrives after that frequency has been swept and so does not correlate.]

FIG. 4. Time v. frequency plot of a Vibroseis recorded trace.

about 16 000 samples (64 s at a 4 ms sampling rate) for a computer based at a processing centre, and much less, say 4000 samples, for a field-based correlator. The second factor is the required rate of production. For example, if we are allowed approximately two minutes per vibrator point, we could vibrate four 30 s sweeps or eight 16 s sweeps. In other words, we must strike a compromise between sweep length and the number of sweeps per vibrator point. In order to optimise the signal-to-noise ratio, we should ensure that the number of sweeps is sufficient to allow the noise reduction process to work efficiently and to give us an efficient source array, and that the sweep length is longer than the time it takes a vehicle to pass a recording station.

2.4. Crooked Line Processing

For conventional common depth point processing we assume that the survey line is straight, with regular intervals between source and receiver stations. However, if we wish to record Vibroseis lines along public highways we cannot conform to such specifications, since very few roads are straight. If we attempted to process such lines as though they were straight, incorrect calculation of source-to-receiver distances would result in false normal move-out corrections being applied. Furthermore, traces would be gathered to the wrong CDPs, resulting in a deterioration of the stack and ambiguity as to the location of the seismic profile itself. Therefore the first step in processing the data must be to plot accurately the grid coordinates for all source and receiver stations, to define the recording geometry of the line, and then to compute the position of the midpoint between the source and receiver locations for each recorded seismic trace. The next step is to define a new effective line of profile. Typically this will be a smooth curve that follows the local mean of the midpoint positions, and it will be either selected by eye or determined automatically by a mathematical algorithm. It is, however, difficult to devise such an algorithm which will perform well in all cases: either it will follow the mean position too closely, giving undesirable sudden changes in direction, or it will be unable to cope with sudden changes in the line direction and will deviate too far from the mean position. As a result, manual editing of the automatically picked line is often necessary.

[Figure: plan view showing vibrator points, geophone stations, trace midpoints and the chosen CDPs along the smoothed processing line.]

FIG. 5. Example of crooked line processing.

The process is illustrated by the example shown in Fig. 5. Having chosen our new processing line, we next sort the seismic traces into new gathers that relate to the new CDPs along the line. Any trace whose midpoint is further from its closest CDP than a defined radius of acceptance will be rejected from the stack. The radius of acceptance is usually chosen so as to accept most, if not all, of the traces which have been recorded. A maximum fold of stack is often also specified, and traces are sometimes taken from a CDP which has more than its quota of traces and given to an adjacent CDP which is deficient in traces. If this is not possible, then the traces whose midpoints are furthest from the CDP are rejected.

The areas of acceptance for each common depth point, illustrated in Fig. 6, are often referred to as 'bins', and the radius of acceptance as a half-bin width.
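The binning step can be sketched as follows (Python with NumPy; the coordinates and radius are made-up illustrative values, and real systems would also enforce the maximum fold described above):

```python
import numpy as np

def bin_traces(src, rcv, cdps, radius):
    """Assign each source-receiver midpoint to its nearest CDP bin;
    midpoints outside the radius of acceptance are flagged for rejection."""
    mid = (np.asarray(src, float) + np.asarray(rcv, float)) / 2.0
    cdps = np.asarray(cdps, float)
    dist = np.linalg.norm(mid[:, None, :] - cdps[None, :, :], axis=2)
    nearest = dist.argmin(axis=1)
    accepted = dist[np.arange(len(mid)), nearest] <= radius
    return nearest, accepted

src = [(0.0, 0.0), (0.0, 0.0)]
rcv = [(100.0, 10.0), (100.0, 90.0)]      # the second trace has a far-off midpoint
cdps = [(50.0, 0.0), (100.0, 0.0)]        # CDPs along the chosen processing line
nearest, accepted = bin_traces(src, rcv, cdps, radius=20.0)
```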

Since there is appreciable scatter of trace midpoints in a direction perpendicular to the line, there is a possibility of cross-dip affecting the section. In most cases, where subsurface dip is gentle or the radius of acceptance is chosen to be small, crooked lines can be processed normally once they have been sorted to a chosen processing line. However,

[Figure: circular areas of acceptance of radius r centred on the CDPs along the chosen processing line.]

FIG. 6. Stacking areas for a crooked line.

in regions with appreciable subsurface dip we must attempt to remove the component of cross-dip, to prevent it adversely affecting the quality of the final section. We may also wish to determine the cross-dip as an aid to interpretation, although such an estimate may not be reliable, as it is often difficult to separate cross-dip from residual normal move-out and residual statics. Cross-dip analyses are performed at various points along the line where there is a wide scatter of midpoints, or alternatively every common depth point may be analysed on a continuous basis. Traces from a common depth point, or from several adjacent common depth points, are sorted into a short line perpendicular to the processing line, and this line is stacked and displayed. The cross-dip may now be measured, by eye or automatically by cross-correlation, and is applied on a time- and space-variant basis to the data according to the perpendicular offset of any particular trace from the line. Cross-dip analysis must be performed on data which have already been corrected for normal move-out and for residual statics.

ACKNOWLEDGEMENTS

I would like to thank the management and drawing office staff of Seismograph Service (England) Limited for their assistance in the preparation of the diagrams and examples included in this chapter.

REFERENCES

1. GEYER, R. L., The Vibroseis system of seismic mapping, J. Can. Soc. Exploration Geophysicists, 6, pp. 39–57, 1970.

2. EMBREE, P., Diversity seismic record stacking method and system, US Patent 3 398 396, 1968.

3. LANDRUM, R. A., JR, Extraction of signals from random noise by cross-correlation, 37th Ann. Mtg of Society of Exploration Geophysicists, 1967.


Chapter 3

THE l1 NORM IN SEISMIC DATA PROCESSING

H. L. TAYLOR

P.O. Box 354, Richardson, Texas 75080, USA

SUMMARY

For solving a linear model by minimising the residuals ri = bi - Σj aij xj, the l1 norm uses the sum of the absolute values of the residuals rather than the sum of the squares of the residuals, which is used in the least-squares procedures associated with the l2 norm. The l1 norm defines a robust procedure which is useful in handling certain types of model errors and data containing a few wild data points. The l1 norm solution x to r = b - Ax is also the maximum likelihood estimate of the system Ax + e = b where the errors e have a Laplace distribution. In seismic data processing, the l1 norm has possible applications in earthquake centre location and in numerous reflection seismic prospecting steps, including residual statics, velocity analysis, stacking, filter design and deconvolution. The l1 deconvolution of a seismic trace is of special interest, since the resulting spike train contains a sparse spike representation of the reflectivity train of the earth rather than a smooth band-limited representation. The sparse spike representation can be useful for wavelet extraction, production of a stacked section, and correlation with well log data.

1. INTRODUCTION

Although both the l1 norm and seismic processing have been around for many years, it is only very recently that they have been brought together. A basic familiarity with the nature of exploration reflection seismic prospecting and the associated elementary mathematical models, as well as common data processing methods, will be assumed below. These models and methods have mostly come into use with the availability of digital computers in the 1950s and 1960s.

The definition of the l1 norm for discrete systems will be given in the next section. Historically, the basic concept of the l1 norm was known centuries ago to such mathematicians as Gauss and Laplace,1 and many of its properties were well known by the early part of this century.2 Although there were repeated attempts to develop practical algorithms to implement l1 norm methods during the first half of this century,3-7 the modern development of l1 solution techniques started around 1955 with the recognition that the methods of linear programming could be applied to obtain solutions of l1 problems.8 The first practical algorithm specifically written to solve discrete l1 norm problems was published in 1966 by Barrodale and Young.9 Preliminary applications of the l1 norm to technological problems started to appear about this time,10 including applications to geophysical problems.11 An interesting and valuable collection of possible applications to seismic data was published in 1973 by Claerbout and Muir.12 The field of l1 norm applications to seismic data processing is very young, as noted above, and many of the important references below will unfortunately be to papers and talks that are only available at this time in preprint form from professional societies, or from the authors themselves. The emphasis of this chapter will be to examine the properties of the l1 norm and their implications for selected applications to seismic data processing. Details of the mathematical algorithms will be left to the published literature, so that we can concentrate on problem definition in this chapter.

2. LINEAR MODELS AND CONVOLUTIONS

The general linear system is defined by an M × 1 data vector b which has been measured, and a model of the system that is specified as an M × N matrix A. Thus for any N × 1 vector x of parameters, the forward problem is to compute the system response Ax that would result. The corresponding inverse problem is to find the vector x such that

Ax = b    (2.1)

Unfortunately, solving the set of linear eqns. (2.1) can present several problems. The problem may be underdetermined, as for example when the number of variables N is greater than the number of equations M. In such a

[Figure: the M × N band matrix W, whose columns each contain the K wavelet coefficients, shifted down one row per column.]

FIG. 1. Wavelet matrix (from Geophysics, used by permission).

case, there will be many solutions to choose from. When M > N it often happens that no vector x exists which satisfies eqn. (2.1), and hence the system is called overdetermined. These elementary mathematical problems are compounded in many practical situations where A is poorly conditioned and the data b are contaminated by errors. In such cases, what appears from a purely mathematical point of view to be overdetermined may in a practical sense be underdetermined; the following example will illustrate this situation. The usual method of coping with these difficulties is to make the system of eqn. (2.1) overdetermined by adding additional constraints to the system, and then to find the vector x that minimises the residual vector

r = b - Ax    (2.2)

in some sense to be discussed below.

As an example of a linear system, consider the convolutional model of a seismic trace,

t = w * s + e    (2.3)

the convolution of a wavelet w and a spike train s with some additive noise e. To put this in matrix notation, assume the data trace t is represented as an M × 1 vector, and that the wavelet is known and represented as a K × 1 vector w. Let N = M - K + 1 and define the M × N wavelet matrix W from the wavelet vector w by Wij = w(i-j+1) if 1 ≤ (i - j + 1) ≤ K, and Wij = 0 otherwise. This wavelet matrix is illustrated in Fig. 1. With this complete convolutional matrix W, eqn. (2.3) can now be written as

e = t - Ws    (2.4)


which is of the same form as eqn. (2.2). Note that Ws is just the convolution of w and s in vector form.
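The construction of W, and the identity between Ws and the convolution of w and s, can be verified numerically (a sketch in Python with NumPy; the indices here are 0-based, whereas the text uses 1-based subscripts):

```python
import numpy as np

def wavelet_matrix(w, M):
    """M x N band matrix with W[i, j] = w[i - j] (0-based), N = M - K + 1."""
    K = len(w)
    N = M - K + 1
    W = np.zeros((M, N))
    for j in range(N):
        W[j:j + K, j] = w   # each column holds the wavelet, shifted down
    return W

w = np.array([1.0, -0.5, 0.25])           # a short wavelet
s = np.array([0.0, 2.0, 0.0, -1.0, 0.0])  # a sparse spike train
M = len(s) + len(w) - 1
W = wavelet_matrix(w, M)
assert np.allclose(W @ s, np.convolve(w, s))  # Ws is the convolution of w and s
```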

Although N < M, the system of eqns. (2.4) is effectively underdetermined in practice because of the band-limited character of the seismic wavelet w. To illustrate this, let t̂, ŵ, ŝ and ê represent the Fourier transforms of t, w, s and e respectively in eqn. (2.3), and let ŵ* be the complex conjugate of ŵ. Assuming w and e are uncorrelated, so that ŵ*ê = 0, the solution of eqn. (2.3) can be written formally as

ŝ = ŵ*t̂/|ŵ|²    (2.5)

However, this is not a proper solution, since w is normally band-limited, so |ŵ|² will have many zeros. This difficulty, which is easily observed in the Fourier domain, expresses itself as a high degree of singularity, or poor conditioning, of the matrix W.

Several methods of handling these difficulties in the Fourier domain have been developed. The most common of these is to replace eqn. (2.5) by

ŝ = ŵ*t̂/(|ŵ|² + λ)    (2.6)

where λ is a small positive real number used to whiten the part of the power spectrum of w that is small prior to the division. Some of the implications of using eqn. (2.6) will be discussed in Section 6.

For most linear systems, an M × 1 vector p of positive weights is needed to make the residuals comparable. Let P be the M × M diagonal matrix with the components of p as the corresponding entries on the diagonal and zeros elsewhere. The quantities to be minimised are then the weighted residuals Pr. The need for such weights is easily illustrated by the convolutional model of the trace. The amplitude of a seismic trace decreases with time (index). If the weights pi are defined by

pi^(-1) = [1/(2H + 1)] Σ(h = i-H to i+H) |th|    (2.7)

for integral H where 3 < H < K/2, then the weighted residuals Pr are all compared relative to the local amplitude. The effect is similar to applying an automatic gain to the trace; however, the use of weights as described above does not change the actual model. These weights could be further modified by including a taper at the ends to reduce end-effects, or by reducing the weights on data points thought to contain unusually large errors. For most practical purposes, the local average magnitude of eqn. (2.7) could be replaced by a local maximum magnitude or a local r.m.s. (root mean square) calculation.

3. MODEL NORMS AND MODEL ERRORS

Since the residual vector r in eqn. (2.2) contains M components, some definition of 'small' is needed. This is usually accomplished by defining some function that assigns a positive real number as a measure of size to any vector r or Pr. As shown by Claerbout,13 there are numerous such measures that have useful applications for various seismic processing operations. The most important such measures are called norms. If r and e are vectors and a is a real number, the norm of r is written as ||r|| and satisfies the relationships

||r|| > 0 if r ≠ 0   (3.1)

||ar|| = |a| ||r||   (3.2)

||r + e|| ≤ ||r|| + ||e||   (3.3)

Although there are many different norms that could be defined, some of the best known and most useful are the lp norms, defined by

||r||_p = (Σ_{i=1}^{M} |r_i|^p)^{1/p}   (3.4)

for some real number p ≥ 1, together with the limiting case

||r||_∞ = max_{1≤i≤M} |r_i|

The general properties of these norms are well known.2,6,7

An intuitive way of understanding the nature of these norms is to consider the unit circle defined by ||r||_p = 1 for M = 2, as illustrated in Fig. 2. The following properties can be observed. For p < 1 eqn. (3.4) does not generate a norm because ||r||_p is not convex and hence does not satisfy eqn. (3.3). The notation for l∞ is justified by

lim_{p→∞} ||r||_p = ||r||_∞   (3.5)

The norm l2 is rotationally invariant and smooth everywhere, which makes it easier to use with traditional mathematical methods. The norms l1 and l∞ are piecewise linear, and hence the techniques of linear programming14 can

H. L. TAYLOR

FIG. 2. Unit circles in the lp norms.

be used to minimise these norms when applied to the residuals of a linear model. 6
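The lp family of eqn. (3.4) can be computed directly; the vector below is an arbitrary illustration:

```python
def lp_norm(r, p):
    """||r||_p from eqn. (3.4); p = float('inf') gives the Chebyshev norm."""
    if p == float('inf'):
        return max(abs(x) for x in r)
    return sum(abs(x) ** p for x in r) ** (1.0 / p)

r = [3.0, -4.0]
print(lp_norm(r, 1))             # 7.0
print(lp_norm(r, 2))             # 5.0
print(lp_norm(r, float('inf')))  # 4.0
```

The three values illustrate the ordering ||r||_1 ≥ ||r||_2 ≥ ||r||_∞ that holds for any vector.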

Among the lp norms, only the l1, l2 and l∞ norms are considered mathematically tractable for use as a general model norm.15 The names minimax, Chebyshev or Tchebysheff are closely associated with the l∞ norm, which is often used to improve the mathematical approximation of functions6,7 and for other mathematical problems.10 It has also found many uses in geophysics for gravity problems,16 array design and wavelet inversion.17 If b in eqn. (2.2) is some desired noiseless response and A contains evaluations of some functions such as sines, cosines, polynomials or shifted wavelets, then the minimisation of the l∞ norm of r might be an appropriate definition of the best solution. If, however, b contains real data with noise, then the l∞ norm is usually not appropriate as the model norm. Thus the remainder of this paper will be concerned with the l1 and l2 norms.

The l2 norm is used for least-squares procedures by minimising

||r||_2^2 = r^T r   (3.6)

where the superscript T means transpose, or in the weighted form

||Pr||_2^2 = r^T P^T P r   (3.7)

In this form it is probably the most commonly used model norm.1,13 Moreover, if x and y are two vectors, then ||x − y||_2 is the familiar Euclidean distance between x and y. Because of its great popularity, long history and


ease of use, any other proposed norm must have some justification before being seriously considered. One property of the l2 norm that will be useful to note is that the arithmetic average μ is the solution to the least-squares problem of rank one:

Minimise_μ ||μ1 − b||_2^2   (3.8)

where 1 is the M × 1 vector with all components equal to 1. Thus 1 = (1, 1, ..., 1)^T and

μ = (1/M) Σ_{i=1}^{M} b_i   (3.9)

This is equivalent to the fact that the sample mean defines the minimum sample variance.

The l1 norm is sometimes said to give least (absolute) deviation procedures when minimising

||r||_1 = Σ_{i=1}^{M} |r_i|   (3.10)

or in weighted form

||Pr||_1 = p^T |r|   (3.11)

where |r| is the vector whose components are the absolute values of the corresponding components of r. When the l1 norm is used as the model norm for the linear system (2.2), the resulting problem can be solved using linear programming.6,9,18 The use of linear programming allows the addition of equality and inequality constraints to the system.19 The best published algorithm for the general linear l1 problem appears to be that of Barrodale and Roberts.20,21 The solution α of the rank 1 problem

Minimise_α ||α1 − b||_1   (3.12)

is the median of the numbers b_i. Thus at least half of the b_i are less than or equal to α and at least half of the b_i are greater than or equal to α. Note that α can always be taken to be one of the b_i, but may not be uniquely defined if M is even. The spread is defined as the median of the numbers |α − b_i|.

Comparing the median solution of eqn. (3.12) to the average solution (3.9) of eqn. (3.8) illustrates one of the fundamental differences between the l1 and l2 norms. Let b_i = i for i = 1 ... 7. Then the median and the average are both 4. If a large error of, say, 28 were added to b_7, so that b_7 becomes 35, the median would still be 4 but the average would become 8. This demonstrates the robustness of the l1 norm. Robustness of a procedure means that a few large errors among many good points will not make a major change in the


solution. Figure 3 illustrates this robustness property again. It shows the fitting of a straight line to a set of data containing a few data points with large, biased errors. This robustness shows why the l1 norm may be a good choice for use with erratic data.
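The median-versus-average example above can be checked directly with the standard library:

```python
from statistics import mean, median

b = list(range(1, 8))            # b_i = i for i = 1..7
assert median(b) == 4 and mean(b) == 4

b[6] += 28                       # one gross error: b_7 becomes 35
print(median(b))                 # still 4
print(mean(b))                   # jumps to 8
```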

Since a mathematical model cannot fully represent reality and must be simplified in many respects, there will always be some residual error in the predicted response relative to the real data that is best considered as model

FIG. 3. Curve fits of a straight line to data points.

error rather than statistical error. The effect of the model norm on these model errors needs to be considered. The following examples illustrate the possible advantages of the robustness of the l1 norm in the presence of model errors.

One of the most common steps in the processing of reflection prospecting seismic data is the stacking of traces. Let t_{j,k} be the partially processed seismic traces with trace number k = 1 ... k_T and with samples at a uniform sample rate in time for j = 1 ... M. Moreover, assume that the traces have been translated (statically shifted) and stretched (adjusted for normal move-out) so that reflection events from a horizontal stratum of the earth have the same index j. The stacked trace t̄_j is usually computed by averaging the values t_{j,k} for k = 1 ... k_T, which is an l2 type solution. Considering the previous discussion, should t̄_j be computed as a median of the t_{j,k} rather than an average? If some large non-reflection event such as a surface wave or edge diffraction cuts across these traces, then the answer would probably be 'yes', since the l1 solution, the median, would be less disturbed by this 'noise'. The 'median stack' has in fact been tried by several research groups within the petroleum industry. Inquiries by the author seem


to indicate that the results were the same as or somewhat better than those obtained by using the average, depending on the test data. The continued use of the average rather than the median for stacking appears to be based on two considerations: firstly, the average calculation is slightly faster than the median calculation; and secondly, the usual trace-orientated organisation of seismic data makes the averaging of trace values easier and requires less storage.
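A toy median stack, with hypothetical trace values, shows the behaviour described above: one trace carries a large noise burst at sample j = 2, and the median stack is barely disturbed while the average stack is not:

```python
from statistics import mean, median

# Five NMO-corrected traces; one carries a large noise burst at j = 2
# (standing in for a surface wave or edge diffraction cutting across).
traces = [
    [0.1, 1.0, 0.2, -0.5],
    [0.0, 1.1, 0.1, -0.4],
    [0.1, 0.9, 0.3, -0.6],
    [0.2, 1.0, 0.2, -0.5],
    [0.1, 1.0, 9.0, -0.5],   # noise burst at j = 2
]

avg_stack = [mean(col) for col in zip(*traces)]   # l2-type stack
med_stack = [median(col) for col in zip(*traces)] # l1-type stack

print(round(avg_stack[2], 2))   # ≈ 1.96, pulled up by the burst
print(med_stack[2])             # 0.2, essentially undisturbed
```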

As a second example, consider the problem of finding a discrete filter f, as an N × 1 vector, that converts a processed trace t into a given segment b of a velocity well log which has been converted into an M × 1 vector digitised in equal two-way travel time increments. Let T be the truncated convolutional matrix formed by setting T_{i,j} = t_{i-j+i_1}, where i_1 gives the right centring of the trace with respect to b. Note that T_{i,j} is not set equal to zero unless the corresponding trace values are undefined. Which model norm should be used to measure the size of the residuals r = b − Tf and hence be minimised to define f? Much of the noise on a velocity well log consists of sharp spikes due to cycle skips, hole noise, etc. The robustness of the l1 norm is again desirable because such spikes represent a few large errors among many good values.

4. STATISTICAL ERRORS

Some errors are best considered as the failure of the model to fully describe the process that generated the data, while other errors can be best described as statistical in nature. For a better understanding of these latter types of errors, consider the linear model of the data

b = Ax + e (4.1)

where A will be assumed to be a perfect model of the process, and the errors e_i will be assumed to be a sampling from some distribution f_i with known characteristics. Assuming that the errors are independently distributed, the joint probability distribution for any residual vector r will be

f_R(r) = Π_{i=1}^{M} f_i(r_i)   (4.2)

where r = b - Ax. The maximum likelihood estimate for x is that x which


gives a maximum of f_R(r). Let all the errors be identically distributed and of the same generalised Gaussian type; thus

f_i(r) = ρ exp(−|r|^p)   (4.3)

for some fixed p > 0 and the appropriate constant ρ > 0. Considering −log f_R(r) shows that the maximum likelihood estimate x for f_R is the same x which would minimise the lp norm as defined by eqn. (3.4).

The Gaussian or 'normal' distribution is defined by

f_N(r) = (1/(σ√(2π))) exp(−r²/(2σ²))   (4.4)

for some fixed value of the positive number σ, and assuming a mean of zero. As noted above, the maximum likelihood estimates for eqn. (4.4) are just the l2 or least-squares solutions of the linear system. The principal statistical model for the Gaussian distribution is provided by the central limit type theorems, which require that the random variable e_i be the sum or average of many other random variables, all having a similar probability distribution.22 Although the central limit theorem is often cited to justify the assumption of the 'normal' distribution, most data errors are not generated by such a process. Some authors23,24 have gone so far as to say 'normality is a myth; there never has been, and never will be, a normal distribution'.

The problems associated with assuming the normal distribution for statistical estimation are particularly acute when the actual distribution of errors has a long tail. In recent years, many new robust procedures have been developed to deal with data containing occasional wild points resulting from probability distributions with long tails.25-27 The use of lp norms other than l2 has been studied for this purpose.12,13,27,28 The l1 norm has been found to be particularly useful in this context because of its robustness. A number of other measures which are not convex, or fail condition (3.2), have also been suggested. One interesting option suggested by Huber29 combines the properties of the l1 and l2 norms by assigning as the measure of the residual vector r the sum of the functions

ψ_h(r_i) = r_i²  if |r_i| < 1
ψ_h(r_i) = 2|r_i| − 1  if |r_i| ≥ 1   (4.5)

rather than using the square or absolute value only (see Fig. 4). This measure preserves some of the robustness of the l1 norm with regard to a few wild points, while giving a smoother treatment to small residuals. One negative aspect of such an approach is that it is very sensitive to the weighting of the residuals and to the scaling of the data.
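Huber's criterion combines a quadratic centre with linear tails. The sketch below assumes a transition point at |r| = 1 and the scaling ψ_h(r) = r² for |r| < 1 and 2|r| − 1 otherwise (the exact constants depend on how the residuals are scaled, which is precisely the sensitivity noted above); this choice makes the function and its derivative continuous at the join:

```python
def huber(r):
    """Huber-type measure of a single residual: quadratic for small
    residuals, linear (l1-like) for large ones; transition at |r| = 1."""
    return r * r if abs(r) < 1.0 else 2.0 * abs(r) - 1.0

print(huber(0.5))    # 0.25, quadratic region
print(huber(4.0))    # 7.0, linear region: 2*4 - 1
print(huber(-30.0))  # 59.0, a wild point counts linearly, not as 900
```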


FIG. 4. Graph of the Huber criterion ψ_h.

The exponential distribution can be defined by its probability density function

f_E(r_i) = (1/μ) exp(−r_i/μ)  if r_i ≥ 0   (4.6)

and zero otherwise, or by its cumulative probability function

F_E(r_i) = 1 − exp(−r_i/μ)  if r_i ≥ 0   (4.7)

where the constant μ > 0 is the mean and the variance is σ² = μ². The usual model for this distribution is the exponential life model discussed in many elementary statistics books,22 which can be described as follows:

Consider a component, say an electrical one, which fails by chance and not by fatigue or age. Then the probability of the unit failing at some time r is the probability that the unit will last until time r multiplied by the constant conditional probability of failure, say 1/μ, i.e.

f(r) = (1 − F(r))(1/μ)   (4.8)

Equations (4.6) and (4.7) can be derived from eqn. (4.8) by elementary calculus, since f(r) = dF(r)/dr.

The notable properties of this one-sided distribution include the following:

(1) it models a failure type of process on a continuous variable;
(2) it has a rather long-tailed distribution function; and
(3) among all probability density functions f such that f(r) = 0 for r < 0 and the mean μ = ∫₀^∞ r f(r) dr is fixed, f_E is the one of maximum entropy.30


The Laplace (or double exponential) distribution23,25,28 can be defined by

f_L(r_i) = (1/2α) exp(−|r_i|/α)   (4.9)

for a fixed parameter α > 0. It has mean μ = 0 and variance σ² = 2α². Since f_L(r_i) = f_E(|r_i|)/2, it shares many properties with the exponential distribution, including the long tails and a failure-type statistical model. As mentioned in conjunction with eqn. (4.3), the maximum likelihood estimate x for independently distributed Laplace errors in a linear model of type (4.1) is the same solution obtained using the l1 norm.

One common problem in seismic processing is that of timing events on a trace or record and then using the resulting times to solve for the desired information. Earthquake location and residual statics calculation for reflection seismic data are examples of this type of calculation. A good discussion of the linear models for residual statics calculation is contained in an article by Wiggins et al.31 Donoho32 has discussed the lack of normality of the errors in event timing and the use of robust methods in residual statics calculations, while Claerbout and Muir12 have pointed out that there is a bias toward late times when picking first arrivals and recommended an asymmetric version of the l1 norm. The statistical errors in event timing can be classified into three types: (i) small measurement errors; (ii) medium-sized errors due to picking on the wrong phase; and (iii) large errors due to picking the wrong event. A large earthquake, a reflection from a discrete shallow gas sand or other favourable circumstances may produce picks with only small timing errors, in which case the selection of a model norm would not be critical. But more often weak events, poor geophone placement, interfering events, etc., introduce the other errors. Timing errors that are off by a phase shift tend to produce error distributions with multiple modes, and sometimes with non-zero means. Although it is not available for publication, the author has seen travel time data which exhibited this multimodal behaviour. When such probability distributions can be estimated, an appropriate measurement of the model error can be defined; however, simple modifications of the l1 norm can be used in many cases.12

Finally, the large errors due to picking the wrong event will determine the tails of the error distribution and may give the central part of the distribution a more uniform character. Although the hypothesis is difficult to test, since the tails of distributions contain few points, consideration of the process of starting near the correct event and searching until a wrong event is identified should give a Laplace type distribution in the tails, which again suggests the use of the l1 norm.


5. l1 DECONVOLUTION

Given a wavelet w and a trace t, the deconvolution problem is to find the spike train s which minimises the residuals

r = t − Ws   (5.1)

corresponding to the linear model (2.4) discussed previously. This problem has been studied extensively33 and the usual form of solution is to minimise

||r||_2^2 + λ||s||_2^2 = Σ_{i=1}^{M} r_i² + λ Σ_{j=1}^{N} s_j²   (5.2)

where the positive number λ is used to stabilise the process and reduce the large effects that data errors would have due to the band-limited nature of the wavelet. Use of the term λ||s||_2^2 is analogous to methods referred to as prewhitening, diagonal perturbation, ridge regression, etc. The formal solution can be written as

s = (W^T W + λI)^{-1} W^T t   (5.3)

which is a time domain analogue of eqn. (2.6). When the residuals are unweighted, the matrix W^T W has the special Toeplitz structure which makes the solution s in eqn. (5.3) very easy to compute.33
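The damped least-squares solution s = (WᵀW + λI)⁻¹Wᵀt can be sketched in a few lines. The wavelet and spike train below are hypothetical, and plain Gaussian elimination stands in for the fast Toeplitz solvers the text mentions:

```python
def convolution_matrix(w, n):
    """(len(w)+n-1) x n truncated convolution matrix: W[i][j] = w[i-j]."""
    m = len(w) + n - 1
    return [[w[i - j] if 0 <= i - j < len(w) else 0.0 for j in range(n)]
            for i in range(m)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (sketch quality)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def l2_deconvolve(t, w, n, lam):
    """s = (W^T W + lam*I)^(-1) W^T t, the damped solution (5.3)."""
    W = convolution_matrix(w, n)
    WtW = [[sum(W[i][a] * W[i][b] for i in range(len(W))) + (lam if a == b else 0.0)
            for b in range(n)] for a in range(n)]
    Wtt = [sum(W[i][a] * t[i] for i in range(len(W))) for a in range(n)]
    return solve(WtW, Wtt)

w = [1.0, -0.8, 0.2]                       # hypothetical wavelet
s = [0.0, 1.0, 0.0, 0.0, -0.5, 0.0]        # hypothetical spike train
t = [sum(w[j] * s[i - j] for j in range(len(w)) if 0 <= i - j < len(s))
     for i in range(len(w) + len(s) - 1)]  # t = w * s, noiseless

est = l2_deconvolve(t, w, n=len(s), lam=1e-8)
print([round(x, 3) for x in est])          # close to the original spike train
```

With noise present, λ must be raised, and the estimate becomes the smooth, band-limited kind of solution discussed in the next paragraphs.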

The l1 analogue of the l2 procedure would be to minimise

||r||_1 + λ||s||_1 = Σ_{i=1}^{M} |r_i| + λ Σ_{j=1}^{N} |s_j|   (5.4)

The use of criterion (5.4) has probably been considered by workers in many fields. The first public oil industry references appeared in late 1972 in the preprint of the paper by Claerbout and Muir12 and in the preprint by Carter et al.34 The formulation in Claerbout and Muir12 is not presented in the form (5.4) but is mathematically equivalent. The formulation by Carter et al.34 was more explicit, although the section was extensively rewritten with a different emphasis for the published paper.35 A talk by Jon Claerbout in March 1975 to the Stanford Exploration Project sponsors' meeting encouraged the author to undertake a more extensive study of the nature and applications of the l1 formulation of the deconvolution problem.

Although the use of the l1 norm of the residuals has some advantages, as indicated previously, the major motivation for the use of the criterion (5.4)


rather than (5.2) lies in the second term and its relationship to the desired representation of the spike train. The reflectivity series of the earth tends to have a spiky nature,36 and often includes large isolated spikes due to the ocean bottom, gas sands, volcanic layers, intrusives, marker beds, etc. The l2 solutions tend to give smooth results and discriminate against spikes, whereas the l1 formulation (5.4) does not. This latter behaviour occurs because the terms s_j are squared in eqn. (5.2) but not in eqn. (5.4). Thus the spike train estimate ŝ would be better according to the l2 criterion if two spikes of size 1 could be used to match the data rather than one spike of size 2, since the criterion to be minimised would only have 1² + 1² = 2 added to it rather than 2² = 4, whereas the l1 criterion would be indifferent to this distinction. The smoothing nature of the l2 criterion can also be inferred from the reduction in high-frequency spectral components that would result in eqn. (2.6) for λ > 0.
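The one-spike-of-2 versus two-spikes-of-1 comparison can be stated directly:

```python
# Two candidate spike trains assumed to match the data equally well:
one_big   = [0.0, 2.0, 0.0]   # one spike of size 2
two_small = [1.0, 1.0, 0.0]   # two spikes of size 1

l2_sq = lambda s: sum(x * x for x in s)    # contribution to (5.2)
l1    = lambda s: sum(abs(x) for x in s)   # contribution to (5.4)

print(l2_sq(one_big), l2_sq(two_small))  # 4.0 vs 2.0: l2 prefers the split
print(l1(one_big), l1(two_small))        # 2.0 vs 2.0: l1 is indifferent
```

This is why the l2 penalty biases the estimate toward many small, smeared spikes, while the l1 penalty leaves large isolated spikes untouched.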

The first major results of using the l1 criterion for deconvolving a seismic trace were published by Taylor et al.37 The l1 criterion was used in the weighted form

p^T |r| + λ q^T |s|   (5.5)

where the weights p on the residuals were defined by eqn. (2.7) and the weights q on the spike train were defined by

(5.6)

With this definition of q, if λ > 100 then s = 0. Note that q^D is the diagonal matrix with its diagonal components being the components of q. Figure 5 illustrates the spike-preserving nature of the l1 norm. The assumed spike train s and wavelet w were convolved, and sufficient random noise e was added that the resulting synthetic trace t = w*s + e has a signal-to-noise ratio of 4. Using λ = 25 and the wavelet w, the estimated spike train ŝ was extracted from t. Except for a few small noise spikes and the missing small spike at 420 ms, it shows good agreement with the original spike train s.

All of the better published algorithms used in solving the l1 norm deconvolution are based on the concepts of linear programming. Taylor et al.37 show how to use general linear l1 solution techniques such as those of Barrodale and Roberts20,21 to solve the deconvolution problem, and Banks and Taylor38 show how to modify the Barrodale-Roberts algorithm to save computer time and space for this problem. Computation times are still rather slow with these algorithms. A number of improved algorithms have been developed very recently by various workers but have not yet been


FIG. 5. l1 deconvolution of a synthetic trace (from Geophysics, used by permission).

published. Although computation speeds are not likely to be as good as those of the unweighted l2 norm methods, acceptable computation costs can be expected. Some of the new techniques do not rely on linear programming. In fact, Bamberger et al.39 have shown results on inversion of the one-dimensional wave equation, which is non-linear in the coefficients being estimated. Their approach is also interesting in that it minimises the l2 norm of the residuals while constraining the l1 norm of the variation of the velocity, which results in a non-smooth solution to l1 deconvolution. Benveniste et al.53 have reported success in using the weighted sum of the l1 norm of the filter coefficients and a more general norm of the prediction error for deriving an inverse filter of a non-minimum phase system from its response to a non-Gaussian input signal.

The relative meanings of l1 and l2 deconvolution results are an important consideration. Given a band-limited wavelet and noisy data, the resulting deconvolution problem is highly underdetermined. There are many reasonable solutions and no a priori way to select the right one. The l2 deconvolution selects a band-limited spike train estimate which will not look like the reflectivity series calculated from a well log from the same location as the trace, because the reflectivity series contains too many spectral components. To make any reasonable comparison, the reflectivity series from the well log must be filtered to limit its spectral content. Instead of limiting the number of non-zero spectral components, a sparse spike approach limits the number of non-zero spike train values. By choosing the


FIG. 6. Sparse spike representation of a reflectivity function from a well log (from Geophysics, used by permission).

value of λ properly, the l1 deconvolution produces such a sparse spike representation of the desired spike train. This l1 representation will also not compare directly with the reflectivity series from the well log, since usually the latter contains too many spikes. Again, the comparison should be made with a processed version of the well log. Figure 6 illustrates a way of obtaining a sparse spike representation of a reflectivity series s from a well log. A noiseless synthetic t is produced by convolving the original wavelet, or a wavelet of similar spectral content, with s. In Fig. 6 the wavelet used was a 60 ms Ricker wavelet. The resulting trace t = w*s was then deconvolved using the same value of λ to be used on the seismic trace. In the example λ = 25 was used. Note that on a noiseless synthetic the original spike train s could be recovered using λ = 0, but this would defeat the purpose. The result of deconvolving

Page 77: Developments in Geophysical Exploration Methods

THE 11 NORM IN SEISMIC DATA PROCESSING 69

the trace t is the spike train ŝ shown. It can be noted that s and ŝ are very hard to compare directly, just as a band-limited version of s may be hard to compare with s. However, the reconvolution of ŝ with the same wavelet w shows that ŝ does in fact represent s, in the sense that t = w*s and w*ŝ are essentially the same.

Previously it has been assumed that the wavelet w was known and the spike train s was to be found. Conversely, assume the spike train s and trace t are known. Define the spike train truncated convolutional matrix S by S_{ij} = s_{i-j+1} where s is defined, and zero otherwise. The wavelet w can now be estimated by minimising p^T|r|, where r = t − Sw. Additional techniques for implementing such a wavelet extraction are given by Taylor et al.37 along with an example of alternately extracting wavelets and spike trains to decompose a seismic trace for which neither is known initially. Taylor et al.40 have suggested a technique for making an initial guess at a wavelet using a median stacking technique and validated convergence of the iteration between wavelet and spike train extractions on a set of synthetic data. Although no general proof of convergence is available for this specific iterative deconvolution, Godfrey41 has given an analysis of a large class of such iterative deconvolution methods under certain statistical assumptions.

6. SPARSE SPIKE PROCESSING

The use of sparse spike representation has become of increasing interest in recent years. Jon Claerbout has used the word 'parsimonious' to describe this type of representation.42 The minimum entropy deconvolution method, as developed by Wiggins43 and others,44,45 uses a concept of the spike train solution as containing a small number of large spikes to develop a filter from multichannel data. It has been modified to define the parsimonious deconvolution and the multichannel variable norm ratio as discussed by Gray.42 Stone46-48 has developed an iterative deconvolution based on maximum entropy spectral estimation which includes a sparse spike train estimation.

A threshold procedure, which involves the design of a spiking filter, has been found to be helpful for iterative deconvolutions.42,48 Statistical models of velocity distribution by Godfrey41 and Kormylo and Mendel49 have also led to the representation of the reflectivity sequence as a sparse spike train. The concept of blockiness of a velocity log is directly related to the concept of spikiness of its reflectivity sequence. The degree of blockiness of a representation of a velocity log would be described as the level of


parametrisation used to represent the log by authors in some disciplines.35,50

The examples in Figs. 5 and 6 show that the l1 deconvolution can be used to generate a sparse spike train by proper choice of λ in eqn. (5.5), usually between 20 and 35; a good match of the data can be obtained with only a few non-zero spikes. Most of the values on the spike train are exactly zero because of the problem definition, and are not due to any additional thresholding procedure. The ability of the l1 norm to produce a spike train on which the non-zero spikes are very sparse can be enhanced by using a two-step procedure. First, a large value of λ is used to identify only those spike components that are to be non-zero. Second, the values of q_j are multiplied by 100 if ŝ_j = 0, or q_j is set to zero if ŝ_j ≠ 0, and then the l1 deconvolution is rerun to solve for the final spike values.

In 1978, Siraki and Taylor51 presented a preliminary report on the application of sparse spike train concepts to the processing of a few CDP gathers of a reflection seismic prospecting line. No wavelet had been reported for this dynamite-generated land data; however, one of the CDP traces was located near a well which had been logged for both velocity and density. Several of the process modifications, conclusions and observations of this study are worth noting. A wavelet for the study was constructed by the l1 iterative deconvolution discussed at the end of the last section, except that the process was initiated by using a reflectivity sequence produced from the well logs to extract the wavelet at the first iteration. This method of extracting a wavelet appears to have worked well, except that the proper alignment of the unstacked near-trace and the reflectivity sequence from the well logs required considerable effort to establish, since it was difficult to correlate accurately the usual stacked trace and the synthetic from the well logs. The problems of trace and well log correlation and wavelet verification will be examined in more generality later.

The remainder of the processing reported by Siraki and Taylor51 was accomplished using standard processing techniques. The following problems and opportunities were noted. Standard curve plotting routines are not well suited to the display of sparse spiked traces. When sparse spiked data are used for velocity analysis in a standard constant velocity stacking (averaging) program, the results allow a higher precision in the velocity analysis. This is probably due to the alleviation of some of the smearing of the NMO stretch and to the better definition of events through sharp spikes as opposed to wave forms. However, some smearing within one sample interval was reintroduced by the interpolation procedure, which was designed to interpolate wave forms rather than to shift spikes.

Page 79: Developments in Geophysical Exploration Methods

THE 11 NORM IN SEISMIC DATA PROCESSING 71

Moreover, in the velocity analysis, and again in the final stack, the usual averaging procedure was used rather than the median stack discussed previously in Section 3. The display, interpolation and stacking of sparse spike data present no theoretically difficult problems, but there are several points in the usual processing sequence where modification to accommodate the sparse spike results would be valuable. It was also demonstrated in this study that a seismic pseudo-impedance log produced from the stacked sparse spike traces has a blocky character, as might be expected.

The problem of correlating well logs and sparse spiked traces was treated in a general context by Taylor.52 The word 'correlation', as used here in the general sense, does not refer to the mathematical correlation coefficients of statistics, which are not useful tools for the analysis of sparse spike trains.

Figures 7 and 8 illustrate the processes of well log and trace comparison for traces containing a residual (presumably band-pass) wavelet and for sparse spike traces. The wavelet may be known from field measurements, or extracted from the trace by using phase assumptions or by iterative deconvolution. The process of starting with velocity and density logs to generate the impedance log, reflection series and synthetic trace is the same in both cases. The extraction of the spike train is analogous in both cases, as discussed previously in Section 5. The following operation of producing a seismic log from the spike train is the same in both cases. As discussed in Section 5 and illustrated in Fig. 6, it is very difficult to make a direct comparison of an extracted spike train and the reflection series from a well log, so generally the reflection series will have to be filtered, as described for the waveform case, or sparse spiked from the synthetic, as described for the other case. Essentially the same comments hold for the impedance log and seismic log. A filtered impedance log or blocky synthetic seismic log can form a useful intermediate step, since it is related mathematically to the impedance log but has more of the character of the seismic log. The synthetic is important since the validity of the wavelet and the final correlations can only be established by direct comparison of this synthetic with the measured, possibly stacked, original trace.

Unfortunately, small errors in the estimated wavelet, seismic trace or well logs may make it difficult to find the correlation initially by comparing the synthetic and the trace. It is often easier to find a valid correlation visually in the spike train domain, particularly using a sparse spike train. Moreover, the correlation can be made more precise in this domain, since both convolution with a smooth wavelet and integration to obtain a seismic log are smoothing operations which decrease the accuracy of timing estimates. However, once the proper correlation has been established near a well, it may be easier

FIG. 7. Waveform seismic trace and well log comparisons. [Flow diagram, not reproduced: the seismic data trace and the synthetic trace are compared through wavelet convolution/deconvolution, wave trains, filtered reflection series, seismic logs and filtered impedance logs.]

FIG. 8. Sparse spike seismic trace and well log calculations. [Flow diagram, not reproduced: the trace and the synthetic trace are related through spike train extraction and simulation, with sparse spike trains, seismic logs and impedance logs derived on each side from the velocity and density logs.]


to follow lithologic units by identifying the proper intervals on the seismic log than by trying to trace boundaries on the spike trains. This appears to be particularly true for blocky seismic logs.
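The pipeline discussed above, from velocity and density logs to an impedance log, a reflection series and a synthetic trace, can be sketched in a few lines (a minimal illustration; the three-layer logs and the two-term wavelet are invented for the example, not taken from the text):

```python
import numpy as np

def synthetic_from_logs(velocity, density, wavelet):
    """Sketch of the well-log-to-synthetic pipeline described above.

    velocity, density : sampled well logs (equal length, two-way-time sampled)
    wavelet           : assumed band-pass residual wavelet
    """
    impedance = velocity * density                  # acoustic impedance log
    # Reflection coefficient at each interface between successive samples.
    refl = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    # Synthetic trace: reflection series convolved with the wavelet.
    trace = np.convolve(refl, wavelet)[:len(refl)]
    return impedance, refl, trace

# Toy logs for a three-layer model (hypothetical values).
v = np.array([2000.0] * 20 + [2500.0] * 20 + [3000.0] * 20)   # m/s
rho = np.array([2.0] * 20 + [2.2] * 20 + [2.4] * 20)          # g/cc
wav = np.array([1.0, -0.5])                                   # toy wavelet
imp, r, syn = synthetic_from_logs(v, rho, wav)
```

The reflection series r is zero except at the two interfaces, which is what makes the direct comparison with an extracted spike train attractive in principle.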



Chapter 4

PREDICTIVE DECONVOLUTION

E. A. ROBINSON

100 Autumn Lane, Lincoln, Mass. 01773, USA

SUMMARY

Deconvolution is a general term for data processing methods designed to remove effects which tend to mask the primary reflected events on a seismogram. Some of the undesirable effects are produced by the earth, such as absorption, reverberation, ghosting and multiple reflections, whereas others are produced by the seismic sources and receivers. In order to deconvolve seismic data we must first estimate the parameters of these unwanted effects, and then design and apply the required deconvolution filters to remove them. This chapter presents the basic concepts of seismic deconvolution in a non-mathematical way; an appendix which makes use of seismic ray tracing methods puts some of these ideas in a mathematical form.

1. INTRODUCTION

Basic to the understanding of deconvolution in the processing of reflection seismic data is the development of a model of the earth consisting of layered strata. With this approach in mind, let us look at a single horizontal interface between two sedimentary layers. When a wave of unit energy strikes the interface, some of the energy is reflected and the remainder of the energy is transmitted. The amount of reflected energy depends upon the reflection coefficient of the interface. This reflection coefficient represents the required information about the interface. The reflected energy as measured at the surface of the earth makes up the observed reflection


seismogram and represents the known information. Ideally, then, we would have a situation as depicted in Fig. 1. In Fig. 1 there are three interfaces which produce the corresponding three events as seen on the reflection seismogram. In such an ideal situation we can see that one can readily infer the sedimentary structure from the events on the seismogram.

The events seen on the seismogram in Fig. 1 are called primary events. They represent energy that has travelled a direct path from surface to

FIG. 1. (a) Three reflected events on a seismic trace. (b) The corresponding three primary paths through the geologic section of the earth.

reflecting interface and then back to the surface. However, for every primary event there are many multiple events. A multiple event is one that makes one or more round-trip paths within the sedimentary layers before returning to the surface of the earth. Figure 2 shows a primary path and a multiple path, each of which produces reflected energy on the seismogram at the same arrival time. Thus energy from both of these paths combines to make up the observed event E on the reflection seismogram. In a sense we may say that the multiple energy (which, as we see, never reaches interface 3) actually reinforces the primary event due to interface 3. In this sense multiples are good in that they can reinforce primary reflections.

Why is such reinforcement necessary? As we have seen, each time seismic energy is transmitted through an interface some energy is lost due to reflection. Thus, for a primary event, the amplitude of the source pulse must be multiplied by each layer's transmission coefficient in the direct path down to a reflecting horizon, as well as by each layer's transmission coefficient in the direct path back to the surface. Each of these two-way transmission coefficients is less than one, so the net effect is that the


transmission losses can greatly reduce the amplitude of a primary event. Reinforcement of primary events by multiples is nothing more than the regaining by the primary of some of the energy that was lost by the transmission effects. In Fig. 2 we see that energy lost by the primary at A is partly regained at B.

So much for the advantageous effects of multiples. Now let us turn to the disadvantageous effects. The great disadvantage is that a multiple event can

FIG. 2. (a) A reflected event on a seismic trace. (b) The primary path (solid line) and a multiple path (dashed line) corresponding to this event.

appear on a seismic trace where no primary event exists. Thus when we see such an event, without further analysis we could mistake it for a primary event. More generally, many such multiples would interfere with the primaries, masking them and making it impossible to delineate them. Thus multiples represent a serious kind of noise in the interpretation of seismic events.

There are two main ways of reducing the disadvantageous effects of multiple events in current seismic processing practice. These two processing methods are stacking and deconvolution. Stacking works in the space domain whereas deconvolution works in the time domain. Stacking is a method of averaging over space that reinforces the primaries and cancels the multiples. Deconvolution is a method of averaging over time that does the same thing. Ideally stacking, deconvolution and migration would be carried out simultaneously in seismic data processing as one overall operation. From a practical point of view these processing operations are carried out separately in such a manner that the overall seismic data processing package is extremely robust and stable. In this chapter we discuss the deconvolution operation as a separate entity, distinct from the other processing methods.


2. THE OPTIMAL CASE

As we have seen, two distinct possibilities can occur. One is that a multiple event can constructively reinforce a primary event. The other is that a multiple event can confound primary events. Is there an example in nature where multiples only behave constructively and not destructively? The answer is Yes, and we will now describe this optimal situation.

An oil well drilled in a sedimentary basin will reveal the layering. If we plot the reflection coefficients of these layers as a function of two-way travel

FIG. 3. (a) The geologic section showing interfaces between sedimentary layers. (b) The corresponding reflectivity function consisting of the reflection coefficients of the interfaces.

time we obtain the so-called reflectivity function. Ideally, in the case of distinct, well-defined layers, this reflectivity function would have a pip at each interface. The size of such a pip would be equal to the reflection coefficient at that interface (see Fig. 3). It is an observable fact that the magnitudes of reflection coefficients encountered in petroleum exploration are small: in fact much less than 1, and we shall assume that this is so throughout our discussion. In the case of no multiples and no transmission losses, the reflection seismogram would be the result obtained by attaching the source wavelet to each pip on the reflectivity function. If the source wavelet is a spike, then for this ideal situation the reflection seismogram would be the reflectivity function itself. In order to simplify our discussion let us assume throughout that the source wavelet is a spike.

In the case of transmission losses and no multiples, the reflection seismogram would be the result obtained by diminishing each pip on the reflectivity function by the amount of the transmission losses. In addition, if there were multiples, the reflection seismogram would be the result of


adding the multiple events to the seismogram of the preceding case (see Fig. 4).

The optimal situation which we mentioned above can now be described. In the best possible case we would want the reflection seismogram to look exactly like the reflectivity function. In Fig. 4, we see that seismogram (b) is the same as the reflectivity function (a). However, seismogram (b) has neither multiples nor transmission losses, and so does not represent a

FIG. 4. (a) Reflectivity function. (b) The corresponding spike-source hypothetical seismogram in the non-physical case of no multiples and no transmission losses. (This hypothetical seismogram is the same as the reflectivity function.) (c) The foregoing hypothetical seismogram but now with transmission losses. (d) The spike-source physical seismogram (with multiples and transmission losses).

physical seismogram that can be measured in nature. In fact, seismograms (b) and (c) are only mathematical idealisations. The only kind of seismogram that we can observe is seismogram (d), which necessarily has both multiples and transmission losses.

In the optimal situation, the energy in the primary events lost through the transmission effects would be exactly counterbalanced by the energy gained through the multiples. That is, transmission effects and multiple effects would exactly cancel each other, and we would be left with the ideal seismogram (b), or equivalently the reflectivity function (a), as shown in Fig. 4.

Such an optimal situation can actually exist in nature, and in fact approximations to it are not uncommon. The optimal situation arises when the reflectivity function is a white noise sequence: that is, when the sequence of reflection coefficients is the same as a sequence of statistically uncorrelated random observations. As is well known, a white noise sequence has the property that its autocorrelation function is a spike. Thus if we compute the autocorrelation function of a reflectivity function, and find that this autocorrelation is a spike, then we know that the reflection


seismogram (in the case of a spike source) will look the same as the reflectivity function (see Fig. 5).

Of course, the actual seismogram in the case of a non-spike source wavelet would be the one given in Fig. 5(c) convolved with the source wavelet. This ideal situation, or at least various approximations to it, actually occur in many geographic areas of the world where oil prospecting has taken place. The actual seismograms recorded in such areas do appear

FIG. 5. The optimal case which can occur in reflection prospecting. (a) White-noise reflectivity function. (b) Spike autocorrelation of the white-noise reflectivity function. (c) Spike-source seismogram (approximately equal to the reflectivity function).

as a sequence of primary events which accurately depict the sedimentary layering. Such seismograms can be interpreted in the form in which they are recorded, and as a result do not require any seismic processing. Many of the great oilfields discovered in Texas, Oklahoma and other parts of the United States during the 1930s and 1940s actually represented such idealised areas, and the seismograms recorded there were made up of a clear-cut series of identifiable primary events which were neither attenuated by transmission losses nor masked by multiples. Of course, a great many other areas produced seismograms that were so confused that they could not be interpreted by any known visual method. Most seismograms, however, fell between these two extremes, and their interpretation before the advent of seismic data processing in the 1960s required much careful and painstaking work.
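The claim that a white-noise reflectivity has a spike autocorrelation is easy to check numerically (a sketch; the sample count and coefficient scale are arbitrary choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical white-noise reflectivity: small, uncorrelated coefficients.
refl = 0.05 * rng.standard_normal(4000)

# Normalised autocorrelation for non-negative lags.
ac = np.correlate(refl, refl, mode="full")[len(refl) - 1:]
ac /= ac[0]

# Lag 0 is exactly 1; every other lag is near zero (a "spike").
```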

3. THE CONVOLUTIONAL MODEL

The purpose of deconvolution is to take a non-ideal situation and turn it into an ideal one: that is, the purpose of deconvolution is to remove the effects of transmission losses and multiples from the observed seismogram


and thus yield the ideal seismogram (i.e. reflectivity function). In order to describe how the deconvolution works we must further develop our model of the earth as a stack of sedimentary layers.

As we have seen, a seismogram is made up of three types of entities: (1) the pure primary event which is the reflection coefficient of the given interface; (2) the product of two-way transmission coefficients of the direct path from the surface to the given interface and back to the surface; and (3) the myriad of multiples. In the case of a white (i.e. non-autocorrelated) reflectivity function, entities (2) and (3) cancel each other and we are left with entity (1), the reflection coefficients. In the general case however (i.e. an autocorrelated reflectivity function), we have all three entities. We will now show that it is possible to simplify this problem.

As we have noted, it is a physical fact that reflection coefficients occurring in sedimentary basins in the earth's crust are generally small in magnitude. In such a case the model for the observed seismogram can be considerably simplified. Specifically, two things happen: (1) the products of the two-way transmission coefficients disappear; (2) the many multiple events associated with the sedimentary interfaces are replaced for each interface by a common multiple train, namely the multiple train associated with the entire layered section. Admittedly the above simplification is just an approximation, but it is one which makes clear how the mathematics of deconvolution works. Thus according to this simplification the observed seismogram consists of events, each event being made up of a reflection coefficient and a given train of multiples. This train of multiples, as we have seen, is the same for each reflection coefficient and is in fact the train associated with the entire layered section. We thus call it the section multiple train.

We have reduced the model for the reflection seismogram to a very simple structure, namely, a series of reflection coefficients to each of which is attached the same section multiple train. We recall that the series of reflection coefficients make up the so-called reflectivity function. The attachment of the same multiple train to each reflection coefficient represents the mathematical process of convolution. Thus in mathematical terms we have the following model for the reflection seismogram:

Seismogram = (reflectivity function)*(section multiple train)

where the asterisk indicates the mathematical operation of convolution. This representation of the seismogram is the so-called convolutional model. The important point to remember here is that the more general model of a seismic trace is time-varying. The convolutional model represents an

FIG. 6. Convolutional model in the case of a spike source. (a) Reflectivity function. (b) Illustration of the process of the convolution of the reflectivity function with the geologic-section multiple-reflection wave train. (c) The seismogram given by this convolution.

important simplification which holds in the case of small reflection coefficients.

In practice we generally assume that the convolutional model only holds over a specified time gate of the seismogram, rather than over the entire seismogram. Also we would convolve into the model a source wavelet. In our discussion here we assume that the source wavelet is a spike, so as not to complicate the general ideas of the model with additional conditions.
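Under these assumptions the convolutional model can be written out directly (a sketch with an invented reflectivity and an invented, decaying section multiple train; the asterisk of the model above becomes `np.convolve`):

```python
import numpy as np

# Hypothetical reflectivity function: pips at a few two-way travel times.
refl = np.zeros(50)
refl[[5, 12, 30]] = [0.10, -0.08, 0.12]

# Hypothetical section multiple train: a unit spike followed by a small,
# decaying tail of multiples (the same train is attached to every pip).
multiple_train = np.array([1.0, -0.3, 0.09, -0.027])

# Seismogram = (reflectivity function) * (section multiple train)
seismogram = np.convolve(refl, multiple_train)[:len(refl)]
```

Each event in the result is a reflection coefficient with the same multiple train attached, exactly as in Fig. 6.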

Let us now examine the section multiple train, i.e. the common wave shape that is attached to each primary reflection. The top part of Fig. 6 depicts a sequence of reflection coefficients (i.e. the reflectivity function); the middle part of Fig. 6 shows the common section multiple train (i.e. the response) attached to each reflection coefficient; and the bottom part of Fig. 6 shows the resulting reflection seismogram. It is the section multiple train which concerns us now. By definition, the section multiple train is the result of the reverberations occurring within the entire crustal section. First we must examine the types of reverberations which can occur. We add as many hypothetical layers (with zero reflection coefficients) as are necessary in our mathematical model so that the round-trip travel time in each layer is one time unit. A first-order reverberation is defined as one in which the round-trip travel time is one. That is, a first-order reverberation occurs within one layer. If there are N layers, it follows that there are N different first-order reverberations, one for each layer.

We now define a linked reverberation as one which involves only physically adjacent (i.e. connected) layers. A second-order reverberation is defined as one in which the round-trip travel time is two. Thus a second-order linked reverberation occurs within two adjacent layers. For N layers,

FIG. 7. Reverberations occurring within the geologic-section sedimentary layers. (a), (b) and (c) First-order reverberations. (d) and (e) Second-order linked reverberations. (f) Third-order linked reverberation.

there are N-1 different second-order linked reverberations, namely those associated with the following pairs of layers: (1, 2), (2, 3), (3, 4) ... (N-1, N). Similarly, for N layers there are N-2 different third-order linked reverberations, namely (1, 2, 3), (2, 3, 4), (3, 4, 5) ... (N-2, N-1, N). These ideas will become clear if we look at a three-layer sedimentary system, as depicted in Fig. 7.
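The counting argument can be checked by enumerating the linked reverberations for the three-layer system of Fig. 7 (the layer-index tuples are the only notation assumed here):

```python
# For N layers: N first-order, N-1 second-order linked, and N-2
# third-order linked reverberations.  Enumerate them for N = 3:
N = 3
first_order = [(i,) for i in range(1, N + 1)]            # within one layer
second_order = [(i, i + 1) for i in range(1, N)]         # two adjacent layers
third_order = [(i, i + 1, i + 2) for i in range(1, N - 1)]

print(first_order)    # [(1,), (2,), (3,)]
print(second_order)   # [(1, 2), (2, 3)]
print(third_order)    # [(1, 2, 3)]
```

The six tuples correspond to the six panels (a) to (f) of Fig. 7.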

Each of the above reverberations represents a negative feedback loop. All the first-order reverberations can be grouped together; then the quantity within the corresponding feedback box is the 'lag-one' autocorrelation coefficient of the reflectivity function. Similarly, all the second-order linked reverberations can be grouped together; then the quantity within the corresponding feedback box is the 'lag-two' autocorrelation coefficient. Likewise, the third-order linked reverberations yield the lag-three autocorrelation coefficient of the reflectivity function, and so on. The net result is that all the reverberations taken together can be described by a negative feedback system with the autocorrelation function (for lags 1 to N) of the reflectivity within the feedback box (see Fig. 8), which depicts the convolutional model of an observational reflection seismogram as a negative feedback system.
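The feedback description of Fig. 8 can be sketched numerically. The recursion below is an assumption-laden illustration, not the chapter's exact formulation: the feedback box holds the normalised autocorrelation of the reflectivity for lags 1 to N, and the output at each time step is the input minus the fed-back output.

```python
import numpy as np

def feedback_seismogram(refl, n_lags):
    """Sketch of a negative feedback system with the reflectivity
    autocorrelation (lags 1..n_lags) in the feedback box."""
    ac = np.correlate(refl, refl, mode="full")[len(refl) - 1:]
    a = ac[1:n_lags + 1] / ac[0]          # feedback box: lags 1..n_lags
    s = np.zeros_like(refl)
    for t in range(len(refl)):
        fed_back = sum(a[k - 1] * s[t - k] for k in range(1, min(t, n_lags) + 1))
        s[t] = refl[t] - fed_back         # negative feedback
    return s

refl = np.zeros(40)
refl[[3, 10]] = [0.10, 0.08]
seis = feedback_seismogram(refl, n_lags=8)
```

For this toy reflectivity the only non-zero feedback coefficient is at lag 7, so the second event emerges attenuated, in the spirit of the reverberation effects described above.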

Page 94: Developments in Geophysical Exploration Methods

86

[Block diagram, not reproduced: the reflectivity function (input) enters a negative feedback loop whose feedback box contains the autocorrelation (for lags > 0) of the reflectivity function; the output is the reflection seismogram.]
FIG. 8. Convolutional model shown in the form of a negative feedback system.

4. MINIMUM DELAY

The problem of deconvolution in seismic data processing can be stated simply as follows. Given the reflection seismogram (which is observed at the surface of the earth), find the reflectivity function (which gives stratigraphic information as a function of depth). In order to find a solution to this problem we will make use of the convolutional model, which we assume holds over a given time gate of the seismogram.

As we have seen, the convolutional model is a pure feedback model. A pure feedback system is necessarily a minimum-delay system. More precisely, in the model

seismogram = (reflectivity function)*(section multiple train)

we can assert that the section multiple train is a minimum-delay wavelet. Thus one aspect of the seismic convolutional model is that it is a minimum-delay model.

At this point let us give some further discussion about minimum delay and its relationship to feedback systems. For example, suppose that the desired direction of a ship is set on the gyrocompass. A feedback mechanism indicates the error between the desired direction and the actual direction of the ship. The error activates the guidance system, which consists of power amplifiers which force the rudder in the direction that decreases the error. Because it takes time to supply the power for turning the ship, there is a time delay in the guidance system. Suppose that the ship is off course to the right. The feedback mechanism indicates an error to the right and the power amplifiers force the rudders to the left. Because of the time delay the ship overshoots the gyro direction to the left. Now the feedback mechanism indicates an error to the left, and the power amplifiers force the rudders to the right. Because of the time delay, the ship again overshoots the gyro direction: this time to the right. The feedback mechanism now indicates an error to the right, and because of the time


delay, a third overshoot is produced: this time to the left. These oscillations about the gyro direction may either increase in magnitude on each successive swing, or decrease. If they increase, the guidance system is unstable. If they decrease, it is stable. Clearly, the guidance system with minimum delay is the one which is stable.

Any causal linear system can be described by its gain and its delay. Its gain is a measure of the increase or decrease of the magnitude of the output as compared to the magnitude of the input. Delay is a measure of the time from the instant the input is activated to the instant that input is significantly felt at the output. As we expect, both gain and delay depend upon the frequency of the signal.

It is possible to have many different systems, each with the same gain, but with a different delay. In fact it is always possible to have systems with very great delays, as there is no theoretical limit to the greatness of the delay that can be incorporated into a system. On the other hand, there is a limit to the smallness of the delay that a system can possess. The reason is that it always takes some time for a system to respond significantly to an input. The system with the smallest possible delay for its gain is called a minimum-delay system.
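A classic two-term illustration: the wavelets (2, 1) and (1, 2) have identical gain at every frequency, but the first delivers its energy as early as possible and is therefore the minimum-delay member of the pair. (A sketch; the FFT length is an arbitrary choice.)

```python
import numpy as np

min_delay = np.array([2.0, 1.0])   # minimum-delay wavelet
max_delay = np.array([1.0, 2.0])   # same gain, but the energy arrives later

gain_1 = np.abs(np.fft.fft(min_delay, 64))   # identical amplitude spectra...
gain_2 = np.abs(np.fft.fft(max_delay, 64))

energy_1 = np.cumsum(min_delay ** 2)         # ...but energy builds up sooner
energy_2 = np.cumsum(max_delay ** 2)
```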

Suppose that we have two causal systems A and B connected in tandem. The gain of the overall system is equal to the product of the gains of the component systems, whereas the delay of the overall system is equal to the sum of the delays of the component systems. Instead of considering the gain, we may consider the logarithm of the gain, called the log gain. The logarithm of a product is equal to the sum of the logarithms of the individual factors. Hence the log gain of the overall system is equal to the sum of the log gains of the component systems. In summary then, we have

log gain of overall system = log gain of A + log gain of B
delay of overall system = delay of A + delay of B
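These two relations can be checked numerically; in the sketch below (illustrative filter coefficients, not from the text) the tandem connection is a convolution, the overall frequency response is the product of the component responses, and so the log gains add.

```python
import numpy as np

# Two causal filters connected in tandem (illustrative coefficients).
a = np.array([1.0, 0.5])
b = np.array([1.0, -0.3, 0.2])
cascade = np.convolve(a, b)          # tandem connection = convolution

fa, fb, fc = (np.fft.fft(x, 64) for x in (a, b, cascade))

# The overall frequency response is the product, so gains multiply,
# log gains add, and phases (hence delays) add.
assert np.allclose(fc, fa * fb)
assert np.allclose(np.log(np.abs(fc)),
                   np.log(np.abs(fa)) + np.log(np.abs(fb)))
```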

As we have just seen, a causal system can be described by its log gain and delay. In a figurative sense, let us think of the log gain of a system as an 'asset' and the delay of a system as the 'cost' of this asset. That is, log gain must always be paid for in delay. It turns out that the price that is paid is always larger than, or at best equal to, the amount of log gain received. The causal systems for which the price paid is equal to the amount of log gain received are the minimum-delay systems. Hence the minimum delay is the fair price for the log gain. All other causal systems, that is, the non-minimum-delay systems, have delays greater than the minimum. Hence for such a system the price paid for the log gain is greater than the fair price, and


88 E. A. ROBINSON

the extra amount paid is the difference between its delay and the minimum delay.

There are causal systems that have no log gain, and these systems are called all-pass systems. There are two kinds of all-pass systems. The first kind is the trivial all-pass system; this system has no delay. Since a trivial all-pass system has no log gain and no delay, we pay nothing for nothing, which is fair. The other kind is the non-trivial all-pass system, and this is the one with delay. Because a non-trivial all-pass system has no log gain but has delay, we pay something for nothing, which is unfair. We can now summarise:

Minimum-delay system:         asset = log gain; cost = minimum delay (fair price)
Non-minimum-delay system:     asset = log gain; cost = minimum delay plus extra delay (unfair price)
Trivial all-pass system:      asset = none; cost = none (fair price)
Non-trivial all-pass system:  asset = none; cost = delay (unfair price)

Of course, a trivial all-pass system is a system with the Dirac delta function or unit spike as its impulse response; such a system passes input to output with no change at all. That is, a trivial all-pass system is the identity operator.

From the above asset-cost tables, we may see the following. The overall system resulting from connecting a non-trivial all-pass system to a minimum-delay system is non-minimum-delay. Conversely, any non-minimum-delay system can always be decomposed into two tandem systems: a non-trivial all-pass system and a minimum-delay system. These results comprise the so-called canonical representation.1

What does it mean to pay the fair price for an asset? It means that we can sell the asset, and by so doing come back to our original position. A minimum-delay system is a system for which the fair price (i.e. the minimum delay) has been paid for the log gain. Consequently, for each minimum-delay system there is a realisable inverse system. If a signal is the input to a minimum-delay system, then we can recover this signal in its original form by passing the output of the minimum-delay system into the inverse system. The recovery of the signal is accomplished with no overall time delay. Hence in the transmission of information from input to output, a minimum-delay system neither destroys nor delays the information.


A non-minimum-delay system is a system for which an unfair price has been paid for the log gain. Consequently there is no causal inverse system for a non-minimum-delay system. Some non-minimum-delay systems do not destroy information about the signal, but only delay the information. For these systems, the original signal can be recovered, at least approximately, but with an overall time delay. Other non-minimum-delay systems destroy information about the signal so that the original signal cannot be recovered, even with an indefinitely long time delay.

The convolutional model of a reflection seismogram, as we have seen, is a negative feedback system and is thus necessarily a minimum-delay system.

FIG. 9. The deconvolutional model shown in the form of a feedforward system. (Input: the reflection seismogram; feedforward box: the autocorrelation, for lags > 0, of the reflectivity function; output: the reflectivity function.)

We recall that the feedback box consists of the autocorrelation (for lags greater than zero) of the reflectivity function. The inverse system can be found by inspection. Specifically, the inverse system is the feedforward system with the feedforward box consisting of the autocorrelation (for lags greater than zero) of the reflectivity function. The inverse system is depicted in Fig. 9.

Let us now summarise the results we have obtained up to this point. The convolutional model states that the reflection seismogram is the convolution of the reflectivity function with the section multiple train. As we have seen, the section multiple train is a waveform made up of the reverberations from all the different combinations of the layers. As a result the direct system whose impulse response is the section multiple train is a negative-feedback system. The feedback box of this direct system is made up of the autocorrelation (for positive lags) of the reflectivity function. Such a feedback system is necessarily a minimum-delay system, and hence there is a causal inverse system. This causal inverse system is a feedforward system, and in fact its feedforward box is the same as the feedback box of the direct system. Thus the impulse response of this inverse system is made up of 1 (corresponding to the straight path) together with the positive-lag


autocorrelation of the reflectivity function (corresponding to the feedforward box path). This impulse response represents the operator with which we convolve the reflection seismogram in order to obtain the reflectivity function. That is, this operator converts the observed reflection seismogram to the desired reflectivity function, and hence is the required deconvolution operator.

In brief, the convolutional model states that the reflection seismogram is the convolution of the reflectivity function with the section multiple train. The section multiple train can be regarded as the impulse response of the direct system representing the action of the earth's sedimentary layers. The deconvolution operator is the impulse response of the corresponding inverse system. The deconvolution operator thus removes the effects of the section multiple train and yields the reflectivity function. We therefore have the direct (or physical) system

Reflectivity function -> [Section multiple train] -> Reflection seismogram

and the inverse (or data processing) system

Reflection seismogram -> [Deconvolution operator] -> Reflectivity function

The known (or observational) information is the reflection seismogram, and the desired information is the reflectivity function. We know theoretically that the deconvolution operator is made up of unity followed by the positive-lag values of the autocorrelation of the reflectivity function. Because we do not know the reflectivity function, we must find some way to estimate the deconvolution operator from the known data (i.e. from the seismic trace). In order to find a method, we must first introduce the random reflection coefficient model.
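This structure of the theoretical deconvolution operator can be checked numerically. The sketch below (with assumed illustrative reflection coefficients, anticipating the two-layer example worked out in the appendix) builds the section multiple train as the impulse response of the feedback system 1/(1 + φ1z + φ2z²) and verifies that the operator made of unity followed by the positive-lag autocorrelations, [1, φ1, φ2], deconvolves it to a spike.

```python
import numpy as np

c0, c1, c2 = 0.2, 0.4, 0.1                  # illustrative reflection coefficients
phi1, phi2 = c0*c1 + c1*c2, c0*c2           # positive-lag autocorrelations

# Section multiple train = impulse response of 1/(1 + phi1*z + phi2*z^2),
# generated term by term from the feedback recursion.
n = 12
train = np.zeros(n)
train[0] = 1.0
for k in range(1, n):
    train[k] = -phi1 * train[k-1] - (phi2 * train[k-2] if k >= 2 else 0.0)

# Deconvolution operator: unity followed by the positive-lag autocorrelations.
operator = np.array([1.0, phi1, phi2])

# Convolving the operator with the multiple train collapses it to a spike.
spike = np.convolve(operator, train)[:n]
assert np.allclose(spike, np.eye(n)[0])
```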

5. RANDOM REFLECTION COEFFICIENTS

As we discussed earlier, many great oilfields in Texas and Oklahoma discovered in the early days of seismic prospecting were in areas which produced textbook-type seismograms. These seismograms, recorded in the field, showed beautiful primary reflections which accurately represented the sedimentary structure. The reason is that, in these particular areas, the sedimentary layers give rise to a sequence of reflection coefficients (i.e. the reflectivity function) which is of the nature of white noise. Due to this


randomness in the sedimentary column, the multiples all interact with each other and the net effect is that the multiples cancel each other out, except at the times of the primaries, where they build up and actually enhance the primary reflections.

When there are one or more strong reflecting layers in the sedimentary column, then the multiples from these layers build up and mask the primary events. For example, in marine exploration the water layer represents a non-attenuating medium bounded by two strong reflecting interfaces and hence represents an energy trap. A seismic pulse generated in this energy trap will be successively reflected between the two interfaces. Consequently, reflections from deep horizons below the water layer will be obscured by the water reverberations. As another example, a limestone layer at depth with strong reflecting interfaces can also produce multiple reflections which interfere with primary information on the seismogram.

Despite the presence of strong reflecting interfaces interspersed in the geologic column, there remain significant sections where the interfaces are characterised by reflection coefficients that are small and random. Hence on the corresponding sections of the reflection seismogram, the reflectivity function may be considered to be white. Thus by carefully selecting time gates on a reflection seismogram we are able to pick out sections where we may assume the reflectivity function is a white-noise function. We recall that the convolutional model states that the reflection seismogram is the output of a minimum-delay system. Its impulse response is the multiple train of the entire sedimentary section and its input is the reflectivity function. By proper selection of the time gate, we see that the input may be considered to be a white-noise series.

In this way we have specialised the convolutional model so that it can form the basis for a method to determine the required deconvolution operator. The specialisation states that:

(1) the earth acts as a minimum-delay system in producing the train of multiple events that appear on the reflection seismogram;

(2) the reflectivity function over a selected section of the sedimentary column is a white-noise function.

Thus this seismic model differs from any arbitrary convolutional model in that the seismic model is a minimum-delay system with a white-noise input. Because of these special features, the seismic model can be used as a basis for determining the deconvolution operator. In brief, the seismic model is a minimum-delay random reflection coefficient convolutional model.


6. THE DECONVOLUTIONAL OPERATOR

The seismic convolutional model has two characteristic features within the time gate of interest:

(1) the statistical feature that the primary events are due to a reflectivity function (i.e. series of reflection coefficients) given within the time gate by a random white-noise series;

(2) the deterministic feature that the multiple wave trains attached to the primary events have the same minimum-delay wavelet shape. (Of course, this section multiple train is due to the entire sedimentary section, i.e. to reflection coefficients both within and outside the time gate.)

The observational data are in the form of the observed seismic trace recorded at the surface of the ground. Let us now discuss the computation procedure used to determine the deconvolution operator (steps (1) and (2) below) and then to carry out the deconvolution (step (3) below).

The computational procedure consists of the following steps:2

(1) The first step is to compute the autocorrelation function of that portion of the seismic trace within the specified time gate.

(2) The second step is to compute the coefficients of the prediction error operator corresponding to that autocorrelation. This calculation involves solving a set of simultaneous equations called the normal equations. Because of the symmetries involved in these equations, a highly efficient computational procedure, called the Toeplitz recursion, may be used. The prediction error operator is the required deconvolution operator.

(3) The final step (namely the deconvolution itself) is to convolve the deconvolution operator with the seismic trace. Note that the 'deconvolution' of the trace is accomplished by 'convolving' the trace with the 'inverse operator', i.e. with the deconvolution (or prediction error) operator. The result of the deconvolution is the prediction error series. The prediction error series approximates the required reflectivity function within the given time gate.
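A sketch of these three steps in Python, under assumed synthetic data (the reflectivity, multiple train and operator length are illustrative, and scipy's `solve_toeplitz` stands in for the Toeplitz recursion named above):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)

# Synthetic trace: white reflectivity convolved with a minimum-delay
# multiple train (both assumed for illustration).
reflectivity = rng.normal(0.0, 0.1, 500)
multiple_train = np.array([1.0, -0.6, 0.3, -0.1])   # leading spike of 1
trace = np.convolve(reflectivity, multiple_train)

# Step 1: autocorrelation of the trace within the time gate.
n = 4                                                # operator length
r = np.correlate(trace, trace, 'full')[len(trace) - 1:][:n]

# Step 2: solve the normal (Toeplitz) equations for the prediction
# error operator, normalised to a leading coefficient of 1.
rhs = np.zeros(n)
rhs[0] = 1.0
x = solve_toeplitz(r, rhs)
operator = x / x[0]

# Step 3: deconvolve by convolving the operator with the trace.
estimate = np.convolve(trace, operator)[:len(reflectivity)]

# The prediction error series approximates the reflectivity function.
print(np.corrcoef(estimate, reflectivity)[0, 1])
```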

All of the above, of course, holds within the limitations of statistical errors imposed by noise, computational approximation and the finiteness of the data, and also within the limitations of specification errors imposed by the model. The success of the method of predictive deconvolution depends


largely upon the validity of the basic hypotheses as to the minimum-delay nature of the section multiple train waveform and to the random uncorrelated nature of the reflectivity function within the specified time gate. The power of the method of predictive deconvolution rests in the fact that it is a stable and robust method in which the only data required are the received seismic trace. The method of predictive deconvolution is used successfully on a day-to-day basis to deconvolve field records in all seismic environments, both land and marine. The general success of the method shows that the basic hypotheses are valid over this wide range of field situations and operating conditions.

7. THE SOURCE WAVELET

In our treatment to this point we have considered only the basic seismic model, and not the many other factors which are involved in practice. One of the important factors is the shape of the source wavelet. We have assumed in our previous development that the source wavelet is a spike. When the source wavelet is not a spike, then the actual seismogram is given by the convolution of the source wavelet with a seismogram produced by a spike source. Thus the removal of the source wavelet from an actual seismogram represents a deconvolution problem in addition to the deconvolution problem for the multiples.

In those cases when the source wavelet is minimum-delay, then the two deconvolution problems blend together. A deconvolution operator computed by the method described in Section 6 will actually remove both the multiple wave train and the minimum-delay source wavelet simultaneously. However, in those cases when the source wavelet is not minimum-delay, the solution is not so simple.

We recall that a non-minimum-delay wavelet (or system) can be uniquely represented as the convolution of an all-pass system (which has a flat magnitude spectrum) and a minimum-delay wavelet (which has a magnitude spectrum with the same shape, or colour, as the given non-minimum-delay wavelet). This representation is called the canonical representation. Let us call anything that has a flat magnitude spectrum white, and anything that has a curved magnitude spectrum coloured. The canonical representation states that a non-minimum-delay wavelet is equal to the convolution of a minimum-delay wavelet with the same colour and an all-pass system which is white.
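The canonical representation can be checked for the two-term example used earlier (an illustrative pair, not from the text): the non-minimum-delay wavelet (1, 2) equals the minimum-delay wavelet (2, 1) convolved with the all-pass filter A(z) = (1 + 2z)/(2 + z), whose magnitude spectrum is flat.

```python
import numpy as np
from scipy.signal import lfilter

# All-pass filter A(z) = (1 + 2z) / (2 + z): compute its impulse response.
impulse = np.zeros(32)
impulse[0] = 1.0
allpass = lfilter([1.0, 2.0], [2.0, 1.0], impulse)

# The all-pass response is white: |A| = 1 at every frequency.
assert np.allclose(np.abs(np.fft.fft(allpass, 256)), 1.0, atol=1e-6)

# Convolving the minimum-delay wavelet (2, 1) with the all-pass response
# reconstructs the non-minimum-delay wavelet (1, 2).
recon = np.convolve([2.0, 1.0], allpass)[:2]
print(recon)   # [1. 2.]
```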


The seismic convolutional model in the case of an arbitrary source wavelet is

seismic trace = source wavelet*section multiple wave train*reflectivity function

In this model, a non-minimum-delay source wavelet has both a coloured and a white component, i.e.

source wavelet = minimum-delay wavelet*all-pass wavelet

We can thus write our seismic trace as

seismic trace = [minimum-delay wavelet*section multiple wave train] * [all-pass wavelet*reflectivity function]

In this equation there are two sets of brackets on the right. In the first set we have combined all the minimum-delay components (i.e. minimum-delay wavelet and the section multiple wave train). Because the convolution of two minimum-delay waveforms is itself minimum-delay, the first set of brackets represents a minimum-delay component. In the second set of brackets we have combined all the white components (i.e. the all-pass wavelet and the reflectivity function). Because the convolution of two white waveforms is itself white, the second set of brackets represents a white component.

The process of predictive deconvolution separates on the basis of minimum-delay and white. Thus the method of predictive deconvolution will remove the minimum-delay component and yield the white component. That is, if we deconvolve the seismic trace as given by the above model we will obtain as the output the white component:

[all-pass wavelet*reflectivity function]

Thus we do not obtain the desired reflectivity function, but obtain instead the reflectivity function passed through an unknown all-pass filter. If the source wavelet were close to a minimum-delay wavelet, then the action of the all-pass system would not be severe, and the above result would be acceptable. Otherwise the desired reflectivity function could be appreciably distorted by this unknown and unwanted all-pass system.

In the above discussion we have assumed that we do not know the shape of the non-minimum-delay source wavelet. However, in many land and marine situations we are in a position to measure the actual source wavelet (or signature) transmitted into the earth. In such a case we effectively know the all-pass system, and as a result it can be removed in the processing. In


actual practice, however, one usually removes the entire source wavelet by a signature deconvolution process before predictive deconvolution is applied. Signature deconvolution makes use of least-squares shaping or spiking filters for the design of the deconvolution operator.

8. CONCLUDING REMARKS

In this chapter we have discussed the conceptual aspects of deconvolution in a non-mathematical way. The reason for this approach is that most treatments of deconvolution are highly mathematical, and as a result many of the basic ideas tend to be obscured by the mathematical formalism. However, for those readers who would like to experiment with the deconvolution operations we have added a mathematical appendix. In this appendix we have worked out the mathematical system in the case of two sedimentary layers, and it turns out that many of the important points can actually be illustrated in such a simple case. For systems with many layers a computer would of course have to be used, as the algebraic and arithmetic manipulations become formidable with the addition of even a few more layers. It is our belief that once one grasps the ideas in a simple case, then the extension to more difficult cases becomes a pleasurable journey. At this point one can readily profit from the many articles and books written on deconvolution and related aspects of digital processing. In particular, the two-volume collection edited by Webster3 is highly recommended.

REFERENCES

1. ROBINSON, E. A., Random wavelets and cybernetic systems, p. 50. Charles Griffin and Co., High Wycombe, Bucks, England, 1962.

2. ROBINSON, E. A., Multichannel z-transforms and minimum-delay, Geophysics, 31, pp. 482-500, 1966.

3. WEBSTER, G. M., Deconvolution (2 vols.), Society of Exploration Geophysicists, Tulsa, Oklahoma, 1978.

4. O'DOHERTY, R. F. and ANSTEY, N. A., Reflections on amplitudes, Geophys. Prospecting, 19, pp. 430-58, 1971.

BIBLIOGRAPHY

ANSTEY, N. A., Seismic prospecting instruments, Vol. I: Signal characteristics and instrument specifications, Gebruder Borntraeger, Berlin, 1970.


DOBRIN, M. B., Introduction to geophysical prospecting, McGraw-Hill, New York, 1976.

FITCH, A. A., Seismic reflection interpretation, Gebruder Borntraeger, Berlin, 1976.

ROBINSON, E. A., Dynamic predictive deconvolution, Geophys. Prospecting, 23, pp. 779-97, 1975.

SILVIA, M. T. and ROBINSON, E. A., Deconvolution of Geophysical Time Series in the Exploration for Oil and Natural Gas, Elsevier, Amsterdam, 1979.

APPENDIX

In this appendix we make use of algebraic manipulations in order to justify some of the developments in the chapter. Let us look at a single horizontal interface between two sedimentary layers. Suppose that a down-going spike of unit amplitude strikes the interface. As we know from classical physics, some of the energy is transmitted through the interface and some is reflected back from the interface. The reflection coefficient c is defined as the amplitude of the resulting up-going reflected spike, and the transmission coefficient t is defined as the amplitude of the resulting down-going

transmitted spike. This relationship is illustrated in Fig. 10.

FIG. 10. Reflection coefficient c and transmission coefficient t = 1 + c in the case of a down-going wave striking an interface, and reflection coefficient c' = -c and transmission coefficient t' = 1 - c in the case of an up-going wave striking an interface.

Note that in this diagram, as well as in others, we are illustrating plane waves propagating at normal incidence to the interfaces, but ray paths are given a horizontal displacement to simulate the passage of time in the horizontal direction. From physical reasoning it can be established that a given up-going unit spike striking the interface from below gives rise to a down-going reflected spike of amplitude c' and an up-going transmitted spike of amplitude t'. The relationships between the coefficients c, t, c' and t' are

t = 1 + c,    c' = -c,    t' = 1 - c



FIG. 11. Illustration of the various reflection coefficients, transmission coefficients and two-way transmission coefficients.

and so actually there is only one independent quantity from which all the others can be derived. For convenience we pick c as this quantity; thus c characterises any given interface.

Figure 11 represents the various reflection and transmission coefficients for a two-layer system. Note that the two-way transmission coefficient is defined as

tt' = (1 + c)(1 - c) = 1 - c²

Figure 12 depicts a two-layer system with equal travel times in each layer. The source is a down-going unit spike at the upper interface. The resulting down-going spike train through the lower interface is called the transmission response. The consecutive spikes in the transmission response are called the direct transmission, the first transmission, the second transmission and so on ad infinitum. The up-going spike train through the upper interface is called the reflection response. The consecutive spikes in the reflection response are called the direct reflection, the first reflection, the second reflection, etc. The reflection response is the reflection seismogram (or seismic trace) that we record at the surface of the ground. The sequence of reflection coefficients associated with the interfaces (in this case the sequence is simply c0, c1, c2) is the reflectivity function.

With reference to Fig. 12, the direct transmission follows the path ABC, and so is the product of the transmission coefficients of the three interfaces. Because this product appears in all the transmission spikes, we designate the product by T, i.e.

Direct transmission = T = (1 + c0)(1 + c1)(1 + c2)


FIG. 12. A two-layer system with a spike source, showing the transmission spikes (called the direct, first, second, etc.) which make up the transmission response, and the reflection spikes (called the direct, first, second, etc.) which make up the reflection response (or reflection seismogram). The normalised transmission response is called the geologic-section multiple-reflection wave train.

The first transmission is the result of two paths, namely ABDEF (giving -Tc0c1) and ABCEF (giving -Tc1c2). Thus the first transmission is

First transmission = -T(c0c1 + c1c2)

Each of the transmission spikes has the factor T. If we divide the entire transmission response by the factor T we get a spike train with leading spike equal to 1. As we will see, this normalised transmission response is what we have called the 'section multiple waveform' in the chapter.

The direct reflection is the reflection off the upper interface, so it is simply c0. The first reflection follows path ABD, so it is the product of the two-way transmission coefficient 1 - c0² and the reflection coefficient c1. That is, the first reflection is

First reflection = c1(1 - c0²)

The second reflection is the result of two paths, namely the primary path


ABCEG, which contributes c2(1 - c1²)(1 - c0²), and the multiple path ABDEG, which contributes -(1 - c0²)c0c1². That is,

Second reflection = c2(1 - c1²)(1 - c0²) - (1 - c0²)c0c1²

Let us now return to the transmission response. The first-lag autocorrelation coefficient φ1 of the reflectivity function c0, c1, c2 is

φ1 = c0c1 + c1c2

We can immediately write the first transmission in terms of φ1 as follows:

First transmission = -T(c0c1 + c1c2) = -Tφ1

Thus we have the important result that the first transmission is equal to -Tφ1, that is, the negative of the product of the direct transmission with the first-lag autocorrelation of the reflectivity.

The second transmission

Tc0²c1² + Tc1²c2² + Tc0c1²c2 - T(1 - c1²)c0c2

is made up of four paths, as seen in Fig. 13. The first three paths in this figure are labelled 'physical first-order reverberation paths'. At first glance one might say that these paths are in the first-order reverberation loops depicted in Fig. 7. Likewise the fourth path in Fig. 13 is labelled 'physical second-order reverberation path'. Similarly one might say at first glance that this path is in the second-order reverberation loop of Fig. 7. However, such a correspondence between Figs. 13 and 7 would not be correct. The reason is that the reverberation paths of Fig. 7 are not physical but contrived, because in Fig. 7 no account is taken of transmission factors. Let us now show the relationship between the physical and the contrived. In Fig. 13, the direct transmission factor T occurs for each reverberation path, so it is harmless and we can leave it alone. However, the two-way


FIG. 13. The four physical paths which make up the second transmission spike and which represent the physical transfer of energy.


transmission coefficient (1 - c1²) which occurs for the physical second-order reverberation path in Fig. 13 must be split up for the contrived representation. Thus we write the physical second-order reverberation path as

-T(1 - c1²)c0c2 = -Tc0c2 + Tc0c1²c2

The first term on the right represents a contrived second-order reverberation path (in the sense of Fig. 7), whereas the second term on the right represents a contrived first-order reverberation path (in the sense of Fig. 7). Thus the second transmission is now written as

Tc0²c1² + Tc1²c2² + 2Tc0c1²c2 - Tc0c2

which is made up of the four contrived paths (in the sense of Fig. 7). The contrived paths, as advertised, contain no two-way transmission coefficients. The physical picture given in Fig. 13 and the contrived picture given in Fig. 14 are equivalent. We now must identify the second transmission with the autocorrelation of the reflectivity function. The first-lag autocorrelation, as we recall, is φ1 = c0c1 + c1c2, whereas the second-lag autocorrelation is φ2 = c0c2. The second transmission is therefore

T(c0²c1² + 2c0c1²c2 + c1²c2²) - Tc0c2 = T(φ1² - φ2)

In a similar manner, we do the same thing for the third transmission as we did for the second. We find that the third transmission is

Third transmission = -Tφ1(φ1² - 2φ2)

We could continue this process indefinitely and find expressions for all the higher transmissions in terms of φ1 and φ2.


FIG. 14. The four hypothetical (or non-physical or contrived) paths which also make up the second transmission spike and which have the mathematical property that no two-way transmission coefficients appear.


We recall that our time unit is chosen such that the two-way travel time in each layer is one unit. Let z be the unit time-delay operator. From Fig. 12 we see that the first transmission is delayed one time unit from the direct transmission, the second transmission is delayed two time units from the direct transmission and so on. Hence the entire transmission response can be represented by the z-transform:

Transmission z-transform
= (direct transmission) + (first transmission)z + (second transmission)z² + ...
= T - Tφ1z + T(φ1² - φ2)z² - Tφ1(φ1² - 2φ2)z³ + ...
= T[1 - φ1z + (φ1² - φ2)z² - φ1(φ1² - 2φ2)z³ + ...]

The expression in brackets is an infinite summation that has a simple closed form, namely

Transmission z-transform = T(1 + φ1z + φ2z²)⁻¹

which is recognised as a pure feedback system. Except for the constant factor T in the numerator, this feedback system is the one shown in Fig. 8 in the chapter.
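This closed form can be verified numerically: expanding T/(1 + φ1z + φ2z²) as a power series by long division reproduces the transmission spikes derived above (the reflection coefficients below are illustrative, assumed values).

```python
import numpy as np

c0, c1, c2 = 0.2, 0.4, 0.1           # illustrative reflection coefficients
phi1 = c0*c1 + c1*c2                  # first-lag autocorrelation
phi2 = c0*c2                          # second-lag autocorrelation
T = (1 + c0) * (1 + c1) * (1 + c2)    # direct transmission

# Power-series expansion of T / (1 + phi1*z + phi2*z^2) by long division.
n_terms = 4
coeffs = np.zeros(n_terms)
coeffs[0] = T
for k in range(1, n_terms):
    coeffs[k] = -phi1 * coeffs[k-1] - (phi2 * coeffs[k-2] if k >= 2 else 0.0)

# Compare with the closed-form transmission spikes from the text.
expected = T * np.array([1.0,
                         -phi1,
                         phi1**2 - phi2,
                         -phi1 * (phi1**2 - 2*phi2)])
assert np.allclose(coeffs, expected)
```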

In seismic prospecting we measure the reflection response and from it try to compute the reflectivity function. In the case of the two layers shown in Fig. 12, the reflectivity function is the sequence c0, c1, c2. We see immediately that the direct reflection gives us c0; however, the first reflection is c1(1 - c0²), which is the desired reflection coefficient c1 multiplied by the two-way transmission factor (1 - c0²). The second reflection is even more contaminated. It is made up of the desired reflection coefficient c2 multiplied by the two-way transmission factors (1 - c1²)(1 - c0²) plus an additive multiple reflection -(1 - c0²)c0c1². The direct path for the second reflection is ABCEG, whereas the multiple path is ABDEG. Thus two things are working against us in our quest for c2: namely the two-way transmission factors and the multiple reflections. Thus the reflection coefficient is like Ulysses between Scylla (transmission factors) and Charybdis (multiple reflections). Our strategy is to have Scylla and Charybdis destroy each other, thus giving us the reflection coefficients.

Let us write the z-transform of the reflection response. It is

Reflection z-transform

= (direct reflection) + (first reflection)z + (second reflection)z² + (third reflection)z³ + ...

= c₀ + [c₁(1 − c₀²)]z + [c₂(1 − c₁²)(1 − c₀²) − (1 − c₀²)c₀c₁²]z² + ...

Page 110: Developments in Geophysical Exploration Methods

102 E. A. ROBINSON

We now manipulate this expression so as to split off the two-way transmission factors. We obtain

Reflection z-transform

= c₀ + [−c₀(c₀c₁ + c₁c₂) + (c₁ + c₀c₁c₂)]z
  + {c₂ − (c₀c₁ + c₁c₂)(c₁ + c₀c₁c₂) + c₀[(c₀c₁ + c₁c₂)² − c₀c₂]}z² + ...

= [c₀ + (c₁ + c₀c₁c₂)z + c₂z²]{1 − (c₀c₁ + c₁c₂)z + [(c₀c₁ + c₁c₂)² − c₀c₂]z² + ...}

We have already defined the autocorrelation of the reflectivity function c₀, c₁, c₂ as

φ₁ = c₀c₁ + c₁c₂        φ₂ = c₀c₂

Thus we have

Reflection z-transform

= [c₀ + (c₁ + c₀c₁c₂)z + c₂z²][1 − φ₁z + (φ₁² − φ₂)z² + ...]

A mathematical theorem states that multiplication of the z-transforms of two functions corresponds to the convolution of the functions. Thus the above equation is equivalent to the equation

Reflection seismogram = [c₀, c₁ + c₀c₁c₂, c₂] * [1, −φ₁, φ₁² − φ₂, ...]

where the asterisk denotes the mathematical operation of convolution. The first set of brackets on the right encloses the sequence

[c₀, c₁ + c₀c₁c₂, c₂]

Under the small reflection coefficient hypothesis, the product of three reflection coefficients will be negligible in comparison to any one of them, so the above sequence can be approximated by

[c₀, c₁, c₂] = reflectivity function

The second set of brackets encloses the sequence

[1, −φ₁, φ₁² − φ₂, ...]

which is the normalised transmission function, or in other words what we call the 'section multiple train'. Thus the above convolutional equation is

reflection seismogram = [reflectivity function]*[section multiple train]


PREDICTIVE DECONVOLUTION 103

FIG. 15. The optimal case in reflection prospecting (with a spike source). An approximately white reflectivity function (reflection coefficients on the right), the spike-like transmission response (values on down-going arrows at bottom), and the reflectivity-like reflection response (values on up-going arrows at top).

which is the convolutional model of the reflection seismogram. We recall that

1 − φ₁z + (φ₁² − φ₂)z² − φ₁(φ₁² − 2φ₂)z³ + ... = (1 + φ₁z + φ₂z²)⁻¹

Thus the reflection z-transform is

Reflection z-transform = [c₀ + (c₁ + c₀c₁c₂)z + c₂z²]/(1 + φ₁z + φ₂z²)

or (approximately)

reflection z-transform = (reflectivity z-transform)/(1 + φ₁z + φ₂z²)

This equation corresponds to Fig. 9. The denominator on the right corresponds to the feedback filter, the numerator on the right corresponds to the input and the left-hand side to the output of the filter.
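This convolutional model is easy to verify numerically. The sketch below (Python with NumPy; the coefficient values are arbitrary illustrative choices, not taken from the text) expands the section multiple train 1/(1 + φ₁z + φ₂z²) by long division and convolves it with the exact sequence [c₀, c₁ + c₀c₁c₂, c₂]:

```python
import numpy as np

# Arbitrary illustrative reflection coefficients (not from the text)
c0, c1, c2 = 0.1, 0.3, -0.15

# Autocorrelation coefficients of the reflectivity function
phi1 = c0 * c1 + c1 * c2
phi2 = c0 * c2

# Section multiple train: long-division expansion of 1/(1 + phi1*z + phi2*z**2)
n = 8
train = np.zeros(n)
train[0] = 1.0
for k in range(1, n):
    train[k] = -phi1 * train[k - 1] - (phi2 * train[k - 2] if k >= 2 else 0.0)

# Numerator of the reflection z-transform
numer = np.array([c0, c1 + c0 * c1 * c2, c2])

# Reflection response = (numerator) * (multiple train), i.e. a convolution
reflection = np.convolve(numer, train)[:n]

# The first samples agree with the ray-path expressions in the text
assert abs(reflection[0] - c0) < 1e-12
assert abs(reflection[1] - c1 * (1 - c0**2)) < 1e-12
assert abs(reflection[2] - (c2 * (1 - c1**2) * (1 - c0**2)
                            - (1 - c0**2) * c0 * c1**2)) < 1e-12
```

The first three output samples reproduce the ray-path expressions c₀, c₁(1 − c₀²) and c₂(1 − c₁²)(1 − c₀²) − (1 − c₀²)c₀c₁² derived above.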

A three-term reflectivity function cannot truly be white. However, let us choose one that exhibits whiteness as nearly as possible. In Fig. 15 we illustrate a case where we have chosen the reflectivity function c₀ = 0·2, c₁ = 0·4, c₂ = −0·2. Such reflection coefficients are larger in magnitude than those that would usually occur in nature, but even so we will see that the small reflection coefficient hypothesis is still upheld here. The autocorrelation of the reflectivity function is

φ₁ = c₀c₁ + c₁c₂ = 0        φ₂ = c₀c₂ = −0·04

which, except for φ₂, is zero. The resulting reflection response and transmission response are shown in Fig. 15. The transmission response is

(1·344, 0, 0·054, 0, ...)

which is essentially a spike corresponding to the direct transmission. Except for this spike very little energy is transmitted through the lower layer. The reflection response is

(0·200, 0·384, −0·192, 0·015, ...)

which is approximately equal to the desired reflectivity function

(0·200, 0·400, −0·200, 0·000, ...)

Thus, two-way transmission effects (Scylla) and multiple reflections (Charybdis) have destroyed each other to yield a safe passage for the reflectivity function (Ulysses).

Note that the expression for the second reflection (see Fig. 12) is

c₂(1 − c₁²)(1 − c₀²) − (1 − c₀²)c₀c₁² = −0·2(0·84)(0·96) − (0·96)(0·2)(0·16)
                                      = −0·161 − 0·031 = −0·192

Note that the direct reflection path resulted in the reflection coefficient −0·2 being diminished by the two-way transmission factor (0·84)(0·96). However, this loss was offset to a large extent by the addition of the multiple −(0·96)(0·2)(0·16). That is, the multiple from a shallower layer (i.e. interface 1) compensates for the transmission loss to a deeper layer (i.e. interface 2). This action is comparable to a train that leaves San Francisco for Boston via Salt Lake City, Houston and Washington. (This path corresponds to the primary reflection.) At Salt Lake City, some cars are taken off and sent on another train to Chicago and then to Washington. (This path is the interbed bounce of the shallow multiple.) At Washington, these cars rejoin the original train and proceed to Boston. At Boston, we look at the whole train. Only some of the cars reached Houston (i.e. the deep reflector), but as far as we are concerned the entire train made the trip (i.e. both deep primary and shallow multiple contribute to the reflected event).
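The Fig. 15 numbers can be reproduced in a few lines of Python (a sketch; taking (1 + c) as the transmission coefficient through each interface for the down-going direct pulse is an assumption, chosen because it reproduces the quoted value T = 1·344):

```python
# Reflection coefficients chosen for Fig. 15
c0, c1, c2 = 0.2, 0.4, -0.2

phi1 = c0 * c1 + c1 * c2   # = 0.0 for these values
phi2 = c0 * c2             # = -0.04

# Direct transmission factor; (1 + c) per interface is assumed here,
# which reproduces the value 1.344 quoted in the text
T = (1 + c0) * (1 + c1) * (1 + c2)

# Transmission response: long-division expansion of T/(1 + phi1*z + phi2*z**2)
trans = [T]
for k in range(1, 5):
    prev2 = trans[k - 2] if k >= 2 else 0.0
    trans.append(-phi1 * trans[k - 1] - phi2 * prev2)

# Second reflection: primary diminished by two-way transmission,
# plus the interbed multiple from the shallower interface
second = c2 * (1 - c1**2) * (1 - c0**2) - (1 - c0**2) * c0 * c1**2

print([round(v, 3) for v in trans])
print(round(second, 3))
```

The first line reproduces the transmission response (1·344, 0, 0·054, 0, ...) of Fig. 15, and the second reflection comes out as −0·192, as computed above.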

Finally, let us compare our approach with that of O'Doherty and Anstey.4 They consider a basic thin plate, defined between interfaces having reflection coefficients of opposite sign. In Fig. 16 we show the primary path in a two-layer system on the left and the paths of two interbed multiples on the right. They make the assumption that the basic thin plate is thin enough that these two interbed multiples arrive at virtually the same time as the primary, so that essentially all three can be added together to give

(1 − c₀²)(1 − c₁²)c₂ − 2c₀c₁(1 − c₀²)(1 − c₁²)c₂ = (1 − c₀²)(1 − c₁²)(1 − 2c₀c₁)c₂

Thus the reflection coefficient c₂ is multiplied by the two-way transmission factor (1 − c₀²)(1 − c₁²) and the basic thin plate factor (1 − 2c₀c₁). Because

FIG. 16. Primary path and the two O'Doherty-Anstey reinforcing multiple paths.

the basic thin plate is defined as a layer with reflection coefficients of opposite sign (so c₀c₁ is negative), the basic thin plate factor is greater than unity. The two-way transmission factor is of course less than unity, so the two factors tend to cancel each other out, leaving the desired reflection coefficient c₂.

Our approach on the other hand makes use of multiples, not from the target horizon c₂, but from the shallower horizon c₁. Such shallower multiples arrive at exactly the same time as the primary; they never touch the target horizon as the O'Doherty-Anstey multiples do. Our approach is illustrated in Fig. 17. The total response of the primary plus multiple is what we have called the second reflection in Fig. 12, namely

second reflection = c₂(1 − c₁²)(1 − c₀²) − (1 − c₀²)c₀c₁²

which from the convolutional equation is

second reflection = c₂ − φ₁(c₁ + c₀c₁c₂) + c₀(φ₁² − φ₂)

FIG. 17. Primary path and our reinforcing multiple path under the hypothesis of a white reflectivity function.

Here φ₁ and φ₂ are the autocorrelations of the reflection coefficient series (i.e. the reflectivity function). Under the white reflection hypothesis, these autocorrelation coefficients are approximately zero (as a purely random white-noise series has zero autocorrelation for non-zero lags). That is, φ₁ ≈ 0 and φ₂ ≈ 0 under the white reflection coefficient hypothesis, and so

second reflection ≈ c₂ − 0(c₁ + c₀c₁c₂) + c₀(0 − 0) = c₂
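In fact the ray-path and series expressions for the second reflection agree identically, not only under the white hypothesis. A quick numerical check (plain Python; the coefficient triples are illustrative, the first being the Fig. 15 values):

```python
# The ray-path and z-transform series expressions for the second reflection
# are algebraically identical for any reflection coefficients
for c0, c1, c2 in [(0.2, 0.4, -0.2), (0.1, -0.25, 0.3), (-0.15, 0.05, 0.12)]:
    phi1 = c0 * c1 + c1 * c2   # first autocorrelation coefficient
    phi2 = c0 * c2             # second autocorrelation coefficient
    ray = c2 * (1 - c1**2) * (1 - c0**2) - (1 - c0**2) * c0 * c1**2
    series = c2 - phi1 * (c1 + c0 * c1 * c2) + c0 * (phi1**2 - phi2)
    assert abs(ray - series) < 1e-12
    print(round(ray, 6))
```

With the Fig. 15 values the first pass through the loop reproduces the −0·192 computed above.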

Much seismic processing is done with a sampling unit of 4 ms, and the random reflection hypothesis is more compatible with such gross spacing than with the more refined spacing of O'Doherty and Anstey.4 However, as we have seen, there is no conflict between the two approaches, as they are each concerned with different physical hypotheses, and both are instructive in regard to our final goal of a better understanding and utilisation of the seismic method.


Chapter 5

EXPLORATION FOR GEOTHERMAL ENERGY

G. V. KELLER

Group Seven, Inc., 777 South Wadsworth Boulevard, Lakewood, Colorado 80226, USA

SUMMARY

The growth in importance of geothermal energy as an alternative to other energy resources in recent years has led to the development of new geophysical, geological and geochemical exploration techniques which are particularly suited to the problem. Often, geothermal reservoirs have little or no direct expression, and indirect exploration methods must be used. Measurement of temperature and heat flow in holes drilled to shallow or moderate depth is a primary approach, but other techniques which are also finding application are large-scale electrical surveys, passive seismic surveys, analysis of magnetic data to detect the depth to Curie-point effects, and soil and water geochemical studies.

1. INTRODUCTION

Heat energy from the interior of the earth (geothermal energy) has been used by man since prehistoric time to provide comforts. Even early in this century, underground steam was used to produce electricity, at Larderello in northern Italy and at the Geysers in northern California. However, until the development of a series of crises in the supply of oil during the 1970s, geothermal energy had to be considered an exotic energy source, contributing only a negligible portion of the total energy supply. During the 1970s geothermal energy became one of several alternative energy sources considered to displace the use of oil. It was realised that, in order for geothermal energy to play any significant role in the total energy supply, it would be necessary to find many more geothermal systems than those which have obvious surface manifestations such as hot springs and fumaroles. This has led to the application of various geological, geophysical and geochemical methods of exploration to a relatively novel type of exploration: that of defining a producible geothermal system.

The first aspect of defining a geothermal system is the practical one of how much power can be produced. In most cases, the principal reason for developing geothermal energy is to produce electric power, though an alternative reason can be the use of geothermal heat in process applications or space heating. The typical geothermal system used for electric power generation must yield approximately 10 kg of steam to produce one unit (kWh) of electricity. Production of large quantities of electricity, at rates of hundreds of megawatts, requires the production of great volumes of fluid. Thus, one aspect of a geothermal system is that it must contain great volumes of fluid at high temperatures, or a reservoir which can be recharged with fluids that are heated by contact with the rock. A geothermal reservoir should lie at depths that can be reached by drilling. It is unreasonable to expect a hidden geothermal reservoir at depths shallower than 1 km; it is undesirable to search for geothermal reservoirs at the present time which lie deeper than 3 or 4 km.

Experience has shown that each well drilled in a geothermal field must be capable of supporting 5 MW of electrical production; this corresponds to a steam production of 50 tonnes/h. To accomplish this, a well must penetrate permeable zones, usually fractures, which can support a high rate of flow. In many geothermal fields, wells are spaced to produce 25-30 MW km⁻². At a few locations, where the reservoir consists of a highly fractured and shattered rock and interference between wells is not important, production rates may reach several hundred megawatts per square kilometre over small areas.
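The steam-rate arithmetic implied by these figures can be checked directly (a back-of-envelope sketch; note that at the quoted 10 kg/kWh, a 5 MW well corresponds to about 50 t/h of steam):

```python
# Steam requirements implied by the chapter's figure of roughly
# 10 kg of steam per kWh of electricity
steam_per_kwh_kg = 10.0

def steam_rate_t_per_h(megawatts):
    """Steam mass flow (tonnes/hour) needed to sustain a given electrical output."""
    kwh_per_hour = megawatts * 1000.0                 # MW -> kWh generated each hour
    return kwh_per_hour * steam_per_kwh_kg / 1000.0   # kg/h -> tonnes/h

print(steam_rate_t_per_h(5))     # a single 5 MW well: 50 t/h
print(steam_rate_t_per_h(200))   # a 200 MW field: 2000 t/h of fluid
```

This is why production at rates of hundreds of megawatts requires such great volumes of fluid.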

The geological setting in which a geothermal reservoir is to be found varies widely. The major geothermal fields that have been developed around the world occur in rocks that range from limestone to shale, volcanic rock and granite. Volcanic rocks are probably the most common single rock type in which reservoirs do occur. Rather than being identified with a specific lithology, geothermal reservoirs are more closely associated with heat flow systems. Many of the developed geothermal reservoirs around the world occur in convection systems in which hot water rises from deep within the earth and is trapped in reservoirs where a cap rock has been formed by silicification or precipitation of other mineral elements. Therefore, with respect to geology, the factors which are important in identifying a geothermal reservoir are not rock units, but rather the existence of tectonic elements such as fracturing, and the presence of high heat flow.

FIG. 1. Diagrammatic cross sections of hypothetical geothermal systems. The system on the left is not closely associated with an intrusion, but results from higher than normal thermal gradients through a sequence of thermally resistant rocks. The system on the right has an intrusion as the heat source.1

The common locations for high heat flow that give rise to geothermal systems include rift zones, subduction zones and mantle plumes where, for reasons that are not fully understood, heat is transported at high rates from the mantle to the crust of the earth.

Geothermal energy can also occur in areas where thick blankets of thermally insulating sediment cover basement rock with relatively normal heat flow. Geothermal systems based on the thermal blanket model are generally of lower grade than those with volcanic origin.

An important aspect of a model for a geothermal system in a volcanic area appears to be the existence of roots for the geothermal system; that is, the existence of an intruded hot rock mass beneath the area where the shallower reservoirs are expected to occur. The roots of a geothermal system may consist of a pluton, or a complex of dykes, depending upon the rock type injected (see Fig. 1).


An exploration programme for geothermal energy can be based on a number of effects that would be associated with the intrusive model for a geothermal system. The model system that we hypothesise needs roots in the form of intrusions which have occurred recently enough that excess heat still exists; that is, which have been intruded no more than half a million to one million years ago. The region above the roots of the geothermal system must be fractured by tectonic activity, fluids must be available for circulation in a convection cell, and precipitation of a cap rock must have taken place. All of these things provide a target for the application of geological, geophysical and geochemical prospecting techniques. Because of the high levels of temperature involved, both in the geothermal reservoir and in the roots of the geothermal system, it is expected that major changes in physical, chemical and geological characteristics of the rock will occur; all of these can be used in the design of an exploration system. Also, heat is not easily confined in small volumes of rock. Rather, heat diffuses readily, and a large volume of rock around a geothermal system will have its properties altered. Therefore the rock volume in which anomalies in properties are to be expected will generally be large. Exploration techniques need not possess a high level of resolution. Rather, in geothermal exploration we seek an approach which is capable of providing a high level of confidence that geothermal fluids will be recovered on drilling.

An evaluation programme for a regional geothermal base begins with a review and coordination of the pertinent existing data. All heat flow data that have been acquired for various reasons should be re-evaluated, regridded, smoothed, averaged and plotted out in a variety of forms in an attempt to identify areas with higher than normal average heat flow (a heat flow map for the western USA is shown in Fig. 2). Similarly, the volumes of volcanic products with ages younger than 10⁶ years should be tabulated on a similar base to provide a longer-range estimate of anomalous heat flow from the crust. Because fracturing is important, levels of seismicity should be analysed, averaged and presented on a common base (a seismicity map for the western USA is shown in Fig. 3). All information on thermal springs and warm springs should be quantified in some form and plotted on the same base. Comparison of these four sets of data, which relate directly to the characteristics of the fundamental geothermal model listed above, will yield a pattern which shows the favourability of an area for the occurrence of specific geothermal reservoirs. These areas should then be tested further by many geophysical, geological and geochemical techniques designed to locate specific reservoirs from which fluids can be produced.
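The regridding and smoothing step might look like the following (a minimal sketch in Python with NumPy; the station coordinates and heat-flow values are invented, and the flat-earth angular distance is a simplification of the 2° smoothing used for the maps in Figs. 2 and 3):

```python
import numpy as np

def smooth_on_grid(lons, lats, values, grid_lons, grid_lats, radius_deg=2.0):
    """Average scattered observations lying within a fixed angular radius of
    each grid node (crude flat-earth distance; not map-projection-correct)."""
    out = np.full((len(grid_lats), len(grid_lons)), np.nan)
    for i, glat in enumerate(grid_lats):
        for j, glon in enumerate(grid_lons):
            d2 = (lons - glon) ** 2 + (lats - glat) ** 2
            near = d2 <= radius_deg ** 2
            if near.any():
                out[i, j] = values[near].mean()
    return out

# Tiny synthetic example: three stations, one grid node at the origin
lons = np.array([0.5, -1.0, 5.0])
lats = np.array([0.0, 1.0, 5.0])
hf   = np.array([2.0, 4.0, 9.0])   # heat flow, arbitrary units
grid = smooth_on_grid(lons, lats, hf, grid_lons=[0.0], grid_lats=[0.0])
print(grid)   # only the two stations within 2 degrees are averaged -> 3.0
```

The same averaging applies equally to seismicity or spring data, which is what puts the four data sets on a common base for comparison.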

FIG. 2. Heat flow in the western United States, based on approximately 600 determinations reported in the literature. The data have been smoothed over circles of 2° radius and contoured in milliwatts per square metre.

In the next step of exploration, techniques should be used which have considerable capability for detecting the roots of the geothermal system. The reason for this is that the various physical property changes associated with the upper reservoir are often simulated by combinations of geological conditions other than the occurrence of a geothermal reservoir. However, the physical property effects associated with the presence of a molten intrusive in the crust are nearly unique. The methods in geophysical exploration that show promise in this area are the magnetotelluric method, the p-wave delay method and the Curie point method.

FIG. 3. Seismicity of the western United States based on approximately 9000 magnitude 3 or larger events located between 1964 and 1979. Contours are spaced at logarithmic intervals for energy release per unit area, but units are arbitrary. Data have been smoothed over a 2° grid.

With the magnetotelluric method, the fact that molten or near-molten rocks have extraordinarily low resistivity is used as a criterion in exploration. The depth at which rocks become conductive because of thermal excitation can be determined with relatively good reliability using the magnetotelluric method. Experience around the world has shown a remarkably good correlation between the depth to a thermally excited conductor and regional heat flow, as indicated in Fig. 4. If thermally excited rocks occur at depths as shallow as 10 to 20 km in the crust, it is almost certain that a partially molten intrusive is present; the normal depth at which thermally excited conductive rocks are found ranges from fifty to several hundred kilometres.

The Curie point method has the potential for providing confirmation of the existence of a hot rock mass in the crust. When rocks are heated above temperatures of a few hundred degrees Centigrade, they lose their ferromagnetism. Under favourable circumstances, the depth to this demagnetisation level can be determined with reasonable accuracy.

FIG. 4. Observed correlation between the depth to the thermally excited conductive zone in the crust or mantle (based on magnetotelluric soundings) and heat flow.2

Further confirmation can be obtained by p-wave delay and shear wave shadow studies. When an anomalously hot mass of rock is present in the ground, the compressional (p) waves from earthquakes are delayed in transit, while the shear (s) waves are reduced in amplitude. To detect such an effect, an array of seismograph stations is set up in the vicinity of an anomaly which has been located. The seismograph stations are operated over a sufficiently long time that a few tens of teleseisms are recorded. The wave speeds for various ray paths through the suspected anomalous zone are then computed; if the rock is partially molten, the p-wave velocities will be reduced by as much as 20 to 30 per cent from normal values.

A group of prospect areas should be defined with reference to regional data and reconnaissance surveys. These areas may range in size from a few hundred to a thousand km². In rare cases, in the case of extensive thermal systems, they may be even larger. With the lack of resolution characteristic of the reconnaissance studies, it is unlikely that a prospect can be localised to an area of less than 100 km². It is necessary to carry out detailed geophysical, geological and geochemical studies to identify drilling locations once a prospect area has been defined from reconnaissance.

The objective of the more detailed studies is to recognise the existence of a producible reservoir at attractive temperatures and attractive depths. Geochemical surveys provide the most reliable indications of reservoir temperatures if thermal fluids are escaping to the surface. In any event, all springs and other sources of ground water should be sampled and various geothermometer calculations carried out. It is to be expected that some prospect areas will have much more positive geochemical indicators than others. This may reflect only the difference in the amount of leakage from subsurface reservoirs, but it provides a basis for establishing priority for further testing; those geothermal reservoirs that show the most positive indications from geochemical thermometry should be the ones that are first studied with other geophysical techniques.

The sequence in which geophysical methods are applied depends to a considerable extent on the specific characteristics of each prospect. It is probably not wise to define a progression of geophysical surveys that would be applied to every potential reservoir. In some cases, where a subsurface convection system is expected, various types of electrical survey can be highly effective in delineating the boundaries of the convecting system. In other cases, where large clay masses can be present in the prospect area, electrical resistivity surveys can be deceptive. The particular type of electrical resistivity survey used at this stage is a matter of personal preference. Schlumberger sounding, dipole-dipole surveys, dipole mapping surveys and electromagnetic soundings can all be used to good effect. To some extent the choice between these methods depends upon accessibility. The dipole-dipole traversing method and the Schlumberger sounding method are much more demanding in terms of access across the surface. The dipole mapping method and the electromagnetic sounding method can be applied in much more rugged terrain.

The objective of carrying out electrical surveys is to outline an area of anomalously low resistivity associated with a subsurface geothermal reservoir. When such an area has been identified, it is still necessary to confirm that the resistivity anomaly is the result of temperature, and to locate areas within the anomaly where fracture permeability is likely to be high. Confirmation of subsurface temperatures is best done at this stage by drilling one or more heat flow holes. These heat flow holes need be only a few hundred metres deep if the area is one in which surface ground water circulation is minimal. However, in volcanic areas where ground water circulation takes place to great depths, reliable heat flow data can be obtained only by drilling to one or two kilometres depth, and in such a case the heat flow hole becomes a reservoir test hole.

The number of heat flow holes that need to be drilled on a given prospect can vary widely; a single highly positive heat flow hole may be adequate in some cases while in other cases several tens of heat flow holes may be necessary to present convincing evidence for the presence of a geothermal reservoir at greater depth.

Once the probable existence of a geothermal reservoir has been established by a combination of resistivity studies and heat flow determinations, it is advisable to search for zones of fracture permeability in the reservoir before selecting a site for a test hole. The simplest procedure to follow in searching for open fractures is that of soil geochemistry. Dense sampling in the prospective geothermal area, with samples being tested for mercury, boron, helium and other similar trace elements, can provide a great deal of information about the location of surface traces of open fractures.

Microseismic surveys are a widely used tool for studying activity on fracture zones in a prospect area. Surveys may require many weeks of observation in a given area. The accuracy with which active faults can be located using micro-earthquakes is often not good enough for the control of drill holes, although in some cases it is adequate. A potentially valuable by-product of a micro-earthquake survey is the determination of Poisson's ratio and related rock properties along various transmission paths through the suspected geothermal system. Poisson's ratio and attenuation of seismic waves can be strongly affected by fracturing. The identification of anomalous areas of Poisson's ratio and p-wave attenuation can provide encouraging evidence for high permeability zones in the reservoir.

The most effective technique to employ in studying a potential reservoir before drilling takes place is the seismic reflection method. It can be used where there is a bedded structure to the subsurface to allow the recognition of faults by the disruption of the continuity of the bedding. The seismic reflection technique is extremely expensive, and a survey over a geothermal prospect may cost a significant fraction of the cost of a test well, but the results obtained with the seismic reflection method are usually much more definitive than the results obtained with any other geophysical technique.

All of these geophysical surveys which are intended to define the essential characteristics of the geothermal reservoir can also be supplemented with other types of geophysical surveys that assist in understanding the regional geology and the local geological structure in a geothermal prospect. A self-potential survey is useful in understanding the ground water movement in an area. A gravity survey can be used to study the depth of fill in intermontane valleys, and to locate intrusive masses of rock. Magnetic surveys can be used to identify the boundaries of the flows in volcanic areas. Once all these detailed geophysical surveys have been carried out, a convincing set of data should be in hand before the decision to locate a drill hole is made. There must be evidence for heat, there must be evidence for permeability, and the conditions for drilling must be established. Once these are done, the decision to drill a deep test can be undertaken.

It is surprisingly difficult to determine a true bottom-hole temperature during the course of drilling a well. Mud is circulated through the well and removes much of the excess heat as drilling progresses. In a closely controlled drilling programme, the temperature and volume of mud supplied to the well and recovered from the drilling operation should be monitored closely. Differences in temperature between the mud going in and the mud coming back to the surface can be used to estimate bottom-hole temperatures in a crude fashion. With the development of a mathematical model for the loss of heat from the rock to the drilling mud, it is conceivable that an even more precise temperature estimate can be made.

The best temperature estimates made during the course of drilling are obtained by sending maximum-reading thermometers to the bottom of the well at times when the bit must be removed from the well. Several thermometers should be used in the event that one breaks or provides a false reading. These should be carried to the bottom of the hole on a heavy weight so that they get as close as possible to the undisturbed rock at the bottom face of the borehole.

In the following sections, we shall review the various geophysical techniques used in exploration for geothermal energy in more detail, with particular emphasis on the requirements for data acquisition, handling, processing and interpretation.

2. GEOCHEMICAL THERMOMETERS

A most important aspect of evaluation of a geothermal prospect is the potential temperature at which fluids can be produced from the subsurface. In some areas the detection of trace elements in unusual amounts in ground water has provided excellent information about subsurface reservoir temperatures.3-10 Generally, the geochemical methods of thermometry are based on the fact that temperature and pressure affect the equilibrium concentration of any reactive solutes in ground water. In order that a given chemical equilibrium be useful as a geothermometer, the reaction rate must be sufficiently slow that further equilibration does not occur as fluids escape from the geothermal reservoir to the surface where they can be sampled. Chemical geothermometers are probably most effective in areas where there is rapid leakage of geothermal fluids to locations where they can be sampled.

FIG. 5. Solubility of silica in water as a function of temperature and the crystalline form of silica.6

Moreover, those elements which are used in geothermometry must be ubiquitous to geothermal systems, so that no question arises about the failure of a geothermometer because a particular mineral was not present for equilibration to take place.

Perhaps the most reliable method of geothermometry is the measurement of silica content in spring waters. In hydrothermal areas silica can occur in various forms, including quartz, chalcedony, cristobalite and amorphous silica. Each of these mineral forms of silica is characterised by a different reaction rate with water, so that the solubilities of the various forms of silica depend both on temperature and on the mineralogical form, as shown in Fig. 5. The use of silica content has provided quite accurate estimates of reservoir temperature from water samples taken from deep wells. For water samples taken from surface springs, it is found that silica temperatures provide a minimum value for the subsurface temperature because dilution may have taken place between the thermal water and cooler surface waters during the rise to the spring, or alternatively re-equilibration may have taken place if movement of the water is too slow. At temperatures above 250°C, re-equilibration takes only a few hours. Thus silica temperatures provide a conservative or lowest reasonable estimate of subsurface temperature, provided that the sequence through which the water has travelled does not contain an undue amount of volcanic rock with amorphous silica or forms of silica other than quartz. The relationship between temperature in the source fluid and silica content in spring discharge, assuming adiabatic (isoenthalpic) cooling,6 is

t(°C) = 1533·5/(5·768 - log SiO2 (ppm)) - 273·15
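As a numerical check, the relation above is straightforward to evaluate; the short function below is a minimal sketch (the 300 ppm sample value is hypothetical):

```python
import math

def silica_temperature_c(sio2_ppm):
    """Quartz geothermometer for adiabatically cooled spring discharge:
    t(degC) = 1533.5 / (5.768 - log10(SiO2, ppm)) - 273.15
    (the relation quoted above, ref. 6)."""
    return 1533.5 / (5.768 - math.log10(sio2_ppm)) - 273.15

# A spring discharging water with ~300 ppm dissolved silica (hypothetical):
print(round(silica_temperature_c(300.0), 1))
```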

Another chemical geothermometer is one based on the relative amounts of sodium and potassium in solution in ground water. The sodium-potassium geothermometer is based on an exchange reaction:

K⁺ + Na feldspar = K feldspar + Na⁺

in which the conversion of sodium feldspar to potassium feldspar is temperature dependent. Thus, in ground water in which sodium and potassium are derived from solution of feldspar, the Na/K ratio is indicative of temperature. For geothermal waters that contain relatively little calcium in solution, the Na/K ratio has given reasonable reservoir temperatures over the range from 180 to 350°C.

Fournier and Truesdell8 have given the following formula for calculating temperatures from the Na/K ratio:

t(°C) = 855·6/[log(Na/K) + 0·8573] - 273·15

If the reservoir temperature is below 180°C, the cation exchange reaction between sodium and potassium feldspars may not control the sodium/potassium ratio. Fournier and Truesdell,8 in correlating sodium/potassium and calcium concentrations in various types of ground water, suggest the following formula for the relationship between temperature and solution concentrations:

t(°C) = 1647/[log(Na/K) + β log(Ca1/2/Na) + 2·24] - 273·15

where

β = 4/3 for Ca1/2/Na > 1

and

β = 1/3 for Ca1/2/Na < 1
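Both alkali geothermometers can be sketched in a few lines. This is an illustrative implementation assuming the published Fournier-Truesdell constants (β = 4/3 or 1/3) and, for the Na-K-Ca form, concentrations expressed as molalities; the sample values are hypothetical:

```python
import math

def na_k_temperature_c(na, k):
    """Na/K geothermometer (Truesdell constants, as quoted in the text):
    t = 855.6 / (log10(Na/K) + 0.8573) - 273.15.
    Only the ratio matters, so Na and K may be in any common unit."""
    return 855.6 / (math.log10(na / k) + 0.8573) - 273.15

def na_k_ca_temperature_c(na_m, k_m, ca_m):
    """Na-K-Ca geothermometer of Fournier and Truesdell, taking Na, K and
    Ca as molal concentrations (an assumption of the published formulation):
    t = 1647 / (log10(Na/K) + beta*log10(sqrt(Ca)/Na) + 2.24) - 273.15."""
    ratio = math.sqrt(ca_m) / na_m
    beta = 4.0 / 3.0 if ratio > 1.0 else 1.0 / 3.0
    return 1647.0 / (math.log10(na_m / k_m)
                     + beta * math.log10(ratio) + 2.24) - 273.15

# Hypothetical analysis: an Na/K ratio of 10, with modest calcium.
print(round(na_k_temperature_c(460.0, 46.0), 1))
print(round(na_k_ca_temperature_c(0.02, 0.002, 0.0004), 1))
```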


EXPLORATION FOR GEOTHERMAL ENERGY 119

As with any of the chemical geothermometers, two assumptions are required: (1) a relatively simple source for the ions, and (2) no re-equilibration between the reservoir and the surface. Experience with the alkali ion geothermometer has shown that in some cases it works well, but in other cases, where the results appear to be in error, the temperatures are overestimated. Thus a comparison of silica geothermometer temperatures and alkali ion geothermometer temperatures provides some idea of the relative reliability of the two measures, in that, if the ground water does not properly reflect reservoir temperature, the two estimated temperatures will diverge.

In addition to the two geothermometers described here, various other geothermometers have been suggested but have not yet found wide use. In particular, the measurement of isotopic ratios, particularly those of oxygen, hydrogen and sulphur, shows some promise of providing information on reservoir temperatures.

In addition to using geochemical geothermometers, geochemical surveys are used in a more qualitative sense in the search for geothermal reservoirs. It has been noted empirically that some trace elements are more abundant in the vicinity of geothermal reservoirs than in other areas, and may serve to draw attention to the possible location of a reservoir. The elements which have been most widely used as tracers include mercury, arsenic, boron and helium. All of these are elements that can be liberated from rocks at relatively low temperatures and migrate to the surface, and all are relatively easy to detect in small amounts in soil. The amount of a trace element which escapes to the surface is a complicated function of three factors: the concentration in the rock which is being heated, the availability of permeable paths to the surface, and the mobility of the element. Thus the trace elements are more likely to be found along fractures and faults which penetrate to geothermally heated rocks, and there is not usually a one-to-one relationship between chemical concentrations and subsurface reservoirs.

Often chemical data on ground waters are available without there being a need to carry out a sampling program specifically for geothermal development. When ground waters are tested for potability, the alkali ions and silica are usually recorded. The number of such determinations is very large, providing a data base for analysis of anomalous values that may be of interest in geothermal prospecting. In many countries, the number of such analyses which are already available is so great that the only feasible way of handling the data base is by entry into a computer. Even though the determinations of temperatures from the simple formulae given above are


straightforward, the number of data involved and the book-keeping problem in managing these data are such as to require a computer facility.

3. SUBSURFACE TEMPERATURE AND THERMAL GRADIENT SURVEYS

The most direct method for studying geothermal systems is through the use of subsurface temperature measurements.11-15 Measurements can be made in holes which are as shallow as a few metres, but the preference at the present time is to make temperature surveys in wells which are at least 100 m deep. Temperatures measured a short distance beneath the surface of the earth are strongly affected by cyclic changes of temperature on the surface. Variations contributed by the diurnal temperature cycle penetrate only a few tens of centimetres in soils. The annual temperature cycle can contribute significant temperature changes at depths of many metres. Long-term climatic changes in temperature can conceivably cause barely detectable temperature effects at depths of 100 m. In order that subsurface temperatures represent heat flow from the interior of the earth, it is necessary either that temperature gradients are measured at a depth beyond which the contribution from surface temperature changes is insignificant, or that measurements are made in such a way that the surficial effects can be removed. For example, if temperatures were measured at the bottoms of shallow holes over a period of one year, the annual temperature cycle would be averaged out. Alternatively, if measurements were made at depths beyond which the annual wave does not penetrate significantly, the normal heat flow from the interior of the earth could be detected in a matter of a few days. The question as to which approach is more effective remains to be answered.

The objective of thermal gradient measurements in boreholes is twofold. The first objective is to detect areas of unusually high temperature, and the second objective is to determine quantitatively the component of heat flow along the direction of the borehole. Detection of unusually high temperatures can be a direct indicator of geothermal activity. More quantitative results are obtained when thermal gradients are converted to heat flow through the use of Fourier's equation:

grad T = Φ/K

or

ΔT/ΔZ = Φz/K


where ΔT/ΔZ is the vertical gradient in temperature, K is the thermal conductivity and Φz is the thermal flux in the z direction. The advantage of converting temperature gradients to values of heat flow is that the dependence on the thermal conductivity of the rock type is eliminated. In this way, minor differences in temperature over a series of prospect holes can have added significance if it is known that the differences are due not to a change in rock type, but to a change in the total amount of heat being supplied from beneath.
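The conversion from gradient to heat flow is a one-line application of Fourier's equation; a minimal sketch in conventional units (the 30 °C/km gradient and 2.0 W m⁻¹ K⁻¹ conductivity are illustrative values, not data from the text):

```python
def heat_flow_mw_per_m2(gradient_c_per_km, conductivity_w_per_m_k):
    """Fourier's equation: flux = K * grad T.
    Gradient in degC/km, conductivity K in W m^-1 K^-1; result in mW m^-2."""
    gradient_k_per_m = gradient_c_per_km / 1000.0            # degC/km -> K/m
    return conductivity_w_per_m_k * gradient_k_per_m * 1000.0  # W -> mW

# Illustrative values: 30 degC/km through rock with K = 2.0 W/m/K.
print(round(heat_flow_mw_per_m2(30.0, 2.0), 1))  # 60.0 mW/m^2
```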

Determination of temperature in a test hole is not as easy as it might seem. In deep test holes which must be drilled with a circulating fluid such as mud, a considerable disturbance of the normal temperature environment will take place during drilling. This is particularly true if the gradient is relatively high and the temperature change over the well interval is relatively large. As a rule of thumb, one must wait a period of time comparable to that involved in drilling the well before the well temperatures return to within 10

FIG. 6. Extrapolation of borehole temperatures to equilibrium temperature at two depths (x: 1250 metre level; •: 610 metre level) in a borehole drilled at the summit of Kilauea volcano (Hawaii). The parameter s is the duration of circulation in the well following first penetration by the drill bit, and T is the total time elapsed from first penetration to measurement of temperature.


per cent of their undisturbed state. Drilling a well to several hundred metres depth may take some days. In order to measure temperatures accurately enough to obtain thermal gradients it is necessary to record down-hole temperatures for periods several times longer than the duration of drilling. Fortunately there are several methods available for predicting stabilisation temperatures in a well. One such method consists of measuring temperatures at a given depth several times following completion of drilling. Temperatures are then plotted as a function of time on a linearising scale, defined as log T/(T - s), where T is the total time since the drill first opened the borehole at the depth where the temperature is being measured, and s is the duration of circulation in the well. When temperatures are plotted against this linearised time scale, they fall along a straight line, provided the assumptions of the solution of Fourier's equation are valid: that is, that the rate of drilling was linear and that a consistent removal of heat by the drilling mud from the bottom of the hole took place. The final temperature can be estimated quite accurately by extrapolating the linear relationship to zero linearised time, which corresponds to the temperature after an infinitely long re-equilibration process. An example of the use of the method is shown in Fig. 6.
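The extrapolation described above amounts to a least-squares fit against the linearised time scale; the sketch below uses a synthetic shut-in series (all numbers are invented for illustration):

```python
import math

def horner_equilibrium(measurements, circulation_hours):
    """Least-squares extrapolation of bottom-hole temperatures to equilibrium.

    measurements: (T, temperature) pairs, where T is the total time (hours)
    since the bit first opened the depth in question.  The linearising
    variable is x = log10(T / (T - s)), s = circulation duration; the fitted
    intercept at x = 0 (infinitely long re-equilibration) is the estimate."""
    xs = [math.log10(t / (t - circulation_hours)) for t, _ in measurements]
    ys = [temp for _, temp in measurements]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx        # intercept at zero linearised time

# Synthetic series for a hole circulated for 24 h at this depth:
data = [(48, 120.0), (72, 130.0), (120, 138.0), (240, 144.0)]
print(round(horner_equilibrium(data, 24.0), 1))
```

The later measurements approach, but never reach, the equilibrium value; the fitted intercept recovers it from the trend.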

The principal difficulty in measuring heat flow in drill holes is that, in many geothermal fields, convection as well as conduction contribute significantly to total heat flow. Where convection is rapid, Fourier's simple equation cannot be used to compute heat flow.

4. ELECTRICAL METHODS

Various methods for measuring electrical resistivity have been used in geothermal exploration.16 The use of these methods is based on the fact that temperature affects the electrical properties of rocks. At the lower end of the temperature scale, up to the critical temperature for water, the effect of temperature is to enhance the conductivity of the water in the pores of the rock. In such rocks, electrical conduction takes place solely by passage of current through the fluid in the pores, since almost all rock-forming minerals are virtual insulators at these temperatures. The maximum enhancement in conductivity is approximately sevenfold between 20 and 350°C for most electrolytes.17-19

Temperature is not the only factor affecting the conductivity of rocks. An increase in the water content or an increase in the total amount of dissolved solids can increase the conductivity by large amounts. Both phenomena are


FIG. 7. Summary of data relating electrical conductivity of dry rocks to reciprocal of the absolute temperature.

sometimes associated with geothermal activity. As a result, it is not unusual to see an increase in conductivity by an order of magnitude or more in a geothermal reservoir in contrast with rocks at normal temperatures removed from the reservoir.

At temperatures approaching the melting point of a rock, even more significant changes in electrical properties take place.20-24 At normal temperatures for the surface of the earth, silicate minerals have very low conductivity, generally less than 10⁻⁶ Ω⁻¹ m⁻¹. As the temperature increases, this conductivity increases: slowly at first, and then much more rapidly at temperatures near the melting point. A typical set of curves of conductivity against the reciprocal of the absolute temperature is shown in Fig. 7. At temperatures within about 100°C of the melting point, the conductivity becomes high enough to be comparable with conductivities in water-saturated rocks.

Such high conductivities provide a target for geophysical exploration, but the very high temperatures associated with the roots of a geothermal system occur at depths too great to be considered for exploitation. In a


typical geothermal system, one would expect to find a deep anomaly in electrical conductivity associated with thermal excitation of conduction in the massive crystalline rock comprising the basement. Shallower in the section, one would expect to find an anomaly in electrical conductivity associated with the reservoir filled with hot geothermal fluids.

Techniques for studying subsurface electrical structure include the magnetotelluric method, the Schlumberger sounding method, the dipole-dipole traversing method, the bipole-dipole mapping method, and the time domain electromagnetic sounding method. All of these techniques have as their objective the mapping of electrical structure at depths that are meaningful in terms of geothermal exploration. These depths must be at least several kilometres in the case in which the anomaly in conductivity associated with reservoir rocks is being sought, and several tens of kilometres in the case in which the thermally excited conductive zone associated with the roots of a geothermal system is being sought.

4.1. The Magnetotelluric Method

The magnetotelluric method has been used extensively in a reconnaissance role in geothermal exploration, and to a lesser extent in detailed follow-up exploration. In the magnetotelluric method, the natural electromagnetic field of the earth is used as an energy source to probe the earth.2,25,26 The natural electromagnetic field contains a very wide spectrum of frequencies, including the very low frequencies that are useful in probing to depths of several tens of kilometres. These low frequencies are generated by ionospheric and magnetospheric currents that arise when plasma emitted from the sun interacts with the earth's magnetic field. These currents give rise to time-varying magnetic fields in the frequency range from 0·1 Hz downwards, which are termed micropulsations. Micropulsations in turn induce eddy currents in the earth, with the eddy current density being controlled by the local conductivity structure. The subsurface structure can be studied by making simultaneous measurements of the strength of the magnetic field variations at the surface of the earth and the strength of the electric field component at right angles in the earth. Because the direction of polarisation of the incident magnetic field is variable and not known beforehand, it is common practice to measure at least two components of the electric field and three components of the magnetic field variation to obtain a fairly complete representation. For surveys that are intended for the study of electrical structure tens of kilometres in depth, the range of frequencies needed to achieve penetration is from a few tens of hertz to a few hundred microhertz. Inasmuch as the natural noise field is not


particularly well structured, but consists of an unpredictable assemblage of impulsive waveforms, it is necessary to analyse the natural field over a time span which is long compared to the period of the lowest frequencies being studied. Thus, if the lowest frequency desired in a survey is 500 μHz (period of 2000 s), it is necessary to analyse the field over a time duration at least 10 times as great, or 20 000 s. This comprises the single largest disadvantage of the magnetotelluric method: it provides slow coverage of a prospect area and is therefore costly.

In most systems for carrying out magnetotelluric surveys today, the five field components are converted to digital form and are either stored for later spectral analysis or converted immediately to spectral form before being stored. The accuracy with which the data are converted to digital form is important; a dynamic range of 16 bits is desirable in order that weak spectral components be recognisable in the presence of other, stronger spectral components. In order to obtain spectra at frequencies as high as several tens of hertz, a sampling rate of at least 100 Hz is required.

Once the various spectra have been calculated they must be converted to values of apparent resistivity as a function of frequency. In this reduction it is assumed that at any given frequency there is a linear relationship between the electric field vector and the magnetic field vector

Ei = ZijHj

where Z is the impedance of the electromagnetic field. The impedance is a tensor which is characterised by having the antidiagonal elements maximum and the principal diagonal elements zero, when measurements are made with the sensor axes parallel to and perpendicular to the structure of a two-dimensional earth. In the case of a three-dimensional earth, the tensor elements go through maximum and minimum values as the axes of the tensor are rotated. By convention, the observed fields are rotated to find the maximum antidiagonal impedances. An apparent resistivity is then computed from a simple formula:

ρa = (1/(μω))|Z|²

for both the maximum value of impedance and the value of impedance in the direction orthogonal to the maximum. In this expression, μ is the magnetic permeability, usually assumed to be the value for free space, and ω is the frequency in rad s⁻¹. The direction in which the maximum impedance is measured is used to characterise the directional properties of the impedance tensor, and thus of the earth.
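The apparent-resistivity formula is simple to apply once an impedance element has been estimated; a minimal sketch assuming free-space permeability (the impedance value is hypothetical):

```python
import math

MU0 = 4.0e-7 * math.pi        # permeability of free space, H/m

def mt_apparent_resistivity(z_ohm, freq_hz):
    """Magnetotelluric apparent resistivity from an impedance element:
    rho_a = |Z|**2 / (mu * omega), with Z = E/H in ohms and omega in rad/s."""
    omega = 2.0 * math.pi * freq_hz
    return abs(z_ohm) ** 2 / (MU0 * omega)

# Hypothetical impedance of 0.0281 ohm measured at 0.1 Hz (~1000 ohm-m):
print(round(mt_apparent_resistivity(0.0281, 0.1), 1))
```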

Other quantities in addition to the maximum and minimum resistivities


and the tensor direction are computed from the data to yield information about the reliability of the tensor impedance and its meaning. Coherence is computed as the cross-correlation between the electric and magnetic fields. If the fields are linearly related, coherence is unity; if there is noise in any of the field components which produces a spectrum that does not obey the fundamental equation above, the coherence will be reduced. When coherence drops below reasonable values (0·85 to 0·90), it is common practice to discard the apparent resistivities that are calculated.
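A minimal sketch of such a coherence estimate between paired spectral components, using synthetic spectra (the transfer value of 0.03 Ω and the noise level are arbitrary choices for illustration):

```python
import numpy as np

def coherence(e, h):
    """Coherence between paired spectral components:
    |<E H*>| / sqrt(<|E|^2> <|H|^2>); unity when E and H are
    perfectly linearly related, lower when noise is present."""
    num = abs(np.vdot(h, e))                       # |sum conj(H) * E|
    den = np.sqrt(np.vdot(e, e).real * np.vdot(h, h).real)
    return num / den

rng = np.random.default_rng(0)
h = rng.standard_normal(256) + 1j * rng.standard_normal(256)
e_clean = 0.03 * h                                 # perfectly linear relation
e_noisy = e_clean + 0.05 * (rng.standard_normal(256)
                            + 1j * rng.standard_normal(256))

print(round(coherence(e_clean, h), 3))             # linearly related: unity
print(coherence(e_noisy, h) < 0.9)                 # noise lowers coherence
```

An editing rule of the kind described in the text would reject the noisy estimate, since its coherence falls well below the 0·85 to 0·90 range.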

The computation of an impedance tensor representing only vectors in a horizontal plane is based on a preliminary assumption of a plane wave and planar earth structure. With non-planar electromagnetic waves, or in a real earth, a measurable vertical magnetic field component can also be present. To test this, the correlation between the vertical magnetic field component and the resultant horizontal field is computed. This produces a quantity called the tipper, which indicates the directions towards a concentration of current in the earth, and a tipper angle that characterises the proximity to a lateral discontinuity in electrical properties.

In recent years, attempts have been made to eliminate uncorrelated signals that appear on one or more of the field components.27,28 At the present time, the approach receiving the most attention is one in which two or more magnetotelluric soundings are recorded simultaneously at different sites. Magnetic fields are correlated between the two or more receiver stations, with the uncorrelated portions being treated as noise in the magnetic field detection and removed before later processing. In areas where uncorrelated noise has been a problem in obtaining magnetotelluric soundings, this procedure has resulted in significant improvements in the quality of the data. Over the portion of the frequency range where noise is a particular problem (from 0·1 to 10 Hz), the multiple-station approach has permitted data to be obtained where previously it had been impossible.

As might be expected, the spectral analysis of a long data series, combined with the need for extensive tensor rotation and testing of the spectral values, results in a volume of processing that is as time-consuming and costly as the original acquisition of the data. The current trend is to compute the spectral analyses and the rotation of the tensor impedance in the field. This is highly desirable in that the magnetotelluric method does not always provide useful results, even after measurements have been made with reliable equipment. If the natural electromagnetic field strength is unusually weak during a recording period, or if there is some phenomenon which precludes an effective analysis of the field, it may be necessary to repeat the measurements at a more favourable time. When the analysis is

FIG. 8. Results of a magnetotelluric survey in the vicinity of The Geysers geothermal field (California, USA). Magnetotelluric sounding curves were interpreted using a three-layer model with the third layer being most conductive. Depths to the third layer are contoured.

done in the field, decisions about re-occupying stations and placing additional stations can be made in a timely manner that will reduce overall operating costs.

The magnetotelluric method has found application in geothermal exploration primarily because of its ability to detect the depth at which rocks become conductive because of thermal excitation. In areas of normal heat flow, this depth ranges from 50 to 500 km, but in thermal areas the depth may be 10 km or less. An example of the detection of anomalously shallow depths to conductive rocks is given in Fig. 8, which shows the results of a magnetotelluric survey in the vicinity of The Geysers geothermal field, California, USA. Conductive rocks are found at depths of less than 10 km a few miles north of the main producing part of the field.


4.2. The Direct Current Method

The direct current resistivity method comprises a set of techniques for measuring earth resistivity which are significantly simpler in concept than the magnetotelluric method. The magnetotelluric method is an induction method in which the depth of penetration of the field is controlled by the frequency of the signals analysed. The direct current methods achieve control of the depth of penetration by regulating the geometry of the array of equipment used.29,30

Three principal variations of the direct current method have found use in geothermal exploration, though there has been some controversy in the literature over the relative merits of these techniques. The best tested of the techniques is the Schlumberger sounding method. With the Schlumberger array, electrodes are placed along a common line and separated by a distance which is used to control the depth of penetration. The outer two electrodes drive current into the ground, while the inner two, located at the midpoint between the outer two, are used to detect the electric field caused by that current. The outer two electrodes are separated by progressively greater distances as a sounding survey is carried out, so that information from progressively greater depths is obtained. In a survey of a geothermal area, the spacings between electrodes will be increased incrementally from distances of a few metres or tens of metres to distances of several kilometres or more.
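The conversion from measured voltage and current to apparent resistivity for a Schlumberger array uses the standard geometric factor ρa = π(L² - l²)/(2l) · ΔV/I, with L the current-electrode half-spacing and l the potential-electrode half-spacing. This factor is not quoted in the text but is standard for the array; the values below are illustrative:

```python
import math

def schlumberger_rho_a(half_ab, half_mn, delta_v, current):
    """Apparent resistivity for a Schlumberger array:
    rho_a = pi * (L**2 - l**2) / (2*l) * (dV / I),
    with L the current-electrode half-spacing and l the
    potential-electrode half-spacing, both in metres."""
    geom = math.pi * (half_ab ** 2 - half_mn ** 2) / (2.0 * half_mn)
    return geom * delta_v / current

# Consistency check over a uniform half-space: choose dV so that a
# 100 ohm-m earth is implied, then recover it (illustrative numbers).
geom = math.pi * (100.0 ** 2 - 5.0 ** 2) / (2.0 * 5.0)
dv = 100.0 / geom             # volts per ampere for a 100 ohm-m half-space
print(round(schlumberger_rho_a(100.0, 5.0, dv, 1.0), 3))  # 100.0
```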

The Schlumberger method has several limitations, including the relatively slow progress with which work can be carried forward in deep sounding, and the fact that in areas of geothermal activity the lateral dimensions of the areas of anomalous resistivity may be considerably smaller than the total spread required between electrodes.

In order to detect the presence of lateral discontinuities in resistivity, the bipole-dipole and dipole-dipole techniques have come into use. In the dipole-dipole technique, four electrodes arrayed along a common line are again used, but in this case the pair of electrodes at one end of the line provides current to the ground while the pair at the other end is used to measure the voltage caused by that current. In a survey, the receiving electrodes and transmitting electrodes are separated progressively, by increments equal to the length of one dipole, along the direction in which they are placed. The separation between the two dipoles can be increased from one dipole length to as much as 10 dipole lengths. When this has been done, the current dipole is advanced by one dipole length along the traverse and the procedure repeated. The process is continued with the entire system moving along a profile.


As the final product, a pseudosection is compiled in which each value of apparent resistivity is plotted on a cross section beneath the midpoint of the array with which the measurement was taken, and at a depth beneath the surface which is proportional to the separation between dipole centres. The result is a contoured section of apparent resistivity values which sometimes shows a good correlation with the actual distribution of resistivity in the earth. The dipole-dipole method has the advantage of portraying the effects of lateral changes in resistivity clearly, but suffers from the disadvantage of being a cumbersome method to apply in the field.

Another direct current method is the bipole-dipole mapping method. In this, current is driven into the earth with a fixed pair of electrodes at a source bipole. The behaviour of the current field over the surface of the earth is then surveyed by making voltage measurements with orthogonal pairs of electrodes (dipoles) at many locations around the source. Values for apparent resistivity are computed and contoured. In some cases a simple relationship exists between contours of apparent resistivity and the subsurface electrical structure, but in many cases the relationship between the contoured apparent resistivities and the subsurface structure is difficult to determine.

An important modification of the bipole-dipole method, which has been used in recent surveys to improve the meaningfulness of the results, is the use of two orthogonal bipole sources. The two sources are energised separately, and at a receiver site two electric fields are determined, one for each source. By combining these two electric fields in various proportions, apparent resistivity is computed as a function of the direction of current flow at the receiver station. The result is an ellipse of apparent resistivity drawn as a function of the direction of current flow. These ellipses provide considerably more insight into the nature of the subsurface than do the single values of apparent resistivity obtained with the single-source bipole-dipole method.

The results of an extensive bipole-dipole survey in the vicinity of The Geysers, California are shown in Fig. 9. Apparent resistivity values were obtained at about 6000 locations, using more than 50 bipole sources. Over much of the area, overlapping coverage was provided from several sources. In the presentation shown here, all resistivity values calculated for cells of one square mile in area were averaged together and the results were contoured. The isoresistivity map shows a close association with the known geology; the areas of lowest resistivity are found where the sub-outcrop consists of sands and shales of the Great Valley sequence, while the areas of highest resistivity correspond to sub-outcrops of the Franciscan


FIG. 9. Results of extensive bipole-dipole resistivity surveys of The Geysers area (California, USA). Measurements at about 6000 locations, based on about 50 transmitter sites, have been merged and averaged on a one-mile grid. Contour interval is 2 Ωm below 20 Ωm and 5 Ωm above. The Geysers producing field is shown by the stippled pattern.

formation. The Geysers Steam Field is situated in a zone of moderate resistivity.

4.3. The Time-domain Electromagnetic Sounding Method

Like the magnetotelluric method, the time-domain electromagnetic (TDEM) sounding method depends on a variation in frequency of the observed field to obtain variations in the depth of penetration.31 With the time-domain electromagnetic method, a controlled field is generated by passing a square wave of current through a length of grounded wire. Then, at a location where resistivity is to be determined, the transient electromagnetic induction is detected and recorded. The duration and shape of the transient are characteristic of the subsurface electrical structure. In geothermal exploration where depths of several kilometres are


to be probed, the total duration of the transient process ranges from half a second to several tens of seconds, again depending on the electrical structure. In normal operating practice, the signal is enhanced by transmitting many consecutive signals, with the corresponding transient magnetic induction signals being synchronously added.

The product of a time domain electromagnetic sounding survey is a curve relating apparent resistivity to the time following the beginning of the transient coupling between source and receiver. It is presumed that, as this time becomes progressively larger, the eddy currents giving rise to the transient coupling occur at greater and greater depths beneath the receiver. In a complete interpretation, the response observed in the field is modelled for reasonable earth structures in much the same manner as is done with any of the other electrical methods.

4.4. Modelling and Inversion

Apparent resistivity values are calculated directly from observations as a convenience in presenting raw data, but often these apparent values are not closely related to the actual resistivity distribution in the earth. This actual resistivity distribution is analysed by a modelling process or, where feasible, by an inversion process.

With either direct modelling or inversion, the nature of the computational process is controlled by the mathematics involved in the forward solution; that is, by the computations required to compute field effects from a postulated subsurface structure. If the earth is one-dimensional, such as is the case when it consists of a set of horizontal layers, the forward solution is expressible in an analytical form which can be evaluated either exactly or numerically. The usual form of expression for a one-dimensional problem is a Hankel transform integral:30

G(r) = ∫0∞ F1(m, σi, zi)Ja(mr) dm

where F1 is a function which contains all the information about the electrical properties of the layers comprising the section, σi and zi are the conductivity and depth of each layer, indicated by the index i, m is a dummy variable of integration, r is the separation between source and receiver, G(r) is the observed field strength and Ja is a Bessel function of the first kind of order zero or unity. In the case of the magnetotelluric method the forward solution is even simpler in that only the function F1 must be


evaluated and no Hankel transform is necessary. The favoured procedure for computing the Hankel transform at the present time is by a logarithmic transformation of variables followed by a convolution operation.

In the case of a two-dimensional or three-dimensional distribution of resistivity in the subsurface, the approach used in interpretation is numerical. The finite difference (or the finite element) method is often used. In the finite difference method, the earth is subdivided into a mesh of points at which the boundary conditions and differential equations representing the behaviour of the field components are simulated by finite difference equations. Solution for the behaviour of the field on the free surface of the earth where actual measurements are made requires the simultaneous solution of an array of algebraic equations in which the number of unknowns is comparable to the number of mesh points into which the earth is subdivided. In two-dimensional problems, it is usually barely adequate to represent the earth by 100 to 200 mesh points, while in three dimensions, 1000 mesh points or more are required to obtain even a roughly realistic result.
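A minimal sketch of the finite difference idea follows, on a deliberately tiny mesh; the mesh size, the single source node and the uniform-conductivity five-point stencil are illustrative assumptions, not a working resistivity modelling code:

```python
# 2-D finite-difference solution of -laplacian(V) = f on a unit square,
# Dirichlet boundary V = 0, Gauss-Seidel iteration on the 5-point stencil.
n = 21                                   # mesh points per side (tiny mesh)
h = 1.0 / (n - 1)
V = [[0.0] * n for _ in range(n)]        # potential, fixed to 0 on the boundary
f = [[0.0] * n for _ in range(n)]        # source term: point "electrode" at centre
f[n // 2][n // 2] = 1.0

for _ in range(2000):                    # Gauss-Seidel sweeps over interior nodes
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            V[i][j] = 0.25 * (V[i - 1][j] + V[i + 1][j]
                              + V[i][j - 1] + V[i][j + 1]
                              + h * h * f[i][j])

def residual(i, j):
    """Residual of the discrete equation at an interior node."""
    return (V[i - 1][j] + V[i + 1][j] + V[i][j - 1] + V[i][j + 1]
            - 4.0 * V[i][j] + h * h * f[i][j])
```

Even this 21 × 21 mesh already involves a few hundred coupled unknowns, which illustrates why realistic two- and three-dimensional resistivity models become computationally heavy.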

An alternative numerical approach consists of an integral equation analysis. In this approach, a solution to the governing differential equations is expressed in terms of an integral of a Green's function. The surface and volume of the region with anomalous resistivity are subdivided into subareas and cells, and the Green's function is evaluated over them. The number of equations which must be solved simultaneously is comparable to the number of points used to approximate the anomalous region.

With forward modelling, human interaction is important. The first approximation of the subsurface electrical structure is likely to be highly unrepresentative, and successive models are hypothesised in an attempt to bring the computed field effects into close agreement with those actually observed. This can be a lengthy process because of the complicated way in which changes in the parameters of the subsurface model affect the calculations. If the forward model requires only a relatively small amount of computer time, a process termed inversion can be used, in which information from previous forward calculations is used to derive a successively better fit to the observed data.

In the forward calculation, the forward solution can be simply represented by a non-linear function F that relates each calculated value C_j to the corresponding vector of parameters p_i:

C_j = F_j(p_1, p_2, ..., p_M)

where p_i is the set of M hypothetical parameters associated with the mathematical model, and C_j is the set of calculated values for this given set of parameters.

EXPLORATION FOR GEOTHERMAL ENERGY 133

Inversion methods are designed to find as much information as possible about the model parameters that define the earth model when a set of N observations (data) is given. Given a set of observations O_j and a hypothetical earth model described by the vector of parameters p_i, the accuracy of approximation to the real model is usually specified in terms of a squared error E:

E = Σ_{j=1}^{N} (O_j − C_j)²

where N is the number of observations which are compared with computed values.

The best solution for an inversion in the least-squares sense is obtained when the error function E is minimised. The conditions for minimum error are found by forming the derivative of the error with respect to each of the model parameters and setting these derivatives equal to zero simultaneously. The derivatives are calculated by a finite difference formulation in which the model parameters are perturbed by a small amount, one at a time, to find how much the error is changed by each perturbation. The number of observations N is necessarily larger than M, the number of parameters used in building the model, so that the fitting problem is overdetermined. Setting the derivatives to zero yields a system of M equations in the M unknown parameter corrections. The singular value decomposition method is often used in the solution of this set of equations.32-35 Such solutions are stable only in a few cases. Instability is caused by combinations of parameters to which the observed data are insensitive. This problem has been attacked in two ways: by elimination of some of the parameters, and by use of the ridge regression technique.
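The derivative-based least-squares scheme described above can be sketched for a toy two-parameter problem. The exponential forward function and all numerical values are illustrative assumptions, not a resistivity code; each iteration perturbs one parameter at a time to build the Jacobian and solves the overdetermined update by singular value decomposition:

```python
import numpy as np

# Toy forward model C_j = F_j(p) = p1 * exp(-p2 * x_j)  (illustrative only)
x = np.linspace(0.0, 4.0, 20)

def forward(p):
    return p[0] * np.exp(-p[1] * x)

obs = forward(np.array([2.0, 0.7]))        # noise-free synthetic "observations"

p = np.array([1.0, 1.0])                   # starting guess
for _ in range(25):                        # Gauss-Newton iterations
    c = forward(p)
    r = obs - c                            # residuals O_j - C_j
    # Jacobian by finite differences: perturb one parameter at a time
    J = np.empty((x.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = 1e-6
        J[:, k] = (forward(p + dp) - c) / 1e-6
    # overdetermined update J @ delta = r solved via the SVD pseudo-inverse
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    delta = Vt.T @ ((U.T @ r) / s)
    p = p + delta
```

In a real inversion, near-zero singular values in `s` signal the unresolvable parameter combinations mentioned above; truncating them, or damping them as in ridge regression, is what stabilises the solution.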

The generalised inverse approach has been very effectively used to find inversions for assumed one-dimensional distributions of resistivity for magnetotelluric, direct current and electromagnetic sounding data.36-40

Generally, a good solution is found in two or three iterations of the process as described. This involves the computation of a few hundred to a few thousand forward solutions to obtain the necessary derivatives, and does not involve an extraordinary amount of computer time. However, in one-dimensional problems the number of parameters involved in describing the model is relatively small, ranging from one or two to several tens, and the number of forward solutions needed to compute derivatives is generally less than 100. In two- and three-dimensional problems, where the model is described by hundreds or even thousands of parameters, the number of forward solutions required rises to many thousands or even tens of thousands. The use of generalised inversion with two- or three-dimensional models does not seem to be practical at the present time; perhaps the best approach to a solution is to find a better method for estimating the number of parameters required to describe a two- or three-dimensional earth in an optimum way.

5. THE SELF-POTENTIAL METHOD

Self-potential surveys are a form of electrical survey but, when self-potential surveys are carried out, only the naturally existing voltage gradients in the earth are measured. These voltages have a variety of causes, including the oxidation or reduction of various minerals by reaction with ground water, the generation of Nernst voltages where there are concentration differences between the waters residing in various rock units, and streaming potentials which occur when fresh waters are forced to move through a fine pore structure, stripping ions from the walls of the pores.41

FIG. 10. Results of a self-potential survey of a small area along the East Rift zone on the Island of Hawaii. Contour interval is 100 mV. Areas of low voltage are indicated by interior ticks on the contours. The filled circle indicates the location of well HGP-A, a successful geothermal wildcat well drilled by the University of Hawaii.45

The self-potential method has been used in mineral exploration to find ore deposits by observing voltages generated as ore minerals oxidise. The method has also been used very extensively in borehole surveys to determine the salinity of pore fluids through the voltage generated by the Nernst effect. In geothermal areas, very large self-potential anomalies have been observed; these are apparently caused by a combination of thermoelectric effects and streaming potentials where the temperature has caused an unusual amount of ground water movement.42-47

A self-potential survey is carried out by placing a pair of half-cells in contact with the ground with a separation of tens of metres to several kilometres. One of the half-cells is held fixed at a reference point while the other half-cell is moved about over the survey area to determine the distribution of potential over the region. In areas with strong geothermally related self-potential anomalies, variations of as much as several volts can be observed over distances that amount to a few hundred metres to a few kilometres. An example of a self-potential contour map of a thermal area in Hawaii is shown in Fig. 10.

6. MAGNETIC SURVEYS

Surveys of the spatial changes in the strength of the magnetic field over the surface of the earth have been used as a method for geophysical exploration for many years. Except in rare cases, these changes in magnetic field strength are controlled by the presence of varying amounts of magnetite and related minerals in the rock. The magnetic method has come into use for identifying and locating masses of igneous rocks which have relatively high concentrations of magnetite. Strongly magnetic rocks include basalt and gabbro, while rocks such as granite, granodiorite and rhyolite have only moderately high magnetic susceptibilities.

The magnetic method is useful in mapping near-surface volcanic rocks that are often of interest in geothermal exploration, but the greatest potential of the method lies in its ability to detect the depth at which the Curie temperature is reached. Ferromagnetic materials lose nearly all of their magnetic susceptibility at a critical temperature called the Curie temperature. Various ferromagnetic minerals have differing Curie temperatures, but the Curie temperature of titanomagnetite, the most common magnetic mineral in igneous rocks, is in the range of a few hundred to 570 °C.48 Determining the depth at which rocks cease to be magnetic is therefore equivalent to determining the depth to the Curie-point isotherm.

For magnetic field observations made at or above the surface of the earth, the magnetic effects of the magnetisation at the top of the magnetic part of the crust are characterised by relatively short spatial wavelengths, while the magnetic field arising from the demagnetisation at the Curie point at depth is characterised by longer-wavelength, lower-amplitude anomalies. This difference in spectral character between the magnetic effects from the top and bottom of the magnetised layer in the crust can be used to separate the effects at the two depths and to determine the Curie-point depth.49-52

The magnetic field of the earth is complicated in character because of its dipolar nature. The inducing magnetic field has a dip angle which varies from place to place over the surface of the earth, and this introduces a complexity into the patterns of anomalies which are recorded. This problem has long been recognised in the analysis of magnetic data, and a procedure has been developed to recompute, from the actual observed magnetic map, the map that would have been observed with a vertical inducing field. This first step in processing magnetic data is termed conversion of the magnetic map 'to the pole', that is, to the form it would have for a vertical inducing field.53

Another difficulty met in dealing with observed magnetic fields is that the specific character of an anomaly depends on the size and shape of the magnetic bodies in a complicated way. For simple bodies with well defined tops and bottoms, such as a pluton with magnetisation disappearing at a specific depth, the magnetic anomaly can be computed numerically either in the space domain or in the spatial frequency domain. Because a real body has lateral extent, which affects the computed spectrum, a set of curves must be computed for prismatic bodies reasonably representative of a given anomaly in order to determine the depth extent of magnetisation by comparing the spectra computed from an observed magnetic map with those computed for a prismatic body. In interpreting a magnetic map, the most straightforward procedure is to select a relatively simple magnetic anomaly on the map which can be represented by a body of simple geometric shape.54 A series of spectra is computed for various depths to the Curie point, and the spectra computed from the field data are compared with these to make an estimate of the depth to the Curie point.

7. PASSIVE SEISMIC METHODS

It has been observed by many researchers that geothermal systems occur mainly in areas characterised by a relatively high level of microseismic activity.55-58 However, in detail there does not appear to be a one-to-one relationship between the locations of micro-earthquakes and those of geothermal reservoirs. Locating micro-earthquakes in a prospect area serves primarily as a means of recognising modern tectonic activity which may be controlled by the same factors that control the emplacement of a geothermal system; in particularly favourable circumstances, the recognition of microseismic activity can serve as a guide to drilling into fractured rocks in a geothermal reservoir where production levels are expected to be high.

In order for the location of micro-earthquakes to serve as an effective exploration tool, it is necessary that a relatively large number of events be recorded over a reasonable recording period in a survey area. The procedure normally used at present is to deploy a number of highly sensitive seismograph units, typically 6 to 12 or more, in a prospect area. The distance between any pair of seismographs should be no more than 5-10 km.

The range at which an earthquake can be recognised depends strongly on the energy released by that earthquake. The usual measure of energy release used by seismologists is the Richter magnitude M, a logarithmic scale based on the amplitudes of seismic waves. A magnitude 4 earthquake is normally felt by a few people within several kilometres of the epicentre. Such earthquakes occur relatively rarely, perhaps only a few times a year, even in areas with a high level of seismicity. However, almost all sets of earthquakes for which magnitudes have been measured follow an inverse linear relationship between the logarithm of the number of earthquakes and their magnitude: a decrease in magnitude of one Richter unit is normally accompanied by an increase by a factor of 10 in the number of events that occur within a given time. This relationship is the rationale for using micro-earthquakes rather than larger earthquakes to estimate seismicity in a prospect area. If the detection threshold of the field system can be improved to the point where events of magnitude −1 or −2 can be recognised, the number of events which will be recorded in a given time can be increased by several orders of magnitude. In such a case, over a recording interval of 30 to 60 days one would reasonably expect to record hundreds of events in an area of seismic activity. In aseismic areas, on the other hand, few if any events would be recorded even at such a low detection threshold.
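The magnitude-frequency relationship described above is commonly written log10 N(M) = a − bM. A short sketch, in which the a and b values are illustrative assumptions (with b = 1 giving the factor-of-10 rule quoted in the text):

```python
# Gutenberg-Richter style recurrence: log10 N(M) = a - b * M
# (a = 2.0 and b = 1.0 are illustrative values, not survey results)
def expected_events(magnitude, a=2.0, b=1.0):
    """Expected number of events of at least the given magnitude
    per recording interval."""
    return 10.0 ** (a - b * magnitude)

# lowering the detection threshold by one magnitude unit (b = 1)
# multiplies the expected event count by 10
gain = expected_events(-1.0) / expected_events(0.0)
```

Pushing the threshold from magnitude 0 down to −2 thus multiplies the expected count by 100, which is why micro-earthquake surveys can accumulate useful statistics in a 30- to 60-day deployment.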

When arrival times for a micro-earthquake have been recorded at several stations in an array, the problem remains of determining the location at which that earthquake occurred. In carrying out this calculation, it is usually assumed that the earthquake energy was released from a point source, and that the seismic waves travelled through a uniform earth to be recorded at each seismometer. This assumption is not obviously valid, but programs to determine epicentres when there are lateral variations in wavespeed have not yet come into general use.

In locating an epicentre, the unknowns to be determined are the x, y and z coordinates of the origin, the time at which the event occurred, and the subsurface wave speed distribution. Given a constant wave speed assumption, the location of an epicentre can be found from only four p arrival times. However, the equations are not always stable, because the wave speed may not be the same for all travel paths. Likewise, if there are variations in velocity from point to point in the medium, the assumption of straight-line travel from the origin to each receiver location is not valid and the solution may not work. An alternative procedure, which may work better, involves the use of s arrival times as well as p arrival times. In this case, a solution can be obtained using p and s arrivals recorded at three stations. A Wadati diagram, which is a cross-plot of the s − p arrival time differences against the p arrival times, is constructed (see Fig. 11 for an example). If the trend of s − p difference versus p arrival time is projected to zero s − p difference, the origin time for the earthquake is specified. Once an origin time is known, only three arrival times are needed to obtain a solution for the coordinates of an earthquake.
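The Wadati construction can be sketched with synthetic arrivals; the wave speeds, station distances and origin time below are illustrative assumptions. Since t_s − t_p = (v_p/v_s − 1)(t_p − t_0), a straight-line fit extrapolated to zero s − p difference returns the origin time, and the slope fixes the velocity ratio:

```python
# Synthetic Wadati diagram (all values are illustrative assumptions)
t0 = 5.0                          # true origin time, s
vp, vs = 5.0, 2.89                # p and s wave speeds, km/s
dists = [3.0, 7.0, 12.0, 20.0]    # hypocentral distances to stations, km

tp = [t0 + d / vp for d in dists]          # p arrival times
smp = [d / vs - d / vp for d in dists]     # s - p time differences

# least-squares straight line smp = slope * tp + intercept;
# projecting to smp = 0 recovers the origin time
n = len(tp)
mean_tp = sum(tp) / n
mean_smp = sum(smp) / n
slope = (sum((a - mean_tp) * (b - mean_smp) for a, b in zip(tp, smp))
         / sum((a - mean_tp) ** 2 for a in tp))
intercept = mean_smp - slope * mean_tp
t0_est = -intercept / slope       # projected origin time

vp_over_vs = 1.0 + slope          # the slope fixes the velocity ratio
```

With real, noisy picks the same regression is used; scatter about the line then reflects picking errors and lateral velocity variations.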

FIG. 11. Example of a 'Wadati plot' for a microseismic survey of the East Rift zone on the Island of Hawaii. Arrivals of p and s waves were recorded on a close-spaced array of seven seismograph stations; s − p time is plotted against p arrival time (in seconds), and the computer-generated origin time is 05h 10m, day 006, 1975. The sloping lines are the loci of points with the same Poisson's ratio. (From D. Butler, Microgeophysics Corp.)

In addition to determining epicentres of local earthquakes in a geothermal prospect, information about the geology and tectonics can be obtained from fault plane solutions and first motion studies of these earthquakes. The directions of first motion for p waves can be analysed by plotting the sense of motion on an upper hemisphere stereographic projection centred on the epicentre. In this diagram compressional first arrivals are denoted by solid circles and dilatations by open circles. Generally, the sense of first motion forms a pattern on the stereonet projection which can be bounded by the traces of two planes. One of these planes is the fault plane itself, and the other is an auxiliary fault plane, that is, a plane perpendicular to the actual fault plane whose pole is the pole of motion. Because two planes are needed to form a pattern that encloses the first arrivals, an ambiguity as to which one is the actual fault plane exists unless other information is available to make the selection.

The use of fault plane solutions is valuable in determining whether the earthquake activity in a prospect area is anomalous or typical of the region. An example of such an application is shown in Fig. 12, where the fault plane solution observed for earthquakes within a small prospect area, shown at the top of the map, is compared with earlier, more general fault plane solutions in the same area. The direction of tension indicated by the fault plane solution is the same in the prospect area as in the region, indicating that the local earthquake distribution is controlled by the same crustal phenomena.

FIG. 12. Comparison of first motion diagrams based on teleseismic data from earthquakes in Nevada and one based on local earthquake data from a microseismic survey in the Black Rock Desert of northern Nevada, USA (group D in insert). Directions of tensional stress are uniform, indicating that events detected during the microseismic survey are probably not anomalous.56

Another potentially valuable determination which can be made is that of Poisson's ratio. Poisson's ratio is the ratio of lateral strain to strain in the direction of stress when a cylinder of rock is subjected to uniaxial stress. Poisson's ratio is also definable in terms of the ratio of compressional to shear wave velocities. Poisson's ratio can be important in geothermal exploration inasmuch as it seems to be an indicator of the degree of fracturing in a rock. Both experimental and theoretical analyses have indicated that extensive fracturing of a fluid-filled rock causes Poisson's ratio to be higher than normal: the fracturing causes a minor reduction in the p-wave velocity and a significant reduction in the s-wave velocity.

FIG. 13. Results of determinations of Poisson's ratio for a microseismic survey carried out on the East Rift zone of Kilauea Volcano, Hawaii. Values for Poisson's ratio are plotted along apparent straight-line travel paths from epicentre to seismograph; triangles mark stations and circles epicentres. (From D. Butler, Microgeophysics Corp.)

Poisson's ratio is determined from Wadati plots as described previously; that is, from plots of s − p arrival time differences as a function of p arrival times. Figure 13 shows a presentation of Poisson's ratio determinations on a plan map. Each value is written along the assumed ray path from the epicentre to the receiver location, although there is no certainty that straight-line wave propagation took place. There is a significant anomaly in Poisson's ratio in these data, which corresponds to the location of a successful geothermal well that was subsequently drilled.
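The standard relation linking Poisson's ratio to the velocity ratio used in such determinations is ν = (r² − 2) / (2(r² − 1)) with r = v_p/v_s. A minimal sketch, in which the velocity ratios are illustrative assumptions:

```python
import math

def poissons_ratio(vp_over_vs):
    """Poisson's ratio from the velocity ratio r = vp/vs:
    nu = (r**2 - 2) / (2 * (r**2 - 1))."""
    r2 = vp_over_vs ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# an ideal Poisson solid has vp/vs = sqrt(3), i.e. nu = 0.25;
# fracturing lowers vs, raising vp/vs and hence nu
nu_intact = poissons_ratio(math.sqrt(3.0))
nu_fractured = poissons_ratio(2.0)       # lower vs -> nu = 1/3
```

This is why a strong s-wave slowdown in a fractured, fluid-filled reservoir shows up as a local high in mapped Poisson's ratio.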

In summary, passive seismic methods are most effective when the epicentres of small earthquakes in a prospect area can be located. Additional information available from such micro-earthquake locations includes determinations of Poisson's ratio, which should be indicative of fracturing, and of the directions of first motion and fault plane orientations. In addition to these applications of passive seismology, several other approaches have also been suggested. These include the detection of p-wave delays and the observation of ground noise. If an increase in temperature results in the reduction of p-wave velocity over a large volume of the crust, the measurement of delay times from teleseisms, or distant earthquakes, might be used to locate large hot bodies that serve as the roots of geothermal systems. This technique has been the subject of considerable research in recent years. However, the method as presently used faces a practical difficulty: teleseisms distant enough to be useful in p-wave delay surveys, and large enough to give accurate p-wave arrival times, occur only rarely, perhaps a few times a month. In order to obtain a detailed picture of subsurface structure, many hundreds of p-wave arrivals must be recorded. This can only be done by using a few stations over a long period of time, or a very large number of stations for a shorter period of time. Either approach represents a relatively large investment.

8. ACTIVE SEISMIC METHODS

Both seismic reflection and seismic refraction surveys have been used in geothermal exploration. Seismic refraction surveys have been used only to a limited extent, because of the amount of effort required to obtain refraction profiles giving information at depths of 5 to 10 km, and because of the problems posed by the normally high degree of complexity of geological structure in areas where geothermal systems are sought. On the other hand, standard seismic reflection surveys have often yielded surprisingly useful results, even in areas where it was thought that seismic reflections would be difficult to obtain. The primary requirement for the use of the seismic reflection technique is that the subsurface be laminar in acoustic properties, so that reflectors can be traced horizontally and interruptions in reflectors can be used to identify faults where displacement has taken place.

Seismic reflection surveys can be carried out using either dynamite exploded in shot holes as an energy source, or a non-explosive source such as the Vibroseis. Considering that in most geothermal prospect areas volcanic rocks occur at or near the surface, experience has indicated that drilling shot holes is probably not a practical approach to obtaining seismic reflection data. The Vibroseis technique seems to be more generally applicable in such difficult areas. A typical Vibroseis source system will use from one to five truck-mounted vibrators, usually having masses of the order of 15 tons. The vibrators are operated in unison to increase the intensity of the signal transmitted into the ground, and are occasionally spaced in such a way as to cancel some of the surface-travelling vibrations that are a noise source at the receiver. In the Vibroseis approach, an oscillatory sound wave which varies in frequency over the duration of a single transmission is transmitted through the ground. The frequency is normally swept from a high of 60-80 Hz to a low of 6-8 Hz, with the duration of the sweep being 8-10 s. In order to obtain recognisable signals at an array of geophones some distance away, many transmissions are stacked. In most seismic surveys carried out today, the common depth point stacking method is used. In this, an array of geophones is laid out at some distance from the vibrator location. The vibrator is moved successively away from the geophone spread, so that the point where the acoustic waves are reflected from layers in the subsurface also moves away. Then the geophone spread is moved a short distance along the profile and the process is repeated. Analysis of the ray paths shows that reflections are obtained from the same point in the subsurface with different separations between the vibrator and the geophones. Data quality can be enhanced by synchronously adding reflections which arrive from the same reflection point on a subsurface interface, but in so doing corrections must be made for the differences in travel time associated with the differences in separation between transmitter and receiver.
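The sweep-and-correlate principle can be sketched with a synthetic record; the sweep band, duration and reflector times below are illustrative assumptions. Cross-correlating the recorded trace with the transmitted sweep compresses each reflection to a (Klauder) wavelet centred on its two-way time:

```python
import numpy as np

dt = 0.002                                   # sample interval, s
t = np.arange(0.0, 8.0, dt)                  # 8 s sweep duration
f0, f1 = 8.0, 64.0                           # sweep band, Hz (illustrative)
phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2)
sweep = np.sin(phase)                        # linear up-sweep

# reflectivity: two reflectors at 0.5 s and 1.2 s two-way time
nsamp = t.size + int(2.0 / dt)
refl = np.zeros(nsamp)
refl[int(0.5 / dt)] = 1.0
refl[int(1.2 / dt)] = -0.6

trace = np.convolve(refl, sweep)[:nsamp]     # long, uncorrelated field record

# cross-correlation with the sweep compresses each reflection;
# keep the non-negative lags of the full correlation
corr = np.correlate(trace, sweep, mode='full')[sweep.size - 1:]
strongest = np.argmax(np.abs(corr)) * dt     # time of the strongest reflector
```

In the raw `trace` the two 8-second sweep returns overlap completely; only after correlation do the reflections separate into sharp arrivals, which is the essence of the Vibroseis method.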

It is often thought that good-quality seismic sections cannot be obtained in volcanic terrain. This is not always a severe problem; an example of a seismic section obtained in an area where basalt flows are interbedded with gravels, shale and alluvium is shown in Fig. 14 (the area is in south-central Colorado, near Mineral Hot Springs). The prominent reflectors seen on the seismic time section are the tops and bottoms of the basalt flows. They can be traced quite well, with faulting being readily apparent, as well as thinning of the basalt flows towards the right-hand side of the section. The value of the seismic method is shown by the cross section in Fig. 15, which was compiled from an interpretation of the seismic time section shown in Fig. 14, together with electrical data obtained with the Schlumberger sounding method. An area of relatively low resistivity, which is believed to represent the reservoir feeding the mineral hot springs, is seen to be associated with a down-dropped fault block which is traced by the surface of the basalt flows.

FIG. 14. Seismic reflection profile in the vicinity of Mineral Hot Springs (Central Colorado, USA). Events B1 and B2 are volcanic flows, C is a flow bottom, and D is probably a Palaeozoic carbonate sequence. Near-vertical lines are faults.

FIG. 15. Resistivity cross section interpreted from Schlumberger soundings along the same profile as the seismic section shown in Fig. 14. Reflective horizons from Fig. 14 are shown.

Procedures for acquiring and processing seismic reflection data are well developed. Extensive computer facilities must be available to permit the reduction of field observations to time sections such as are shown here, and to enhance the events which serve to trace the marker horizons in the section. An added merit of modern data processing as used in seismic reflection analysis is that, by varying the separation between vibrator and geophone spread, it is possible to determine the average acoustic wave speed down to the depth of a reflector. When a series of reflectors is apparent on a seismic section, a seismic velocity profile can be extracted from the measurements. Seismic velocities themselves are useful in recognising anomalies caused by temperature in relatively shallow rocks.
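The velocity determination rests on the hyperbolic moveout relation t(x)² = t₀² + x²/v² for a flat reflector, so in the noise-free case travel times at two offsets suffice; all values below are illustrative assumptions:

```python
import math

# hyperbolic moveout: t(x)**2 = t0**2 + (x / v)**2 for a flat reflector
v_true, t0_true = 2.5, 0.8            # km/s and s (assumed values)

def traveltime(x):
    """Synthetic reflection travel time at source-receiver offset x (km)."""
    return math.sqrt(t0_true ** 2 + (x / v_true) ** 2)

x1, x2 = 0.4, 1.6                     # two source-receiver offsets, km
t1, t2 = traveltime(x1), traveltime(x2)

# invert the two moveout equations for velocity and vertical two-way time
v_est = math.sqrt((x2 ** 2 - x1 ** 2) / (t2 ** 2 - t1 ** 2))
t0_est = math.sqrt(t1 ** 2 - (x1 / v_est) ** 2)
```

In practice many offsets and noisy picks are available, so the velocity is found by fitting the whole hyperbola (velocity analysis) rather than from just two times, but the two-offset algebra above is the underlying relation.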

CONCLUSIONS

Experience has shown that many geophysical methods are effective in locating geothermal fields that have strong expression, particularly in terms of visible surface manifestations. Usually such geothermal systems can be located without recourse to expensive exploration efforts, but once development has begun it is often worthwhile to employ various geochemical and geophysical surveys to help site development wells.

Geophysical and geochemical exploration becomes more necessary when relatively well-hidden geothermal systems are to be located. Such systems are hidden because they occur at relatively great depths, because they are less intense, or because they occur in areas of highly complex geological structure. In such cases, no one exploration technique is likely to be universally effective in defining a geothermal reservoir. Some methods lack the maturity of development to be used effectively under difficult conditions, while others become less useful for deep exploration because of lack of sensitivity. Considering the limitations of the various methods, it is probably necessary to use an integrated geophysical approach, employing a wide variety of techniques. The objective is not only to obtain mutually supporting evidence for high heat flow, but also to gain as much insight as possible into subsurface structures and conditions.

REFERENCES

1. GROSE, L. T. and KELLER, G. V., Geothermal energy in the Basin and Range province, The Basin and Range symposium, ed. G. W. Newman and H. D. Goode, Rocky Mountain Assoc. of Geol. and Utah Geol. Assoc., Utah, pp. 361-70, 1979.

2. KAUFMAN, A. A. and KELLER, G. V., The magnetotelluric method, Elsevier, Amsterdam, in press, 1980.

3. ELLIS, A. J., The chemistry of some explored geothermal systems, Geochemistry of hydrothermal ore deposits, ed. H. L. Barnes, Holt, Rinehart and Winston, New York, USA, pp. 466-514, 1967.

4. ELLIS, A. J., Chemical and isotopic techniques in geothermal investigations, Geothermics, 5, pp. 3-17, 1977.

5. ELLIS, A. J. and MAHON, W. A. J., Natural hydrothermal systems and experimental hot-water/rock interactions, Geochim. et Cosmochim. Acta, 28, pp. 1323-57, 1964.

6. ELLIS, A. J. and MAHON, W. A. J., Chemistry and geothermal systems, Academic Press, New York, 1977.

7. FOURNIER, R. O., Chemical geothermometers and mixing models for geothermal systems, Geothermics, 5, pp. 51-61, 1977.

8. FOURNIER, R. O. and TRUESDELL, A. H., An empirical Na-K-Ca geothermometer for natural waters, Geochim. et Cosmochim. Acta, 37, pp. 1255-75, 1973.

9. TRUESDELL, A. H. and FOURNIER, R. O., Calculation of deep temperatures in geothermal systems from the chemistry of boiling spring waters of mixed origin, Proc. 2nd UN Symp. Dev. and Use of Geothermal Res., San Francisco, CA, Vol. 1, pp. 837-44, 1976.

10. TRUESDELL, A. H. and NATHENSON, M., The effects of subsurface boiling and dilution on the isotopic compositions of Yellowstone thermal waters, J. Geophys. Res., 82 (No. 26), pp. 3694-704, 1977.

11. BECK, A. E., Techniques of measuring heat flow on land, Terrestrial heat flow, ed. W. H. K. Lee, AGU Monograph 8, pp. 24-57, 1965.

12. JAEGER, J. C., Application of the theory of heat conduction to geothermal measurements, Terrestrial heat flow, ed. W. H. K. Lee, AGU Monograph 8, pp. 7-23, 1965.

13. LACHENBRUCH, A. H. and SASS, J. H., Heat flow in the United States and the thermal regime of the crust, The Earth's crust, its nature and physical properties, ed. J. G. Heacock, AGU Monograph 20, pp. 626-75, 1977.

Page 155: Developments in Geophysical Exploration Methods

EXPLORA TION FOR GEOTHERMAL ENERGY 147

14. KAPPELMEYER, O. and HAENEL, R., Geothermics with special reference to application, Gebruder Borntraeger, Berlin, 1974.

15. SCLATER, J. G., JAUPART, C. and GOLSON, D., The heat flow through oceanic and continental crust and the heat loss of the earth, Rev. Geophys. and Space Phys., 18 (No.1), pp. 269-321,1980.

16. KELLER, G. V. and RAPOLLA, A., Electrical prospecting methods in volcanic geothermal environments, Physical vulcanology, eds. L. Civetta, P. Gasparini, G. Luong and A. Rapolla, pp. 133-66, Elsevier, Amsterdam, 1974.

17. QUIST, A. S. and MARSHALL, W. L., Electrical conductances of aqueous solutions at high temperatures and pressures: 3, the conductances of potassium bisulfate solutions from 0 to 700 0 at pressures to 4000 bars, 1. Phys. Chem., 70, p. 3714, 1966.

18. QUIST, A. S. and MARSHALL, W. L., Electrical conductances of aqueous sodium chloride solutions from 0 to 800 0 and at pressures to 4000 bars, 1. Phys. Chem., 72, p. 684, 1968.

19. QUIST, A. S. and MARSHALL, W. L., The electrical conductances of some alkali metal halides in aqueous solutions from 0 to 800 0 and at pressures to 4000 bars, 1. Phys. Chem., 73, p. 978, 1969.

20. PRESNALL, D. c., SIMMONS, C. L. and PORATH, H., Changes in electrical conductivity of a synthetic basalt during melting, 1. Geophys. Res., 77 (No. 29), pp. 5665-72, 1972.

21. PARKHOMENKO, E. I., Electrical properties of rocks, Plenum Press, New York, 1967.

22. VOLAROVICH, M. P. and PARKHOMENKO, E. I. Electrical properties of rocks at high temperatures and pressures, Geoelectric and Geothermal Studies, ed. A. Adam, pp. 321-72, Akademia Kiado, Budapest, 1976.

23. HUTTON, V. R. S., The electrical conductivity of the earth and planets, Rep. Prog. Phys., 39, pp. 487-572, 1976.

24. KELLER, G. V., Electrical characteristics of the earth's crust, Electromagnetic probing in geophysics, pp. 13-75, Golem Press, Boulder, Co., USA, 1971.

25. HOOVER, D. B., FRISCHKNECHT, F. C. and TIPPENS, C. L., Audiomagnetotelluric sounding as a reconnaissance exploration technique in Long Valley, California, 1. Geophys. Res., 81 (No.5), pp. 801-9, 1976.

26. HOOVER, D. B., LONG, C. L. and SENTERFIT, R. M., Some results from audiomagnetotelluric investigations in geothermal areas, Geophysics, 43 (No. 7), pp. 1501-14, 1978.

27. GOUBAU, W. M., GAMBLE, T. D. and CLARKE, J., Magnetotellurics using lockin signal detection, Geophys. Res. Lett., 5 (No.6), pp. 543-6, 1978.

28. GOUBAU, W. M., GAMBLE, T. D. and CLARKE, J., Magnetotelluric data analysis: removal of bias, Geophysics, 43 (No.6), pp. 1157-66, 1978.

29. KELLER, G. V. and FRISCHKNECHT, F. c., Electrical methods in geophysical prospecting, Pergamon Press, Oxford, 1966.

30. KOEFOED, 0., Geosounding principles: 1, Resistivity sounding measurements, Elsevier, Amsterdam, 1979.

31. KELLER, G. V. and RAPOLLA, A., A comparison of two electrical probing techniques, IEEE Trans. Geosci. Electron., GE-14 (No.4), pp. 250-6, 1976.

32. INMAN, J. R., Resistivity inversion with ridge regression, Geophysics, 40, pp. 789-817, 1973.

Page 156: Developments in Geophysical Exploration Methods

148 EXPLORA TION FOR GEOTHERMAL ENERGY

33. LANCZOS, G., Linear differential operators, Van Nostrand, Princeton, 1961. 34. JuPP, D. L. B. and VOZOFF, K. Stable methods for the inversion of geophysical

data, Geophys. J. R. Astron. Soc., 42, pp. 957-76, 1975. 35. TWOMEY, S., Introduction to the mathematics of inversion in remote sensing and

indirect measurements, Elsevier, Amsterdam, 1977. 36. DANIELS, J. J., Interpretation of electromagnetic soundings using a layered earth

model, PhD Thesis T-1627, Colorado School of Mines, Golden, USA, 1974. 37. GLEN, W. E., RYu, J., WARD, S. H., PEOPLES, W. J. and PHILLIPS, R., The

inversion of vertical magnetic dipole sounding data, Geophysics, 38, pp. 1109-29, 1973.

38. INMAN, J. R., RYu, J. and WARD, S. H., Resistivity inversion, Geophysics, 38, pp. 1900-2108, 1973.

39. JOHANSEN, H. K., A man/computer interpretation system for resistivity soundings over a horizontally stratified earth, Geophys. Prospecting, 25, pp. 667-92, 1977.

40. RODRIGUEZ, J-c., Inversion of TDEM (near-zone) sounding curves with catalog interpolation, Quart. J., Colo. School of Mines, 73 (No.4), pp. 57-70, 1978.

41. NOURBAHECHT, B., Irreversible thermodynamic effects in inhomogeneous media and their application in certain geological problems, PhD. thesis, Massachusetts Institute of Technology, Cambridge, Mass, 1963.

42. RAPOLLA, A., Natural electric field survey in the southern Italy geothermal areas, Geothermics, 3 (No.3), pp. 118-21, 1974.

43. ZABLOCKI, C. J., Mapping thermal anomalies on an active volcano by the self potential method, Kilauea, Hawaii, Proc. 2nd UN Symp. Dev. and Use of Geothermal Res., pp. 1299-312, 1976.

44. ZABLOCKI, C. J., Self-potential studies in East Puna, Hawaii, Geoelectric studies in the East Rift, Kilauea Volcano, Hawaii, Island, Univ. of Hawaii Report HlG 77-15, pp. 175-93, 1977.

45. ZABLOCKI, C. J., Streaming potentials resulting from the descent of meteoric water-a possible source mechanism for Kilauean self-potential anomalies, Geothermal Resources Council Transactions, 2, pp. 747-8, 1978.

46. ANDERSON and JOHNSON, Application of the self potential method to geothermal exploration in Long Valley, California, J. Geophys. Res., 1 (No.8), pp. 1527-32, 1976.

47. CORWIN, R. F. and Hoover, D. B., The self potential method in geothermal exploration, Geophysics, 44 (No.2), pp. 226-45, 1979.

48. NAGATA T., Rock magnetism, Maruzen, Tokyo, 1961. 49. BHATTACHARYYA, B. K., Two-dimensional harmonic analysis as a tool for

magnetic interpretation, Geophysics, 30 (No.5), pp. 829-57, 1965. 50. BHATTACHARYYA, G. c., Some general properties of potential fields in space

and frequency domain: a review, Geoexploration, 5, pp. 127-43, 1967. 51. BHATTACHARYYA, B. K. and LEU, L. K., Analysis of magnetic anomalies over

Yellowstone National Park: Mapping of Curie Point isothermal surface for geothermal reconnaissance, J. Geophys. Res., 80 (No. 32), pp. 4461-5, 1975.

52. BHATTACHARYYA, B. K. and Chan, K. c., Computation of gravity and magnetic anomalies due to inhomogeneous distribution of magnetization and density in a localized region, Geophysics, 42 (No.3), pp. 602-9, 1977.

Page 157: Developments in Geophysical Exploration Methods

G. V. KELLER 149

53. BARANOV, V., and NAUDY, H., Numerical calculation of the formula of reduction to the magnetic pole, Geophysics, 29 (No.1), pp. 67~79, 1964.

54. BYERLY, P. E. and STOLT, R. H., An attempt to define the Curie Point isotherm in northern and. central Arizona, 1977.

55. WARD, P. L., Microearthquakes: prospecting tool and possible hazard in the development of geothermal resources, Geothermics, 1 (No. I), pp. 3~ 12, 1972.

56. KUMAMOTO, L., Microearthquake survey in the Gerlach-Fly Ranch area of north western Nevada, Quart. J. Colo. School of Mines, 73 (No.3), pp. 45~64, 1978.

57. SANFORD,A.R., MOTT,R. P.,JR.,RINEHART, E. J.,CARAVELLA, F.J., WARD,R. M. and WALLACE, T. c., Geophysical evidence for a magma body in the crust in the vicinity of Socorro, New Mexico, The Earth's crust, its nature and physical properties, ed. J. G. Heacock, AGU Monogram 20, pp. 385~403, 1977.

58. STEEPLES, D. W. and PITT, A. M., Microearthquakes in and near Long Valley, California, J. Geophys. Res., 81 (No.5), pp. 841~7, 1976.

Page 158: Developments in Geophysical Exploration Methods

Chapter 6

MIGRATION

P. HOOD

Geophysics Research Branch, British Petroleum Co. Ltd, London EC2Y 9BU, UK

SUMMARY

Seismic migration is one of the most rapidly changing fields in data processing. During the last twelve years three major methods and a host of minor methods have appeared on the scene, each with its own range of applicability. In this article we examine in detail the three mainstream migration methods, i.e. the diffraction stack, F-K migration, and finite difference migration; we scrutinise their strengths, weaknesses and relative merits in terms of practical migration problems. We also look at some of the new techniques which have been discussed in the literature and which are, potentially, the migration methods of the future-these include hybrid finite difference/Fourier methods, direct velocity inversion techniques, and stack enhancement by partial migration.

NOTATION

c        Velocity of seismic wave propagation
c_H      Horizontal velocity
c̄        Frame velocity
CMP      Common midpoint, i.e. the point half way between the shot and geophone
D        Down-going wave
f        Temporal frequency
h        Offset coordinate, i.e. 2h = x_g - x_s
k_h      Wavenumber in the h direction
k_x      Wavenumber in the x direction
k_z      Wavenumber in the z direction
n        Unit normal vector to a surface
P        Pressure amplitude (assuming that this can be obtained from the particle velocity (geophone) or is recorded (hydrophone))
P^n_j,k  Discrete representation of the pressure at coordinates (jΔx, kΔz, nΔt)
P̄        Fourier transform over time or pseudodepth, i.e. ∫P exp(-iωt) dt or ∫P exp(-ik_d d) dd
P̂        Fourier transform spatially over x or z, i.e. ∫P exp(ik_x x) dx or ∫P exp(ik_z z) dz
P'       Time-retarded version of P, i.e. P' = P exp(-iωz/c)
p        Snell's law parameter = sin θ/c
Q        Input pressures convolved with a shaping operator
R        Vector with magnitude [(z - z_0)² + (x - x_0)² + (y - y_0)²]^(1/2)
R(x, z)  Earth reflectivity map
T        Time shift in the shifting equation
T_0      Apex time on a hyperboloid surface in 3D migration
t        Time coordinate
t_max    Maximum recorded time on a section
t_0      Time of travel for a wave from a point scatterer to the surface geophone
t'       Time coordinate of downward continued data
U        Up-going wave
x        Horizontal coordinate along the dip line
y        Horizontal coordinate along the strike line
z        Depth coordinate measured downwards from the surface of the earth
(x_0, z_0)  Coordinates of a buried scatterer
(x_g, z_g)  Coordinates of a geophone
(x_s, z_s)  Coordinates of a shotpoint
α_r      Angle of dip of a plane layer on the migrated depth or earth reflectivity section
α_t      Angle of dip of a plane layer on a time section
ε        Perturbation parameter used in Cohen and Bleistein's theory
θ        Parameter used in a skewed finite difference formula: 0 gives a forward difference in z, 0.5 the Crank-Nicholson central difference, 1.0 a backward difference
ρ        Density
τ        Two-way time coordinate measuring vertical travel time
ω        Angular velocity in rad s⁻¹

1. INTRODUCTION


Seismic migration is one of the last of the processes to be applied in the data processing sequence. Its purpose, briefly stated, is to transform a seismic wave field recorded at the earth's surface (time section) to an earth reflectivity map (depth section). Up to the late 1960s this was achieved by manual methods on a few picked horizons using ray tracing and timing calculations. Then around 1970 the first of the 'diffraction stack' migration methods became commercially available. Again, this was based on ray tracing concepts and the scalar diffraction theory of Huygens and Fresnel,1 but in this case the method could be applied to complete common midpoint (CMP) sections. In the 1970s several major developments took place. One was the use of wave rather than ray theory. The key figure in this movement was Professor Jon Claerbout at Stanford University, who currently runs a major project called the 'Stanford Exploration Project'. This is financed by the oil industry and it aims to look into new exploration techniques. Another development was the understanding that the diffraction stack method could be improved by referring to Kirchhoff integral theory rather than the ray theory which approximates to it. This led in turn to a better application of processing parameters in the method.

Over the past decade three main processing techniques have emerged: these are known as diffraction stack (or Kirchhoff) migration, finite difference (or wave equation) migration and F-K (or wavenumber) migration. The use of these epithets is however a little confusing, since people tend to refer to all three of the methods by the single term 'wave equation migration'. This is because all the methods are based on solutions to the scalar wave equation. There have been some further developments recently which also look promising: one of these is the direct inversion of the surface wavefield to obtain the velocities and structure. This represents a further move away from ray theory to a complete wave theoretical


description of the migration process. Until recently migration of data followed either from a velocity field which had been derived from stacking velocities, or occasionally from velocities based on ray tracing studies of a two-dimensional earth model, relating the predicted to the observed wavefields. These latest inversion methods offer the interesting possibility of circumventing the iterative loop of migration from model-derived velocities. So far we have mentioned the 'wave equation' without being at all specific. In practice the following equation is the most generally used:

    ∂²P/∂x² + ∂²P/∂y² + ∂²P/∂z² = (1/c²) ∂²P/∂t²        (1.1)

where P(x,y, Z, t) = pressure amplitude at coordinates (x,y, z) and time t; c(x, y, z) = propagation speed of the acoustic wave. This equation describes the spatial and temporal evolution of the pressure field (but not the displacement or the particle velocities). Equation (1.1) is known as the scalar wave equation. It is assumed that, although the velocity can vary, the density of the medium is a constant which does not enter the calculation. This assumption is a reasonable one; modelling of well borehole data shows that it is the sonic velocities which largely determine the shape of the synthetic seismogram; the densities normally reinforce rather than alter the picture. There is of course a further problem in that the densities are not generally available. In the case where the density is known, the wave equation is modified by the presence of an extra density term as follows:

    ρ ∇·((1/ρ)∇P) = (1/c²) ∂²P/∂t²        (1.2)

This equation is rarely used in geophysical exploration; however, one example of its potential use is in the direct inversion of one-dimensional velocity and density profiles.2

Further complexity may be added into the migration picture if the earth's elastic constants are known. Such an ideal state of affairs has not been given serious attention until recently.3 With increasing emphasis being placed nowadays on recording shear wave data, then perhaps some progress along this path may take place in the future. As it is, eqn. (1.1), which is valid for fluid media, can be used to model the usual diffraction and refraction (Snell's law bending) effects of either shear or compressional waves separately, but will not of course predict mode conversions between the two types of wave, nor the correct variation of reflection coefficient with angle of incidence, for which purpose it is necessary to solve the elastic wave equation. Currently, the most important limitations remaining on correct migration of seismic data via eqn. (1.1) lie in the imprecise knowledge of the velocity c, and the band limitation and noise corruption of the recorded


data. The errors in velocity of course lead to errors in the migration of data; where dips are steep, such errors can be significant both in terms of migration and CMP mis-stacking. The band limitation of the data affects the ultimate quality and resolution of the migrated output, or, in a complete inversion scheme, manifests itself in a certain non-uniqueness of the inverted solution. Noise corruption, if large, tends to sabotage efforts to obtain the velocities accurately, and after migration will obscure signal underneath 'noise smiles'. To gain as complete a picture as possible from seismic data, modern methods offer no panacea for poor field acquisition; indeed, best results will always be obtained from wide-bandwidth, noise-free data.

In this article we start off in Section 2 with a review of some of the fundamental concepts of migration; in Sections 3-5 the mainstream migration techniques will then be covered. In Section 6 some of the most recent developments will be discussed. Then in Section 7 we give an overview of the various migration methods available and make specific recommendations in the choice of processing methods. No great effort has been made towards originality of material; however, some of the sources are obscure and this article will hopefully give these the attention they deserve. Some of the material is new, and particular thanks must go to certain contracting companies for permission to use their as yet unpublished diagrams. It would not be appropriate to single out individual companies here, and so in the final section full acknowledgement is given to each company.

2. FUNDAMENTAL CONCEPTS

There are a number of concepts and assumptions made in migration theory which are fundamental to a clear understanding of present day practice; these are discussed in this section. It is assumed that the reader already has a fair knowledge of migration, and so little time will be spent in discussing what migration is. It was mentioned earlier that migration was a mapping from surface recorded acoustic data to an earth reflectivity section. This process is sometimes referred to as 'depth migration'. However, the reason for the title is not the nature of the final section, but the fact that the migration process has tracked the wavefield in depth taking full account of reflection curvature and of refraction and diffraction effects. More often than not, the resulting reflectivity section is convolved with a wavelet, since imprecise knowledge of the original recorded wavelet prevents perfect


deconvolution. Another common presentation of migrated data is in terms of a time section. In this case the earth reflectivity section can be converted to a time section using a suitable velocity field for the conversion, or, in the case of a 'time migration', then time coordinates are the most natural output coordinates of the migration. In 'time migration' diffraction effects are considered, but not those refraction effects which are due to lateral changes in velocity.4,64

2.1. The Earth Model

Until comparatively recently, seismic data was shot and recorded along single lines. These lines tended to follow, as far as possible, either the dip or the strike of the structure. There is less reluctance nowadays to shoot areal surveys where the underlying structure is truly three-dimensional. Nevertheless, for the majority of cases it is sufficient to consider the earth structure as locally two-dimensional. Mathematically this simplification is not required; the main trouble always arises in moving from one to two dimensions. But, having derived the algebra in two dimensions, extension to three or more dimensions is trivial. Furthermore the numerical techniques developed in two dimensions can just as easily be extended to three. However, the number of calculations involved in a correct 3D migration means that there is considerable advantage to be obtained in splitting the migration down into a series of 2D migrations. For this reason most of the discussion in this article will centre around a 2D earth model in which the earth does not vary in the direction normal to the survey line (y axis).

Another point which is well worth bearing in mind is that, if a 2D survey has been shot at an angle to the dip line, or if the structure is plunging along the y axis, it is still not necessary to use a full 3D migration procedure. As French5 has pointed out, 2D migration in such cases is quite sufficient, provided that the velocities have been adjusted by a simple scaling factor.

There are some other assumptions which are frequently made regarding the earth model. The most useful of these goes under the name of the Born approximation. Essentially this approximation means that propagation is in a single direction from source to receiver and back again, i.e. there are no multiple reflections. Multiple reflections can theoretically be removed by solution of the wave equation during migration; however, there are no commercially available programs to achieve this. Another approximation made relates to the source. This is often assumed to be two-dimensional (line source) rather than three-dimensional (point source) in nature. Where this is so, the times on the migrated section will be correct although the


amplitudes will be in error. Recorded data can however be 'corrected' to simulate cylindrical rather than spherical divergence.

2.2. Ball Bearing Model

The ball bearing model6 is a useful model in that it offers a simple pictorial analogue to the diffraction stack process of migration. In this model a reflecting horizon is assumed to consist of a number of ball bearings spaced extremely close together (Fig. 1). The main response on the time section due to a single ball bearing embedded in a homogeneous medium lies along a 'diffraction' hyperboloid. The apex of the hyperboloid lies at the two-way time and position of the migrated 'time section'. If several ball bearings are placed side by side in a plane the reflected waveforms interfere constructively at short distances and destructively at greater distances. The result of this is, in the limit, that a plane can be considered as the sum of closely spaced point scatterers. This concept indicates that migration may be achieved by summing amplitudes on the time sections over individual hyperboloids and placing the resultant sum at the respective apexes. In essence this is the basis of the diffraction stack method of migration.
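The hyperbola-summation idea can be sketched in a few lines of code. The following is an illustrative sketch only, assuming a constant-velocity medium and a zero-offset section sampled on a regular (trace, time) grid; the function name and parameters are hypothetical and not from the text:

```python
import numpy as np

def diffraction_stack(section, dx, dt, c):
    """Constant-velocity, zero-offset diffraction stack (toy version).

    For every output point (trace i, two-way apex time t0), sum the input
    amplitudes along the diffraction hyperbola
        t(x) = sqrt(t0**2 + (2*(x - x_apex)/c)**2)
    and place the sum at the apex (i, t0).

    section : 2D array, shape (ntraces, nsamples)
    dx, dt  : trace spacing (m) and sample interval (s)
    c       : medium velocity (m/s)
    """
    ntr, ns = section.shape
    migrated = np.zeros_like(section, dtype=float)
    for i in range(ntr):                 # output trace (apex position)
        for k in range(ns):              # output sample (apex time t0)
            t0 = k * dt
            total = 0.0
            for j in range(ntr):         # input traces along the hyperbola
                offset = (j - i) * dx
                t = np.hypot(t0, 2.0 * offset / c)   # two-way travel time
                m = int(round(t / dt))
                if m < ns:
                    total += section[j, m]
            migrated[i, k] = total
    return migrated
```

Fed a section containing a single diffraction hyperbola from a buried point scatterer, the sum is largest at the apex, which is where the scatterer images.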

2.3. Up-going and Down-going Waves

The idea of separation of the seismic wavefield into up- and down-going components is a useful one, as we shall see later. Down-going waves refer to those waves which either emanate directly from the shot or are generated by multiple reflections, and which consequently propagate in the downwards direction into the earth (Fig. 2). Up-going waves refer to those waves which have been generated from the upward reflection of the down-going waves by changes in acoustic impedance, and which travel towards the surface of the earth where they are recorded. In both cases the significant contribution to the energy spectrum comes from waves with small angular deviation from the vertical (z axis) direction. This is because most horizons have small dip, and significant energy normally comes from specular rather than diffuse reflection; furthermore, geophone or shot pattern response will effectively eliminate waves with a large horizontal wavenumber from the time section. These, it must be emphasised, are generalities; steep fault planes or salt domes with dips up to or exceeding 90° can be encountered. Even so, the concept of up-going and down-going waves is useful, although in this case the categorisation into up-going or down-going waves depends on the overall trend of wave movement. Some confusion appears to reign when considering reciprocity and it is perhaps appropriate to deal with this point here. With some qualifications,65 this important principle says that if a


[Figure: depth sections and corresponding time sections for (a) widely spaced ball bearings, (b) closely spaced ball bearings, (c) a continuum of ball bearings]

FIG. 1. Reflection as a sum of diffractions-ball bearing model.6

[Figure: a source and receiver at the earth's surface, with down-going waves travelling towards a reflector and up-going waves returning from it]

FIG. 2. Separation of wavefield into up- and down-going wavefields.

wave is generated at A and recorded at B (Fig. 2), the same recording would be received had the wave been initiated at B and recorded at A. Note that, in a practical application of this principle, source and receiver directivity effects and differences in ground coupling are neglected. Reciprocity is used in some methods of migration effectively to extend the notional area of surface coverage in the sense that it is possible to determine the seismogram, which would be recorded at the shot points, from data actually recorded at the geophones. In fact this trick increases neither the fold of cover nor the line extent. The confusion lies in the treatment of this rearranged shot point recorded data as down-going wave energy. It is of course notionally recorded up-going wave energy and must be treated as such.

2.4. Downward Continuation and Datumming

For a wavefield recorded on the earth's surface, application of Huygens' principle or solution of the wave equation will permit the reconstruction of the seismogram at a different datum level in the earth. This process of moving data from one level to the next is known as downward continuation. Mathematically, it is the derivation of the wavefield P(x, z + Δz, t') from the known wavefield P(x, z, t). There is a slightly more general process described by Berryhill,7 known as datumming. In this case the surfaces on which the data are recorded may be irregular, i.e. z = z(x), and similarly the datum to which the data is to be downward continued may also be irregular.
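One common way to implement a single downward-continuation step is in the wavenumber-frequency domain, where moving the datum down by Δz amounts to a phase shift of the propagating components. The sketch below is a constant-velocity, one-way phase-shift step for illustration only; it is not the datumming scheme of Berryhill, and the function name and parameters are assumptions of mine:

```python
import numpy as np

def downward_continue(P, dx, dt, c, dz):
    """Downward-continue a recorded section by one depth step dz.

    Transform to (kx, omega), apply the one-way phase shift
        exp(1j * kz * dz),  kz = sqrt((omega/c)**2 - kx**2),
    and transform back.  Evanescent components (|kx| > |omega|/c) are
    exponentially damped rather than allowed to grow.

    P : 2D array (nx, nt), pressure recorded at the current datum.
    """
    nx, nt = P.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)
    Pk = np.fft.fft2(P)
    KX, W = np.meshgrid(kx, w, indexing="ij")
    arg = (W / c) ** 2 - KX ** 2
    kz = np.sqrt(np.abs(arg))
    prop = arg >= 0                              # propagating region
    shift = np.where(prop,
                     np.exp(1j * kz * dz * np.sign(W)),  # phase shift
                     np.exp(-kz * dz))                   # evanescent decay
    return np.real(np.fft.ifft2(Pk * shift))
```

A propagating plane wave passes through the step with its amplitude unchanged (only its phase is altered), while purely evanescent energy is attenuated.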

2.5. Imaging Conditions

If we pursue the idea of up- and down-going waves further, this leads to the

[Figure: panel (a) shows an earth model with a down-going plane wave D above a horizontal reflector; panel (b) shows the corresponding depth-time trajectories and the recorded section]

FIG. 3. Derivation of imaging conditions: (a) earth model; (b) depth-time trajectories.

idea of depth-time trajectories and eventually to Claerbout's imaging principle.8 For this purpose we consider a vertically travelling down-going plane wave D and a single horizontal reflector (Fig. 3(a)). It will be assumed that the earth is laterally homogeneous. After striking the reflector the down-going plane wave creates an up-going plane wave U and continues its motion downwards. The resulting depth-time trajectory is shown in Fig. 3(b). The x coordinate is not displayed since velocities are laterally invariant. If t_max represents the last time recorded on the section, then H' represents the deepest level about which we can deduce information, and clearly that part of (z, t) space lying to the right of the curve G'H' will be of no interest. Similarly the line OHH' defines the leftmost extremum of interest, since no acoustic energy can travel faster than the medium velocity. Migration is thus concerned exclusively with the 'shaded' region bounded by OHH'G'G.

The process of downward continuation may be used to extrapolate the


up-going wavefield recorded at z = 0 back into the earth (and earlier in time) until it meets the down-going wave trajectory OHH'. Similarly, if the shot waveform is known, downward continuation of the down-going wavefield on the path OHH' may be achieved. At H the down-going wavefield is time- and space-coincident with the up-going wavefield. The wavefield is said to image the reflector at this point and the division U(x, z, t)/D(x, z, t) will yield an estimate of the reflection coefficient R(x, z). If the shot waveform is unknown, and an impulsive down-going wavefield of unit amplitude is downward continued, this ratio will yield instead the reflection coefficient convolved with the shot waveform.
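In practice the division of U by D needs some stabilisation, since D can be small or zero at some frequencies. The fragment below is one common stabilised variant, my own formulation rather than anything given in the text, applied at a single image point with the wavefields represented by their frequency spectra:

```python
import numpy as np

def image_point(U, D, eps=1e-3):
    """Stabilised imaging condition at one subsurface point.

    U, D : complex frequency spectra of the downward-continued up- and
    down-going wavefields at the point.  Summing U * conj(D) over
    frequency evaluates the cross-correlation at t = 0 (the imaging
    time); dividing by |D|**2 + eps approximates U/D while keeping the
    division well behaved where the down-going wave is weak.
    """
    num = np.sum(U * np.conj(D))
    den = np.sum(np.abs(D) ** 2) + eps
    return float(np.real(num / den))
```

If the reflector at the point has reflection coefficient R, then U = R * D there and the function returns approximately R.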

We are now in a position to set forth Claerbout's imaging principle. This states that 'reflectors exist at points in the ground where the first arrival of the down-going wave is time coincident with an up-going wave'. This imaging principle really applies to the situation envisaged above where both shot and received wavefields are downward continued. Both wavefields are required if multiply reflected waves are to be removed during migration; this is almost exclusively relevant in terms of plane wave theory. In more usual applications, the imaging condition is not used as it stands, since only up-going waves are downward continued. The imaging condition used depends entirely on the model.

One of the most important models used in migration today is the exploding reflector model.

2.6. The Exploding Reflector Model

The idea of the exploding reflector model was first introduced by Loewenthal et al.,9 and for this reason it is sometimes called the Loewenthal model. Instead of the sources being on the surface, each reflector is considered to be composed of a series of point sources (Fig. 4(a)); the magnitude of each source equals the value of the reflection coefficient at that point. All the sources are set off at zero time, and eventually their emanations are received at the surface. There are two variations on this theme: one is the original Loewenthal model in which a zero offset section is considered. In ray theoretical terms this corresponds to a single ray being traced up to the surface from each source point, with a starting angle normal to the reflecting surface. If the travel times are doubled (or the velocities are halved), then the result is similar to a zero offset section. Migration consists of the inversion of this forward model. Downward continuation proceeds and the reflectors are imaged at time t = 0 in the exploding reflector time coordinates, since this is the time of initiation of the shots.
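The zero-offset variant is easy to demonstrate numerically. The sketch below is illustrative only: the grid sizes, names, the simple explicit second-order scheme, and the periodic boundaries (standing in for proper absorbing boundaries) are all my assumptions. It plants sources of strength equal to the reflection coefficient along a flat reflector, fires them at t = 0, and propagates them to the surface with the scalar wave equation run at half the medium velocity, as in the exploding reflector model:

```python
import numpy as np

def exploding_reflector_section(refl_depth_ix, refl_coef, nx, nz, nt,
                                dx, dt, c):
    """Zero-offset section from the exploding reflector model (toy).

    A source of amplitude refl_coef is planted at every grid point of a
    flat reflector at depth index refl_depth_ix, fired at t = 0 with zero
    initial time derivative, and propagated with an explicit 2nd-order
    finite-difference scheme for the scalar wave equation run at half
    the medium velocity.  Returns the pressure recorded at the surface
    (z = 0), shape (nx, nt).  Boundaries are periodic (np.roll).
    """
    ch = 0.5 * c                        # halved velocity
    assert ch * dt / dx < 0.7           # CFL stability condition (2D)
    p = np.zeros((nx, nz))
    p[:, refl_depth_ix] = refl_coef     # exploding sources at t = 0
    p_old = p.copy()                    # zero initial time derivative
    section = np.zeros((nx, nt))
    r2 = (ch * dt / dx) ** 2
    for n in range(nt):
        section[:, n] = p[:, 0]         # record at the surface
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p)
        p_new = 2 * p - p_old + r2 * lap
        p_old, p = p, p_new
    return section
```

With the reflector at depth z_r, the up-going wavefront reaches the surface near t = z_r/(c/2), i.e. at the two-way time of the zero-offset section, which is the point of the halved-velocity trick.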


[Figure: point sources distributed along a reflector, radiating (a) single normal-incidence ray paths and (b) twin families of ray paths up to the surface]

FIG. 4. Exploding reflector model in which each reflector is considered to be made up of a number of point sources: (a) zero offset case-a single ray path is considered, starting at right angles to the reflector; (b) non-zero offset-twin families of ray paths are considered, the first set travel up to the actual geophone locations and the other set travel up to the shot point locations.

The other variation of the exploding reflector model arises in the application to migration before stack; in ray theory this model would correspond to tracing twin families of rays from each source point on the reflector up to the surface (Fig. 4(b)). One set of rays travels up to the geophones, and the other travels up to the 'notional' geophones located at the surface position of the actual shots (using reciprocity); account can therefore be taken of variations in earth velocity on both up- and down-going ray paths, and the multibranching focusing effects (Fig. 5) which


FIG. 5. In the zero offset case ray path B is the only path considered, whereas in the non-zero offset case ray paths of types A and B are considered.

are ignored in the original Loewenthal model. Imaging is therefore conceptually more difficult to grasp. The total travel time t for two typical rays may be expressed as

    t = t_s + t_g        (2.1)

where:

t_s = travel time from the exploding reflector to the notional geophone (actual shot) position

t_g = travel time from the exploding reflector to the actual geophone position.

If the wavefields, received on both shot and geophone gathers, are downward continued using the wave equation, the image again occurs at a time t = 0 since this was the time of the reflector explosion. From eqn. (2.1), it may be noted that this implies that t_s = t_g = 0, since t_s and t_g are both greater than or equal to zero. The mechanics of this type of migration will be discussed later on.

It must be mentioned that there are yet other possibilities in the before-stack exploding reflector model: for instance line or plane wave rather than point sources might be used. The general recipe for imaging in every case is obtained from travel time considerations and spatial coincidence of the up-going waves with the exploding reflector. Finally, it is clear that since downward continuation and imaging in the exploding reflector model concern themselves purely with up-going waves, no account can be taken of multiple reflections, since to do so would require treatment, in addition, of down-going waves.


164 P. HOOD

3. FINITE DIFFERENCE MIGRATION

3.1. Introduction Finite difference or 'wave-equation' migration was introduced in the early 1970s by Claerbout in a remarkable series of papers.10,11,8,12 The technique uses the downward continuation process in a numerical solution of the scalar wave eqn. (1.1). This process is based on the finite difference method. These early concepts have been extensively developed, and in this section we will be looking at the latest techniques for the migration of both stacked and unstacked data. There is a need to consider migration before stack since velocity inhomogeneities, if overlooked, tend to cause mis-stacking just as much as incorrect migration.

Finite difference methods have several strengths and some weaknesses. In early applications, so many approximations were made to the wave equation that the final equation behaved very badly when applied in a heterogeneous medium. Latterly, the approximations which caused the trouble have been identified,13 and the tendency now is to make very few approximations to the wave equation. With these better approximations, it is possible to migrate data correctly in regions with quite severe lateral velocity variations; indeed in this situation the technique currently performs better than any other method in the same price range, provided that the dips are not excessive (e.g. > 50°). At very steep dip, finite difference methods do not perform well, and there are problems with both attenuation and dispersion of steeply dipping waves. On the migrated section these appear as a weakened main event and several nearby 'ghosts'.

3.2. One-way Wave Equations It is quite standard practice to develop specialised wave equations which propagate energy within a small angle about a given axis, usually upwards or downwards. The reasons for doing so are largely mathematical and arise from the less stringent nature of the boundary conditions demanded by a unidirectional wave equation (Dirichlet BC) relative to those for the full wave equation (Cauchy BC). The price to be paid for developing unidirectional equations is that the transmitted refracted wave at an acoustic interface, although positionally correct, will not be diminished in energy with respect to the incident wave and so will incur an error in amplitude (unless unidirectional waves travelling in the opposite direction are correctly coupled). The full wave equation has been used in modelling by Alford et al.14 and in migration by Deregowski;15 in the latter case


special initial conditions were used and only those parts of the total solution which were of interest were considered.

Historically, one-way wave equations have been derived in a number of ways. Splitting matrices are one method used in underwater acoustics,16 whilst in seismic work three methods have been used. In the first method, due to Claerbout and Johnson,12 the unidirectional equation is derived from the wave equation by transforming to a coordinate frame moving at the speed of sound in a given direction. Certain terms can be dropped from this equation which are 'small' for waves travelling in this direction but 'large' for waves in the opposite direction. The effect of dropping the 'large' terms is to annihilate waves travelling in the opposite direction, while effectively leaving the waves propagating in the given direction undamaged. Another approach, also due to Claerbout,11 starts by taking a Fourier transform over the time coordinate of eqn. (1.1). This is then

$$\frac{\partial^2 \bar P}{\partial x^2} + \frac{\partial^2 \bar P}{\partial z^2} = -\frac{\omega^2}{c^2}\,\bar P \qquad (3.1)$$

where

$$\bar P = \int_{-\infty}^{\infty} P \exp(-i\omega t)\,dt \qquad (3.2)$$

Claerbout derives a one-way wave equation governing propagation in the direction of the z axis by taking the square root of eqn. (3.1):

$$\frac{\partial \bar P}{\partial z} = \pm\, i\,\frac{\omega}{c}\left(1 + \frac{c^2}{\omega^2}\frac{\partial^2}{\partial x^2}\right)^{1/2} \bar P \qquad (3.3)$$

This equation is an exact one-way wave equation, and in the jargon is termed a 90° equation, since it propagates off-axis energy exactly in all directions right out to the full 90° limit. Unfortunately equation (3.3) is only valid if c is a constant; for a space-variant velocity such a decomposition is not correct. Certain approximations have to be made to the square root term in eqn. (3.3) in order to obtain a solution. In the time domain these correspond to similar approximations which are made to the ∂²P/∂z² or Pzz term. One obvious way in which eqn. (3.3) might be tackled is to approximate the square root by means of the binomial expansion. A less obvious way has been proposed in which the square root is approximated by means of the continued fraction expansion:

$$S = (1 + X^2)^{1/2} = 1 + \cfrac{X^2}{2 + \cfrac{X^2}{2 + \cfrac{X^2}{2 + \cdots}}} \qquad (3.4)$$


This expression is developed, for example, by Lapidus17 and Muir18 in the Stanford Exploration Project (see also the work by Clayton and Engquist19). Truncation of the expansion gives rise to approximate one-way wave equations. For example,

$$S^{(3)} = 1 + \frac{X^2}{2 + X^2/2} = 1 + \frac{2X^2}{4 + X^2} \qquad (3.5)$$

generates an equation which is called a 45° approximation (see Appendix 1). The angles describing approximations are those beyond which the effective velocity of wave propagation is in error by more than 1%, assuming no discretisation error. The discrete step sizes in x, z and t introduce further errors which can reduce this approximation angle significantly.
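The angular labels can be illustrated numerically. The sketch below (ours, not from the text) assumes that X² is identified with the operator (c²/ω²)∂²/∂x², whose eigenvalue for a plane wave at dip θ is −sin²θ, so the exact one-way factor S = (1 + X²)^{1/2} equals cos θ; it compares the one-term and two-term truncations of eqn. (3.4) against this exact value:

```python
import math

# Dispersion-relation check for the square-root approximations of eqn (3.4).
# For a plane wave at dip theta, take X^2 = -sin^2(theta), so that the exact
# factor S = (1 + X^2)^(1/2) = cos(theta).  (Illustrative sketch.)

def s_exact(theta):
    return math.cos(theta)

def s_15deg(theta):                 # one-term truncation: 1 + X^2/2
    x2 = -math.sin(theta) ** 2
    return 1 + x2 / 2

def s_45deg(theta):                 # eqn (3.5): 1 + 2X^2/(4 + X^2)
    x2 = -math.sin(theta) ** 2
    return 1 + 2 * x2 / (4 + x2)

for deg in (15, 30, 45, 60):
    th = math.radians(deg)
    e15 = abs(s_15deg(th) / s_exact(th) - 1)
    e45 = abs(s_45deg(th) / s_exact(th) - 1)
    print(f"{deg:2d} deg: 15-deg error {e15:.4f}, 45-deg error {e45:.4f}")
```

At 45° dip the continued-fraction truncation of eqn. (3.5) is in error by about 1%, which is the origin of the '45°' label, whilst the one-term ('15°') truncation is already off by some 6% there.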

It is not an uncommon practice to substitute more general coefficients into these approximate equations. For instance, eqn. (3.5) might be replaced by

$$S \approx \alpha + \frac{\beta X^2}{\gamma + X^2} \qquad (3.5)'$$

where α, β and γ are selected so that the desired wave propagation properties are obtained. For example, effort might be directed towards an increase in accuracy in the 45–60° range of dips,20,21 with a slight but insignificant loss of accuracy at lesser angles.

Buchanan22 has introduced yet a third way of producing one-way wave equations for seismic work based on Dirac spinor theory. Unfortunately the method involves expressing the boundary conditions in terms of a summation, which makes application of his method somewhat difficult. Nevertheless it represents a novel approach to the problem.

3.3. Finite Difference Approximations In the finite difference method, differential equations such as eqn. (1.1) are solved by approximation of the partial derivatives by means of difference equations. Thus, for example, if we have a regular gridwork of values of the pressure P(jΔx, kΔz, nΔt) ≡ P^n_{j,k}, where (Δx, Δz, Δt) represent the sample spacing in the (x, z, t) directions respectively, we can express derivatives like ∂P/∂z as follows:

$$\frac{\partial P}{\partial z} \approx \frac{P^n_{j,k+1} - P^n_{j,k}}{\Delta z} \qquad (3.6)$$

Similarly

$$\frac{\partial^2 P}{\partial x^2} \approx \frac{P^n_{j+1,k} - 2P^n_{j,k} + P^n_{j-1,k}}{\Delta x^2} \qquad (3.7)$$


FIG. 6. Synthetic zero offset time section consisting of a number of dipping planes with dips in the range 0–50° in 10° intervals. The earth velocity is constant at 10⁴ ft s⁻¹.

Substitution of expressions like eqns. (3.6) and (3.7) into the governing differential equation will yield a set of equations for the unknown values of pressure at a new grid position in terms of the initially recorded (or previously calculated) pressures. The finite difference method therefore provides a convenient technique for downward continuation of the surface wavefield.
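To make the discretisation error concrete, here is a small illustrative sketch (ours, not the author's) applying the forward difference of eqn. (3.6) and a centred second difference of the form of eqn. (3.7) to a smooth test field P(z) = sin z:

```python
import math

# Accuracy of the difference approximations on P(z) = sin(z),
# sampled with step dz (illustrative numbers).
dz = 0.01
P = [math.sin(k * dz) for k in range(200)]

k = 100                                             # evaluate near z = 1.0
fwd = (P[k + 1] - P[k]) / dz                        # forward difference, cf. eqn (3.6)
second = (P[k + 1] - 2 * P[k] + P[k - 1]) / dz**2   # centred second difference

print(abs(fwd - math.cos(k * dz)))      # first-order error, O(dz)
print(abs(second + math.sin(k * dz)))   # second-order error, O(dz^2)
```

The errors grow with the wavenumber of the field being differenced, which is the mechanism behind the dip-dependent dispersion discussed below.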

The approximations made in replacing partial derivatives by their corresponding difference operations, as in eqns. (3.6) and (3.7), lead to errors in the numerical solution. These errors normally increase with frequency and wavenumber, so that steeply dipping beds will incur a relatively large migration error. This gives rise to a dispersion effect where low frequencies are separated from the high-frequency components, and each bed will be broken up into a main low-frequency event and its associated high-frequency ghosts (see Figs. 6 and 7(d)). The steeper the dip, the wider the separation of the ghosts from the true position; whilst at zero dip there is normally zero separation.

The user may have two parameters at his disposal which he can manipulate to reduce this dispersion. The first of these is the step size Δz (more usually Δτ ≡ 2Δz/c) used in the downward continuation. This size is normally set at about Δτ = 24–40 ms; if this parameter is chosen larger than 40 ms there may be some loss in angular accuracy of the one-way wave


FIG. 7. Migrated version of Fig. 6 by the finite difference method using a 45° equation, with various values of θ: (a) 1·0; (b) 0·75; (c) 0·6; (d) 0·52.

equation. The second parameter which can be used is sometimes called a θ parameter.21 This parameter effectively allows a bias away from the centred finite difference approximation normally applied (see glossary). θ values in the range 0·5–0·52 are common. Values of θ less than 0·5 lead generally to unstable formulations, whilst with values of θ greater than 0·5 the process acts as a dip filter and can attenuate dipping events rather severely (see Fig. 7). Note that θ = 0·5 corresponds to an unbiased or central difference formulation.

A quite novel approach to dispersion errors has been discussed by Whittlesey and Quay23 and also by Stolt.24 In this method the finite difference representation of an approximate wave equation is cast in terms of generalised parameters. A minimisation or least-squares procedure is then set up whose aim, quite simply, is to make the finite difference approximation as close as possible to the exact wave equation at certain selected propagation angles and frequencies. This procedure thus takes two sources of error into account: the error produced by solving an approximate rather than a correct wave equation, and the error caused by the finite difference representation of the partial derivatives. Unlike more conventional approaches to finite difference migration this approach can handle steep dips relatively well, although this is at the price of slight errors at zero dip. Probably most of the better programs use this type of


FIG. 8. Migration may be achieved by propagation of energy: (a) in depth, known as downward continuation; or (b) temporally, known as wave tracking. The initial plane P(x, z, tmax) is set to zero. As wave tracking proceeds, surface-recorded data P(x, 0, t) are fed in as boundary values.

algorithm, which should not take significantly more computer time than normal finite difference methods.

3.4. Migration of Common Midpoint Stacked Sections There are two ways in which downward propagation of the surface wavefield may be carried out. In the first the wavefield is propagated from the surface (defined as the plane z = 0) to the next surface (lying on the plane z = Δz) and so on: a process which is called downward continuation. The second method considers the wavefield at a given time instant t0 + Δt and propagates this to the next time instant t0, in a process which may be termed wave tracking (see Fig. 8). Each plane in this method represents a 'snapshot' of the wave at a given instant in time. The two procedures are, of course, physically equivalent if the propagation velocity is assumed to be constant.

The procedure for migration of zero offset data by downward continuation is rather simpler than that for wave tracking. In essence the velocities are halved (see Section 2.6) and, after the finite difference approximations have been inserted, eqns. (A1.11) and (A1.10) are alternately applied to lower the effective recording plane. After each step Δz some of the data will have moved to zero time; these data are then fully migrated. To avoid an excessive number of depth steps Δz is chosen somewhat larger than cΔt/2, where Δt is the time sampling interval. Consequently an interpolation procedure is required to obtain the data on the t = 0 plane (see Fig. 9). This

FIG. 9. Interpolation is required in downward continuation, due to the finite depth steps used, to obtain data at intermediate points on the z axis. Data which are propagated to times less than zero are slightly over-migrated; after the interpolation procedure, samples at negative times are not considered further.

can be achieved by linear velocity interpolation assuming a constant velocity within the interval Δz. Because velocity varies laterally it is desirable that Δz should be small.
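The interpolation onto the t = 0 plane can be sketched as follows; this is an illustrative helper with hypothetical names, linearly interpolating between the two continuation steps whose time coordinates straddle zero (the situation of Fig. 9):

```python
def image_at_zero_time(t_before, p_before, t_after, p_after):
    """Linear interpolation between two successive downward-continuation
    steps whose time coordinates straddle t = 0.  Illustrative helper:
    t_before > 0 > t_after after one depth step dz."""
    w = t_before / (t_before - t_after)      # fraction of the step down to t = 0
    return p_before + w * (p_after - p_before)

# with dz chosen larger than c*dt/2 the imaged time jumps past zero,
# so the t = 0 sample must be interpolated between the two steps:
sample = image_at_zero_time(0.002, 1.0, -0.002, 3.0)
print(sample)   # midway between the two steps -> 2.0
```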

The output migrated section will lie on the (x, z, 0) plane. Sometimes interpreters prefer a time section to a depth section, since on a depth section data appear compressed at early and stretched at late parts of the section. This may be achieved by dividing the depth axis by a function c(z), an arbitrary function of the depth coordinate only. Ideally this should be chosen to be representative of the mean in the x direction of the average velocity so that times on the migrated and unmigrated sections will be close.

Another way of looking at migration is in terms of rejuvenation of constant time slices. Data at a fixed time is propagated to earlier times by means of the wave equation. The operations involved are the exact counterpart to those used in downward continuation; however, here it is the ∂²P/∂t² or Ptt term which causes difficulty rather than the Pzz term. A quite radical approach to this problem has been proposed by Deregowski,15 in which the Ptt term is retained and so the full wave equation is solved. The starting point for the derivation is the acoustic wave eqn. (1.1), in which the velocity is halved, i.e.

$$\frac{\partial^2 P}{\partial x^2} + \frac{\partial^2 P}{\partial z^2} = \frac{4}{c^2}\frac{\partial^2 P}{\partial t^2} \qquad (3.8)$$


Define a coordinate transformation as follows:

$$x' = x, \qquad t' = t, \qquad \tau = t + \int_0^z \frac{2}{\bar c}\,dz \qquad (3.9)$$

where c̄ is a frame velocity which is independent of x. This coordinate transformation defines a new coordinate τ in place of depth, which moves with a 'frame' velocity c̄/2. In other words, if an observer is moving in the direction of the negative z axis at frame velocity c̄/2 then the time τ is stationary. When the frame velocity is close or equal to the acoustic wave velocity some interesting changes occur in the transformed wave equation. Substituting eqn. (3.9) into (3.8) and dropping the primes gives

$$\frac{4}{c^2}\frac{\partial^2 P}{\partial t^2} = \frac{\partial^2 P}{\partial x^2} + 4\left(\frac{1}{\bar c^2} - \frac{1}{c^2}\right)\frac{\partial^2 P}{\partial \tau^2} - \frac{8}{c^2}\frac{\partial^2 P}{\partial t\,\partial \tau} + 2\frac{\partial}{\partial z}\!\left(\frac{1}{\bar c}\right)\frac{\partial P}{\partial \tau} \qquad (3.10)$$

In making the substitution it was assumed that c̄ was independent of x. Even when c̄ is a function of x, Deregowski justifies the derivation of eqn. (3.10) provided that the depth steps Δτ are small. In the case where c̄ is a constant, the last term in eqn. (3.10) disappears and the equation may be split into 'diffracting' and shifting or 'refracting' parts respectively as follows:

$$\frac{\partial^2 P}{\partial t\,\partial \tau} = -\frac{1}{2}\frac{\partial^2 P}{\partial t^2} + \frac{c^2}{8}\frac{\partial^2 P}{\partial x^2} \qquad (3.11a)$$

$$\frac{\partial P}{\partial t} = \frac{1}{2}\left[\frac{c^2}{\bar c^2} - 1\right]\frac{\partial P}{\partial \tau} \qquad (3.11b)$$

To solve eqn. (3.11a) by conventional methods would involve dropping or approximating the Ptt term to obtain a one-way 15° or 45° equation. After substituting in the finite difference approximations, eqn. (3.11a) could be solved in the usual manner. Similarly, eqn. (3.11b) could be integrated to yield a solution which is the direct counterpart to eqn. (A1.8). Migration by means of wave tracking could therefore be carried out in a manner precisely analogous to downward continuation.

Deregowski approached the problem rather differently, and it is outside the scope of the article to discuss this method fully. However, he introduced some useful concepts which can be summarised here. First of all he discovered that the Ptt term could be retained in eqn. (3.11a) without approximation. Leaving this term in the finite difference equation means that, for a solution, two or more wave tracking planes are required to initiate the time marching scheme. For the first step only of the process, the


Ptt term was dropped, and so only one wave tracking plane was required as initial data. On the second and subsequent steps of the wave tracking procedure, there were therefore at least two planes of wave tracked data available so that the Ptt term could be retained without approximation. Since wave tracking starts at the maximum recorded time on the section and continues through to zero one-way time, any down-going waves generated by the algorithm are automatically dropped from the calculation as it proceeds, because these waves evolve by moving to later rather than earlier times. Furthermore, down-going wave energy is rapidly dispersed by the finite difference calculations in a retrogressive coordinate frame. The other interesting feature of Deregowski's work was that the finite difference grid sampling size was relatively coarse in the t direction (i.e. greater than the recording sampling interval Δt) and fine in the τ direction. This arrangement meant that, at each step of the wave tracking procedure, several new samples of recorded data (rather than a single sample) were interpolated onto the top of the wave tracking frame. After the data had been tracked to the time t = 0, the migrated wavefield could be obtained without interpolation since the z step chosen was effectively:

$$\Delta z = (c/2)\,\Delta t$$

Deregowski's work thus gainsays much current thinking on grid sampling size in the t direction, and on the retention of all the terms in the wave equation.

3.5. Absorbing Boundary Conditions The surface recorded data is generated from reflection points which may not lie within the domain bounded by the extreme shot point and geophone point positions. Downward continuation in such a case will cause data to image outside the extremities of the computational domain. Unless the computational domain is extended to include these imaged positions, then data may be artificially reflected from the side boundaries as downward continuation or wave tracking proceeds, and may destroy useful images within the domain. As implied by the previous sentence, extra zero filled traces can be added to the computational domain and will generally prove sufficient to tackle the problem. It is possible, however, to develop specialised equations at the domain boundary so that most of the outgoing energy is absorbed by the boundary rather than reflected, as Clayton and Engquist19 have shown.

Between the extremes of a simple-minded padding out of the domain with dead traces and the use of a modified boundary equation lies a method


discussed by Deregowski.25 His method consists of predicting the extreme boundary values at time t − KΔt from the gradient and values of the wave surface in two previous wave tracking frames, i.e. P(x, z, t) and P(x, z, t + KΔt). These predicted values are then used as Dirichlet conditions on the new wavefield. His method is moderately successful and can be used to reduce the number of extra traces required by the computation.

3.6. Finite Difference Migration before Stack-An Introduction Migration before stack is necessary when dips are sufficiently steep or velocities are so rapidly varying that the assumptions of a flat layered earth model patently fail. In this case the common midpoint stack will be a poor approximation to a zero offset section. There are a number of definite levels on which this problem can be tackled. The first level may be termed a partial pre-stack migration. The aim here is to map data recorded at a finite offset to zero offset. All the resulting zero offset sections may then be summed together to obtain a corrected CMP stack, before a conventional post-stack migration is applied. This method will be covered in Section 6 under the heading of stack enhancement. The next level on which this problem may be approached is in terms of migration of constant offset sections. A finite offset section can be pictured as being created by an exploding reflector model based on an elliptic wavefront.26 This elliptic wavefront has a constant interfocal distance equal to 2h (the offset) and a semimajor axis of ½vt. The semimajor axis therefore expands at the same rate as the radius of the circular wavefronts used in the zero offset exploding reflector model. All the offset sections are migrated together using a generalised wave equation, commonly called the double square root equation (Section 4.4). To migrate the offsets independently requires a diffraction stack (Section 5.4) rather than a downward continuation procedure, unless the latter approach is limited in accuracy to 15° type equations. If higher-order terms are included in the finite difference equations, these imply a coupling between the offsets,27 with the result that downward continuation of each offset cannot be achieved independently from the others. The 15° equations are developed in Claerbout28 and appear as eqn. (11-3-19) in the latter publication.
It is understood that some contractors have obtained steeper angle equations than this, under the assumption that the velocities are weakly laterally variable, and that the double square root equation can be approximated by a single square root equation: the one-way wave equation at zero offset with a special velocity. In this case, normal move-out corrections are applied to the non-zero offset sections, and a migration velocity is developed which would collapse diffraction pseudo-hyperbolae


to a 'focus' at each offset.29 Since the response at a finite offset to a point scatterer is in fact somewhat flattened as compared with a hyperbola,30 whereas in this finite difference procedure a true hyperbola is implied, the foci will be diffuse. Each constant offset section is migrated, therefore, with its own migration velocity using a steep dip wave equation. The process is, by virtue of the approximations made, a 'time' migration rather than a 'depth' migration. We would not expect that this method would yield any significant improvement over pre-stack partial migration followed by a 'depth' migration and will not discuss it further.
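The elliptic wavefront property invoked for constant offset sections is easy to verify numerically. The following sketch (with assumed numbers, not from the text) checks that every point on an ellipse of interfocal distance 2h and semimajor axis ½vt has a two-leg path of total length vt between the foci, i.e. the shot-scatterer-geophone travel time is constant on the wavefront:

```python
import math

# Elliptic wavefront check: for recorded two-way time t, velocity v and
# half-offset h, the ellipse with foci at +/- h and semimajor axis vt/2
# is the locus of scatterers whose total two-leg path length is v*t.
h, v, t = 500.0, 2000.0, 1.0       # illustrative numbers
a = v * t / 2                      # semimajor axis
b = math.sqrt(a * a - h * h)       # semiminor axis (requires a > h)

for ang in (0.2, 0.8, 1.4):        # sample points on the ellipse
    x, z = a * math.cos(ang), b * math.sin(ang)
    d1 = math.hypot(x - h, z)      # distance to one focus (shot)
    d2 = math.hypot(x + h, z)      # distance to the other focus (geophone)
    assert abs((d1 + d2) - v * t) < 1e-6
print("constant two-leg path length v*t confirmed on the ellipse")
```

At h = 0 the ellipse degenerates to the circle of radius vt/2 used in the zero offset exploding reflector model, which is why the semimajor axis expands at the same rate as that circle's radius.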

Slant stack migration is a rather different approach to the problem; it is included in the category of pre-stack migration since, although some partial stacking is involved in the formation of the slant stacks, the main stacking occurs after migration. This method is discussed in some detail in Section 3.7. The main drawback of the method is that it is restricted to sections where the velocity is a function of depth only.

The final approach which will be mentioned is migration of shot and geophone gathers in a downward continuation process. This technique represents one of the best methods available for migration in a variable velocity medium. Not only can velocity variation within a spread length be included but the differences in velocity on both up- and down-going wave paths can be considered. It is, unfortunately, also one of the most expensive methods, which is a strong deterrent against general use. A discussion of the method appears in Section 3.8.

3.7. Slant Stack Migration The idea of slant stacking was first introduced by Claerbout in Stanford Exploration Project reports as early as 1974; it was made public in a paper by Schultz and Claerbout.31 The idea is to simulate plane wave rather than point sources. This is achieved by summing the output, at a single geophone, produced by a series of closely spaced equi-amplitude point sources. It is assumed that the sources are regularly spaced and extend to infinity. In essence this is rather like the ball bearing model for reflectors discussed earlier (Section 2.2). Obviously the assumptions made are not realised, and various truncation and aliasing effects arise in practice since the shot points are spaced some distance apart, and are of uneven amplitude, and the spread length is finite.

Using the methods outlined by Schultz and Claerbout we might sum all the traces in a common geophone gather without any time delay between traces. This is called a vertical stack and approximates to the trace which would be obtained from a plane wave travelling vertically downwards into

FIG. 10. A common geophone gather showing summation lines for a typical slant stack. Data within the anti-aliasing window is summed together and placed at the shot-geophone zero offset trace at the time given by the intersection of the dashed line on the time axis.

the earth. If the traces from each shot are given an arbitrary delay then any shape of wavefront can be simulated. In particular, if the time delay is linear with respect to the shot distance coordinates then a plane wave travelling at an angle to the vertical is simulated (Fig. 10). The traces derived in this way from each common geophone gather may be displayed side by side and the section so obtained is called a slant stack. In this display the time coordinate is shifted so that arrival times from a horizontal bed appear at the same time on each trace. This time transformation is

$$t' = t - p\,x_g \qquad (3.12)$$

where:

t = time measured from the instant a plane wave at angle θ to the horizontal arrives at the origin (Fig. 11);
t' = time coordinate in the slanted frame;
xg = horizontal coordinate of each common geophone location;
p = effective horizontal slowness (1/cH) of plane wave.


The time coordinate t' is in fact the natural output coordinate from the stack over each gather, and the parameter p is more commonly known as the Snell's law parameter. We can form a number of stacks in which the incident wavefront has differing arrival angles. These may be characterised by the Snell's law parameter p, since this is related to the wave angle by the expression

$$p = \sin\theta / c \qquad (3.13)$$

Velocity analysis can be carried out using p gathers, as Schultz and Claerbout point out, but this is outside the scope of this article.
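A minimal delay-and-sum sketch of slant stack formation may help fix ideas (illustrative code, not the authors'; nearest-sample delays are assumed, whereas a practical implementation would interpolate and apply the anti-aliasing window of Fig. 10):

```python
import numpy as np

def slant_stack_trace(gather, offsets, p, dt):
    """Delay-and-sum along t = t' + p*x to form one slant-stack trace from
    a common geophone gather (nt samples x ntr shots).  Illustrative
    sketch: delays are rounded to the nearest sample and p >= 0 assumed."""
    nt = gather.shape[0]
    out = np.zeros(nt)
    for j, x in enumerate(offsets):
        i = int(round(p * x / dt))          # delay in samples for this shot
        if 0 <= i < nt:
            out[: nt - i] += gather[i:, j]  # shift the trace earlier and sum
    return out

# a linear event t = t0 + p0*x stacks coherently only when p matches p0
dt, p0, i0 = 0.004, 0.0002, 20
offsets = np.arange(8) * 100.0
gather = np.zeros((100, 8))
for j in range(8):
    gather[i0 + int(round(p0 * offsets[j] / dt)), j] = 1.0
trace = slant_stack_trace(gather, offsets, p0, dt)
print(trace[i0])   # all eight shots align at the slant-frame time t0 -> 8.0
```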

To migrate slant stacked data using downward continuation and imaging, it is necessary to construct first of all the imaging condition. In Fig. 11 the travel time t for a plane wave to reach the point scatterer is given by

$$t = \int_0^{z_0} \frac{\cos\theta(z)}{c(z)}\,dz + x_0 p \qquad (3.14)$$

Defining a further time coordinate t̄ which measures time from the instant the plane wave strikes the scatterer gives a total travel time

$$t = \int_0^{z_0} \frac{\cos\theta(z)}{c(z)}\,dz + x_0 p + \bar t \qquad (3.15)$$

When we migrate, the imaging time is given by t̄ = 0, i.e. by eqn. (3.14). Transforming eqn. (3.15) to the slant frame time at the geophone (using eqn. (3.12)) gives

$$t' = \int_0^{z_0} \frac{\cos\theta(z)}{c(z)}\,dz + (x_0 - x_g)p + \bar t \qquad (3.16)$$

After downward continuation of the slant stacked data to the time t̄ = 0, the geophone will be spatially coincident with the scatterer, i.e. x0 = xg in eqn. (3.16). Thus the imaging conditions in the slant frame time coordinates are given by

$$x_0 = x_g, \qquad t' = \int_0^{z_0} \frac{\cos\theta(z)}{c(z)}\,dz \qquad (3.17)$$

and by eqn. (3.13):

$$t' = \int_0^{z_0} \frac{(1 - p^2 c^2(z))^{1/2}}{c(z)}\,dz \qquad (3.18)$$


FIG. 11. Geometry of wavefront propagating into the earth, shown in real time. The slant stack section transforms real time to a new coordinate in which horizontal beds will all appear at the same time on each trace. Also shown is a point scatterer at (x0, z0) which reflects energy back to the surface diffusely.

The imaging condition in eqn. (3.17) can be calculated by numerical integration. After this the transformation from the image at the scatterer to a surface related coordinate is achieved by a simple skewing of the data, given by

$$x = x_0 - \int_0^{z_0} \tan\theta(z)\,dz = x_0 - \int_0^{z_0} \frac{p\,c(z)}{(1 - p^2 c^2(z))^{1/2}}\,dz \qquad (3.19)$$

Migrated sections with different p values can be superimposed after this transformation has been done.
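For a depth stratified c(z), the imaging time and skew integrals can indeed be evaluated by simple numerical integration, as the text notes. A sketch follows (hypothetical function names; midpoint rule; any callable velocity profile may be passed in):

```python
import math

def slant_imaging(p, c, z0, n=4000):
    """Midpoint-rule evaluation of the imaging-time integral (cf. eqn
    (3.17) with cos(theta) = sqrt(1 - p^2 c^2)) and the lateral skew of
    eqn (3.19), for a depth-stratified velocity profile c(z)."""
    dz = z0 / n
    t_img = x_shift = 0.0
    for k in range(n):
        v = c((k + 0.5) * dz)
        root = math.sqrt(1.0 - (p * v) ** 2)
        t_img += root / v * dz           # imaging-time integrand
        x_shift += p * v / root * dz     # skew integrand of eqn (3.19)
    return t_img, x_shift

# constant-velocity check with theta = 30 deg: the integrals reduce to
# t' = z0*cos(theta)/c and skew = z0*tan(theta)
c0, z0 = 2000.0, 1000.0
p = math.sin(math.radians(30)) / c0
t_img, x_shift = slant_imaging(p, lambda z: c0, z0)
print(round(t_img, 4), round(x_shift, 2))
```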

Although the imaging conditions and post-imaging transformation are more complicated than usual in the slant frame, no special methodology is required in the downward continuation. Standard techniques as outlined in Appendix 1 may be used, with a transformation to slant frame coordinates as appropriate.

The relations developed in this section are applicable only when the earth model is depth stratified, i.e. c = c(z). Although the slant stack procedure still remains valid in a medium with lateral velocity variation, the imaging conditions have to be determined by a ray tracing procedure, or by downward continuing a unit amplitude plane wave impulse, and imaging at the point of time and space coincidence of this down-going plane wave and the downward continued up-going wave. The latter method has been extended to include removal of multiple reflections by Estevez32 for the


case of a constant velocity medium during the migration. Note that this method should not be confused with multiple removal by deconvolution techniques, which exploit the constant step-out timing relationships on slant stacked data.

More generally, it is considered that slant stack migration is a good but expensive method in a depth stratified medium, and is a potential wave equation technique for multiple removal in this case. Once the medium has significant lateral velocity variations the method has nothing to recommend it.

3.8. Migration of Shot and Geophone Gathers We now come to one of the best techniques in existence for migration of data in a medium with strong lateral velocity variations. The procedure, which is described by Schultz and Sherwood,33 consists of alternately downward-continuing shot and geophone gathers back into the earth. The use of geophone gathers requires that reciprocity of shot and receiver positions be valid. After each Δz step downwards, the data is regathered into shot and geophone gathers and the process continues (Fig. 12) until the data is imaged on the plane (xs = xg, z, t = 0) (see Section 2.6). This is the zero offset plane and data is simply collected here without any stacking being necessary.

Since the processes involved correspond to an actual rather than a conceptual physical experiment, downward continuation is possible using the unmodified wave eqn. (1.1). Thus we solve alternately

$$\nabla^2 P_g - \frac{1}{c^2}\frac{\partial^2 P_g}{\partial t^2} = 0 \qquad (3.20)$$

and

$$\nabla^2 P_s - \frac{1}{c^2}\frac{\partial^2 P_s}{\partial t^2} = 0 \qquad (3.21)$$

where Pg and Ps are the pressure amplitudes recorded on common shot and geophone gathers respectively. Unidirectional wave equations may be developed to solve this pair of equations along the lines discussed in Appendix 1.

The only drawback of this method (apart from cost) is that data may be undersampled in the horizontal direction if shots are widely spaced, with consequent aliasing problems. The main advantage is its ability to handle velocity variations on both paths, from scattering point to shot and to receiver.


FIG. 12. (xs, xg) plane showing arrangement of geophone points. The shots are set off on the line xs = xg and the recorded positions are indicated by dots. Common shot gathers lie on lines parallel to AB, while common geophone gathers lie on lines parallel to BC.

4. F-K MIGRATION

4.1. Introduction Up to the early 1970s migration was conventionally performed in the 'time domain' or in (x, z, t) space, but there were of course no conceptual reasons why the same operations could not have been achieved in wavenumber (spatial frequency) or (kx, kz, ω) space. Indeed, as early as 1972, Maginness34 applied Fourier reconstruction methods to ultrasonic imaging. However, the method proposed by Maginness, although it could be applied to a depth stratified medium, was costly computationally, since it involved forward and inverse Fourier transforms at each step in the propagation: the interest there was in reconstruction of the total wavefield at a remote plane. This differs from the seismic imaging objective, which is concerned with only the wavefield at zero time on the remote planes (see Section 2.6). Interest in Fourier methods was awakened as a result of an excellent paper by Stolt;24 similar work was published in the context of holographic

Page 187: Developments in Geophysical Exploration Methods

180 P. HOOD

reconstruction at around the same time by Booer et al.35 Stolt's method consisted of a double forward Fourier transform, a modification of the phase and amplitude of each Fourier component, followed by an inverse Fourier transform. Although Stolt's method was extremely fast computationally, its natural application was to constant velocity media, where it was and still remains the best method available for migration. In extending the method to media in which the velocity was a slowly varying function of position, Stolt had to make several assumptions; these led to a significant loss in migration accuracy and made the general application of his method doubtful.

There have been further developments of Fourier techniques by Gazdag36,37,38 which permit accurate migration in a depth stratified medium and fairly accurate migration in a heterogeneous medium. Some of Gazdag's proposals are discussed in Section 4.3, and his hybrid finite difference/Fourier methods are discussed in Section 6. Another area of interest is in migration before stack, and Phinney and Frazer39 have produced an article on this subject which is discussed in Section 4.4.

In general, because of their economy and very low computationally generated noise, Fourier methods are attractive as a preliminary migration tool. Application of these methods in heterogeneous media is not yet free from problems, because they naturally work best in a constant velocity medium. If, as is anticipated, these problems can be overcome, then Fourier methods will almost certainly supersede finite difference techniques as the major migration tool in the future.

4.2. Geometrical Interpretation of Fourier Migration Methods
We start this development by looking at ray theory. Suppose that we consider a single ray with wavenumber k, where |k| = 2π/λ; this vector has components k_x and k_z along the x and z axes respectively (see Fig. 13). This ray may be considered typical of the many rays which can be constructed from a plane wave front in the direction AB. In seismic terms, if we have an earth reflectivity series then the double Fourier transform of this series would yield the spectral decomposition into its planar components, and correspondingly this can be pictured in terms of the rays normal to each plane. If we look at the recorded time section, a double Fourier transform will again decompose this into its spectral planar components (and hence 'rays' normal to each plane). One of the earliest migration techniques, known as the swinging arm technique, exploited the relation which exists between dip on the time section and dip on the migrated depth section or earth reflectivity series. By picturing the Fourier transformation as nothing


FIG. 13. The ray vector k is composed of two components k_x and k_z.

more than a decomposition into the normals to each plane on the section, it can be seen that the swinging arm technique can be applied in the Fourier domain.40 In Fig. 14 we show a semi-infinite plane in a constant velocity medium, dipping at an angle α_r on the reflectivity section and α_t on the time section. Assuming an acoustic wave speed c, point P maps into point P′ on the zero offset time section, at a time t given by

t = z₀/(c cos α_r)

By trigonometry, OS = O′S′ = x₀ + z₀ tan α_r, and

c tan α_t = c · S′P′/O′S′ = z₀/[cos α_r (x₀ + z₀ tan α_r)] = sin α_r     (4.1)

Equation (4.1) relates the dips on the reflectivity and time sections. In the Fourier transform domain, OP has spectral components k_x, k_z related by

tan α_r = k_x/k_z     (4.2)

sin α_r = k_x/(k_x² + k_z²)^{1/2}     (4.3)

The normal to O′P′ has spectral components k′_x, ω which are related by

c tan α_t = ck′_x/ω     (4.4)


FIG. 14. (a) A dipping plane O′P′ on the time section is generated by (b) a dipping plane OP on the depth section.

Thus the migration mapping maps the Fourier components by virtue of eqns. (4.1), (4.3) and (4.4) as follows:

P̃(k_x, 0, ω) → P̂(k_x, [(ω/c)² − k_x²]^{1/2}, 0)     (4.5)

where

P̃(k_x, z, ω) = ∫∫ P(x, z, t) exp[i(k_x x − ωt)] dx dt

P̂(k_x, k_z, t) = ∫∫ P(x, z, t) exp[i(k_x x + k_z z)] dx dz

This mapping, due to Booer et al.,35 is illustrated in Fig. 15. The surface recorded data is double Fourier transformed over x and t and lies on the plane ABCD. Data is projected out to the dispersion curve of the medium, which is a cone defined by

k_x² + k_z² = ω²/c²     (4.6)


FIG. 15. Unmigrated data lying in the plane ABCD is mapped onto the surface of a cone β = 0 and then onto the migrated plane defined by the region DTCO. A point R maps onto S and then to T. A point U which lies outside the cone is not mapped by this process; it corresponds to non-real data for which |k_x| > ω/c. A similar mapping occurs for negative values of ω.

and then a perpendicular is dropped onto the (k_x, k_z) plane. The imaged data is the inverse double Fourier transform of the data in the (k_x, k_z) plane. The mapping in a dispersive but constant velocity medium is analogous, but in this case a distorted conical surface β is used.
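For a constant velocity medium the whole construction of Fig. 15 can be sketched directly with FFTs. The fragment below is our own illustrative sketch, not any published implementation: the function name, the exploding-reflector halving of the velocity, and the simple linear interpolation of the spectrum are assumptions made for the example. Each output wavenumber pair (k_x, k_z) is fed from the input frequency ω = (c/2)(k_x² + k_z²)^{1/2} and weighted by the cos α obliquity factor:

```python
import numpy as np

def stolt_migrate(p_tx, dt, dx, c):
    """Constant-velocity F-K migration of a zero offset section (sketch).
    p_tx: array (nt, nx) holding P(t, x) recorded at z = 0.
    Returns the migrated section P(z, x) on a grid with dz = (c/2)*dt
    (exploding reflector convention).  No tapers or anti-alias filters."""
    nt, nx = p_tx.shape
    P = np.fft.fft2(p_tx)                        # -> P(omega, kx)
    w = 2*np.pi*np.fft.fftfreq(nt, dt)           # angular frequencies
    kx = 2*np.pi*np.fft.fftfreq(nx, dx)
    ce = c/2.0                                   # exploding reflector velocity
    kz = 2*np.pi*np.fft.fftfreq(nt, ce*dt)       # output vertical wavenumbers
    order = np.argsort(w)
    Pm = np.zeros_like(P)
    for j in range(nx):
        mag = np.hypot(kx[j], kz)
        # input frequency feeding each (kx, kz): omega = ce*|k|, signed by kz
        wmap = np.sign(kz)*ce*mag
        # Jacobian/obliquity factor |kz|/|k| (the cos(alpha) directivity)
        obliq = np.divide(np.abs(kz), mag, out=np.zeros_like(mag), where=mag > 0)
        # linear interpolation of the complex spectrum along omega
        re = np.interp(wmap, w[order], P[order, j].real)
        im = np.interp(wmap, w[order], P[order, j].imag)
        Pm[:, j] = obliq*(re + 1j*im)
    return np.fft.ifft2(Pm).real
```

A flat reflector (energy only at k_x = 0) is left where it is, while dipping events are moved updip and steepened, exactly as the swinging arm construction predicts.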

4.3. Stolt's Theory in 2D
The geometrical illustration of F-K migration describes in essence all that is involved in the migration process. It is simply a mapping from the (k_x, 0, ω) plane to the (k_x, k_z, 0) plane (see Fig. 15). However, by developing the method from a wave theoretical rather than a ray theoretical basis, a cos α_r directivity factor appears in the mapping (which is identical to that obtained in Kirchhoff integral theory). This development is demonstrated in Appendix 2; we will only concern ourselves with the final relation (A2.10):

P(x, d, 0) = (1/4π²) ∫∫ P̃[k_x, 0, (k′_d² + k_x²)^{1/2}] · [k′_d/(k′_d² + k_x²)^{1/2}] exp[i(k_x x + k′_d d)] dk_x dk′_d     (4.7)


FIG. 16. The Stolt mapping of data from a square to a circular region in (k_x, k_d) space. Data in the shaded area is not mapped since it corresponds to values for which |k_x| > |k_d|.

The coordinate d represents a distance type of coordinate: Stolt transforms from a time section to a 'depth' section by multiplying the times by the constant section velocity. Migration is a mapping from the original Fourier transformed data as follows:

P̃(k_x, 0, ω) → P̂(k_x, (ω² − k_x²)^{1/2}, 0)     (4.8)

and is represented by a mapping from a square region in (k_x, k_d) space to a circular region in the same space (see Fig. 16). This is identical in effect to Booer's construction. There is also a directivity term in eqn. (4.7), namely the multiplying factor

cos α_r = k′_d/(k′_d² + k_x²)^{1/2}     (4.9)

In considering a variable velocity medium, Stolt converts the time section to a 'depth' section using a pseudo-velocity function. A simple 'depth' conversion using average velocities tends to distort the flanks of diffraction hyperbolae relative to the shoulders, and Stolt has devised a special velocity which attempts to correct for this effect. The change of coordinates from time to depth uses the relation

d = [2 ∫₀ᵗ c²_RMS(t′) t′ dt′]^{1/2}     (4.10)

where d represents a depth coordinate. The purpose in transforming to a


depth coordinate is that the effects of variable velocity are considerably reduced in these coordinates. Migration can therefore be carried out with a constant velocity.

Stolt obtains an equation in which all the effects of velocity are lumped together in a single parameter W. Stolt suggests that, in most cases, a constant W may be used in migration. Trials by this author have borne this out to a certain extent, with W = 0·5 being quite a useful value. However, the main conclusion reached is that, with uncertainties in the effect of the pseudo-depth conversion followed by the use of a W factor, Stolt's method is unattractive as a migration tool except as a preliminary migration in a detailed migration study, or where the velocities can be treated as constant, e.g. in modelling sea bottom multiples.

Having painted a sombre picture of Stolt's method, it nevertheless remains popular, and so perhaps some final caveats on practical processing procedures are in order. First of all, since the method is based in practice on fast Fourier transform algorithms, the number of traces to be processed is padded out with zero traces to a power of 2. There is a requirement in any case for some zero traces to be appended to avoid data imaging outside the computational domain (Section 3.5), but one should beware of 'just missing' a power of 2 and thereby doubling computation time. Another problem arises if the velocities are not thoroughly smoothed across the section to be processed. Undesirable distortions of the pseudo-depth section are liable to occur with disastrous effects on the final migrated section. Lastly, experimentation with synthetic sections may lead to the conclusion that a single W parameter is insufficient. It seems feasible to merge the results from several migrations, each with its own W parameter. This is analogous to the suggestion of Chun et al. 41 for multivelocity migration, in which the migrations with differing constant velocities are combined.

4.4. Gazdag's Phase Shift Method
Gazdag36 has presented an alternative Fourier based method which is exact in a depth stratified medium. In the case of a constant velocity medium his method reduces to that of Stolt, whilst in a laterally heterogeneous medium certain approximations are required which render his method rather less useful. Unfortunately, with Gazdag's formalism it is not possible to use a double inverse Fourier transformation after the migration mapping; consequently his method is somewhat slow in comparison with Stolt's, but nevertheless it should be comparable in speed to finite difference computations.


Gazdag starts from the scalar wave equation and, after a change of variables in which the depth z is replaced by a two-way vertical time τ, he obtains (eqn. (5.3)) a result which in our notation is

P(x, τ, t = 0) = (1/2π) ∫ exp(ik_x x) Σ_ω P̃(k_x, 0, ω) exp{−iωτ[1 − c²k_x²/(4ω²)]^{1/2}} dk_x     (4.11)

This result, which is reminiscent of the earlier one by Stolt, implies that data can be migrated by applying a phase shift to each Fourier component, followed by a summation over ω and a fast inverse Fourier transform over k_x.
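In a depth stratified medium the recursion just described is simple enough to sketch in a few lines. The fragment below is our own illustrative sketch (not Gazdag's code): the function name, the exploding-reflector halving of the velocity and the crude zeroing of evanescent components are assumptions made for the example. The field is phase shifted down one τ step at a time, imaging at each step by summing over ω and inverse transforming over k_x:

```python
import numpy as np

def phase_shift_migrate(p_tx, dt, dx, v_tau):
    """Phase-shift migration sketch for a zero offset section.
    v_tau: interval velocity for each two-way time step (length nt), so the
    medium may be depth (tau) stratified.  Returns the image in (tau, x)."""
    nt, nx = p_tx.shape
    P = np.fft.fft2(p_tx)                             # -> P(omega, kx)
    w = 2*np.pi*np.fft.fftfreq(nt, dt)[:, None]
    kx = 2*np.pi*np.fft.fftfreq(nx, dx)[None, :]
    img = np.zeros((nt, nx))
    for it in range(nt):
        ce = v_tau[it]/2.0                            # exploding reflector
        # imaging condition t = 0 at this tau: sum over omega, ifft over kx
        img[it] = np.fft.ifft(P.sum(axis=0)).real/nt
        # continue the field down one tau step (dz = ce*dt)
        arg = (w/ce)**2 - kx**2                       # kz^2 for this layer
        kz = np.sign(w)*np.sqrt(np.maximum(arg, 0.0))
        P = np.where(arg > 0, P*np.exp(1j*kz*ce*dt), 0)  # drop evanescent terms
    return img
```

Because the velocity is re-read at every step, a v(τ) profile is handled exactly, which is precisely the depth stratified case in which the method is exact.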

In the case of a weak lateral velocity variation, Gazdag approximates the wave equation by a 15° equation, and obtains an integro-differential expression for the migrated result. Bearing in mind how poorly a 15° equation behaves on steeply dipping beds, we cannot believe that this is the way to approach the problem. A better method, in our view, is to propagate through each layer using eqn. (4.11), assuming local homogeneity within the layer. A phase mask is then applied to this result which takes care, to a large extent, of deviations from homogeneity. This approach, due to Estes and Fain,42 has been applied in underwater acoustics and certainly warrants examination by the oil industry.

Gazdag has produced some other interesting techniques which are based on hybrid finite difference/Fourier methods; we discuss these in Section 6.

4.5. Migration Before Stack
Stolt24 has discussed F-K migration before stack for data arranged in CMP and offset coordinates. His algorithm is developed for migration in a constant velocity medium. In Stolt's development, migration of all the offsets simultaneously is implicitly required, and so we prefer the development by Phinney and Frazer39 which embraces the particular cases of migration of vertically and slant stacked data, as well as migration before stack of monochromatic sections. Again, unfortunately, the restriction is to a constant velocity; but small-scale fluctuations about the constant value are permitted. However, in this case, migration of any two-dimensional subset of the original three-dimensional spectrum of recorded data (viz. offset, CMP, time) can be used to determine the earth reflectivity series, which in turn is a function of only two dimensions (viz. CMP, depth). The


correlation of different estimates of the reflectivity series permits noise suppression or velocity analysis to be achieved.

Phinney and Frazer's method is very similar in final effect to Stolt's method, but it does differ in two important respects. First of all, the spectrum of the source and receiver response is implicitly included in a premultiplying term. Secondly, the equations for dealing with stacked data are different, in that Phinney and Frazer's method permits migration of data stacked with a constant move-out (i.e. slant stack) as opposed to the normal move-out stacked data required by Stolt's method. For unstacked data both methods result in what is termed a double square root equation, so called because of the two square roots appearing under the integral. As an example, Stolt's equation for migration is

P(x, z = ct/2, h, t = 0) = (2π)^{−3/2} ∫dω ∫dk_x ∫dk_h P*(k_x, z = 0, k_h, ω) exp{−i[k_x x − (q_s + q_g)ct/2]}     (4.12)

where

x = (x_g + x_s)/2

q_s = [(ω/c)² − ¼(k_x − k_h)²]^{1/2}

q_g = [(ω/c)² − ¼(k_x + k_h)²]^{1/2}

P*(k_x, z, k_h, ω) = (2π)^{−3/2} ∫∫∫ dt dx dh P(x, z, h, t) exp[i(k_x x + k_h h − ωt)]

Neither of these two approaches to the problem is suited to real migration problems. A great deal of effort needs to be expended before migration prior to stack using Fourier techniques becomes a practical proposition in a heterogeneous earth.

5. DIFFRACTION STACK MIGRATION

5.1. Introduction
Diffraction stack migration is a well tried and tested technique and was, in the early 1970s, the most popular migration method. It has gone somewhat out of favour in the last few years, but nevertheless it is still widely used. The developments in this area have been steady rather than innovative, and have largely stemmed from a clearer theoretical understanding of the relation


between the diffraction stack process and the Kirchhoff integral solution to the wave equation which it approximates (see for example Larner and Hatton43). This has led to a better use of weighting and directivity factors, and to a filter for the correction of phase shifts.44

Another step forward has been the application of the 'datumming' technique of Berryhill,7 which uses a mixture of downward continuation and diffraction stack processing. Datumming techniques are used to project data recorded on the surface of an arbitrarily segmented earth model down to different datum levels which may be irregular in shape; in each segment the velocity is approximately constant. This is related to, but distinct from, the concept of recursive Kirchhoff migration, in which the segment velocities can be variable but the datum layers are parallel, as discussed by Berkhout and Palthe.45

Finally, the 'Hubral' correction must be mentioned.4 This correction deals with the case where migration has collapsed energy to the apexes of the diffraction hyperbolae. This energy, being misplaced owing to laterally varying refraction effects, is repositioned by tracing 'image' rays down from the earth's surface. This process may be regarded as a first step towards a complete ray tracing procedure to determine the summation trajectories for the diffraction stack. Although the Hubral correction can be applied to any migration scheme which ignores laterally varying refraction effects, in practice the better finite difference algorithms now in use include these effects, and the F-K Stolt algorithm copes with these refractions to a limited degree, so that the Hubral correction is only really useful in terms of the diffraction stack process, when velocities vary laterally. Note that refraction effects due to vertical variation in velocity do not displace the apexes of diffraction hyperbolae, so the Hubral correction is not required in this case.

Despite its poor reputation, diffraction stack migration still offers, in our view, quite an acceptable migration, particularly in areas of steep dip, provided that all the corrections and filters are properly applied. The main criticisms which its detractors make regarding the process are that it:

(1) appears to produce a great amount of migration noise from horizontal beds;

(2) loses high frequencies from the data;

(3) introduces a large amount of noise when data is spatially undersampled;

(4) produces a larger amount of 'smile' patterns from noise on the section than other methods.


While there is some truth in all of these allegations, the effects can be overcome in part by a judicious choice of processing parameters. Indeed it is our belief that it is the misapplication of weighting, directivity factors, filters and mutes which has brought the Kirchhoff integral method into its present disrepute.

In this section we shall be looking at these developments in our theoretical understanding of the migration process, and their relevance to the choice of processing parameters. We will also examine 3D migration from the Kirchhoff viewpoint, and discuss when it is valid to split this into a series of 2D migrations in alternate directions. Although the question of splitting is relevant to all other migration techniques, it is perhaps easier to comprehend the assumptions made in a diffraction stack process than in any other.

5.2. Development of the Diffraction Stack via the Kirchhoff Integral
The development of the diffraction stack method was initially based on ray tracing concepts and the scalar diffraction theory of Huygens and Fresnel. Later on, it was discovered that the diffraction stack process could be related to the Kirchhoff integral solution to the wave equation,5,46 and this provided the basis for weighting factors in the summation.47 Finally, to complete the circle of interrelationships, it was shown that the F-K Stolt24 algorithm expresses precisely the same operation in the frequency domain as the diffraction stack process,48,49 and Berkhout and Palthe45 have related finite difference algorithms to the Kirchhoff integral.

Schneider46 has derived the Kirchhoff integral in terms of the free surface Green's function for the wave equation. He obtained the following form of the 3D Kirchhoff integral formula:

P(r, t) = (1/2π) ∫∫ [ (cos α_r/(c|R|)) ∂P(r₀, t₀)/∂t₀ + (cos α_r/|R|²) P(r₀, t₀) ]_{t₀ = t + |R|/c} dx₀ dy₀     (5.1)

This relates the wavefield P(r₀, t₀) observed on the plane z = 0 to its value at a point P(r, t) in the earth's subsurface (see Fig. 17) at an earlier time. In the seismic application, the second term in the square brackets is normally ignored since it is small, but, as Schneider points out, its inclusion is trivially achieved. The operations implied by eqn. (5.1) are simply weighting, scaling and phase shifting of data on a hyperboloid. The term cos α_r represents a directivity term which falls off from its value of unity at the apex of the hyperboloid to a lesser value on the flanks. Various other directivity terms have been discussed by Kuhn.50 In the conventional application, cos α_r


directivity works well, provided that the data is not near the spatial aliasing limit (i.e. it is valid for gentle dips). Under the limiting conditions, Kuhn recommends instead a cos³α_r directivity term coupled with a 'beam steering' approach to migration. In this case diffraction stack migration requires precalculated dip angles, ray path distances, migration velocities, weights and mute patterns. It is clear that such complicated calculations make Kuhn's suggestion somewhat unattractive, although if the data is at

FIG. 17. Geometry for the 3D Kirchhoff integral solution.

the spatial aliasing limit then this price may have to be borne. The more usual directivity term cos α_r is identical to that obtained in Stolt's F-K migration, and the angle α_r represents the dip angle of the earth reflector which is to be imaged.

The other features of interest in eqn. (5.1) are the factor 1/(|R|c), which represents a true amplitude scaling factor, and the differentiation of the pressure with respect to time. Differentiation, when examined in the frequency domain, represents nothing more than a π/2 phase shifting operation together with a linear high-frequency boost; this is the 'Newman' filter. In practice the pressure amplitude rather than its derivative is summed over the hyperboloid; if the derivative of pressure is summed then no Newman filter is required, otherwise a Newman filter can be applied before migration.
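Stripped of the Newman filter and tapers, the basic summation is easy to exhibit. The sketch below is our own illustration (the function name, nearest-sample lookup and rectangular aperture are assumptions, not a published algorithm); it stacks zero offset data along the hyperbola t = [t₀² + 4(x_i − x)²/c²]^{1/2}, weighting each contribution by the cos α_r factor t₀/t:

```python
import numpy as np

def diffraction_stack(p_tx, dt, dx, c, aperture):
    """Zero offset diffraction stack (Kirchhoff-style) time migration sketch.
    aperture: half-width of the summation in traces.  Nearest-sample
    lookup; no Newman filter, taper or true-amplitude scaling."""
    nt, nx = p_tx.shape
    out = np.zeros_like(p_tx)
    for ix in range(nx):                      # output trace position
        for it0 in range(1, nt):              # output time t0 = 2*z/c
            t0 = it0*dt
            s = 0.0
            for j in range(max(0, ix - aperture), min(nx, ix + aperture + 1)):
                # two-way time on the zero offset diffraction hyperbola
                t = np.hypot(t0, 2*(j - ix)*dx/c)
                it = int(round(t/dt))
                if it < nt:
                    s += (t0/t)*p_tx[it, j]   # cos(alpha) weight t0/t
            out[it0, ix] = s
    return out
```

Running this on an isolated spike immediately reproduces the 'smile' pattern complained of above, while a modelled diffraction hyperbola collapses to its apex.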

For two-dimensional structures the area integration in eqn. (5.1) can be reduced to a line integral,51 by integration over one of the variables (y for example). This reduction is somewhat tedious and results in the following integral:

P(r, t) = [1/(2πc)]^{1/2} ∫ (cos α_r/R₂^{1/2}) ∂^{1/2}P(x₀, 0, t₀)/∂t₀^{1/2} |_{t₀ = t + R₂/c} dx₀     (5.2)


where

R₂ = [(x₀ − x)² + z²]^{1/2}

In eqn. (5.2) we have dropped the last term in the square brackets of eqn. (5.1). The square root differentiation in eqn. (5.2) is not defined, except in the frequency domain, where it represents a nonlinear high-frequency boost followed by a π/4 phase shifting operation. This is the two-dimensional version of the Newman filter. The other factors appearing in eqn. (5.2) are the two-dimensional counterparts of those in eqn. (5.1). If the 3D migration is split into alternating direction 2D migrations, then the 2D Newman filter will be applied twice and produce the same effect as the 3D filter; however, the amplitude weighting factor will be in error, since in eqn. (5.2) it was derived for a line rather than a point source.

In the practical implementation of eqns. (5.1) and (5.2), the integration is replaced by summation and the infinite limits on the integrals by finite limits. The replacement of integration by summation results in discretisation errors (due to the discrete nature of sampling on the ground), whilst the termination of the summation after a finite number of terms manifests itself in an error which may be called 'truncation' error. Provided that the ground sampling interval Δx and the fold of the diffraction stack are chosen with due regard to reflector dips, neither of these effects will be significant. When this is not the case, the application of high cut filters before migration, and trace weighting in the migration, will overcome the worst of the troubles,52 but only at the expense of some loss in definition. Indeed, if sampling is near the spatial aliasing limit, then the beam steering method proposed by Kuhn50 appears to offer the best solution.

It is easy to see how discretisation noise arises with the diffraction stack method. Consider a plane horizontal reflector (Fig. 18(a)). Application of the diffraction stack method will move data in the direction indicated by the arrows, for a central output trace. This output trace is shown in Fig. 18(b) and contains the main broadened pulse, preceded by a long tail. This effect is aggravated further when the beds are dipping. To remove these effects a time and offset varying top cut filter should be applied, before and during the migration. This explains why the finite difference method, which needs no pre-migration filtering, performs more satisfactorily for gently dipping beds. However, on the steeper dipping beds, dispersion effects produced by some numerical algorithms do tend to reverse the balance in favour of diffraction stack migration.

It was mentioned previously that the early termination of the infinite limits on the integrals of eqns. (5.1) and (5.2) caused some undesirable


FIG. 18. (a) Energy from a plane horizontal reflector is moved, in the diffraction stack process, in the direction given by the curved arrows to a central output trace. (b) This gives rise to a broadened output pulse preceded by a long tail.

truncation errors. Nevertheless the onus remains on the user to reduce the number of terms retained in the discrete representation of the integrals in order to achieve the following:

(1) Rapid processing: the fewer the terms, the faster the computation runs.

(2) Limitation of noise smiles produced from isolated bursts of noise on the unmigrated section.

(3) Accurate representation of dipping beds: the number of terms included in the summation must be sufficient to cope with the maximum dip on the time section, but no more, since extra terms tend to introduce noise as above, and the construction of offset-dependent filters becomes more complicated at the larger offsets.

Having truncated the number of terms in the diffraction summation, there is a penalty to be paid in that a number of 'ghosts' will appear before each reflector on the migrated output. Although similar in appearance to the 'discretisation noise', Safar53 has shown that their cause is indeed the truncation of the Kirchhoff integral rather than its approximation by a discrete summation. These effects may be reduced by tapering the cos α_r directivity factor at the end of the swings. Another penalty is that, with fewer terms than are required to achieve migration, i.e. short apertures and large dip angles, the migrated output will appear in the wrong place:


'under-migrated'. Since it may not be obvious what the maximum dips involved on the time section are, one may well migrate with an aperture which is smaller than that which is required.

5.3. Variable Velocity Migration and Datumming
What we have said so far applies strictly to a constant velocity medium, since Kirchhoff theory is only valid in this case. To apply the diffraction stack process in a practical situation means that some approximations are required. In a depth stratified medium, ray tracing and timing would indicate that we could still use the previous theory, but with the constant velocity c replaced by the RMS velocity at the apex of the diffraction hyperbola. In the case of a medium with a weak lateral velocity variation, the diffraction stack migration should be followed by a Hubral correction.4 The Kirchhoff integral is employed to deal with the diffraction effects, and the Hubral correction with the refraction effects or ray bending. The ray bending is such that the minimum travel time path for a point diffractor does not emerge at the surface directly above it, but at a point displaced towards the side with higher velocity. The correction made by Hubral entails tracing down 'image rays', i.e. rays emerging at an angle of 90° to the earth's surface, back into the earth and accumulating travel times and corresponding lateral displacements. These corrections are then applied to the migrated output of the diffraction stack process.

When the earth has strong lateral velocity variations this type of procedure breaks down, and one might then apply a downward continuation procedure in conjunction with the diffraction stack process. This is called recursive Kirchhoff migration by Berkhout and Palthe,45 who suggest the method. Another technique, which is particularly useful when the earth can be pictured as a series of arbitrarily shaped segments each with its own constant velocity, has been proposed by Berryhill.7 Instead of projecting the seismogram downwards in regular depth slices, these slices are irregular and follow the interfaces between the constant velocity segments. Given that the pressures at the top of each segment are recorded, or have been previously computed, the method computes the values at the bottom of the segment (the new datum level) by summation and weighting over the traces at the top of each segment, in a method based on the Kirchhoff integral. Thus Berryhill derives an expression for calculation of the output traces P(x, z(x), t) as follows:

P(x, z(x), t) = (1/π) Σ_i Δx_i cos θ_i (t_i/r_i) Q(x_i, z_i(x_i), t − t_i)     (5.3)


where:

P = output trace
Q = input trace at location (x_i, z_i(x_i)), delayed by the travel time t_i and convolved with a 5-10 sample length shaping operator (the Newman filter)
t_i = time for the pressure wave to travel in a straight line between the input and output locations
θ_i = angle between the normal to the input horizon and the line joining the input and output locations
Δx_i = trace separation between input traces at the ith location.

The geometrical quantities occurring in eqn. (5.3) are shown in Fig. 19. This expression is naturally enough reminiscent of the diffraction stack process, with weighting, directivity, amplitude and trace filtering all included. Note that this method can be applied in a variable velocity segment provided that the travel times t_i can be accurately computed. Berryhill claims that the naive approximation in which curved ray paths (such as the dashed path in Fig. 19) are replaced by straight ray paths gives adequate results for a practical procedure. In this case the travel time t_i is computed by integrating dr/c(x, z) over the straight line joining the input and output locations. Although Berryhill's approximation will lead to some error, there is no reason, provided that the segments are thin, to dispute Berryhill's claims.
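As an illustration of eqn. (5.3), the fragment below is a sketch under our own simplifying assumptions: a flat recording surface, vertical input normals, a constant velocity between the two levels, and no Newman shaping filter; the function name is ours. It projects traces recorded at z = 0 down to an irregular datum z(x) using straight-ray delays and the Δx_i cos θ_i t_i/r_i weighting:

```python
import numpy as np

def datum_continue(q_tx, dt, dx, c, z_datum):
    """Berryhill-style datumming sketch: continue traces recorded on z = 0
    down to the datum depth z_datum[i] below each location.
    q_tx: array (nt, nx) of input traces; returns traces on the datum."""
    nt, nx = q_tx.shape
    x = np.arange(nx)*dx
    out = np.zeros_like(q_tx)
    for io in range(nx):                 # output location on the datum
        zo = z_datum[io]
        for ii in range(nx):             # input location on the surface
            r = np.hypot(x[ii] - x[io], zo)
            if r == 0.0:
                continue                 # coincident points (zero datum shift)
            ti = r/c                     # straight-ray travel time
            cos_th = zo/r                # theta against the vertical normal
            weight = (1.0/np.pi)*dx*cos_th*ti/r
            k = int(round(ti/dt))        # delay in samples
            if k < nt:
                out[k:, io] += weight*q_tx[:nt - k, ii]
    return out
```

After continuation to each datum in turn, the traces can be fed back in as the Q of the next segment, which is how the recursion over irregular layers proceeds.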

It must be borne in mind that the output from this datumming technique is not the migrated output per se, but the time section which would be recorded at each datum level. Zero time on the output level refers to the time at which shots placed on the actual datum level are set off and recording starts. The method is therefore particularly useful when there is irregular sea bottom topography, which tends to cause all manner of complications

FIG. 19. Geometry used in Berryhill's datumming method.


on the recorded section. Berryhill's method will remove these effects and, after datumming to a horizontal reference level, conventional migration processing is then possible.

5.4. Migration before Stack of Constant Offset Sections
Although constant offset migration via the diffraction stack process is possibly one of the most widely available of the pre-stack migration processes, there appears to be virtually no discussion of the method in the literature. For this reason more space will be devoted to this technique than its importance would warrant, to redress this surprising imbalance. It was noted, in discussing the finite difference method (Section 3.6), that the wave equation could not be separated out over the offset direction unless a low-accuracy wave equation was used; alternatively, a steep-angle wave equation could be used provided that distorted diffraction hyperbolae on a finite offset section were approximated by true hyperbolae with the aid of 'pseudo-migration' velocities. Again, in discussing F-K migration methods it was concluded that migration before stack was only relevant in the practically uninteresting case of a constant velocity medium. It is clear that migration of constant offset sections is most readily performed via diffraction stack methods.

The theoretical basis for the migration procedure is again the wave equation, but for simplicity ray tracing and timing calculations are used here, although these do not give the directivity factors (which wave theory predicts). Suppose that we consider a constant offset section where the half offset value is h (Fig. 20). If the medium is homogeneous and has a constant velocity c, then ray path considerations tell us that the two-way travel time to a point scatterer located at (0, z₀) is

t = (1/c)(SO + OG) = (1/c){[(x - h)² + z₀²]^{1/2} + [(x + h)² + z₀²]^{1/2}}    (5.4)

A sample occurring at time t on the constant offset section will be repositioned at a time t₀ on the output trace, where t₀ = 2z₀/c. This operation includes both migration and NMO corrections. Residual NMO corrections can be applied in a refined velocity analysis. However, it is sometimes convenient to apply migration without correcting for NMO. In this case the samples are repositioned to t′ where

t′² = t₀² + 4h²/c²    (5.5)

A conventional velocity analysis may be applied and the data stacked in the usual fashion.
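To make the repositioning rule concrete, eqn. (5.4) can be coded directly; the function names and the numerical values below are purely illustrative, not from the text. At the apex x = 0 the computed travel time satisfies the usual moveout relation t² = t₀² + 4h²/c².

```python
import math

def diffraction_time(x, h, z0, c):
    # two-way time of eqn (5.4): source S = (x - h, 0) down to the
    # point scatterer at (0, z0) and back up to the geophone G = (x + h, 0)
    so = math.hypot(x - h, z0)
    og = math.hypot(x + h, z0)
    return (so + og) / c

def output_time(z0, c):
    # migrated output time t0 = 2 * z0 / c (migration plus NMO removed)
    return 2.0 * z0 / c

c, h, z0 = 2000.0, 500.0, 1000.0
t_apex = diffraction_time(0.0, h, z0, c)   # travel time at the apex x = 0
t0 = output_time(z0, c)
```

A migration scan would evaluate `diffraction_time` over a range of x, sum the input amplitudes found on that trajectory, and place the result at t₀ on the output trace.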


196 P. HOOD

FIG. 20. A point scatterer at O is illuminated by a sound wave emanating from S. Scattered energy is received at a geophone G. The common midpoint is located at M.

Ray tracing calculations can be used to determine the number of terms to be included in the migration scans. As in the zero offset case, this depends on the dip angle α_r of the dipping horizon. This may be related to the (one-way) time dip angle α_t on the zero offset section (compare with eqn. (4.1)) by

tan α_t = sin α_r    (5.6)

In Appendix 3 it is shown that the minimum half-scanwidth x_m needed to migrate correctly a bed with dip angle α_r is given by (see Fig. A3.1)

x_m = (ct₀/2) tan(α_r - β) + h    (5.7)

where

β = tan⁻¹ {(-ct₀ + [(ct₀)² + 4h²(1 - cos² 2α_r)]^{1/2}) / (2h(1 - cos 2α_r))}    (5.8)
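The scan-width computation can be sketched as follows; the formulae are transcribed from eqns. (5.7) and (5.8) as printed here, and the helper names are ours. A useful sanity check is the zero offset limit h → 0, where β → 0 and the expression reduces to the familiar zero offset half-scanwidth (ct₀/2) tan α_r.

```python
import math

def beta(alpha_r, t0, h, c):
    # auxiliary angle of eqn (5.8)
    ct0 = c * t0
    num = -ct0 + math.sqrt(ct0**2 + 4.0*h*h*(1.0 - math.cos(2.0*alpha_r)**2))
    den = 2.0*h*(1.0 - math.cos(2.0*alpha_r))
    return math.atan2(num, den)          # atan2(0, 0) = 0 covers h = 0

def half_scanwidth(alpha_r, t0, h, c):
    # eqn (5.7): x_m = (c*t0/2) * tan(alpha_r - beta) + h
    return 0.5*c*t0*math.tan(alpha_r - beta(alpha_r, t0, h, c)) + h

a = math.radians(30.0)
xm  = half_scanwidth(a, 2.0, 500.0, 2500.0)   # finite offset, half offset 500 m
xm0 = half_scanwidth(a, 2.0, 0.0, 2500.0)     # zero offset limit
```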

To gain some insight into the distortions produced by diffraction stack migration, it is instructive to examine an article by Gardner et al.6 It is shown there that an initial wavefield with a wavelength of λ, given by

P = cos(2πct/λ)    (5.9)

will be distorted after migration into a new wavefield given by

P′ ∝ (λ/z)^{1/2} cos[(2π/λ)(2z + h²/z + λ/8)]    (5.10)


The distortion can be removed by multiplication by a nonlinear frequency boost and amplitude scaling term of the form (z/λ)^{1/2}, and by applying a 45° phase shift (compare with the term 2πλ/8λ in eqn. (5.10)). So it appears that the ordinary 2D Newman filter is appropriate to migration before stack. Equation (5.10) further predicts a change in wavelength after migration to λ′, where

λ′ = λ/(2 - h²/z²)    (5.11)

In all migration schemes the frequency after migration is lower than before migration. This is easiest to see in F-K migration, which represents a mapping from high to low frequencies. Another way of illustrating this is in terms of a dipping bed. Migration essentially preserves the thickness of individual beds as it rotates them to steeper angles. The display of both the migrated and unmigrated bed takes vertical slices (traces) through the bed, which of course appear thicker as the dip of the bed becomes steeper. The only bed whose frequency content is unchanged by migration is a bed of zero dip. The theory of Gardner et al. is derived for zero dip, and eqn. (5.11) predicts that, at finite offset, there will be a lowering of the frequency after migration even at zero dip (an effect analogous to NMO stretch). Equation (5.11) thus provides the basis for deriving corrective filters or a mute region in the migration to avoid excessive pulse stretching. Suppose that each pulse is permitted a maximum stretch of 25%; then by eqn. (5.11)

h²/2z² ≤ 1/2

or, since z = ct₀/2,

t₀ ≥ 2h/c    (5.12)

At a given offset h, the equality sign in eqn. (5.12) determines the minimum time on the output trace from which we would expect to receive contributions from the input section.
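Taking the equality sign in eqn. (5.12) at face value, and anticipating that in a depth stratified medium the velocity itself depends on t₀ (so that, as noted in the text, an iterative procedure is needed to define t₀), the mute time can be found by a simple fixed-point iteration. The velocity function here is an assumed illustration, not one from the text.

```python
def crms(t0):
    # assumed illustrative RMS velocity function (m/s)
    return 1500.0 + 400.0 * t0

def mute_time(h, tol=1e-12):
    # solve t0 = 2*h / crms(t0), i.e. the equality in eqn (5.12)
    # with c generalised to the RMS velocity at the output time
    t0 = 2.0 * h / crms(0.0)      # start from the surface velocity
    while True:
        t_new = 2.0 * h / crms(t0)
        if abs(t_new - t0) < tol:
            return t_new
        t0 = t_new

t_mute = mute_time(1000.0)        # half offset h = 1000 m
```

Convergence is rapid here because the iteration map is a contraction for any velocity that increases moderately with time.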

All of the relations developed in this section extend in the obvious way to a depth stratified medium. The velocity c generalises to the RMS velocity c_RMS at the output time t₀. The only exception to this rule is eqn. (5.12), where, since the velocity on the right-hand side of the expression is itself a function of t₀, an iterative procedure is required to define t₀. Here and


elsewhere c_RMS is appropriate only for angles of dip less than about 60°.46

For steeper angles of dip, just as in CMP stacking, the inclusion of fourth-order terms in defining the summation trajectory is appropriate. In the case of lateral velocity variation, only heuristic procedures are available for choosing a migration velocity. The mean RMS velocity between the input and output trace positions at a time t₀ is one possibility. Alternatively, ray tracing may be used to determine the summation trajectory.

Finally, it must be remarked that it is common practice, in the interests of economy, for groups of constant offset sections to be lumped together before migration into substacks. There are no fixed rules which dictate at what level of earth complexity migration of a stacked CMP section will fail, since the definition of the onset of such failure is a matter of subjective appraisal. Similarly it is uncertain what number of substacks should be used in a migration before stack procedure. Newman54 has, however, produced guidelines for determining the number of offsets to be stacked in a substack on the basis of subsurface coverage. Near-offset sections have a greater density of subsurface areal coverage than far offset sections, and consequently the nearer offsets are stacked together whilst the far offsets are migrated singly or in small groups. At present constant offset migration has been demonstrated to offer advantages (generally in the early part of the section) over post-stack migration only when the CMP stacking process has failed in some respect; usually this occurs in regions of complex and steeply dipping structure.

5.5. 3D Migration-Splitting Techniques

Diffraction stack migration in three dimensions is a generalisation of 2D migration. The diffraction hyperbola from a single point scatterer (Fig. 21)

FIG. 21. Point diffractor in a constant velocity medium.


FIG. 22. Hyperboloidal diffraction pattern.

thus becomes a hyperboloid (Fig. 22), and migration consists of collecting the amplitudes over all points of the hyperboloid and placing the sum at the apex A (Fig. 23). As was discussed in Section 5.2, weighting, directivity and frequency filtering are required.
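For the constant velocity case the hyperboloid travel time, and the two-pass route used by the splitting methods discussed below in this section, can be checked numerically. This sketch (function names and numbers are ours, purely illustrative) collapses the surface first over y to an intermediate apex and then over x to the true apex; for constant velocity the two routes agree exactly.

```python
import math

def hyperboloid_time(x, y, T0, c):
    # travel time on the hyperboloid above a point diffractor with
    # apex time T0, constant velocity c: t^2 = T0^2 + 4(x^2+y^2)/c^2
    return math.sqrt(T0**2 + 4.0 * (x*x + y*y) / (c*c))

def split_time(x, y, T0, c):
    # two-pass route: first collapse in y to the intermediate apex
    # (x, 0, t0), then in x to the true apex (0, 0, T0)
    t0 = math.sqrt(T0**2 + 4.0 * x*x / (c*c))
    return math.sqrt(t0**2 + 4.0 * y*y / (c*c))

t_full = hyperboloid_time(300.0, 400.0, 1.5, 2000.0)
t_two_pass = split_time(300.0, 400.0, 1.5, 2000.0)
```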

A considerable economy can be achieved in 3D migration by splitting it into a series of 2D migrations. In the Kirchhoff method, for example, these savings are typically of the order of a factor of 100. The penalty to be paid is a slight reduction in migration accuracy, as Gibson et al.55 have demonstrated. In a splitting method, amplitudes are collected in two stages (Fig. 24); they are moved first along the path to (x, 0, t₀) and finally to (0, 0, T₀). Provided that the velocity is constant, no errors are introduced. For a depth stratified medium it is possible to identify the source of error by

FIG. 23. Heuristically, 3D migration involves placing the sum of all amplitudes on the hyperboloid at its apex. Allowance for geometric spreading, directivity and frequency-dependent effects is required as well.


FIG. 24. The amplitude at any input point (x, y, t) can be brought to the intermediate point (x, 0, t₀) at the apex of the hyperbola with x constant, prior to doing a second-stage summing along the hyperbola on the plane y = 0.

FIG. 25. In a splitting method amplitudes are summed along the curve S′, a diffraction hyperbola whose velocity depends on the apex time t₀. In the full 3D migration, amplitudes are summed along the curve S lying on the hyperboloid surface; this curve is also a hyperbola, but the velocity in question comes from the apex of the hyperboloid, T₀, and not t₀.

FIG. 26. We wish to perform two-step migration to move the amplitude at A first to the intermediate apex and finally to the apex of the hyperboloid (cf. Fig. 23). The problem when velocities vary with time, however, is that the correct diffraction surface is characterised by the velocity at the apex position T₀, but during the first step of the migration the process does not know the apex time T₀ and moves amplitudes instead along the thin lines.


examining the trajectory over which amplitudes are summed. This is defined by

t² = T₀² + 4(x² + y²)/c²_RMS(T₀)    (5.13)

A plane x = x₁ which intersects a typical hyperboloidal surface is shown in Fig. 25. The intersection of this plane with the hyperboloid, the curve S, is itself a hyperbola, with a minimum time at the apex of t₀. The equation defining S is (from eqn. (5.13))

t_s² = T₀² + 4(x₁² + y²)/c²_RMS(T₀)    (5.14)

Now suppose A, B lie in the plane y = 0; the travel times t₀ and T₀ are then related by

t₀² = T₀² + 4x₁²/c²_RMS(T₀)    (5.15)

Substituting in eqn. (5.14) gives

t_s² = t₀² + 4y²/c²_RMS(T₀)    (5.16)

Equation (5.16) is the correct summation curve for 3D migration. If we perform a splitting method then the summation curve used will be along a different hyperbola S′, where the travel time t_s′ is defined by

t_s′² = t₀² + 4y²/c²_RMS(t₀)    (5.17)

The difference between eqns. (5.16) and (5.17) lies only in the time at which the RMS velocity is defined. In the usual case of velocities increasing with depth, the summation will be along the thin curve in Fig. 26, rather than the correct bold curve. The resulting migration error depends on dip and on the variation in magnitude of velocity, since S′ departs from the true curve S with both of these quantities. The error is zero when either x or y lies along the strike direction, and is largest when either of these axes lies at 45° from strike.
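The discrepancy between the two curves can be illustrated numerically from eqns. (5.15)-(5.17); the velocity function and the numbers below are our own illustrative choices. With velocity increasing in time, t₀ > T₀ implies c_RMS(t₀) > c_RMS(T₀), so the split curve S′ sits slightly shallower than the correct curve S.

```python
import math

def crms(t):
    # assumed illustrative RMS velocity increasing with time (m/s)
    return 1500.0 + 500.0 * t

T0, x1, y = 1.0, 400.0, 400.0
# apex time of the x = x1 hyperbola -- eqn (5.15)
t0 = math.sqrt(T0**2 + 4.0 * x1**2 / crms(T0)**2)
# correct summation curve S  -- eqn (5.16): velocity taken at T0
ts_correct = math.sqrt(t0**2 + 4.0 * y**2 / crms(T0)**2)
# splitting-method curve S' -- eqn (5.17): velocity taken at t0
ts_split = math.sqrt(t0**2 + 4.0 * y**2 / crms(t0)**2)
timing_error = ts_correct - ts_split     # positive: S' is too shallow
```

For these (quite steep) parameters the timing error is only a few milliseconds, consistent with the modest equivalent velocity errors reported by Gibson et al.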

It is generally true that migration with the incorrect velocity will result in positional (Δx) and temporal (Δt) errors in the placement of a dipping reflector (see Fig. 27). The reverse is also a useful concept: we can interpret errors in x or t as an equivalent error in velocity. Gibson et al.55 have produced a very useful study of these equivalent velocity errors and their results are reproduced here. Note that there is a slight change of notational convention; in their figures V is used for velocity rather than the letter c used in this text, and the x, y coordinates may not necessarily follow the dip and strike directions. Taking as a model a single dipping fault plane (Fig. 28), they simulated migration of this plane using several different (but realistic)


FIG. 27. Migration moves the dipping plane on the left of the figure to the right and places it at an earlier time. In general, migration with the incorrect velocity causes a dipping reflection to migrate to the wrong place (characterised by the wrong lateral position Δx) and/or the wrong time Δt (indicated by the dashed event).

velocity functions, and converted known timing errors on a given output trace into the equivalent velocity error. The four velocity models used in this study are shown in Fig. 29, which may be compared with two typical velocity functions for the North Sea and the Gulf of Mexico (Fig. 30). The percentage velocity error is displayed as a function of reflection time for a steeply dipping reflector at the worst (i.e. 45°) azimuth direction (Fig. 31). The largest relative error (~1%) occurs for the lowest velocity function in Fig. 29, and gets progressively smaller for the higher-velocity models. In Fig. 32 the dip dependence of the errors in the splitting method (labelled 'Fast') is compared with that of the full 3D approach (labelled 'Full'). Errors in the 'Full' approach are solely attributable to the approximation of a distorted hyperboloidal surface by a true hyperboloid (i.e. fourth-order and higher terms in offset are neglected in the calculation of the surface). Even

FIG. 28. Gibson et al.55 simulated migration of a dipping fault plane using different velocity functions. The notation used in the text is defined in this figure.


FIG. 29. R.m.s. velocity functions for the four velocity models studied.

FIG. 31. Percentage equivalent velocity error as a function of reflection time for a steeply dipping reflector at the worst azimuth orientation. The worst error (~1%) occurs for the slowest velocity function in Fig. 29. The errors are progressively smaller for the higher-velocity models.

FIG. 30. Typical velocity functions for the North Sea and the Gulf of Mexico.

FIG. 32. Maximum percentage error (the maximum point in Fig. 31) as a function of dip. Even for quite steep dip, the maximum error is modest in comparison with uncertainties in migration velocity typically encountered in practice. The curve labelled 'Fast' pertains to the splitting method; the curve labelled 'Full' pertains to the full 3D migration approach. Errors in the full approach are attributable solely to the approximation of the true diffraction surface by a hyperboloid.


FIG. 33. The error in the splitting method is a function of the orientation of the dip direction relative to the x, y processing directions. The error in the full 3D method is independent of azimuth.

for quite steep dip the errors are reasonably small. Finally, Fig. 33 shows that the error in the splitting method at a fixed dip angle is a function of the azimuthal angle between the dip direction and the x, y processing directions. The error in the full 3D method is of course independent of azimuth.

This study by Gibson et al.55 is particularly reassuring and in our view lends support to the idea of splitting 3D migration into a series of 2D migrations. The maximum errors involved, when expressed as equivalent velocity errors, are modest in comparison with other uncertainties in migration velocity which can occur in practice.

6. MISCELLANEOUS TECHNIQUES

6.1. Introduction

The previous sections have concentrated on three of the most important methods for migration of seismic data. In this section we bring together some recent developments of theoretical interest, which will no doubt feature at a more practical level in the future. The first of these methods to be examined uses a hybrid finite difference/Fourier method, a technique which is already in use commercially on a small scale. This applies finite difference methodology to the downward continuation of the Fourier transformed surface data. Most of the published articles have concentrated on data which have been Fourier transformed in time11,28,37,45 rather than spatially over x.24 The reason for this preference is that velocity is normally space variant, and so a Fourier transform over the space variable is not necessarily a good idea. On the other hand, hybrid methods which


work with a transform over t can be applied in a heterogeneous medium relatively simply.

Another area which has recently attracted some interest has been pioneered by Cohen and Bleistein,56 and is called velocity inversion theory. By this is meant the determination, from the surface recorded seismic data, of the earth velocities; this results, at the same time, in a migration. Their method is based on a perturbation expansion to determine a small parameter ε which measures the departure from a constant reference velocity; it also assumes a constant earth density. In the case of one-dimensional problems there is no limit placed on the magnitude of ε,57 but for two-dimensional or higher problems ε is constrained to be 'small': a 20% variation from the reference velocity is quoted by Cohen and Bleistein56 as a reasonable bounding limit. Apart from these restrictions on velocity, there is a more serious practical limitation in the number of computer operations involved with the method. The determination of ε requires evaluation of a fivefold integral, and in comparison with other migration methods such as Stolt's (with its twofold integration) the method is uneconomical.

At present, Cohen and Bleistein's technique is no more than a theoretical curiosity, but it is possible that future developments could make it a truly important procedure. It is considered feasible, for example, that a perturbation expansion about a spatially varying velocity could be made.58

Approximations which reduce the number of integrations are also being examined; these could lead to better computation times.

Another area for possible future exploitation is the field of underwater acoustics theory. Already some geophysicists have reported the use of what are termed split step techniques59 in connection with migration. Another practical approach to underwater acoustics has been put forward by Estes and Fain,42 which again could have geophysical application. Their technique consists of a two-part propagation: propagation through a homogeneous interval followed by a correction due to the fact that the medium in the interval is heterogeneous.

Stack enhancement techniques form a subject which is closely allied to migration. Rather than performing a complete migration before stack, these methods transform, using the wave equation, finite offset sections into zero offset sections. It must be noted that the output from this procedure is a set of unmigrated zero offset time sections, even though a 'partial' migration is involved in the formation of this set. Because of their economy relative to full migration before stack, it is expected that stack enhancement by partial migration will become more popular in the future.


6.2. Hybrid Methods

Claerbout11 first introduced the idea of hybrid methods in conjunction with seismic migration, using the space-frequency domain variables (x and ω) as opposed to the space-time coordinates (x, t). The use of these coordinates holds some advantages. For a start, finite difference approximations to the time derivatives become multiplications by ω, and a time shift over a non-integral number of samples, required for example by eqn. (A1.10), becomes a phase shift in the frequency domain. Both these operations are performed much more accurately in the frequency domain than in the time domain. Another advantage of these coordinates is that only frequencies of seismic interest need be considered; this results in some economy.

In common with his other approaches, Claerbout11 approximated the full scalar wave equation before attempting any solution. Later developments by Kjartansson60 and Gazdag38 used similar approximations to the wave equation. The difference between the various approaches lies in the way each one approximates the 'square root' term in the wave equation (see Appendix 1); all result in roughly similar equations. Kjartansson's method, for example, results in a pair of equations: a 'diffracting' part (see eqn. (A1.6))

((2ω/c) + (c/2ω)(∂²/∂x²)) ∂F′/∂z = i ∂²F′/∂x²    (6.1)

and a phase-shifting part (from eqns. (A1.4) and (A1.2)):

F(x, z + Δz, ω) = exp(iΔzω/c) F′(x, z, ω)    (6.2)

where

F′(x, z, ω) = P(x, z, ω) exp(-iωz/c)

In this scheme the wavefield is advanced to greater depths via a solution of eqn. (6.1) followed by eqn. (6.2) (see Fig. 34). After each step the migrated output on the plane t = 0 is found by inverse Fourier transformation. Since the term exp(iωt) in this operation is unity at zero time, the inverse Fourier transformation reduces to a simple summation.

FIG. 34. Data are projected from the plane at level z₀ to z₀ + Δz; each frequency component is treated separately.

The operations in Kjartansson's method are summarised below:

(1) Fourier transform the surface data from time to frequency:

P(x, z = 0, t) → P(x, z = 0, ω)

(2) Downward continue using: (a) eqn. (6.1), a diffraction equation which propagates energy from level z to level z + Δz (using a finite difference method); (b) eqn. (6.2), which takes account of propagation differences in a variable velocity medium.

(3) Synthesise the data by summing over all the frequencies:

P(x, z + Δz, t = 0) = Σ_ω P(x, z + Δz, ω)

(4) Subtract the 'd.c. term' P(x, z = z + Δz, t = 0).

(5) Go to (2).

As Kjartansson has noted, operation (4) subtracts the wavefield from each frequency; this is to avoid wrap-around. In downward continuation of time domain data, once data have moved through zero time to negative times they are no longer considered. Similar effects occur in the frequency domain, except that the data wrap around as they are downward continued. The subtraction of the term P(x, z = z + Δz, 0) cures this wrap-around effect by removing the 'd.c. term' after each step.
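The summation over frequencies used to recover the t = 0 (migrated) output rests on the fact that exp(iωt) = 1 at zero time, so the inverse transform degenerates to a plain sum. This can be checked with a toy trace and a hand-rolled DFT (everything below is illustrative, not from the text).

```python
import cmath

samples = [0.0, 1.0, 0.5, -0.25]                 # toy trace p(t)
n = len(samples)
# forward DFT of the trace
spectrum = [sum(samples[k] * cmath.exp(-2j * cmath.pi * w * k / n)
                for k in range(n)) for w in range(n)]
# inverse transform evaluated at t = 0: exp(i*w*0) = 1 for every w,
# so the inverse reduces to summing the frequency components
p_at_zero = sum(spectrum).real / n
```

The recovered value equals the t = 0 sample of the trace, which is exactly why step (3) needs no explicit inverse transform.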

Gazdag38 starts from a binomial expansion of the square root term and obtains

∂P/∂z = (iω/c)P + (ic/2ω) ∂²P/∂x²    (6.3)


Instead of advancing the wavefield using a conventional finite difference approach, Gazdag downward continues the data using a truncated Taylor expansion:

P(x, z + Δz, ω) = P(x, z, ω) + Δz (∂P/∂z)(x, z, ω) + (Δz²/2)(∂²P/∂z²)(x, z, ω) + (Δz³/6)(∂³P/∂z³)(x, z, ω)    (6.4)

The z derivatives in eqn. (6.4) are evaluated by repeated differentiation of eqn. (6.3) with respect to z at the level z. This results in quite an accurate representation of the wavefield at the new level z + Δz. The surprising feature of Gazdag's method is that the evaluation of the x derivatives in eqn. (6.3) is done not by an accurate finite differencing scheme but via a further Fourier transform over x, using the relation

∂²P/∂x² = -Σ_{k_x} k_x² P(k_x, z, ω) exp(ik_x x)    (6.5)

Although Gazdag's approach avoids the problems of finite difference dispersion errors, the use of eqn. (6.5) to evaluate the x derivatives must make this method uneconomical.
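Eqn. (6.5) is simply the spectral evaluation of the second x derivative: transform over x, multiply each component by -k_x², and transform back. A small self-contained check on a periodic trial field (the grid, field and hand-rolled DFT are our own illustrative choices):

```python
import cmath, math

n, L = 16, 2.0 * math.pi
xs = [j * L / n for j in range(n)]
p = [math.sin(x) for x in xs]        # trial field; exact d2/dx2 = -sin(x)

def dft(a):
    m = len(a)
    return [sum(a[k] * cmath.exp(-2j*cmath.pi*w*k/m) for k in range(m))
            for w in range(m)]

spec = dft(p)
# integer wavenumbers for a period of 2*pi, with negative half mapped back
kx = [w if w <= n//2 else w - n for w in range(n)]
# eqn (6.5): multiply by -kx^2 and inverse transform
d2 = [sum(-kx[w]**2 * spec[w] * cmath.exp(2j*cmath.pi*w*j/n)
          for w in range(n)).real / n for j in range(n)]
err = max(abs(d2[j] + p[j]) for j in range(n))   # compare with -sin(x)
```

The derivative is exact to rounding error for this band-limited field, which is precisely the accuracy advantage (and, via the extra transforms, the cost) that the text attributes to Gazdag's approach.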

6.3. Velocity Inversion Procedures

As applied to seismic data, velocity inversion procedures are a means of obtaining the subsurface velocities (and hence migrating the data) directly from surface recorded measurements. Migration is often mistakenly referred to as an inverse problem. It is nevertheless a forward problem; the surface recorded wavefield defines the initial conditions on the acoustic wave equation, and the wavefield is downward continued with a prescribed velocity field. Mathematically this problem is rather trivial, and in practice the real difficulties lie in obtaining efficient and accurate numerical solutions. Velocity inversion, on the other hand, demands that the velocity used in the acoustic wave equation is derived directly from the data itself.

In theory the one-dimensional problem can be completely solved for arbitrary velocity variation (but constant density); in two or more dimensions, however, there are restrictions on the permitted variation in velocity. Cohen and Bleistein57 laid the foundation for current approaches to the problem. In one dimension they transform the wave equation to the Schrödinger equation and solve the inverse problem via the


Gel'fand-Levitan integral equation. The propagation speed is then derived in terms of the potential for the Schrödinger equation. For two or more dimensions these authors express the unknown velocity in terms of a constant reference value and a small (up to 20%) perturbation from it.56,57,61 An integral equation is derived for the velocity perturbation which involves (for the 2D seismic case) a very expensive fivefold integration of the observed data. Cohen and Bleistein consider that practical restrictions such as noise, discretisation error, finite bandwidth, etc., are of more concern than the theoretical limitation on velocity perturbations. We cannot entirely agree; whilst we believe that their method has limited practical application in terms of sensitivity to noise and amplitude errors, both the cost and the permitted velocity variation are severe limitations on their approach. If, as Kennett believes to be the case, a perturbation expansion about a velocity field defined by conventional velocity analysis could be made, then such a technique would be extremely powerful. The cost could be reduced if some of the integrations involved were approximated; just how this can be done is not clear.

In one dimension, Raz2 has derived a useful extension of the above theory to obtain both velocity and density information from field data. The method permits arbitrary variation of both velocities and densities from the reference values. Another article by the same author62 considers the question of multiple reflections in a 1D velocity inversion scheme, but in this case the density information is not obtained.

6.4. Stack Enhancements

When the subsurface reflectors are steeply dipping or have high curvature, it is known that conventional CMP stacking will not be satisfactory. For instance, the stacking velocity appropriate for a dipping bed will be inappropriate for a horizontal bed and vice versa, so that it will not be possible to stack crossing events, such as fault planes or diffractions on a time section, optimally. Migration of the CMP stack will not therefore be adequate. There are a number of measures which can be taken to avoid these undesirable effects. The simplest approach has been called a 'broad dip band stack' by Western Geophysical Corp. One stacking velocity function is used to stack the flat events, and a second or third velocity function is used to enhance the various dipping events. The final stack is the sum of these intermediate stacks. In Fig. 35 we show a conventional stack of a growth fault area, and in Fig. 36 a 'broad dip band stack'. The corresponding migrations using Gazdag's phase shift method (Section 4.3) are shown in Figs. 37 and 38. The improvements are subtle and particularly


FIG. 35. CDP stack of a line over the Brazos Ridge, offshore Texas.

relate to the greater clarity of the fault plane. This may be compared with the much more costly migration before stack using Kirchhoff summation (Section 5.4) shown in Fig. 39, where the large growth fault stands out very clearly.
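The essential arithmetic of the broad dip band stack is the sample-by-sample sum of intermediate stacks, each produced with its own stacking velocity function. A minimal sketch follows; the section values are placeholders and the function name is ours.

```python
def broad_dip_band(intermediate_stacks):
    # final section = sample-by-sample sum of the intermediate stacks,
    # each of which was produced with a different stacking velocity
    # function (one tuned to flat events, others to dipping events)
    return [sum(samples) for samples in zip(*intermediate_stacks)]

flat_stack    = [0.5, 0.8, 0.1]   # stacked with the 'flat events' velocity
dipping_stack = [0.1, 0.2, 0.6]   # stacked with a 'dipping events' velocity
section = broad_dip_band([flat_stack, dipping_stack])
```

Events optimally stacked by either velocity function survive into the final section, at the cost of somewhat elevated background noise.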

An advance on the broad dip band stack method is a partial migration scheme which transforms non-zero offset time recorded data to zero offset data. If we examine the response on a hypothetical zero offset and far offset section to a buried point scatterer (Fig. 40), it is seen that the NMO process moves the apex of the diffraction hyperbola, but not the tails, to the correct

FIG. 36. Broad-dip-band stack of the data used to create the CDP stack in Fig. 35.


FIG. 37. Migration of the CDP stacked data in Fig. 35 (frequency-wavenumber method).

zero offset position. To obtain an optimum stack requires a partial migration of the data (either before or after application of NMO). Digicon Geophysical Corp. have developed a procedure which goes some way along these lines;63 they have termed the process 'DEVILISH'. The main restriction of this procedure is that it assumes that lateral velocity variations can be ignored in the partial migration procedure. This may not be important, since there is some evidence that pre-stack partial migration is remarkably insensitive to lateral velocity changes. In Fig. 41 various stacks of a salt dome region are shown. The stacks before 'DEVILISH' are quite

FIG. 38. Migration of the broad-dip-band stack in Fig. 36 (frequency-wavenumber method).


FIG. 39. Migration before stack (Kirchhoff summation). Note the reflections from the major fault at 2·8 s beneath location A. The zone of weak amplitudes in Fig. 37 now shows beneath B a sequence of beds dipping into the fault at 3·0 s. Also, reflections now appear from the steeply dipping adjustment faults in the shallow section.

FIG. 40. (a) A buried point scatterer in a homogeneous medium gives rise to (b) the corresponding time section. Zero offset and far offset sections have been superimposed. (c) After NMO the peaks of the diffraction response will coincide but the tails will not. To obtain a good zero offset stack, far offset data must be mapped in the direction shown by the arrows.


FIG. 41. Before DEVILISH: conventional stacks using different stacking velocity functions. The dipping events appear stronger at a higher stacking velocity than the more gently dipping events. After DEVILISH: stacks after DEVILISH are less sensitive to velocity, and dipping events are more continuous than with conventional stacking.



FIG. 42. Migration of a conventionally produced stack. The left flank of the salt dome at about 2·0-2·2 s is broken up.

sensitive to the stacking velocity, with some events being stacked out by too low or too high a velocity. Analogous results after 'DEVILISH' are less sensitive to the chosen velocity function. In Figs. 42 and 43 the migrated stacks are displayed. The continuity of the left flank of the salt dome at about 2·0 s is noticeably improved by the 'DEVILISH' procedure.

Yilmaz has completed an important study of pre-stack partial migration in which he has developed finite difference procedures which will handle lateral variations in velocity.30 After transforming constant offset sections to zero offset sections, the equations developed imply a lateral shift along the section to handle variations in velocity: a splitting which is reminiscent of the 'diffraction' and 'shifting' equations in depth migration by finite differences (Section 3.4).

Stack enhancements will no doubt become more widely used in the future. Judson et al. report the cost of their procedure 'DEVILISH' plus stack at only twice the cost of conventional NMO plus stack. The


FIG. 43. Migration of a stack after DEVILISH. Note the continuity of the salt dome at 2·0-2·2 s in comparison with Fig. 42. Greater clarity of the dipping events is also obtained.

importance of this pre-stack partial migration is that the resulting zero offset section has a better signal to noise ratio than a conventional stack. Post-stack migration should then stand an improved chance of resolving detail on the section. Stack enhancements of course only go so far in improving the zero offset section. When severe focusing effects are present, or the velocity varies significantly within a spread length, migration before stack by downward continuation of shot and receiver gathers will almost certainly be required (Section 3.8).

7. OVERVIEW OF THE VARIOUS MIGRATION TECHNIQUES

In this chapter we have covered most of the migration techniques which are currently available. Faced with such a plethora of methods, the


TABLE 1
RECOMMENDED MIGRATION METHOD FOR VARIOUS EARTH VELOCITY FUNCTIONS

Velocity variation                                Methods recommended                          Refer to Section

1. Constant                                       F-K                                          4·3

2. Depth stratified                               Gazdag's phase shift method                  4·4
                                                  Kirchhoff time migration                     5·2

3. Depth varying and slow lateral changes
   (significant changes over several spread
   lengths)
   (a) Gentle dip                                 Kjartansson's method                         6·2
                                                  Finite difference depth migration            3·4
                                                  Kirchhoff + Hubral correction                5·2-5·3

   (b) Steep dips                                 Stack enhancements                           6·4
                                                  followed by:
                                                  Kjartansson's method                         6·2
                                                  Finite difference depth migration            3·4
                                                  or Kirchhoff migration before stack
                                                  + Hubral correction                          5·4

4. Rapidly varying lateral velocity               Depth migration before stack of shot
   (significant changes within a spread           and geophone gathers using:                  3·8
   length)                                        Kjartansson's method                         6·2
                                                  Finite difference method                     3·4

geophysicist may well not recall the particular strengths or weaknesses of one method as opposed to another. We address this problem here, and attempt to lay down some guidelines for choosing the appropriate migration algorithm. In general, the progression from a simple time migration to a depth migration before stack should be undertaken in stages. It is quite possible that interpretation can be done on the basis of a cheap migration such as F-K migration; in this case there is no point in progressing to a better method. Even if a complete interpretation is not possible, a better idea of the velocity and earth structure should be obtainable after migration, and these can be used in a more detailed migration study.

The single most important factor which controls the type of migration required is the earth velocity. In Table 1 we recommend, in order of merit, specific migration methods which will give acceptable results for the various

Page 224: Developments in Geophysical Exploration Methods

MIGRATION 217

km 2 6 10 km

2300

3080

2 2

5800 mil

4 4

6 6

8 8

Velocity Model

FIG. 44. Velocity model for migration of the stack in Fig. 45. Note the large velocity contrast across the second interface.

earth models listed. When using this table it must be noted that, although Kirchhoff time migration followed by a Hubral correction is a possibility when velocities are slowly varying laterally, there can be problems when interfaces are curved. In this case depth migration or Kjartansson's method is required. In Fig. 44 we show a model which was based on the time section shown in Fig. 45. It might be imagined that lateral velocity variations are slight. In Fig. 46 a time migration is shown, and it behaves quite satisfactorily up to about 1·5 s, but, due to refraction effects, fails in the lower half of the section. The image ray plot (shown in Fig. 47) has severe focusing effects at about 6 km along the line, and so the two-step time migration plus Hubral correction breaks down. A comparison may be made between the conventional time migrated section converted to depth (Fig. 48) and the depth migrated section (Fig. 49). Down to 2 km in depth the results are similar, but the horizon at 8 km is quite severely broken up by the time migration. The 'smile' appearing above this reflector on the depth migrated section is thought to be an artefact whose cause may be traced back to the original eMP stack. The deep reflectors lack the diffraction tails associated with the termination of the beds. These have been destroyed by stacking; the migration of the truncated reflections produces this artificial

FIG. 45. Stack section from Guatemala. It is likely that an anomalous overburden causes the apparent anomaly at depth.

FIG. 46. Conventional time migration (finite difference algorithm). Appears over-migrated at depth.

FIG. 47. Image ray paths on a depth plot. The diverging image rays below lateral position 7 to 9 km will be unable to unscramble the complex deep reflection in Fig. 46.

FIG. 48. Conventional depth section (vertical stretching of the time migrated section).

FIG. 49. Depth migration. The deep reflection is well imaged. (The upward smile event rising from the deep reflection is thought to be an artefact related to shortcomings of the CMP stack in the vicinity of the anomaly.)

FIG. 50. Comparison between migrations of the synthetic seismogram of Fig. 6: (a) F-K migration; (b) diffraction stack migration; (c) finite difference migration.

TABLE 2
STRENGTHS AND WEAKNESSES OF THE MAIN MIGRATION METHODS

Finite difference method
  Strengths:
    1. Can, though does not usually, handle rapid lateral variations in velocity
    2. Quite economical
    3. Low computational noise from gently dipping events
  Weaknesses:
    1. Ghosting and grid dispersion errors from steep dip events
    2. Regular grid of data points is required
    3. Effect of controlling parameters is not apparent to the user, and may not be known except to the original programmer

F-K methods
  Strengths:
    1. Few control parameters
    2. Can, though does not usually, handle depth stratification exactly
    3. Very low migration noise from steeply or gently dipping events
    4. Economical
  Weaknesses:
    1. To avoid aliasing effects sometimes requires double length transforms
    2. Lateral refraction effects are not correctly handled

Diffraction stack methods
  Strengths:
    1. Can handle steep dips
    2. Equally spaced grid of data is not required
    3. Effect of control parameters is readily understood
  Weaknesses:
    1. Liable to produce 'break-up' noise and smiles
    2. Cannot handle lateral velocity variations without a ray tracing procedure
    3. An expensive method

smile. This example serves to illustrate that caution should be exercised when using this table.

It should be noted that the remarks, here and elsewhere, concerning the breakdown of the Hubral correction apply only to the automatic application of the image ray traced corrections to the migrated section. These failures are manifested in an incorrect amplitude prediction and in the deformation of the wavelet shape when the migration has failed to bring diffractions to a focus. Another widespread use of the Hubral method is to define a velocity/depth model for depth migration. In this application the Hubral procedure is used in time to depth conversion of certain picked horizons on a time-migrated section, and irrespective of horizon curvature or image ray focusing the method succeeds.

With this caveat in mind, it can be seen that we would recommend only a few migration methods. Of these, a finite difference depth migration or Kjartansson's method can be applied in most realistic situations, whether pre- or post-stack. There are certain cosmetic differences between the main migration methods, which we summarise in Table 2, and these should be self-explanatory. This table can be read in conjunction with a demonstration of the output obtained from migrating the synthetic seismogram in Fig. 6 by each of the methods (Fig. 50). Unfortunately, it was not possible to include Kjartansson's method in this comparison, but it is expected to be a cross between the F-K migration and the finite difference migration in appearance and effect.
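The recommendations of Table 1 amount to a small decision table. As an illustrative sketch (the regime labels and the function are our own naming, not part of any processing system), the selection logic can be written as a lookup returning the methods in order of merit:

```python
# Hypothetical encoding of Table 1: velocity regime -> recommended
# migration methods, best first. Labels are illustrative only.
RECOMMENDATIONS = {
    "constant": ["F-K migration"],
    "depth_stratified": [
        "Gazdag's phase shift method",
        "Kirchhoff time migration",
    ],
    "slow_lateral_gentle_dip": [
        "Kjartansson's method",
        "finite difference depth migration",
        "Kirchhoff time migration + Hubral correction",
    ],
    "slow_lateral_steep_dip": [
        "stack enhancement, then Kjartansson's method",
        "stack enhancement, then finite difference depth migration",
        "Kirchhoff migration before stack + Hubral correction",
    ],
    "rapid_lateral": [
        "depth migration before stack (Kjartansson's method)",
        "depth migration before stack (finite difference method)",
    ],
}

def recommend(velocity_regime):
    """Return the recommended migration methods, in order of merit."""
    try:
        return RECOMMENDATIONS[velocity_regime]
    except KeyError:
        raise ValueError(f"unknown velocity regime: {velocity_regime!r}")
```

The point of the table form is that the choice is driven by a single observable, the behaviour of the velocity field, exactly as argued above.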

8. ACKNOWLEDGEMENTS

The author would like to thank the Chairman and Directors of The British Petroleum Company Limited for permission to publish this chapter, and his colleagues in the Company for their assistance and advice. The author also acknowledges his debt to the management of Seismograph Service (England) Limited for much of his early grounding in migration theory. Particular gratitude is expressed to the Western Geophysical Company of America, and to Dr. K. Larner for his assistance in providing Figs. 21-4, 26-33, 35-9 and 44-9 along with the captions, and for his advice on the subject of splitting 3D migration into a series of 2D migrations. Dr. L. Hatton of Merlin Geophysical Company Limited is acknowledged for his discussion on finite difference depth migration. Dr. J. W. C. Sherwood is thanked for his discussion on finite difference migration and the DEVILISH procedure, and Digicon Inc. for providing Figs. 41-3. Finally, the editor of Geophysical Prospecting is acknowledged for permission to reproduce Figs. 6, 7 and 50, which came from an earlier publication by the author.

REFERENCES

1. HAGEDOORN, J. G., A process of seismic reflection interpretation, Geophys. Prospecting, 2, pp. 85-127, 1954.
2. RAZ, S., Direct reconstruction of velocity and density profiles from scattered field data, Geophysics, submitted for publication, 1980.
3. MARFURT, K. J., Elastic wave equation migration-inversion, PhD Thesis, Columbia University, 1978.
4. HUBRAL, P., Time migration-some ray theoretical aspects, Geophys. Prospecting, 25, pp. 738-45, 1977.
5. FRENCH, W. S., Computer migration of oblique seismic reflection profiles, Geophysics, 40, pp. 961-80, 1975.
6. GARDNER, G. H. F., FRENCH, W. S. and MATZUK, T., Elements of migration and velocity analysis, Geophysics, 39, pp. 811-25, 1974.
7. BERRYHILL, J. R., Wave equation datumming, Geophysics, 44, pp. 1329-44, 1979.
8. CLAERBOUT, J. F., Toward a unified theory of reflector mapping, Geophysics, 36, pp. 467-81, 1971.
9. LOEWENTHAL, D., LU, L., ROBERSON, R. and SHERWOOD, J. W. C., The wave equation applied to migration, Geophys. Prospecting, 24, pp. 380-99, 1976.
10. CLAERBOUT, J. F., Coarse grid calculations of waves in inhomogeneous media with application to delineation of complicated seismic structure, Geophysics, 35, pp. 407-18, 1970.
11. CLAERBOUT, J. F., Numerical holography, Acoustical holography, Vol. 3, ed. A. F. Metherell, pp. 273-83, Plenum Press, New York, 1970.
12. CLAERBOUT, J. F. and JOHNSON, A. G., Extrapolation of time dependent waveforms along their path of propagation, Geophys. J. Roy. Astron. Soc., 26, pp. 285-93, 1971.
13. HATTON, L., LARNER, K. and GIBSON, B., Migration of seismic data from inhomogeneous media, presented at 41st Mtg of EAEG, Hamburg, 1979.
14. ALFORD, R. M., KELLY, K. R. and BOORE, D. M., Accuracy of finite difference modelling of the acoustic wave equation, Geophysics, 39, pp. 834-42, 1974.
15. DEREGOWSKI, S. M., A finite difference method for CDP stacked section migration, presented at 40th Mtg of EAEG, Dublin, 1978.
16. MCDANIEL, S. T., Parabolic approximations for underwater sound propagation, J. Acoust. Soc. Am., 58, pp. 1178-85, 1975.
17. LAPIDUS, L., Digital computation for chemical engineers, McGraw-Hill, New York, 1962.
18. MUIR, F., Stanford Exploration Project, Vol. 8, p. 54, 1976.
19. CLAYTON, R. and ENGQUIST, B., Absorbing boundary conditions for acoustic and elastic wave equations, Bull. Seis. Soc. Am., 67, pp. 1529-40, 1977.
20. MARSCHALL, R., Derivative of two-sided recursive filters with seismic applications, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
21. HOOD, P., Finite difference and wavenumber migration, Geophys. Prospecting, 26, pp. 773-89, 1978.
22. BUCHANAN, D. J., An exact solvable one way wave equation, presented at the 48th Ann. Mtg of SEG, San Francisco, 1978.
23. WHITTLESEY, J. R. B. and QUAY, R. G., Wave equation migration operators using 2-D Z-transform theory, presented at 47th Ann. Mtg of SEG, Calgary, 1977.
24. STOLT, R. H., Migration by Fourier transform, Geophysics, 43, pp. 23-48, 1978.
25. DEREGOWSKI, S. M., Report on the finite difference method, BP Company Limited (in preparation), 1979.
26. DEREGOWSKI, S. M., Private communication, BP Company Limited, 1980.
27. DOHERTY, S. M., Structure independent seismic velocity estimation, PhD Thesis, Geophysics Department, Stanford University, Ca., 1975.
28. CLAERBOUT, J. F., Fundamentals of geophysical data processing, McGraw-Hill, New York, 1976.
29. SHERWOOD, J. W. C., Private communication, Digicon Inc., 1980.
30. YILMAZ, O., Pre-stack partial migration, PhD Thesis, Department of Geophysics, Stanford University, Ca., 1979.
31. SCHULTZ, P. S. and CLAERBOUT, J. F., Velocity estimation and downward continuation by wavefront synthesis, Geophysics, 43, pp. 691-714, 1978.
32. ESTEVEZ, R., Wide angle diffracted multiple reflections, PhD Thesis, Geophysics Department, Stanford University, Ca., 1977.
33. SCHULTZ, P. S. and SHERWOOD, J. W. C., Depth migration before stack, Geophysics, 45, pp. 376-93, 1980.
34. MAGINNESS, M. G., The reconstruction of elastic wavefields from measurements over a transducer array, J. Sound and Vibration, 20 (No. 2), pp. 219-40, 1972.
35. BOOER, A. K., CHAMBERS, J. and MASON, I. M., Numerical holographic reconstruction by a projective transform, Electron. Lett., 13, pp. 569-70, 1977.
36. GAZDAG, J., Wave equation migration with the phase-shift method, Geophysics, 43, pp. 1342-51, 1978.
37. GAZDAG, J., Extrapolation of seismic waveforms by Fourier methods, IBM J. Res. Dev., 22, pp. 481-6, 1978.
38. GAZDAG, J., Wave equation migration with the accurate space derivative method, Geophys. Prospecting, 28, pp. 60-70, 1980.
39. PHINNEY, R. A. and FRAZER, L. N., On the theory of imaging by Fourier transform, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
40. CHUN, J. H. and JACEWITZ, C. A., Fundamentals of frequency domain migration, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
41. CHUN, J. H. and JACEWITZ, C. A., A fast multi-velocity function frequency domain migration, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
42. ESTES, L. E. and FAIN, G., Numerical technique for computing the wide angle acoustic field in an ocean with range-dependent velocity profiles, J. Acoust. Soc. Am., 62, pp. 38-43, 1977.
43. LARNER, K. and HATTON, L., Wave equation migration: two approaches, Offshore Technology Conference, paper OTC-2568, Houston, 1976.
44. NEWMAN, P., Amplitude and phase properties of a digital process, presented at 37th Mtg of EAEG, Bergen, Norway, 1975.
45. BERKHOUT, A. J. and PALTHE, D. W. VAN W., Migration in terms of spatial deconvolution, Geophys. Prospecting, 27, pp. 261-91, 1979.
46. SCHNEIDER, W. S., Integral formulation for migration in two and three dimensions, Geophysics, 43, pp. 49-76, 1978.
47. KUHN, M. J. and ALHILALI, K. A., Weighting factors in the construction and reconstruction of acoustical wavefields, Geophysics, 42, pp. 1183-98, 1977.
48. BOLONDI, G., ROCCA, F. and SAVELLI, S., A frequency domain approach to two dimensional migration, Geophys. Prospecting, 26, pp. 750-72, 1978.
49. GARIBOTTO, G., 2-D recursive filters for the solution of two-dimensional wave equations, IEEE Trans. on Acoust. Speech and Signal Processing, ASSP-27, pp. 367-73, 1979.
50. KUHN, M. J., Acoustical imaging of source receiver coincident profiles, Geophys. Prospecting, 27, pp. 62-77, 1979.
51. DEVEY, M. G., Derivation of the migration integral, Technical Note TN451, BP Company Ltd, Exploration and Production Department, 1979.
52. HOSKEN, J. W. J., Improvements in the practice of 2D diffraction stack migration, Report No. EPR/R1247, BP Company Limited, Exploration and Production Department, 1979.
53. SAFAR, M., Private communication, The British Petroleum Company Ltd, 1980.
54. NEWMAN, P., Geometrical aspects of migration before stack, presented at 40th Mtg of EAEG, Dublin, 1978.
55. GIBSON, B., LARNER, K. L., SOLANKI, J. J. and NG, A. T. Y., Efficient 3D migration in 2 steps, presented at 41st Mtg of EAEG, Hamburg, 1979.
56. COHEN, J. K. and BLEISTEIN, N., Velocity inversion procedure for acoustic waves, Geophysics, 44, pp. 1077-87, 1979.
57. COHEN, J. K. and BLEISTEIN, N., An inverse method for determining small variations in propagation speed, Soc. Ind. Appl. Math., J. Appl. Math., 32, pp. 784-99, 1977.
58. KENNETT, B. L. N., Private communication, Department of Geodesy and Geophysics, University of Cambridge, 1980.
59. TAPPERT, F. D. and HARDIN, R. H., A synopsis of the AESD workshop on acoustic propagation modelling by non-ray tracing techniques, AD-773 741, AESD Tech. Note TN-73-05, 1973.
60. KJARTANSSON, E., The effect of Q on bright spots, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
61. GRAY, S. H., BLEISTEIN, N. and COHEN, J. K., Direct inversion for strongly depth dependent velocity profile, Report MS-R-7902, Department of Mathematics, University of Denver, Denver, Colorado, 1978.
62. RAZ, S., An approximate propagation speed inversion over a prescribed slab, Acoustic imaging, Vol. 9, Plenum Press, New York, in press, 1980.
63. JUDSON, D. R., SCHULTZ, P. S. and SHERWOOD, J. W. C., Equalising the stacking velocities via DEVILISH, presented at 48th Ann. Mtg of SEG, San Francisco, 1978.
64. JUDSON, D. R., LIN, J., SCHULTZ, P. S. and SHERWOOD, J. W. C., Depth migration after stack, Geophysics, 45, pp. 361-75, 1980.
65. RAYLEIGH, J. W. S. (1877), The theory of sound, Sections 107-11, Dover Publications, London, 1945.

APPENDIX 1: DERIVATION OF A 45° WAVE EQUATION

We use the square root approximation S^{(3)} defined in eqn. (3.5) with the following substitution:

    X = \frac{c}{\omega}\,\frac{\partial}{\partial x}

Thus eqn. (3.5) becomes

    S^{(3)} = 1 + \frac{2\,[(c/\omega)\,\partial/\partial x]^{2}}{4 + [(c/\omega)\,\partial/\partial x]^{2}}    (A1.1)

Substituting eqn. (A1.1) into eqn. (3.3) gives

    \frac{\partial P}{\partial z} = \frac{i\omega}{c}\left[1 + \frac{2\,[(c/\omega)\,\partial/\partial x]^{2}}{4 + [(c/\omega)\,\partial/\partial x]^{2}}\right]P

Now transform to a moving coordinate frame by means of the substitution

    P = P'\exp(i\omega z/\bar{c})    (A1.2)

where \bar{c} is a constant velocity. Then

    \frac{\partial P'}{\partial z} = i\left(\frac{\omega}{c} - \frac{\omega}{\bar{c}}\right)P' + \left(\frac{i\,(2c/\omega)\,\partial^{2}/\partial x^{2}}{4 + (c/\omega)^{2}\,\partial^{2}/\partial x^{2}}\right)P'    (A1.3)

This equation may be split (with slight error) into two equations:

    \frac{\partial P'}{\partial z} = i\left(\frac{\omega}{c} - \frac{\omega}{\bar{c}}\right)P'    (A1.4)

and

    \frac{\partial P'}{\partial z} = \left(\frac{i\,(2c/\omega)\,\partial^{2}/\partial x^{2}}{4 + (c/\omega)^{2}\,\partial^{2}/\partial x^{2}}\right)P'    (A1.5)

Rearranging eqn. (A1.5),

    \left[-(i\omega)^{2} + \frac{c^{2}}{4}\,\frac{\partial^{2}}{\partial x^{2}}\right]\frac{\partial P'}{\partial z} = i\omega\,\frac{c}{2}\,\frac{\partial^{2}P'}{\partial x^{2}}    (A1.6)

Equation (A1.4) can be solved directly to give

    P'(x, z+\Delta z, \omega) = P'(x, z, \omega)\exp\left[i\omega\int_{z}^{z+\Delta z}\left(\frac{1}{c} - \frac{1}{\bar{c}}\right)\mathrm{d}z'\right]    (A1.7)

Transforming back into the time domain, eqn. (A1.7) becomes

    P'(x, z+\Delta z, t) = P'(x, z, t+T)    (A1.8a)

where

    T = \int_{z}^{z+\Delta z}\left(\frac{1}{c} - \frac{1}{\bar{c}}\right)\mathrm{d}z'    (A1.8b)

and eqn. (A1.6) becomes

    \frac{\partial^{3}P'}{\partial z\,\partial t^{2}} - \frac{c^{2}}{4}\,\frac{\partial^{3}P'}{\partial x^{2}\,\partial z} + \frac{c}{2}\,\frac{\partial^{3}P'}{\partial x^{2}\,\partial t} = 0    (A1.9)

Equations (A1.8) and (A1.9) can be used to downward continue data. Equation (A1.9) is applied first to move the data through a distance Δz, followed by eqn. (A1.8), which represents a correction for the departure, over the interval Δz, of the velocities from the constant reference velocity \bar{c}. To avoid error, this constant should approximate to the local velocity c(x, z) averaged over x. The reason for the transformation (A1.2) should now be clear: it is used to eliminate large data shifts within each depth step.

These equations can be cast into a form appropriate to the zero offset exploding reflector model by dividing every occurrence of velocity in eqns. (A1.2), (A1.8) and (A1.9) by 2. The relevant equations in this case are

    P'(x, z+\Delta z, t) = P'(x, z, t+T)    (A1.10a)

where

    T = 2\int_{z}^{z+\Delta z}\left(\frac{1}{c} - \frac{1}{\bar{c}}\right)\mathrm{d}z'    (A1.10b)

and

    \frac{\partial^{3}P'}{\partial z\,\partial t^{2}} - \frac{c^{2}}{16}\,\frac{\partial^{3}P'}{\partial x^{2}\,\partial z} + \frac{c}{4}\,\frac{\partial^{3}P'}{\partial x^{2}\,\partial t} = 0    (A1.11)

Again, eqns. (A1.11) and (A1.10) are solved alternately in each depth step; eqn. (A1.10) deals with all the refraction effects and eqn. (A1.11) deals with the diffraction effects.
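For a medium without lateral velocity variation, both split steps can be applied as phase-only operators in the (ω, k_x) domain: eqn. (A1.10) is a pure time shift, and eqn. (A1.11), with ∂²/∂x² → −k_x², gives an all-pass phase. A minimal numpy sketch (function name and discretisation are our own; with lateral velocity variation the diffraction step would instead be an implicit finite difference scheme in x):

```python
import numpy as np

def downward_continue_45deg(P, dz, c, cbar, omega, kx):
    """One depth step of the split scheme of eqns (A1.10)-(A1.11),
    applied in the (omega, kx) domain for a laterally constant
    velocity c over the step (an illustrative simplification).

    P         : wavefield, complex array of shape (n_omega, n_kx)
    dz        : depth step
    c         : local velocity over this step
    cbar      : constant reference velocity of the frame shift (A1.2)
    omega, kx : 1-D arrays of angular frequency (nonzero) and wavenumber
    """
    w = omega[:, None]                     # broadcast over kx
    k = kx[None, :]
    # Thin-lens (refraction) step, eqns (A1.10a, b): a pure time shift.
    T = 2.0 * dz * (1.0 / c - 1.0 / cbar)
    shift = np.exp(1j * w * T)
    # Diffraction step: eqn (A1.5) with velocity c/2 (exploding
    # reflector) and d2/dx2 -> -k**2 gives the all-pass phase rate
    #   dP'/dz = -i (c k^2 / w) / (4 - (c k)^2 / (2 w)^2) P'
    phase = -dz * (c * k ** 2 / w) / (4.0 - (c * k) ** 2 / (2.0 * w) ** 2)
    return P * shift * np.exp(1j * phase)
```

Both factors have unit modulus, so each depth step is an all-pass operator, as a one-way propagator should be; at k_x = 0 only the thin-lens time shift acts.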

APPENDIX 2: DERIVATION OF F-K MIGRATION THEORY IN TWO DIMENSIONS

The development of the equations in this Appendix will be restricted to migration of two-dimensional CMP stacks; an exploding reflector model is appropriate in this case, hence the following wave equation is used:

    \frac{\partial^{2}P}{\partial x^{2}} + \frac{\partial^{2}P}{\partial z^{2}} = \frac{4}{c^{2}}\,\frac{\partial^{2}P}{\partial t^{2}}    (A2.1)

with initial data P(x, 0, t) recorded on the plane z = 0. We define a new coordinate system

    d' = \frac{ct}{2} + z, \qquad x' = x, \qquad z' = z    (A2.2)

and the wavefield P'(x', z', d') = P(x, z, t). In these new coordinates the wave equation becomes

    \frac{\partial^{2}P'}{\partial x'^{2}} + 2\,\frac{\partial^{2}P'}{\partial d'\,\partial z'} + \frac{\partial^{2}P'}{\partial z'^{2}} = 0    (A2.3)

with initial data P'(x', 0, d'). The imaged section will be obtained at time t = 0, and by (A2.2) this will be at the point d' = z, so P'(x', d', d') defines the migrated image. Let \tilde{P} be the double Fourier transform of the wavefield (primes will be dropped hereafter):

    P(x, z, d) = \frac{1}{4\pi^{2}}\iint \tilde{P}(k_{x}, z, k_{d})\exp[i(k_{x}x + k_{d}d)]\,\mathrm{d}k_{x}\,\mathrm{d}k_{d}    (A2.4)

Substituting eqn. (A2.4) into eqn. (A2.3) and treating each component separately, we obtain

    -k_{x}^{2}\tilde{P} + \frac{\mathrm{d}^{2}\tilde{P}}{\mathrm{d}z^{2}} + 2ik_{d}\,\frac{\mathrm{d}\tilde{P}}{\mathrm{d}z} = 0    (A2.5)

This ordinary differential equation has the solution

    \tilde{P}(k_{x}, z, k_{d}) = \tilde{P}(k_{x}, 0, k_{d})\exp\{-i[k_{d} - (k_{d}^{2} - k_{x}^{2})^{1/2}]z\}    (A2.6)

Substitution of eqn. (A2.6) into eqn. (A2.4) and taking z = d gives

    P(x, d, d) = \frac{1}{4\pi^{2}}\iint \tilde{P}(k_{x}, 0, k_{d})\exp\{i[k_{x}x + (k_{d}^{2} - k_{x}^{2})^{1/2}d]\}\,\mathrm{d}k_{x}\,\mathrm{d}k_{d}    (A2.7)

The limits on this integral are infinite and may be split into three regions:

    I = I_{1} + I_{2} + I_{3} = \int_{-\infty}^{\infty}\mathrm{d}k_{x}\left[\int_{k_{d}=|k_{x}|}^{\infty}\mathrm{d}k_{d} + \int_{k_{d}=-\infty}^{-|k_{x}|}\mathrm{d}k_{d} + \int_{k_{d}=-|k_{x}|}^{|k_{x}|}\mathrm{d}k_{d}\right]    (A2.8)

I_{3} vanishes for all but very small depths and can be ignored. I_{1} and I_{2} are handled by a change of variables from k_{d} to k'_{d}, where

    k_{d} = \mathrm{sgn}(k_{d})\,(k_{d}'^{2} + k_{x}^{2})^{1/2} \quad \text{for } |k'_{d}| > 0    (A2.9)

and in this coordinate system eqn. (A2.7) becomes

    P(x, d, d) = \frac{1}{4\pi^{2}}\int\mathrm{d}k_{x}\int\mathrm{d}k'_{d}\,\frac{k'_{d}}{(k_{d}'^{2} + k_{x}^{2})^{1/2}}\,\tilde{P}\bigl(k_{x}, 0, \mathrm{sgn}(k_{d})(k_{d}'^{2} + k_{x}^{2})^{1/2}\bigr)\exp[i(k_{x}x + k'_{d}d)]    (A2.10)

The mapping defined by eqn. (A2.9) is only unambiguous for |k'_{d}| > 0; for k'_{d} = 0 it is convenient in eqn. (A2.10) to take \tilde{P}(k_{x}, 0, k_{x}) as zero.

In practical application of the method a modified directivity term is sometimes used in eqn. (A2.10) to reduce the effect of noise smiles. The modifications might typically lead to a directivity term

    \left[\frac{k'_{d}}{(k_{d}'^{2} + k_{x}^{2})^{1/2}}\right]^{\gamma} \quad\text{where}\quad \begin{cases}\gamma > 1 & \text{attenuates dipping beds and noise smiles}\\ \gamma < 1 & \text{boosts dipping beds}\end{cases}    (A2.11)

For a variable velocity medium, Stolt24 obtains a modified equation in which the velocity dependence resides in a single parameter W, where 0 ≤ W ≤ 1. Pursuing the analysis yields the following relation for the migrated wavefield:

    P(x, d, d) = \frac{1}{4\pi^{2}}\int\mathrm{d}k_{x}\int\mathrm{d}k'_{d}\,A(k_{x}, k'_{d})\exp[i(k_{x}x + k'_{d}d)]    (A2.12)

where

    A(k_{x}, k'_{d}) = \frac{1}{2(W-2)}\left\{(W-1) - \frac{k'_{d}}{[k_{d}'^{2} + (2-W)k_{x}^{2}]^{1/2}}\right\}\tilde{P}\!\left(k_{x},\,0,\,\frac{k'_{d}(W-1) - [k_{d}'^{2} + (2-W)k_{x}^{2}]^{1/2}}{2(W-2)}\right)

Migration in this case involves, as before, a simple shifting and scaling in the Fourier transformed domain.
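For constant velocity, eqns (A2.4)-(A2.10) translate directly into a double FFT, a change of variables with Jacobian weighting, and an inverse FFT. A minimal numpy sketch (our own naming; linear interpolation in k_d, wavenumbers mapped outside the grid are clamped, and the directivity modification of eqn. (A2.11) is omitted):

```python
import numpy as np

def stolt_migrate(section, dx, dd):
    """Constant velocity F-K (Stolt) migration of a zero offset section,
    following eqns (A2.4)-(A2.10). `section` has shape (n_d, n_x) in the
    stretched coordinate d = ct/2; the result approximates the migrated
    image P(x, d, d)."""
    n_d, n_x = section.shape
    S = np.fft.fft2(section)                       # -> (k_d, k_x) spectrum
    kd = 2.0 * np.pi * np.fft.fftfreq(n_d, dd)
    kx = 2.0 * np.pi * np.fft.fftfreq(n_x, dx)
    order = np.argsort(kd)                         # np.interp needs sorted kd
    out = np.zeros_like(S)
    for j, kxj in enumerate(kx):
        kz = kd                                    # output wavenumber k'_d
        root = np.sqrt(kz ** 2 + kxj ** 2)
        kd_src = np.sign(kz) * root                # mapping, eqn. (A2.9)
        # Jacobian k'_d / (k'_d^2 + k_x^2)^(1/2); zero at k'_d = 0,
        # consistent with taking P(k_x, 0, k_x) as zero there.
        jac = np.divide(np.abs(kz), root,
                        out=np.zeros_like(root), where=root > 0.0)
        # interpolate the input spectrum at the mapped wavenumbers
        re = np.interp(kd_src, kd[order], S[order, j].real)
        im = np.interp(kd_src, kd[order], S[order, j].imag)
        out[:, j] = jac * (re + 1j * im)
    return np.fft.ifft2(out).real
```

For a flat reflector (no k_x content) the mapping is the identity and, apart from the zeroed DC component, the section is returned unchanged, as it should be.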

APPENDIX 3: DERIVATION OF THE SCANWIDTH IN A CONSTANT OFFSET DIFFRACTION STACK

Consider the construction shown in Fig. A3.1. A ray leaves the shot point S and, after striking the reflector at O, is specularly reflected and received back at the surface at G. In the constant offset framework the CMP position is at M. The distance x_m, given by AM on the figure, is the required minimum scan size for the diffraction stack operation. By geometry

    x_{m} = z_{0}\tan(\alpha_{r} - \beta) + h    (A3.1)

and

    \frac{\mathrm{OS}}{\sin[90^{\circ} - (\alpha_{r} + \beta)]} = \frac{2h}{\sin 2\beta}    (A3.2)

FIG. A3.1. Geometry for a dipping plane.

Now

    \mathrm{OS} = \frac{z_{0}}{\cos(\alpha_{r} - \beta)}    (A3.3)

Substituting eqn. (A3.3) into eqn. (A3.2) gives

    \frac{z_{0}}{\cos(\alpha_{r} - \beta)\cos(\alpha_{r} + \beta)} = \frac{2h}{\sin 2\beta}    (A3.4)

Making the further substitution b = \tan\beta in eqn. (A3.4) and solving the resulting quadratic equation yields

    b = \frac{-ct_{0} + [(ct_{0})^{2} + 4h^{2}(1 - \cos^{2}2\alpha_{r})]^{1/2}}{2h(1 - \cos 2\alpha_{r})}    (A3.5)
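The scanwidth can also be evaluated numerically; in the sketch below (function name and tolerances are our own) the specular angle β is obtained from eqn. (A3.4) by bisection, as an illustrative alternative to the closed form (A3.5):

```python
import math

def scanwidth(z0, h, alpha_r):
    """Minimum scan distance x_m of eqn (A3.1) for a plane reflector
    dipping at alpha_r (radians), with depth z0 and half-offset h.
    The specular angle beta solves eqn (A3.4), rewritten as
        z0*sin(2*beta) = 2*h*cos(alpha_r - beta)*cos(alpha_r + beta).
    """
    def residual(beta):
        return (z0 * math.sin(2.0 * beta)
                - 2.0 * h * math.cos(alpha_r - beta) * math.cos(alpha_r + beta))
    # residual(0) <= 0 and residual(pi/2) >= 0, so bisection brackets beta
    lo, hi = 0.0, math.pi / 2.0 - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return z0 * math.tan(alpha_r - beta) + h      # eqn. (A3.1)
```

At zero offset (h = 0) the specular angle vanishes and x_m reduces to z0 tan(alpha_r), the familiar migration aperture for a dipping reflector.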


INDEX

Active seismic methods, 142-5
All-pass filter, 94
All-pass systems, 88, 94
Autocorrelation
  coefficient, 85, 99, 105
  function, 81, 92
Autocorrelogram, 39
Automatic gain control, 45

Ball bearing model, 157
Bessel function, 131
Bipole-dipole method, 124, 128, 129
Black Rock Desert, 140
Boundary conditions, 172
Broad dip band stack, 209

Canonical representation, 88
CDP, 3, 21-2, 28, 30, 35, 49, 51, 70
Common midpoint stacked sections, migration of, 169-72
Complete common midpoint sections, 153
Contoured datum, 21
Convolutional model, 55, 82-5, 90, 91, 103
Crooked line
  processing, 23, 49-52
  shooting techniques, 2
Cross-correlations, 21-3, 40, 41, 47, 48, 126
Cross-dip
  analyses, 51-2
  effect, 35
Curie point method, 111, 113
Curie temperature, 135-7

Datum choice, 19-21
Datumming method, 159, 188, 193-5
Deconvolution, 40, 65-9, 77-106
  problem of, 86
  purpose of, 82
Deconvolutional model, 89
Deconvolutional operator, 92-3
DEVILISH process, 211-14
Diffraction
  effects, 156
  stack migration, 153, 187-204
Dipole-dipole technique, 114, 124, 128, 129
Dipole mapping surveys, 114
Dirac spinor theory, 166
Direct current resistivity method, 128-30
Dirichlet conditions, 173
Diversity stacking, 45-7
Down-going waves, 157-9
Downward continuation, 159

Earth model, 156-7
Elastic constants, 154
Electrical resistivity surveys, 114, 122-34
Electromagnetic soundings, 114
Elevation changes, 19
Epicentre location, 138
Exploding reflector model, 161-3

Fault plane solutions, 139
Feedback system, 86, 89
Filter curve, 17-18
Finite difference
  approximations, 166-9
  migration, 164-78
    before stack, 173
First break plots, 12-14
F-K migration, 153, 179-87, 216, 227
Floating-point recording, 44
45° approximation, 166
45° wave equation, 225-7
Fourier
  domain, 56
  equation, 120, 122
  migration methods, 179-83
  techniques, 180
  transforms, 56, 180, 204
Frequency analyses of traffic noise, 48

Gain history, 47
Gaussian distribution, 62
Gelfand-Levitan integral equation, 209
Geochemical surveys, 114
Geochemical thermometers, 116-20
Geophone gathers, 178
Geothermal energy, 107-49
  development, 107
  evaluation programme, 110
  exploration programme, 110
  locations, 109
Geothermal field, 108
Geothermal reservoirs, 108, 111, 115
Geothermal system, 108, 109, 111
Geysers Steam Field, 130
Green's function, 132

Hankel transform integral, 131-2
Heat flow holes, 114, 115
Huber criterion, 63
Hubral correction, 221
Huygens's principle, 159
Hybrid methods, 206-8

Imaging conditions, 159-61
Impedance tensor, 126
'Inverse operator', 92
Inverse system, 89
Inversion process, 131-4

Kirchhoff
  integral, 189-93
  migration, 153
  summation, 212
Klauder wavelet, 40-1

l1 deconvolution, 65-70
l1 norm, 53-76
l2 deconvolution, 67
l2 norm, 57-9
l∞ norm, 57, 58
Laplace distribution, 64
Linear models, 54-7
Loewenthal model, 161, 163
Log gain of overall system, 87
lp norms, 57, 58
LVL surveys, 4-7, 14
  picking and computation, 5-7
  recording methods, 4-5

Magnetic surveys, 135-7
Magnetotelluric method, 111, 112, 124-7
'Median stack', 60-1
Median stacking, 69
Microseismic surveys, 115
Migration, 151-230
  before stack, 186-7
  common midpoint stacked sections, of, 169-72
  constant offset sections, of, 195-8
  diffraction stack, 187-204
  finite difference, 164-78
  F-K, 153, 179-87, 216, 227
  fundamental concepts, 155-63
  introduction, 153-5
  miscellaneous techniques, 204-15
  overview of techniques, 215-22
  shot and geophone gathers, of, 178
  strengths and weaknesses of main methods, 221
  time, 156
  variable velocity, 193-5
Minimum-delay system, 86-91
Model
  errors, 57
  norms, 57
Multiple events, 79, 80

National Coal Board, 15
Negative feedback
  loop, 85
  system, 89
Newman filter, 190, 191, 197
Nimbus 12-trace summer, 4
NMO, 20-3, 210
Noise
  reduction techniques, 41-52
  rejection, 42-4
  suppression, 44-7
Non-minimum-delay system, 88, 89
Non-trivial all-pass system, 88
'Normal' distribution, 62

O'Doherty-Anstey reinforcing multiple paths, 105
One-way wave equations, 164-6
Optimal case, 80-2

Passive seismic methods, 137-42
Phase shift method, 185-6
Physical first-order reverberation paths, 99
Plus-minus method, 11
Poisson's ratio, 115, 140, 141
Predictive deconvolution, 77-106
Primary events, 78, 79, 82, 92
Probability density function, 63
P-wave delay method, 111, 113, 142

Recursive Kirchhoff migration, 193
Reference energy level, 47
Reflection
  coefficient, 77, 80-3, 84, 90-2, 96, 97, 103-5
  seismogram, 80-3, 89-91, 102, 103
  z-transform, 101, 102
Reflectivity function, 68, 80-6, 89-91, 94, 99, 100-4
Refraction
  geophones, 7
  interpretation method, 11
Refractor velocity, 5
Residual statics
  calculation, 64
  shot point, 18-19
Residual weathering corrections, 15-18
Reverberation, 85, 99, 100
Richter M factor, 137
Ripple effect, 35
Robustness, 59-62

Sand dunes, 10
Scalar wave equation, 154
Scanwidth in constant offset diffraction stack, 229-30
Schlumberger soundings, 114, 124, 128, 143, 145
Section multiple train, 102
Section multiple waveform, 98
Sedimentary layers, 80
Seismic data processing, 53-76
Seismic wavelet, 56, 68, 71
Self-potential method, 134-5
Shear wave shadow studies, 113
Shot point residual statics, 18-19
Shots, 178
Signal-to-noise ratio, 23, 40, 44, 45, 47, 49
Silica gel content, 117
Slant stack migration, 174-8
Sloping datum, 21
Snell's law parameter, 176
Sodium-potassium geothermometer, 118
Source wavelet, 93-5
Sparse spike processing, 68-74
Specification errors, 92
Spectral analysis, 126
Splitting
  matrices, 165
  techniques, 198-204
Stack enhancement techniques, 205, 209-15
Stacking, 79
Stanford Exploration Project, 153
Static corrections, 1-36
  automatic residual, 21-2, 28
  datum choice, 19-21
  high-resolution, 14
  large, 19-21
  problems in, 28
  production records, from, 10-14
  purpose of, 2-4
  shot point, 18-19
Statistical errors, 61-4, 92
Stolt's theory in 2D, 183-5
Structure element, 22
Subsurface
  scatter of points, 28
  temperature surveys, 120-2
Surface
  consistency, 22-3
  stations, 28
Surface-consistent automatic static program, 35
Sweep
  bandwidth, 48
  length effect, 47-9

Thermal gradient surveys, 120-2
θ parameter, 168
3D migration, 156, 198-204
Time
  differences, 17
  domain electromagnetic sounding method, 124, 130-1
  migration, 156
Time-variant equalisation, 45
Traffic noise, frequency analyses of, 48
Transmission
  coefficient, 78, 96-8, 100, 102
  factors, 99, 105
  function, 102
  losses, 79-82
  z-transform, 101
Trivial all-pass system, 88
2D, Stolt's theory in, 183-5
2D migration, 156
Two-layer weathering, 6

Up-going waves, 157-9
Up-hole method, 4, 7-9, 10

Velocity
  determination for particular surface conditions, 10
  functions, 216
  inversion procedures, 208-9
  model, 217
Vibroseis, 6-7, 13, 14, 37-52, 142-3

Wadati diagram, 138
Wave equation, 154, 159, 164
  migration, 153
Weathered layer, 2-4
Weathering
  corrections, 19
  depths, 5, 7, 28
  problems, 10
Well log, 68, 71

z-transform, 101