
Ingeniería y Ciencia
ISSN: 1794-9165 | ISSN-e: 2256-4314
ing. cienc., vol. 11, no. 21, pp. 221–238, enero–junio 2015.
http://www.eafit.edu.co/ingciencia
This article is licensed under a Creative Commons Attribution 4.0 license.

Seismic Data Compression Using 2D Lifting-Wavelet Algorithms

    Carlos Fajardo1, Oscar M. Reyes2 and Ana Ramirez3

Received: 06-10-2014 | Accepted: 02-12-2014 | Online: 30-01-2015

MSC: 68P30, PACS: 93.85.Rt

    doi:10.17230/ingciencia.11.21.11

    Abstract

Different seismic data compression algorithms have been developed in order to make the storage more efficient, and to reduce both the transmission time and cost. In general, those algorithms have three stages: transformation, quantization and coding. The wavelet transform is widely used to compress seismic data, due to the ability of wavelets to represent geophysical events in seismic data. We selected the lifting scheme to implement the wavelet transform because it reduces both computational and storage resources. This work aims to determine how the transformation and the coding stages affect the data compression ratio. Several 2D lifting-based algorithms were implemented to compress three different seismic data sets. Experimental results obtained for different filter types, filter lengths, numbers of decomposition levels and coding schemes are presented in this work.

    Key words: seismic data compression; wavelet transform; lifting

1 Universidad Industrial de Santander, Bucaramanga, Colombia, cafajar@uis.edu.co.
2 Universidad Industrial de Santander, Bucaramanga, Colombia, omreyes@uis.edu.co.
3 Universidad Industrial de Santander, Bucaramanga, Colombia, anaberam@uis.edu.co.


Resumen (translated from Spanish)

Different algorithms for seismic data compression have been developed with the goal of making the use of storage capacity more efficient and of reducing the time and cost of data transmission. In general, these algorithms have three stages: transformation, quantization and coding. The wavelet transform has been widely used to compress seismic data due to the ability of wavelets to represent the geophysical events present in seismic data. In this work, the lifting scheme is used to implement the wavelet transform, since this method reduces the required computational and storage resources. This work studies the influence of the transformation and coding stages on the data compression ratio. In addition, the results of implementing different 2D lifting schemes for the compression of three different seismic data sets are shown. Results obtained for different filter types, filter lengths, numbers of decomposition levels and coding schemes are presented.

Key words: seismic data compression; wavelet transform; lifting

    1 Introduction

The main strategy used by companies during the oil and gas exploration process is the construction of subsurface images, which are used both to identify the reservoirs and to plan the hydrocarbon extraction. The construction of those images starts with a seismic survey that generates a huge amount of seismic data. The acquired data is then transmitted to the processing center, where it is processed to generate the subsurface image.

A typical seismic survey can produce hundreds of terabytes of data. Compression algorithms are then desirable to make the storage more efficient, and to reduce time and costs related to network and satellite transmission. Seismic data compression algorithms usually require three stages: first a transformation, then a uniform or quasi-uniform quantization, and finally a coding scheme. Decompression algorithms, on the other hand, perform the inverse of each of these steps. Figure 1 shows the compression and decompression schemes.

Figure 1: Data compression algorithm. The input Xn passes through the Transform, Quantization and Coding stages; decompression reverses them to recover an approximation of Xn.

The transformation stage decorrelates the seismic data, i.e., the representation in terms of its coefficients (in the transformed domain) must be more compact than the original data. The wavelet transform offers well-recognized advantages over other conventional transforms (e.g., cosine and Fourier) because of its multi-resolution analysis with localization both in time and frequency. Hence, this family of transformations has been widely used for seismic data compression [1],[2],[3],[4].
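To make the decorrelation idea concrete, the following sketch applies a 2D wavelet decomposition to a synthetic gather and measures how few coefficients carry most of the energy. It assumes Python with NumPy and the PyWavelets library, neither of which is used in the paper, and the 'db4' filter merely stands in for the lifting filters studied here:

    import numpy as np
    import pywt  # PyWavelets; an assumption of this illustration

    # Synthetic stand-in for a shot gather: time samples (rows) x traces (columns)
    rng = np.random.default_rng(0)
    data = np.cumsum(rng.standard_normal((1024, 120)), axis=0)  # smooth random walks

    # 2D wavelet decomposition: 3 levels, Daubechies-4 filter
    coeffs = pywt.wavedec2(data, wavelet='db4', level=3)
    flat, _ = pywt.coeffs_to_array(coeffs)

    # How many coefficients hold 99% of the energy? A rough decorrelation measure.
    energy = np.sort(np.abs(flat).ravel())[::-1] ** 2
    k = np.searchsorted(np.cumsum(energy), 0.99 * energy.sum())
    print(f"99% of the energy lies in {k / flat.size:.1%} of the coefficients")

The more compact the transformed representation, the smaller that fraction, and the more aggressively the remaining coefficients can be quantized and coded.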

There is no compression after the transformation stage, since the number of coefficients is the same as the number of samples in the original data. In lossy compression algorithms, the stage that follows the transformation is an approximation of the floating-point transform coefficients by a set of integers. This stage is called quantization [5]. As will be seen in Section 4, a uniform quantizer is generally used in seismic compression [2],[6],[7],[3].
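A minimal sketch of such a uniform quantizer (Python/NumPy; the step size is an illustrative choice, not a value from the paper):

    import numpy as np

    def uniform_quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
        """Map floating-point transform coefficients to integers (lossy)."""
        return np.round(coeffs / step).astype(np.int32)

    def uniform_dequantize(q: np.ndarray, step: float) -> np.ndarray:
        """Approximate reconstruction of the coefficients."""
        return q.astype(np.float64) * step

    coeffs = np.array([0.12, -3.47, 15.8, 0.02])
    q = uniform_quantize(coeffs, step=0.5)      # -> [0, -7, 32, 0]
    print(q, uniform_dequantize(q, step=0.5))   # error per coefficient <= step/2

The reconstruction error of each coefficient is bounded by half the step size, which is what ties the choice of step size to the achievable SNR.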

Finally, a coding scheme is applied to the quantized coefficients. The coding scheme is a lossless compression strategy: it seeks a data representation with a small number of bits. In seismic data compression, the most used coding schemes have been Huffman [8],[9],[10],[11] and arithmetic coding [3],[12],[13],[7]. Sometimes run-length encoding (RLE) [14] has been used prior to the entropy coding stage [8],[11],[7].
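As an illustration of the RLE idea (Python; a generic sketch, not the exact codec of the cited works), note how the long zero runs produced by quantization collapse into a few pairs:

    from itertools import groupby

    def rle_encode(symbols):
        """Collapse runs of repeated symbols into (symbol, count) pairs."""
        return [(s, len(list(g))) for s, g in groupby(symbols)]

    def rle_decode(pairs):
        return [s for s, n in pairs for _ in range(n)]

    q = [0, 0, 0, 0, -7, 32, 0, 0, 0]
    pairs = rle_encode(q)          # [(0, 4), (-7, 1), (32, 1), (0, 3)]
    assert rle_decode(pairs) == q  # lossless; entropy coding would follow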

Different methods have been proposed to compress seismic data. The goal has been to achieve higher compression ratios at a quality above 40 dB in terms of the signal-to-noise ratio (SNR). Since each of those algorithms uses different data sets, the comparison among them becomes difficult. This paper presents a comparative study for a specific set of seismic data compression algorithms.

We analyzed wavelet-based algorithms using the lifting scheme, followed by a uniform quantization stage and then by a coding scheme. We selected the lifting scheme since it requires less computation and storage resources than the traditional discrete wavelet transform (DWT) [15].
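To make the lifting structure concrete, here is a one-level 1D Haar lifting step (Python/NumPy; an illustrative sketch of the split/predict/update pattern, whereas the paper evaluates longer filters applied in 2D):

    import numpy as np

    def haar_lifting_forward(x: np.ndarray):
        """One lifting level on an even-length signal: split, predict, update."""
        s, d = x[0::2].astype(float), x[1::2].astype(float)
        d = d - s          # predict: detail = odd - even
        s = s + d / 2      # update: s becomes the pair average
        return s, d

    def haar_lifting_inverse(s: np.ndarray, d: np.ndarray) -> np.ndarray:
        even = s - d / 2   # undo update
        odd = d + even     # undo predict
        x = np.empty(even.size + odd.size)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0])
    s, d = haar_lifting_forward(x)      # s = [5, 11, 14], d = [2, 2, 0]
    assert np.allclose(haar_lifting_inverse(s, d), x)  # perfect reconstruction

Because each step only adds a filtered version of one half of the samples to the other half, the transform can run in place, which is the source of the computational and storage savings mentioned above.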

In this comparative study, we examined how the variation of some parameters affects the compression ratio for the same level of SNR. The parameters of interest in this paper are the type and length of the wavelet filter, the number of levels in the wavelet decomposition, and the type of coding scheme used. Our results are presented as plots of compression ratio (CR) versus SNR. The implemented algorithms were tested on three different seismic datasets from Oz Yilmaz's book [16], which can be downloaded from the Center for Wave Phenomena (ozdata.2, ozdata.10 and ozdata.28, respectively)1. Those datasets are shot gathers, i.e., the form in which seismic data is traditionally stored in the field. Figure 2 shows a section of each dataset.

This article is organized as follows. Section 2 gives a brief overview of data compression. Sections 3 to 5 examine how the compression ratio is affected by each one of the three mentioned stages. Finally, the last section presents the discussion and conclusions of this work.

    2 Data compression

According to Shannon's theorem, a message of n symbols (a1, a2, . . . , an) can be compressed down to nH bits, on average, but no further [17], where H is called the entropy, a quantity that determines the average number of bits needed to represent a data set. The entropy is calculated by Equation (1), where Pi is the probability of occurrence of ai.

1 Available at: http://www.cwp.mines.edu/cwpcodes/data/oz.original/


Figure 2: The three datasets used. Dataset 1 is a shot gather with 120 traces, having 1075 samples each. Dataset 2 is a shot gather with 120 traces, having 1375 samples each. Dataset 3 is a shot gather with 48 traces, having 2600 samples each. (Each panel plots trace number on the horizontal axis against time in ms on the vertical axis.)

H = -\sum_{i=1}^{n} P_i \log_2 P_i \qquad (1)
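A worked example of Equation (1) (Python; the probabilities are illustrative, not taken from the datasets):

    import math

    # An illustrative four-symbol source
    P = [0.5, 0.25, 0.125, 0.125]          # probabilities P_i, summing to 1
    H = -sum(p * math.log2(p) for p in P)
    print(H)  # 1.75 bits/symbol on average; no lossless code can do better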

The entropy is larger when all n probabilities are similar, and it is lower as they become more and more different. This fact is used to define the redundancy (R) in the data. The redundancy is defined as the difference between the number of bits per symbol currently used (Hc) and the minimum number of bits per symbol required to represent the data set (Equation (2)).

R = H_c - \left[ -\sum_{i=1}^{n} P_i \log_2 P_i \right] = H_c + \sum_{i=1}^{n} P_i \log_2 P_i \qquad (2)

In simple words, the redundancy is the amount of wasted (unnecessary) bits in the data representation. There are many strategies for data compression, but they are all based on the same principle: compressing data requires reducing redundancy.
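Continuing the illustrative example above, a fixed-length 2-bit code for those four symbols spends Hc = 2 bits per symbol, so Equation (2) gives the redundancy directly:

    import math

    P = [0.5, 0.25, 0.125, 0.125]           # illustrative distribution from above
    H = -sum(p * math.log2(p) for p in P)   # 1.75 bits/symbol
    Hc = 2.0                                # bits/symbol of a fixed-length code
    R = Hc - H                              # Equation (2): 0.25 wasted bits/symbol
    print(R)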
