Statistical properties of aftershock rate decay: Implications for the assessment of continuing activity


Acta Geophysica vol. 59, no. 4, Aug. 2011, pp. 748-769

    DOI: 10.2478/s11600-011-0016-2

© 2011 Institute of Geophysics, Polish Academy of Sciences

    Statistical Properties of Aftershock Rate Decay: Implications for the Assessment

    of Continuing Activity

    Aggeliki ADAMAKI1, Eleftheria E. PAPADIMITRIOU1, George M. TSAKLIDIS2, and Vassilios KARAKOSTAS1

    1Geophysics Department, Aristotle University of Thessaloniki, Thessaloniki, Greece e-mails: aadama@geo.auth.gr, ritsa@geo.auth.gr, vkarak@geo.auth.gr

    2Department of Statistics and Operational Research, Aristotle University of Thessaloniki, Thessaloniki, Greece

    e-mail: tsaklidi@math.auth.gr

Abstract

Aftershock rates seem to follow a power law decay, but the assessment of the aftershock frequency immediately after an earthquake, as well as during the evolution of a seismic excitation, remains essential for imminent seismic hazard. The purpose of this work is to study the temporal distribution of triggered earthquakes on short time scales following a strong event, and a multiple seismic sequence was chosen for this purpose. Statistical models are applied to the 1981 Corinth Gulf sequence, comprising three strong (M = 6.7, M = 6.5, and M = 6.3) events between 24 February and 4 March. The non-homogeneous Poisson process outperforms the simple Poisson process in modeling the aftershock sequence, whereas the Weibull process is more appropriate for capturing the features of the short-term behavior, but not for describing the seismicity in the long term. The aftershock data define a smooth curve of declining rate, and a long-tail theoretical model fits the data better than a rapidly declining exponential function, as supported by the quantitative results derived from the survival function. An autoregressive model is also applied to the seismic sequence, shedding more light on the stationarity of the time series.

    Key words: aftershock rate changes, decay forecasting, Greece.


1. INTRODUCTION

Given the threat posed by aftershocks in an area already hit by a strong main shock, it is of paramount importance for seismic hazard assessment to model the expected occurrence rate of aftershock sequences. The strongest aftershocks often occur late after the main event, and for this reason the time window selected in the current paper is long compared with what is usually defined as the aftershock period, that is, a period of several weeks or months. It is commonly accepted that aftershock activity, meaning activity significantly higher than before the main shock occurrence, lasts for years and even decades, and that an aftershock produced at any time may be large (Lomnitz 1966).

Several researchers focus on the application of probability theory to seismic sequences in order to study the temporal and spatial distribution of induced seismicity following a strong earthquake, starting from the well-known Omori law for aftershocks describing the power-law decay of seismicity after an earthquake. A useful tool for the statistical analysis of this kind of data is a point process, which can be used to model random events in time. Several point-process models have been proposed (Vere-Jones 1992, Ogata 1999, among others) in which each point represents the time and place of an event, and several attempts have been made to assess the seismicity rate changes caused by a strong event. A simple approach to the phenomenon is to consider a homogeneous process, taking into account the previous seismicity (Toda et al. 1998, 2002), which underestimates the rate changes (Marsan 2003, Felzer et al. 2003). In these cases, the homogeneous process is not directly applicable and data heterogeneity is removed using declustering techniques (Matthews and Reasenberg 1988, Kilb et al. 2000, Gomberg et al. 2001, Wyss and Wiemer 2000). On the other hand, aftershocks are a considerable part of a seismic sequence, carrying important information about changes in the seismicity rate, and for this reason significant research has been devoted to modeling aftershock occurrence and its impact on regional seismicity.

A homogeneous Poisson process was tested by Wyss and Toya (2000) as the point process that can describe earthquake occurrence characterized by a constant rate on the time scale of decades. The authors did not use any declustering techniques, although some of these methods yield a homogeneous Poisson process (Gardner and Knopoff 1974, Reasenberg 1985). They also used the chi-square test, splitting the data into groups of a fixed number of events. The result was to reject the null hypothesis of a homogeneous Poisson process in only a small percentage of the cases.

    Marsan (2003) tested three seismic sequences to assess rate changes caused by a strong earthquake. He observed that over a period of 100 days


after the main shock it was not easy to observe the phenomenon of seismic quiescence. Of course, on long time scales it is even more difficult to properly assess whether the rate changes are due to a previous strong event or not. Firstly, a Poisson process with rate λ(t) is introduced, which can be either constant (homogeneous Poisson process) or given as a function of time (non-homogeneous Poisson process). In this work a power-law model (Utsu 1970) was also tested, which is a generalization of the model of Ogata and Shimazaki (1984) and Woessner et al. (2004). Finally, an autoregressive model was applied, which led to a systematic overestimation of future values of the seismicity rate during the aftershock sequences. Marsan and Nalbant (2005) analyzed the steps of the method that has to be followed in order to estimate the parameters of several models, and then correlated the results with the observed changes in Coulomb stress.

Ogata (2005a, b) also investigated a point process model (Daley and Vere-Jones 2003) which introduces the seismicity rate as a function of the form λ_θ(t) (where θ stands for the characterizing parameters of the rate),

Fig. 1. Main seismotectonic properties of the Aegean and the surrounding area: NAF (North Anatolian Fault), NAT (North Aegean Trough), CTF (Cephalonia Transform Fault), and RTF (Rhodes Transform Fault). The rectangle indicates the study area. A colour version of this figure is available in the electronic edition only.


depending on time, t. The rate λ_θ(t) depends not only on the elapsed time (as, for example, in the modified Omori law) but also on the occurrence times and sizes of previous events. In order to test the fitting quality of the model for each seismic sequence, the cumulative number of earthquakes is compared to the theoretical values estimated from the models.

The purpose of this article is to use earthquake statistics to investigate whether part of the data comprising a seismic sequence can be used to evaluate the temporal evolution of the imminent seismic activity. To perform this investigation, it is necessary to seek the appropriate statistical tools. Our objective is to capture the main features of an aftershock sequence through a number of appropriate approaches.

Considering seismic sequences as stochastic point processes, three statistical models were chosen and applied to a multiplet that occurred in 1981 in the Corinth Gulf (shown by a rectangle in Fig. 1), a rapidly extending half-graben which has experienced several destructive earthquakes both in historical times and in recent decades. As in most relevant investigations, the spatial distribution of aftershocks was considered stationary in time, and changes in the rate at which events occur are explored, not where they occur. After defining the model parameters, tests were performed in order to show whether they are supported by the data.

2. METHODOLOGY

Estimation of the seismicity rate changes caused by a strong earthquake is based upon the assumption that earthquake occurrence can be described by stochastic processes, e.g., a Poisson point process. A Poisson distribution is a discrete probability distribution expressing the probability of a certain number of events occurring during a specific time interval, when the average occurrence rate is known and these events are independent of the time since the last event. Knowing that the expected number of occurrences within the specific time interval is equal to λ, the probability of having exactly k occurrences is given by

f(k) = \frac{\lambda^k e^{-\lambda}}{k!} ,  (1)

where k ∈ ℕ gives the number of occurrences, and λ > 0, λ ∈ ℝ, represents the expected number of observations within the given time period. Equation (1) represents the probability mass function of a Poisson distribution. The cumulative distribution function is given by

F(k) = e^{-\lambda} \sum_{i=0}^{k} \frac{\lambda^i}{i!} .  (2)
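Equations (1) and (2) can be evaluated directly with standard-library functions; a minimal sketch (the function names and the example rate are ours, not the paper's):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k occurrences in the interval, eq. (1)."""
    return lam ** k * exp(-lam) / factorial(k)

def poisson_cdf(k: int, lam: float) -> float:
    """Probability of at most k occurrences, eq. (2)."""
    return exp(-lam) * sum(lam ** i / factorial(i) for i in range(k + 1))

# With an expected lam = 3 aftershocks per day, the chance of a quiet day:
p_quiet = poisson_pmf(0, 3.0)   # e^(-3), about 0.05
```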


A Poisson distribution is used to describe a phenomenon that can be assumed to be a Poisson process. A discrete process {N(t), t ≥ 0}, where N(t) ∈ ℕ, is called a Poisson process with rate λ, λ > 0, if:

- N(0) = 0;
- the number of occurrences within each time interval is independent of the number of occurrences within any other disjoint time interval;
- the number of events occurring within any time interval of length t follows the Poisson distribution with mean value λt.

Assuming times s and s + t, we get

P\{N(t + s) - N(s) = n\} = e^{-\lambda t} \frac{(\lambda t)^n}{n!} ,  n ≥ 0 .  (3)

If the rate parameter is time-dependent, λ(t), then the expected number of events of the Poisson process in the time interval [t, t + τ] is given by

\Lambda[t, t + \tau] = \int_{t}^{t+\tau} \lambda(s)\, \mathrm{d}s .  (4)

The Poisson process can be defined by the seismicity rate λ(t) arising from the time period under study and the observed earthquake times t_i, i.e., the realizations of this Poisson process.
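For a general λ(t) the integral in eq. (4) may lack a closed form; it can be approximated numerically. A sketch using the trapezoidal rule (function and parameter names are illustrative):

```python
def expected_count(rate, t, tau, steps=10_000):
    """Approximate eq. (4): integrate rate(s) over [t, t + tau] (trapezoidal rule)."""
    h = tau / steps
    total = 0.5 * (rate(t) + rate(t + tau))
    for i in range(1, steps):
        total += rate(t + i * h)
    return total * h

# Sanity check: for a constant rate the expected count reduces to lam * tau.
const = expected_count(lambda s: 2.5, 0.0, 4.0)   # close to 10.0
```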

It is known that the waiting times (inter-arrival times) of a Poisson process are exponentially distributed. The probability density function of the exponential distribution is given by

f(x) = \lambda e^{-\lambda x} for x ≥ 0, and f(x) = 0 for x < 0 .  (5)
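This property gives a direct way to simulate a homogeneous Poisson process: cumulate exponential waiting times until the observation horizon is exceeded. A sketch (the seed and parameter values are arbitrary illustrations):

```python
import random

def simulate_poisson_times(lam: float, horizon: float, seed: int = 42):
    """Event times on [0, horizon]: cumulative sums of Exp(lam) waiting times."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential waiting time, mean 1/lam
        if t > horizon:
            return times
        times.append(t)

events = simulate_poisson_times(lam=5.0, horizon=100.0)
# len(events) should be near lam * horizon = 500
```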


When the rate λ(t) varies with time, the process is a non-homogeneous Poisson process. In this case, the number of earthquakes in any interval is time-dependent, and the mean rate in an interval [t, t + Δt] is given by eq. (4). The probability that there are exactly n occurrences within this interval is then given by

P[N(t + \Delta t) - N(t) = n] = e^{-\Lambda[t, t+\Delta t]} \frac{\Lambda[t, t+\Delta t]^n}{n!} ,  n = 0, 1, 2, \ldots .  (7)

Two functions expressing the rate λ(t) are tested in the present study, namely

\lambda(t) = e^{a + bt}  (8)

and

\lambda(t) = a^{-b} b t^{b-1} ,  (9)

both selected because they allow the rate to decay as time passes. More specifically, the rate function λ(t) given by eq. (8) decays rapidly to zero, while λ(t) given by eq. (9) may decay smoothly to 0 (by a suitable choice of the parameters).

In order to apply these models to the data of a seismic sequence, a specific region must be selected, with dimensions a few times larger than the main rupture length. Having determined the study area, the threshold magnitude must be defined to ensure the completeness of the data set. The next step is to define the duration of the earthquake catalog, which depends on the purpose of the specific study. Naming T0 the time of the main shock occurrence, the catalog spans the intervals [0, T0] and [T0, T], where T is located several days after T0. The lengths of the time intervals between consecutive events occurring at times t_i (t_0 = T0) are computed and their distribution is tested using the chi-square test; in our case the null hypothesis is that the inter-arrival times are exponentially distributed. This is tested separately for consecutive sub-intervals of the periods [0, T0] and [T0, T]. If the null hypothesis is accepted for a tested sub-interval, then the number of occurrences in this sub-interval follows a Poisson distribution with a specific constant rate parameter (homogeneous Poisson process). The data in each sub-interval are divided into groups (cells) based on the lengths of the inter-arrival times, and the expected frequencies are estimated for each group of inter-arrival times. In order to apply the chi-square test, attention is paid to defining the groups such that the expected frequency in each group is greater than 5.
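The chi-square step can be sketched as follows, with bins chosen equiprobable under the exponential null hypothesis so each group has the same expected frequency (function and variable names are ours, not the paper's):

```python
from math import log

def chi_square_exponential(waits, lam, n_bins=5):
    """Chi-square statistic for H0: inter-arrival times ~ Exp(lam).
    Bin edges are Exp(lam) quantiles, so each bin has expected count n / n_bins."""
    n = len(waits)
    edges = [-log(1 - j / n_bins) / lam for j in range(n_bins)] + [float("inf")]
    expected = n / n_bins          # keep this > 5, as the test requires
    stat = 0.0
    for lo, hi in zip(edges, edges[1:]):
        observed = sum(1 for w in waits if lo <= w < hi)
        stat += (observed - expected) ** 2 / expected
    return stat
```

The resulting statistic is compared with a chi-square critical value on n_bins − 1 degrees of freedom (e.g., 9.488 at the 5% level for 4 degrees of freedom).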

Especially for the period [T0, T], since the aftershock rate over the first few days after the main shock (which is very high compared with the rate in the previous time interval [0, T0]) is characterized by strong changes, (different) constant rates are assumed over short sub-intervals of [T0, T]. Thus, the


interval [T0, T] is divided into short intervals [T_{i-1}, T_i], i ∈ ℕ+, each one tested for its homogeneity as a Poisson process with a rate parameter λ_i (using the chi-square test). Note that each λ_i is computed taking into consideration the number of events in the respective sub-interval [T_{i-1}, T_i], which is not necessarily equal for all sub-intervals. The set {λ_i, [(T_{i-1} + T_i)/2]}, i ∈ ℕ+, is then used as the input data set to fit a selected curve λ(t) that best describes the evolution of the seismicity rate within the entire period [T0, T]. In the present study a rate function of the form λ(t) = e^{a+bt} is fitted to the data set {λ_i, [(T_{i-1} + T_i)/2]}, i ∈ ℕ+, where the parameters a and b are estimated by the least squares method.
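Taking logarithms turns the fit of λ(t) = e^{a+bt} into ordinary linear least squares on ln λ_i versus the sub-interval midpoints; a sketch under that simplification (the paper does not specify whether the fit is performed in log space):

```python
from math import exp, log

def fit_exponential_rate(midpoints, rates):
    """Least-squares fit of ln(rate) = a + b*t, i.e., rate(t) = e^(a + b*t)."""
    ys = [log(r) for r in rates]
    n = len(midpoints)
    mx, my = sum(midpoints) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(midpoints, ys))
         / sum((x - mx) ** 2 for x in midpoints))
    a = my - b * mx
    return a, b

# Synthetic decaying rates at five sub-interval midpoints:
ts = [0.5, 1.5, 2.5, 3.5, 4.5]
lams = [exp(3.0 - 0.5 * t) for t in ts]
a, b = fit_exponential_rate(ts, lams)   # recovers a close to 3.0, b close to -0.5
```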

Finally, the non-homogeneous Poisson process is assumed to have a rate function of the form λ(t) = a^{-b} b t^{b-1}; the parameters a and b are estimated by the maximum likelihood method using the time of occurrence, t_i, of each earthquake. This rate function is increasing or decreasing according to whether b > 1 or b < 1, respectively. Based on the form of λ(t), the aforementioned process is called a process with a Weibull hazard rate, or simply a Weibull process. The special case b = 1 leads to a constant hazard rate, which corresponds to the exponential case characterized by the memoryless property. The maximum likelihood estimators of a and b are given by

\hat{a} = \frac{t_n}{n^{1/\hat{b}}} , \qquad \hat{b} = \frac{n}{\sum_{i=1}^{n} \ln(t_n / t_i)} ,  (10)

where n stands for the number of data and t_i denotes the time of occurrence of the i-th earthquake, i = 1, 2, …, n.
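Eq. (10) translates directly into code, noting that the i = n term contributes ln(t_n/t_n) = 0 to the sum; a sketch with illustrative occurrence times:

```python
from math import log

def weibull_process_mle(times):
    """ML estimates (a_hat, b_hat) for the rate lam(t) = a^(-b) * b * t^(b-1), eq. (10).
    `times` are the ordered occurrence times t_1 <= ... <= t_n."""
    n = len(times)
    t_n = times[-1]
    b_hat = n / sum(log(t_n / t_i) for t_i in times)   # last term is zero
    a_hat = t_n / n ** (1.0 / b_hat)
    return a_hat, b_hat

# A decelerating sequence (inter-event times doubling) gives b_hat < 1,
# i.e., a decaying rate:
a_hat, b_hat = weibull_process_mle([1.0, 2.0, 4.0, 8.0, 16.0])
```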

One more way to model the sequence of earthquakes is to consider a random variable Z(t) representing the number of earthquake occurrences at any time t. The set {Z(t)} constitutes a time series, i.e., a family of random variables. The model fitted to the data is the autoregressive model of order p, p ∈ ℕ+, abbreviated AR(p). The AR(p) model assumes that

Z_t = \sum_{i=1}^{p} \varphi_i Z_{t-i} + a_t ,  (11)

where φ_i are the unknown parameters and a_t is a white noise term with a…
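For p = 1 the single parameter φ_1 can be estimated by least squares; a minimal sketch on a noise-free synthetic series (illustrative only, not the paper's estimation procedure):

```python
def fit_ar1(z):
    """Least-squares estimate of phi in Z_t = phi * Z_{t-1} + a_t  (AR(1), eq. 11)."""
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return num / den

# A noise-free AR(1) series with phi = 0.8 is recovered up to rounding:
z = [1.0]
for _ in range(50):
    z.append(0.8 * z[-1])
phi = fit_ar1(z)   # about 0.8
```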
