

USC-SCEC/CEA Technical Report #1

Milestone 1A

Submitted to California Earthquake Authority

801 K Street, Suite 1000 Sacramento, CA 95814

By the

Southern California Earthquake Center University of Southern California

3651 Trousdale Parkway, ZHS 169 Los Angeles, CA 90089-0742

Principal Investigator: Thomas H. Jordan

Director, SCEC

Project 1 Manager: Edward H. Field United States Geological Survey

Projects 2/3 Manager: Paul Somerville

URS Corporation

February 8, 2006


Summary and Introduction

This report is submitted on behalf of the Working Group on California Earthquake Probabilities (WGCEP). WGCEP is a collaboration of scientists from the Southern California Earthquake Center (SCEC), the United States Geological Survey (USGS), and the California Geological Survey (CGS).

This report contains deliverables under Milestone 1A as follows: Section I. Uniform California Earthquake Rupture Forecast 1.0 (UCERF 1.0). This

report gives a technical description of the Uniform California Earthquake Rupture Forecast (UCERF) 1.0 and a San Andreas Fault (SAF) Assessment developed by the Working Group on California Earthquake Probabilities (WGCEP). Initial versions of these products were formally reviewed by the WGCEP Management Oversight Committee (MOC) and Scientific Review Panel (SRP), and by some members of the California Earthquake Prediction Evaluation Council (CEPEC), at a November 17-18, 2005 meeting at the University of Southern California. One recommendation that came out of that meeting was to add a Poisson component to the southern California ruptures in UCERF 1.0 to make them consistent with northern California ruptures. This modification is reflected in the version presented here. A paper describing UCERF 1.0 has been submitted for publication in the journal Seismological Research Letters. This model will serve as an important prototype for implementing an interface for downstream seismic-hazard and loss calculations.

Section II. San Andreas Fault Assessment. The San Andreas Fault (SAF) Assessment

is composed of two components: 1) A Synoptic View of Southern SAF Paleoseismic Constraints; and 2) A Framework for Developing a Time-Dependent Earthquake Rupture Forecast. The first represents a comprehensive evaluation of what existing paleoseismic data on the southern San Andreas imply about earthquake ruptures on the fault. This study has raised important questions, including the degree to which faults are segmented (a key assumption in previous WGCEPs and in UCERF 1.0). The second element represents a proposed, generalized framework for UCERF 2.0, specifically designed with the following innovations in mind: a) the relaxation of fault segmentation as a model option; b) the inclusion of fault-to-fault rupture jumps (observed in recent earthquakes but not generally accommodated by previous WGCEPs or in UCERF 1.0); and c) the accommodation of alternative time-dependent earthquake probability models (because different viable approaches exist). In addition to a discussion of the SAF problem, the SAF assessment provides a preliminary framework that will allow a consistent treatment of all faults throughout the state.

Other WGCEP elements are available upon request or at http://www.WGCEP.org. These

include the WGCEP project plan, a review of previous WGCEPs, and SHA analysis tools for UCERF 1.0.


Project 1 Oversight

The overall management of this project is under the Principal Investigator, Thomas H. Jordan, the director of the Southern California Earthquake Center. Project 1 is directed by Edward H. Field, a research scientist at the United States Geological Survey in Pasadena, California. Independent committees assist the directors in Project 1.

The Management Oversight Committee (MOC) is in charge of resource allocation and approves project plans, budgets, and schedules. The MOC will endorse the models before submittal of reports. MOC members are Thomas Jordan for SCEC, Rufus Catchings and Jill McCarthy for the USGS, and Michael Reichle for the CGS.

The Executive Committee (EXCOM) is responsible for convening experts, reviewing options, and making decisions on the implementation of the model and supporting databases. The EXCOM will not advocate specific model components, but will make sure that datasets are accommodated to adequately span the range of models. EXCOM members are Edward Field, Thomas Parsons, Mark Petersen, and Ross Stein of the USGS; Ray Weldon of SCEC; and Chris Wills of the CGS.

The Science Review Panel (SRP) is an independent body of experts who will decide whether the WGCEP has considered an adequate range of models, given the forecast duration of interest, and whether the relative weights have been set appropriately in the models. SRP members are William Ellsworth (Chair), Art Frankel, Mike Blanpied, and David Schwartz of the USGS; David Jackson and Jim Dieterich of SCEC; Lloyd Cluff of PG&E; and Allin Cornell of Stanford.


Table of Contents

Summary and Introduction
Section I. Uniform California Earthquake Rupture Forecast (UCERF) 1.0
Section II. San Andreas Fault (SAF) Assessment
    A Synoptic View of S. SAF Paleoseismic Constraints
    A Framework for Developing a Time-Dependent Earthquake Rupture Forecast


Section I

Uniform California Earthquake Rupture Forecast (UCERF) 1.0

A Time-independent and Time-dependent Model for the State of California

Mark D. Petersen, Tianqing Cao, Kenneth W. Campbell, and Arthur D. Frankel
U.S. Geological Survey, California Geological Survey, and EQE International

ABSTRACT

In this paper we compare the 2002 U.S. Geological Survey (USGS) time-independent (Poisson) California hazard calculations with preliminary time-dependent calculations. The time-independent calculations are based on the 2002 USGS national seismic hazard model that contains about 200 fault sources (Frankel et al., 2002). The time-dependent model incorporates current state-of-the-art methods for calculating hazard and will provide a benchmark for comparison with future statewide Working Group on California Earthquake Probabilities (WGCEP) updates and with the Regional Earthquake Likelihood Models (RELM). This model is referred to as the Uniform California Earthquake Rupture Forecast version 1.0 model (UCERF 1.0). The time-dependent model is based on the 2002 USGS time-independent hazard parameters such as long-term earthquake recurrence, fault slip rates, magnitude-frequency distributions, and ground motion attenuation relations. In addition to these fault and ground motion parameters, the time-dependent hazard also requires paleoseismic information to estimate the elapsed time since the last event and the uncertainty in the earthquake recurrence distribution. In this paper we apply information published in the WGCEP 2003 northern California model, a statewide analysis by Cramer et al. (2000), an analysis of the Cascadia subduction zone by Petersen et al. (2002), and paleoseismic results published by Weldon et al. (2004) to estimate these additional parameters. We have updated the time-dependent fault probabilities for the San Andreas, San Gregorio, Hayward, Rodgers Creek, Calaveras, Green Valley, Concord, Greenville, Mount Diablo, San Jacinto, Elsinore, Imperial, and Laguna Salada faults, and the Cascadia subduction zone. Probabilities of earthquake ruptures are calculated for 1, 5, 10, 20, 30, and 50 year intervals beginning in the year 2006. Time-dependent probabilistic ground motion maps for peak horizontal ground acceleration on rock with a 10% probability of exceedance in the next 30 years are generally as much as 10-20% higher than corresponding time-independent maps near the southern San Andreas, the Cascadia subduction zone, and the eastern San Francisco Bay area faults. These maps are as much as 10-20% lower along the San Andreas fault north of the San Francisco Bay area and near the southern San Jacinto fault. The differences between the time-dependent and time-independent models for the high frequencies that control peak horizontal ground acceleration are significant near the faults, but at sites farther away the differences may be negligible at all return periods.


INTRODUCTION

Ground shaking hazard maps that are based on sound earth-science research are effective tools for mitigating damage from future earthquakes. Applying the assumptions that future earthquakes will occur on active faults or near previous events and that the ground shaking from these events will fall within the range of globally-recorded motions leads to probabilistic hazard maps that predict ground-shaking potential across a region. Development of hazard and risk maps requires technical interactions between earth scientists and engineers in estimating the rate of potential earthquakes across a region, quantifying likely ground shaking levels at a site, and understanding how buildings respond to strong ground-shaking.

For the past 25 years the U.S. Geological Survey (USGS) and California Geological Survey (CGS) have cooperated with professional organizations to incorporate hazard maps and other hazard products in public and corporate documents such as building codes, insurance rate structures, and earthquake risk mitigation plans (Algermissen et al., 1982; Frankel et al., 1996; Petersen et al., 1996; Frankel et al., 2002). These hazard products are used in making public-policy decisions; therefore, it is essential that the official USGS-CGS hazard models reflect the “best available science”. This qualification is also required by the statute that regulates the California Earthquake Authority, which provides most earthquake insurance in the state. To adequately represent the “best available science”, the hazard maps need to be updated regularly to keep pace with new scientific advancements. The USGS and CGS promote science research and consensus building by: (1) providing internal and external funding to scientists and engineers for collecting, interpreting, and publishing geologic and seismic information needed to quantify earthquake hazard and risk across the country and (2) involving scientists from academia, government, and industry in workshops and working groups to define the current “best available science”.

The methodologies, computer codes, and input data used in developing these products need to be openly available for review and analysis. Input parameters and codes for current hazard maps may be obtained at: http://earthquake.usgs.gov/hazmaps and http://consrv.ca.gov/cgs. At these websites a user may access documentation that describes the methodologies and parameters, a relational database and tables that contain explanations of how the fault parameters and uncertainties were chosen, interactive tools that allow the user to view hazard map information and to disaggregate the hazard models, and web interfaces that present building code design values at a latitude and longitude or zip code of interest.

The USGS has historically developed time-independent models of earthquake occurrence that are based on the assumption that the probability of the occurrence of an earthquake in a given period of time follows a Poisson distribution. Probabilities calculated in this way require only knowledge of the mean recurrence time. Results of these calculations do not vary with time (i.e., results are independent of the time since the last event) and are a reasonable basis for the earthquake resistant provisions in building codes and long-term mitigation strategies.
In contrast, time-dependent models of earthquake occurrence are based on the assumption that the probability of occurrence of an earthquake in a given time period follows a renewal model, that is, a lognormal, Brownian Passage Time (BPT), or other probability distribution in which the probability of the event depends on the time since the last event (Appendix A). In addition to the
mean frequency (or recurrence time) of earthquakes, these models require additional information about the variability of the frequency of events (the variance or standard deviation) and the time of the last event. The time-dependent models are intuitively appealing because they produce results broadly consistent with the elastic rebound theory of earthquakes. Reliable time-dependent models are desired for setting insurance rates as well as other short- to intermediate-term mitigation strategies. The USGS and CGS are beginning to develop these types of hazard products as new geologic and seismic information regarding the dates of previous events along faults becomes available.

In application, both the time-independent and time-dependent models also depend on assumptions about the magnitude-frequency characteristics of earthquake occurrence, the simplest of which is the “characteristic earthquake model” in which all large earthquakes along a particular fault segment are assumed to have similar magnitudes, average displacements, and rupture lengths. More complicated models include Gutenberg-Richter magnitude-frequency distributions and multi-segment ruptures. Because time-dependent models require more input parameters and assumptions than time-independent models, there is not yet the same degree of consensus about the methods and results for these calculations.

Both time-independent and time-dependent hazard calculations require moment-balanced models that are consistent with the global plate rate models and slip rates determined on individual faults. Geologists can estimate the average slip rates on faults in California from offset geologic features that have been dated using radiometric dating techniques. At sites along some faults we know the approximate times of past events extending hundreds or thousands of years into the past, but we do not know the magnitudes of these past earthquakes or the lengths of the faults involved. A fundamental constraint that we apply to candidate earthquake occurrence models, commonly called “moment balancing,” is the requirement that over the long term, the displacements from the earthquakes sum to the observed slip rate all along the fault. Models that permit smaller earthquakes will generally contain more frequent earthquakes in order to add up to the total slip rate. Because the ground motion hazard increases with the frequency of earthquakes, models that permit smaller but more frequent earthquakes will typically lead to estimates of higher hazard.

In this paper we describe the general characteristics of the time-independent (Poisson) 2002 USGS-CGS California seismic hazard models and develop a time-dependent model - the first version of the Working Group on California Earthquake Probability (WGCEP) uniform California earthquake rupture forecast model (UCERF 1.0). The time-independent and time-dependent hazard maps and hazard curves provide a comparison for the Regional Earthquake Likelihood Models (RELM) presented in this volume and future WGCEP models that will be developed over the next couple years. The time-dependent model does not have the same consensus inputs that are incorporated in the standard time-independent model, and so the user should use caution in applying these maps. However, this new model builds on information collected from several Working Group on California Earthquake Probability models (1988, 1990, 1995, 1999, 2003), time-dependent models published by Cramer et al.
(2000) and Petersen et al. (2002), paleoseismic data from Weldon et al. (2004), and recent seismicity data. Time-dependent analysis incorporates first-order information on the elapsed time since the last earthquake and should provide a reasonable basis for comparison.
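
As a minimal sketch of the distinction just described, the snippet below contrasts a time-independent (Poisson) 30-year probability with a time-dependent (lognormal renewal) conditional probability given the elapsed time since the last event. The recurrence interval, sigma, and elapsed time are hypothetical values, not parameters from this report, and relating the lognormal median to the mean recurrence is an assumption of the sketch; SciPy is used for the lognormal distribution.

from math import exp

from scipy.stats import lognorm


def poisson_prob(mean_recurrence_yr, window_yr):
    """P(at least one event in window_yr) for a Poisson process."""
    return 1.0 - exp(-window_yr / mean_recurrence_yr)


def lognormal_conditional_prob(mean_recurrence_yr, sigma_ln, elapsed_yr, window_yr):
    """P(event in (elapsed, elapsed + window] | no event yet) for a lognormal renewal model."""
    # Assumption of this sketch: choose the median so the lognormal mean equals
    # the mean recurrence interval (mean = median * exp(sigma^2 / 2)).
    median = mean_recurrence_yr * exp(-0.5 * sigma_ln ** 2)
    dist = lognorm(s=sigma_ln, scale=median)
    numerator = dist.cdf(elapsed_yr + window_yr) - dist.cdf(elapsed_yr)
    return numerator / dist.sf(elapsed_yr)   # sf(t) = P(T > t)


if __name__ == "__main__":
    t_bar, sigma, elapsed, window = 200.0, 0.5, 150.0, 30.0   # hypothetical values
    print("Poisson 30-yr probability:     %.3f" % poisson_prob(t_bar, window))
    print("Conditional 30-yr probability: %.3f" %
          lognormal_conditional_prob(t_bar, sigma, elapsed, window))

With these hypothetical inputs, a fault late in its cycle (elapsed time approaching or exceeding the mean recurrence) returns a conditional probability larger than the Poisson value, which is the behavior described above.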


THE 2002 USGS-CGS TIME-INDEPENDENT SEISMIC HAZARD MODEL

The USGS and CGS released California hazard maps in 1996 and 2002 using a probabilistic seismic hazard framework and incorporating information from regional workshops held across the country (Petersen et al., 1996; Frankel et al., 1996; Frankel et al., 2002). Hundreds of earth scientists and engineers participated in USGS regional workshops to define the “best available science”. The current hazard model is based on fault information, seismicity catalogs, geodetic data, and ground-shaking attenuation relations that were discussed at these workshops. An advisory committee reviewed these models and made recommendations on how the resulting products could be improved. The 2002 California model incorporates nearly two hundred fault sources that generate thousands of earthquake ruptures. It is not possible to describe details of the methodology and fault parameters in the limited space available in this publication. Instead, we refer the reader to published references, data, and products available on the websites and publications (http://earthquake.usgs.gov/hazmaps/ (USGS); http://www.consrv.ca.gov/cgs (CGS); Petersen et al., 1996; Frankel et al., 1996; Frankel et al., 2002) that provide the input parameters and codes needed to reproduce the official hazard model. In the section below we provide only a general description of the methodology, input data, and results.

The fault sources considered in the model contribute more to the hazard in California than the background seismicity. Faults are divided into two classes, A-type and B-type. The A-type faults generally have slip rates greater than 1 mm/yr and paleoseismic data that constrain the recurrence intervals of large earthquakes (Figure 1). Various editions of the “Working Group on California Earthquake Probability” (WGCEP) report indicate that sufficient information is available for these faults to allow development of rupture histories and time-dependent forecasts of earthquake ruptures. Models are developed using single-segment and multi-segment earthquake ruptures as defined by various working groups in northern California and southern California. The B-type faults include all of the other faults in California that have published slip rates and fault locations that can be used to estimate a recurrence interval. To calculate the recurrence, the moment of the potential earthquake is divided by the moment rate determined from the long-term fault slip rate to obtain the recurrence time of a characteristic size earthquake (a short sketch of this calculation appears below). We use a logic tree to account for epistemic uncertainty in our knowledge of which magnitude-frequency distribution is correct. In the hazard model a Gutenberg-Richter distribution that spans magnitudes between 6.5 and the characteristic size is weighted 1/3 and a characteristic distribution defined by a simple delta function is weighted 2/3. Modeling uncertainties in the characteristic or maximum magnitudes of a fault are accounted for explicitly in the calculation procedure.

In addition to the fault sources, a random source is used to account for earthquakes on unknown fault sources, moderate-size earthquakes on faults, and other earthquake sources that have not been quantified. This portion of the model is most important in areas that lack identified active faults; however, it also contributes significantly to the overall seismicity hazard across the state.
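
The sketch below illustrates the B-type recurrence calculation referenced above: the characteristic earthquake's moment divided by the moment rate implied by the long-term slip rate. The rigidity value, the example fault dimensions and magnitude, and the use of the Hanks and Kanamori (1979) moment-magnitude relation are assumptions of the sketch, not parameters from the 2002 model.

MU_PA = 3.0e10  # assumed crustal rigidity (shear modulus), Pa


def moment_from_magnitude(mw):
    """Seismic moment in N*m from moment magnitude (Hanks and Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)


def characteristic_recurrence_yr(mw, fault_area_m2, slip_rate_m_per_yr):
    """Recurrence time (yr) of the characteristic earthquake: moment / moment rate."""
    moment_rate = MU_PA * fault_area_m2 * slip_rate_m_per_yr   # N*m per year
    return moment_from_magnitude(mw) / moment_rate


if __name__ == "__main__":
    # Hypothetical B-type fault: 60 km long, 12 km seismogenic depth, 2 mm/yr slip rate.
    area_m2 = 60.0e3 * 12.0e3
    slip_rate = 2.0e-3        # m/yr
    mw_char = 7.0             # assumed characteristic magnitude
    print("Recurrence ~ %.0f years" % characteristic_recurrence_yr(mw_char, area_m2, slip_rate))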


The random background source model is based on earthquake catalogs. For the 1996 and 2002 hazard analyses, we developed a California state-wide earthquake catalog for magnitudes greater than 4 from the late 1700’s to 1994 and 2000, respectively (Petersen et al., 1996; Petersen et al., 2000; Toppozada et al., 2000). This statewide catalog was developed using regional catalogs from the U.S. Geological Survey (Menlo Park and Pasadena), California Institute of Technology, University of California Berkeley, and University of Nevada, Reno, and various publications regarding earthquake moment magnitudes and aftershocks. Mine blasts and duplicate earthquakes were removed from the catalog. For the time-independent hazard assessment we only consider independent events; dependent events are not consistent with the assumption of independence in a Poisson process. We applied the algorithm of Gardner and Knopoff (1974) to decluster the catalog, removing aftershocks and foreshocks that are identified using magnitude and distance criteria. The earthquakes in the catalog are spatially binned over a grid and smoothed using a Gaussian distribution with a 50 km correlation length to obtain the rate of earthquakes in the background model (Frankel, 1995). The hazard is then computed by using the rate at each grid node in conjunction with a Gutenberg-Richter magnitude frequency distribution and attenuation relations to obtain the rate of exceedance at each ground motion level. A short sketch of the smoothing step appears at the end of this subsection.

In portions of eastern California and western Nevada there are differences between the model rate of earthquakes calculated from the geologic slip rate data and the historic rate of earthquakes from the earthquake catalog. Recent geodetic data seem to be more consistent with the historic earthquake data, both suggesting higher contemporary strain rates than would be implied from geologic studies. Therefore, in four regions of eastern California and western Nevada we have used the geodetic data to supplement our earthquake fault models. The earthquakes are modeled using geodetically-based slip rates that are spread uniformly across a zone and modeled using a Gutenberg-Richter magnitude-frequency distribution. Future research should help delineate the particular faults that are accommodating the observed geodetic strains and determine whether recent data reflect the long-term strain rates or are dominated by secular variability.

Once we have quantified the earthquake sources we can apply published empirical attenuation relations to estimate the potential ground shaking levels from the modeled earthquakes. We have applied four attenuation relations, equally weighted, for coastal California earthquakes (Abrahamson and Silva, 1997; Boore et al., 1997; Campbell and Bozorgnia, 2003; Sadigh et al., 1997). For the extensional region we have also applied the attenuation relation of Spudich et al. (1999). Ground motions from the Cascadia subduction zone are calculated using the attenuation relations for interface earthquakes of Youngs et al. (1997) and Sadigh et al. (1997) and for deep intraslab earthquakes of Atkinson and Boore (2003) and Youngs et al. (1997). Generally, attenuation relations should be updated when sufficient strong motion data are recorded that show inconsistencies with the previous relations. Figure 2 shows the time-independent, or Poisson, hazard of peak ground acceleration for 10% probability of exceedance in 30 years from the 2002 national seismic hazard model.
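
As a rough sketch of the smoothed-seismicity step described above (after Frankel, 1995), the snippet below smooths gridded, declustered earthquake counts with an isotropic Gaussian kernel of 50 km correlation length. The grid spacing, toy catalog, and function names are illustrative assumptions, not the production implementation used for the hazard maps.

import numpy as np


def smooth_counts(counts, cell_km=10.0, correlation_km=50.0):
    """Smooth a 2-D grid of declustered earthquake counts with an isotropic Gaussian kernel."""
    ny, nx = counts.shape
    c = correlation_km / cell_km                    # correlation length in grid cells
    y, x = np.mgrid[0:ny, 0:nx]
    smoothed = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            d2 = (y - j) ** 2 + (x - i) ** 2        # squared distance in cells
            kernel = np.exp(-d2 / c ** 2)           # Gaussian weighting of neighboring cells
            smoothed[j, i] = np.sum(counts * kernel) / np.sum(kernel)
    return smoothed


if __name__ == "__main__":
    grid = np.zeros((20, 20))
    grid[10, 10] = 5.0   # five declustered M >= 4 events in one cell of a toy catalog
    print(np.round(smooth_counts(grid)[10, 8:13], 4))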


Tests of hazard maps

It is difficult to test the hazard maps that are used in building codes because they consider earthquakes with return periods of more than two thousand years, whereas most readily-available test criteria come from the relatively short historical record that only spans a couple hundred years. Most of the RELM models are compared to the most recent 5 years of seismicity. This short catalog is useful for very high-probability ground-motion predictions, but does not test the low-probability (2500-year) ground motions that are considered in building codes. We have performed tests of different components of the hazard model using three datasets that are somewhat independent of the data used to generate the models: (1) the historic earthquake magnitude-frequency distribution from the catalog, (2) plate tectonic strain-rate models, and (3) historical intensity data.

One simple way to test the magnitude-frequency distribution of the hazard source model is to compare the rate of earthquakes in the model with the observed rate of earthquakes in the 100 to 200-year historic earthquake catalog. In the USGS-CGS source model the catalog was used to estimate the rate of M 4 and greater background (random) earthquakes, meaning that the actual magnitude-frequency distribution was not used directly in the model. The fault information was added to the random earthquakes to produce a complete source model. The complete source model was tested to determine if the earthquake rates at each potential magnitude are consistent with the magnitude-frequency distribution observed in the historic earthquake record. In such tests it was shown that there were significant differences between the 1996 model rate of magnitude 6.5 to 7.0 events and the historic rate of earthquakes (Petersen et al., 2000). However, this earthquake rate discrepancy was considered in developing the 2002 model, and the model was modified so that the rate of earthquakes agreed much more closely with the rate of historic earthquakes (Frankel et al., 2002, Figure 6).

The moment rate measured across the plate boundary is another constraint that is applied in developing a proper seismic hazard model for California. We used the NUVEL I plate tectonic rate model (DeMets et al., 1990) to compare with the geological vector slip rates in the 1996 model (which is very similar to the 2002 model; Petersen et al., 1996, Figure 4). The slip rates predicted by the NUVEL I plate rate model and the USGS-CGS slip rate model are generally, with a few exceptions, within 10% of each other. Orientations of the vector slip rates are also generally compatible. A clearer understanding of geological slip rates and orientations would reduce the differences between the datasets, especially in southern California.

Another test that may be applied to the hazard maps is to use the historical record of intensity (either the 100-200 year Modified Mercalli Intensity data or the 10,000 year precarious rock data) across California to compare with the intensities predicted by the hazard models. Stirling and Petersen (2005) compared MMI intensity data with the seismic hazard intensities at 26 sites across California and many other sites across the United States and New Zealand. Intensity data indicate that historical rates for peak ground acceleration were similar to the 2002 California hazard model accelerations after some biases in intensity were corrected. The intensity data are difficult to analyze and contain large uncertainties because intensity is a qualitative measure.
Nevertheless, this analysis provides some level of confidence that the short return period portion
of the hazard models is producing high-frequency ground motions that are similar to the observed historical intensities.

THE USGS-CGS PRELIMINARY TIME-DEPENDENT SEISMIC HAZARD MODEL

The time-dependent hazard presented in this paper is based on the time-independent, or Poissonian, 2002 national seismic hazard model and additional recurrence information for A-type faults that include: San Andreas, San Gregorio, Hayward, Rodgers Creek, Calaveras, Green Valley, Concord, Greenville, Mount Diablo, San Jacinto, Elsinore, Imperial, Laguna Salada, and the Cascadia subduction zone (Figure 1). A-type faults are defined as having geologic evidence for long-term rupture histories and an estimate of the elapsed time since the last earthquake. A simple elastic dislocation model predicts that the probability of an earthquake rupture increases with time as the tectonic loading builds stress on a fault. Thus, the elapsed time is the first-order parameter in calculating time-dependent earthquake probabilities. However, other parameters such as static elastic fault interactions, viscoelastic stress-transfer, and dynamic stress changes from earthquakes on nearby faults will also influence the short-term probabilities for earthquake occurrence. In this paper we only consider the influence of the elapsed time since the last earthquake.

Over the past 30 years, the USGS and CGS have developed time-dependent source and ground motion models for California using the elapsed time since the last earthquake (Working Group on California Earthquake Probabilities, 1988, 1990, 1995 (led by the Southern California Earthquake Center), 1999, 2003; Cramer et al., 2000; Petersen et al., 2002; Appendix A). The probabilities of occurrence for the next event were assessed using Poisson, Gaussian, lognormal, and Brownian Passage Time statistical distributions. Past Working Groups applied a value of about 0.5 +/- 0.2 for the ratio of the total sigma to the mean of the recurrence distribution. This ratio, known as the coefficient of variation, accounts for the periodicity in the recurrence times for an earthquake; a coefficient of variation of 1.0 represents irregular behavior (nearly Poissonian) and a coefficient of variation of 0 indicates periodic behavior.

For this analysis, we have applied the parameters shown in Table 1 to calculate the time-dependent earthquake probabilities. The basic parameters needed for these simple models are the mean recurrence interval (T-bar), the parametric uncertainty Sigma-p, the intrinsic variability Sigma-i, and the year of the last earthquake. The parametric sigma is calculated from the uncertainties in mean displacement and mean slip rate of each fault (Cramer et al., 2000). The intrinsic sigma describes the randomness in the periodicity of the recurrence intervals. The total sigma for the lognormal distribution is the square root of the sum of the squares of the intrinsic and parametric sigmas. For this analysis we assume characteristic earthquake recurrence models with segment boundaries defined by previous working groups.

We calculated the time-dependent hazard using the 2002 Working Group on California Earthquake Probabilities report (WGCEP, 2003) for the San Francisco Bay area, the 2002 National Seismic Hazard and Cramer et al. (2000) models for the other faults in northern and southern California, and the Petersen et al. (2002) model for the Cascadia subduction zone. Ned Field and Bill Ellsworth reran the computer code that was used to produce the WGCEP (2003)
report and provided an update to the time-dependent probabilities for the San Francisco Bay area for 1, 5, 10, 20, 30, and 50-year time periods beginning in 2006.

For the Cascadia subduction zone, we applied a time-dependent model for the magnitude 9.0 events using the results of Petersen et al. (2002). Recurrence rates for the magnitude 9 earthquakes in the model were estimated from paleo-tsunami data along the coast of Oregon and Washington. The M 8.3 earthquakes in the subduction plate-interface model were parameterized using Poisson process statistics (Frankel et al., 2002). The M 9.0 and 8.3 models were equally weighted in the 2002 hazard model as well as in this time-dependent model.

The San Andreas (Parkfield segment), San Jacinto, Elsinore, Imperial, and Laguna Salada faults were all modeled using single segment ruptures following the methodology of Cramer et al. (2000). Multi-segment ruptures were allowed in the WGCEP 1995 model, but these were not incorporated in this preliminary time-dependent model.

We developed three southern San Andreas models that consider various combinations of the five segments of the southern San Andreas Fault that were defined by previous working groups: Cholame, Carrizo, Mojave, San Bernardino, and Coachella, and three multiple-segment ruptures. In the time-dependent models, 11% of the occurrence rate is based on the Poisson model and 89% is based on the time-dependent model, similar to the method applied in WGCEP (2002). For the time-dependent portion of the model, it is easier to define the single-segment time-dependent probabilities because there are published recurrence rates and elapsed times since the last rupture for these segments based on historical and paleo-earthquake studies (e.g., WGCEP 1995). However, defining time-dependent multiple-rupture probabilities (cascades) is much more complicated.

The first time-independent model (T.I. Model 1) assumes single-segment and multiple-segment ruptures with weights that balance the moment rate and that are similar to the observed paleoseismic rupture rates (Frankel et al., 2002, Appendix A). Possible rupture models of the southern San Andreas include: (a) ruptures along five individual segments, (b) rupture of the southern two segments and of the northern three segments (similar to the 1857 earthquake rupture), and (c) rupture of all of the five segments together. For each of the complete rupture models, the magnitude of the earthquake was determined from the rupture area. Recurrence rates were assessed by dividing the moment rate along the rupture by the moment of an earthquake with this calculated magnitude. The single-segment rupture models were weighted 10% and the multi-segment rupture models were weighted 90% (50% for sub-model b and 40% for sub-model c) to fit the observed paleoseismic data.

In the first time-dependent model (T.D. Model 1), which is based on T.I. Model 1, probabilities are calculated, as in previous working groups, by using a lognormal distribution for all the segments and parametric sigmas listed in Table 1. For the time-dependent portion of the model, we have adjusted the Poisson probabilities to account for the information from the time-dependent probabilities of single-segment events. The southern San Andreas individual fault segments have higher time-dependent probabilities than the corresponding Poisson probabilities (a probability gain); therefore, the multi-segment rupture should also have higher time-dependent probabilities than the Poisson model.
Since it is not known in advance what segment might
trigger rupture of the cascade, this multi-segment rupture probability is calculated using the weighted average of the probability gains from each of the segments involved in the rupture, where the weights are proportional to the 30-year time-dependent probability of each segment. We show an example containing two segments in Appendices B and C.

The second time-independent model (T.I. Model 2) is also based on the 2002 national seismic hazard model (model 2) and considers characteristic displacements for earthquake ruptures. This model assumes two multiple-segment ruptures that are composed of segments from Cholame through Mojave (1857 type ruptures) and from San Bernardino through Coachella. In addition, single segment ruptures of the Cholame and Mojave are considered. The model assumes that the Carrizo segment only ruptures in 1857 type earthquakes with a rate of 4.7e-3 events/yr, based on paleoseismic observations. Therefore, this multi-segment rupture accounts for 22 mm/yr of the total slip rate of 34 mm/yr (WGCEP, 1995), given the earthquake rate and a 4.75 m characteristic slip on the Cholame segment. The remaining 12 mm/yr is taken up by single-segment ruptures of the Cholame segment. Using a single-segment magnitude of 7.3 and a 12 mm/yr slip rate yields a single-segment recurrence rate for Cholame of 2.5e-3/yr. For the Mojave segment, the slip rate available after the slip from 1857 type ruptures is removed is 9 mm/yr. Using an earthquake with magnitude 7.4 (4.4 m/event) for single-segment rupture and a slip rate of 9 mm/yr yields a recurrence rate of 2.05e-3/yr for a single segment Mojave rupture. For the San Bernardino through Coachella rupture a M 7.7 earthquake with a recurrence rate of 5.5e-3 events/yr is used to be consistent with the paleoseismic data. Inclusion of other ruptures on these segments leads to estimated recurrence rates that exceed the paleoseismic observations. The total moment rate of this model is 92% of the total predicted moment rate. (A short sketch of this slip-rate bookkeeping appears below.)

This second time-dependent model (T.D. Model 2), which is based on T.I. Model 2, accommodates the difference between the total segment time-dependent rupture rate (the time-dependent rate of all potential ruptures that involve that segment) and the corresponding multiple-segment rupture rate that involves that segment. The segment time-dependent probabilities for all ruptures combined are calculated the same way as for the first model and are shown in Table 1. The Carrizo segment is assumed to rupture only in 1857 type events, and its total segment time-dependent probability is the same as the time-dependent probability for the 1857 type events (following the partial cascades model in Petersen et al., 1996 and Cramer et al., 2000). We first calculate a time-dependent probability Pctotal for any type of rupture scenario involving the Cholame segment (single segment or 1857 type). Here we use the total recurrence rate derived from the time-independent calculation from Model 2. Next we calculate the time-dependent probability P1857 for 1857 type ruptures using the paleoseismic recurrence rate. The time-dependent probability of a single-segment Cholame rupture is then derived from the total time-dependent rate (calculated from Pctotal) minus the rate of the 1857 type events (converted from P1857). An example is shown in Appendix C. The time-dependent rate for the Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates.
In T.I. Model 2, the San Bernardino segment is not allowed to rupture by itself. Now, when the conditional probability weighting is applied, this rupture has to be allowed in order to accommodate the excess rate on this segment. Its time-dependent rate is the segment rate (converted from probability) minus the event rate of the Coachella and San Bernardino segments rupturing together.
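
The sketch below reproduces the T.I. Model 2 slip-rate bookkeeping described above. The rates, slip rate, and the 4.75 m and 4.4 m per-event slips follow the text; the ~4.8 m per-event slip implied for the single-segment Cholame rupture is an inference of the sketch, and the text's 22 and 12 mm/yr values are rounded.

RATE_1857 = 4.7e-3            # 1857-type rupture rate, events/yr (from the text)
SLIP_1857_CHOLAME_M = 4.75    # characteristic slip on Cholame in 1857-type events, m
TOTAL_SLIP_MM_YR = 34.0       # Cholame segment slip rate, mm/yr (WGCEP, 1995)

slip_used_mm_yr = RATE_1857 * SLIP_1857_CHOLAME_M * 1000.0   # ~22 mm/yr used by 1857-type ruptures
remaining_mm_yr = TOTAL_SLIP_MM_YR - slip_used_mm_yr          # ~12 mm/yr left for single-segment events

cholame_single_rate = 12.0 / (4.8 * 1000.0)   # ~2.5e-3 /yr for the M 7.3 single-segment rupture
mojave_single_rate = 9.0 / (4.4 * 1000.0)     # ~2.05e-3 /yr for the M 7.4 single-segment rupture

print("slip used by 1857-type ruptures: %.1f mm/yr" % slip_used_mm_yr)
print("remaining for single-segment Cholame: %.1f mm/yr" % remaining_mm_yr)
print("Cholame single-segment rate: %.2e /yr" % cholame_single_rate)
print("Mojave single-segment rate:  %.2e /yr" % mojave_single_rate)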


For the third model we have applied two rupture scenarios that are based on new (i.e., post-2002) geologic data and interpretations: (1) single segment time-dependent rates that were used in model 1 above and (2) two multi-segment ruptures, the 1857 type rupture that includes the Carrizo, Cholame, and Mojave segments and the southern multi-segment rupture that includes the San Bernardino and Coachella segments. The recurrence rates and elapsed time since the last earthquake for multi-segment ruptures are based on geologic data shown in Weldon et al. (2004, Figure 12). The five single-segment rupture models were weighted 10% and the two multi-segment ruptures were weighted 90%, similar to the weighting in T.I. Model 1. The multi-segment earthquakes incorporate a recurrence time of 200 years (5 events in 1,000 years) and an elapsed time of 149 years for the 1857 type event, and a recurrence time of 220 years (4 events in 880 years) and an elapsed time of 310 years for the southern two-segment rupture.

In general the models of Weldon et al. (2004) are moment balanced using slip rate. However, when we apply the 200 and 220 yr recurrence intervals to the 1857-type (M 7.8) and southern multi-segment rupture that includes the San Bernardino and Coachella segments (M 7.7), we get a moment rate which is about 80% of the other models (a sketch of this check is given below). The reason for the lower moment is that the magnitude of the multi-segment rupture is not specified in the Weldon et al. (2004) model. If the magnitude of the 1857-type rupture is raised from 7.8 to 7.9, the updated model releases about the same moment as the other models and is moment balanced. This slight magnitude adjustment would not change the hazard calculation significantly since ground motions from M 7.8 and M 7.9 earthquakes are very similar. Therefore, for model 3 we have maintained the M 7.8 magnitude in order to be consistent with the magnitudes used in other models, recognizing that the moment rate is a little lower as a result. Weldon et al. (2004) also show data that indicate variability in the southern extent of the 1857 ruptures and the northern extent of the southernmost multi-segment rupture in the vicinity of the 1812 rupture. Therefore, we have also included an aleatory variability for the segment boundary near the southern end of the 1857 rupture and have not included the 1812 rupture as the date of the last event. We have developed time-independent (Poisson) and time-dependent models for these ruptures (T.I. Model 3 and T.D. Model 3). An example calculation is shown in Appendix C.

The time-dependent hazard of peak ground acceleration for 10% probability in 30 years is shown in Figure 3, and the time-dependent probabilities are listed in Table 2. The time-dependent map is developed from the WGCEP (2003) model for the San Francisco Bay area; the Cramer et al. (2000) model for the San Andreas (Parkfield), San Jacinto, Elsinore, Imperial, and Laguna Salada faults; and the Petersen et al. (2002) model for the Cascadia subduction zone. In addition, the southern San Andreas hazard was developed using T.D. Models 1, 2, and 3 with equal weighting.
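
The following sketch checks the "about 80%" moment-rate comparison quoted above, assuming the Hanks and Kanamori (1979) moment-magnitude relation; it is an illustration of the arithmetic, not part of the report's calculations.

def moment_nm(mw):
    """Seismic moment in N*m from moment magnitude (Hanks and Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)


def moment_rate(ruptures):
    """Total moment rate (N*m/yr) for a list of (magnitude, recurrence_yr) pairs."""
    return sum(moment_nm(m) / t for m, t in ruptures)


# Model 3 as described in the text: M 7.8 every ~200 yr and M 7.7 every ~220 yr.
model3 = moment_rate([(7.8, 200.0), (7.7, 220.0)])
# Same model with the 1857-type magnitude raised to 7.9, as discussed above.
adjusted = moment_rate([(7.9, 200.0), (7.7, 220.0)])
print("Model 3 / adjusted moment rate: %.2f" % (model3 / adjusted))   # ~0.80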

COMPARISON OF TIME-INDEPENDENT AND TIME-DEPENDENT SEISMIC HAZARD MODELS

To compare the time-independent and time-dependent hazard, we have produced hazard curves and maps from three equally weighted Poisson models (T.I. Models 1, 2, and 3) and three equally weighted time-dependent models (T.D. Models 1, 2, and 3). The difference between these models is in southern California; northern California remains the same in all of these
models. The time-independent (Poisson) and time-dependent maps are shown in Figures 2 and 3, and the ratio of the time-dependent and time-independent maps is shown in Figure 4. The ratio map shows that the largest positive changes to the hazard due to time-dependence are along the eastern San Francisco Bay area faults, the Cascadia subduction zone, the southern San Andreas Fault, and the northern San Jacinto Fault, which are all elevated by up to 10% to 20% with respect to the Poisson model. Hazard along the San Andreas Fault in northern California and the southern San Jacinto, Superstition Hills, and Imperial faults in southern California is reduced by up to 10-20% with respect to the time-independent hazard because these faults all experienced earthquakes over the past few decades.

Figure 5 shows hazard curves that indicate the differences between hazard models at three sites in Parkfield, Los Angeles, and Cajon Pass (located near the intersection of the San Bernardino and Mojave segments of the San Andreas fault). The Parkfield hazard curve indicates that the 30-year time-dependent probability is consistently 10% higher than the Poisson probability for a large range of annual frequency of exceedance levels. However, the 5-year probability is considerably lower than the corresponding Poisson probability. The Parkfield segment of the San Andreas fault ruptured last in 2004, and the mean recurrence for this segment is only 25 years. Thus, when one considers a 30-year probability, which is greater than the 25-year average recurrence, the probability of having an earthquake rupture is enhanced compared to the Poisson model. When one considers a 5-year probability, the probabilities are much lower because this time period is in the early portion of the seismic cycle.

The site at Los Angeles indicates that the time-dependent probabilities are not controlling the hazard at distances of about 50 km. Deaggregations for sites in the greater Los Angeles region indicate that local faults tend to contribute more to the hazard at high frequencies, which control the peak horizontal ground acceleration, than the large events on the San Andreas fault. This may not be true for longer periods, for which the San Andreas fault is more important. The differences between the time-dependent and time-independent curves are negligible for peak ground acceleration (less than 1%).

Hazard at sites along the San Andreas Fault is dependent on how the local fault segments were modeled. For example, the Cajon Pass site is controlled by earthquakes on the Mojave and San Bernardino segments as well as the multi-segment ruptures that include those segments. We have included this site to show the differences between the time-dependent and time-independent ruptures for each of the 3 different time-independent and time-dependent models. The hazard differences are about 10-20% between these models for risk levels used in building codes. The time-dependent version typically gives larger ground motions because of the longer elapsed time since the last earthquake. Future versions of the time-dependent hazard maps will allow stress interactions and viscoelastic effects that will add some variability to these curves.

CONCLUSIONS

In this paper we have presented both time-independent and time-dependent probabilities for several faults and statewide ground motion hazard maps for California that show the value of
peak ground acceleration with a 10% probability of exceedance for a time period of 30 years starting in 2006. The time-dependent maps differ by about 10% to 20% from the time-independent maps. The southern San Andreas fault, Cascadia subduction zone, and the eastern San Francisco Bay area faults generally have elevated hazard relative to the time-independent maps. This is because a relatively long time has elapsed since the last earthquake: about 150 years since the 1857 M 7.9 Fort Tejon earthquake, 300 years since the 1700 M 9 Cascadia earthquake, and 137 years since the 1868 M 6.8 earthquake on the southern Hayward fault. All of these faults are, most likely, in the latter half of their seismic cycles. The northern San Andreas fault, southern San Jacinto fault, and Imperial fault, on the other hand, have time-dependent hazard that is lower than the time-independent hazard due to the relatively short period since the 1906 (M 7.8) San Francisco earthquake, the 1968 (M 6.4) Borrego Mountain earthquake, and the 1979 Imperial Valley earthquake, which places these faults in the first half of their seismic cycles. Sites located well away from the A-type faults are typically controlled by local faults, especially for high frequencies greater than 1 Hz.

The three time-independent and corresponding time-dependent models that are proposed in this paper are based on characteristic earthquake recurrence models that have distinct segment boundaries; for T.D. Model 3 we have allowed the end of the rupture to vary according to the geologic models. For the past 15 years WGCEP reports have all applied the characteristic model with fixed or slightly variable boundaries. Recent studies (e.g., Weldon et al., 2005) suggest that other, more random ruptures also fit the same geologic data constraining earthquake ruptures on the southern San Andreas Fault. This implies that strict characteristic models should be relaxed in future time-dependent hazard calculations to account for this potential variability in source models. Variable rupture characteristics may not result in significant changes to the hazard at low frequencies (i.e., long return periods), but should be considered in future WGCEP models.

Probabilistic hazard maps are used for making important risk mitigation decisions regarding building design, insurance rates, land use planning, and public policy issues that need to balance safety and economics. This map is the basis for the Working Group on California Earthquake Probability – Uniform California Earthquake Rupture Forecast version 1.0 (UCERF 1.0) and will be used to compare current 2006 methods with future, more complex, models. It is important that state-of-the-art science is incorporated in hazard maps that are used for public policy. Generally, hazard products should be updated regularly as new information on earthquake recurrence and ground shaking becomes available from the science community. Research on such important hazard topics as recurrence time and rupture histories of prehistoric earthquakes, magnitude-frequency distributions for individual faults, and the effects of shallow and deep site conditions on ground shaking will improve these maps in the future.

ACKNOWLEDGEMENTS

We acknowledge Mike Blanpied, Bill Ellsworth, and Ned Field for calculating the time-dependent hazard using the 2002 Working Group model and Ken Rukstales for producing GIS maps for Figure 1. Rob Wesson, Yuehua Zeng, Mark Stirling, and Ned Field provided helpful reviews of the manuscript.


APPENDIX A

For this paper we have calculated the time-dependent probabilities for time periods of 5, 10, 20, 30, and 50 years. For these calculations we have generally assumed a lognormal probability density function; however, the WG2002 report used a Brownian Passage Time model that does not cause a significant difference from the lognormal distribution except for very long elapsed times since the previous earthquake. Following the WGCEP 1995 report we find that the density function f(t) has the following form:

f(t) = \frac{1}{\sqrt{2\pi}\,\sigma_{\ln T_i}\, t} \exp\left\{ -\frac{[\ln(t/\hat{\mu})]^2}{2\sigma_{\ln T_i}^2} \right\},   (B1)

where \mu is the mean, \hat{\mu} is the median, \sigma_{\ln T_i} is the intrinsic sigma, and t is the time period of interest. If \hat{\mu} and \sigma_{\ln T_i} are known, then the conditional time-dependent probability in the time interval (t_e, t_e + \Delta T) is given by:

P(t_e < T \le t_e + \Delta T \mid T > t_e) = \frac{P(t_e < T \le t_e + \Delta T)}{P(T > t_e)},   (B2)

where t_e is the elapsed time and \Delta T is the time period of interest. A Poisson process follows the rule P = 1 - \exp(-rT), where P is the Poisson probability, r is the rate of earthquakes, and T is the time period of interest. If we want to convert between probability P and rate r, then we can use the formula

r = -\ln(1 - P)/t.   (B3)

We calculate the probability and annualize this rate using the above formula.
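
A small sketch of the conversion in equation (B3) follows, applied to the Coachella segment's 30-year time-dependent probability from Appendix C (0.325); the function names are illustrative.

from math import exp, log


def prob_to_rate(p, t_years):
    """Equivalent annual rate r = -ln(1 - P) / t (equation B3)."""
    return -log(1.0 - p) / t_years


def rate_to_prob(r, t_years):
    """Poisson probability of at least one event in t_years at annual rate r."""
    return 1.0 - exp(-r * t_years)


if __name__ == "__main__":
    p30 = 0.325                       # Coachella 30-year probability from Appendix C
    r = prob_to_rate(p30, 30.0)       # ~0.0131 events/yr
    print("annual rate: %.4f" % r)
    print("round trip : %.3f" % rate_to_prob(r, 30.0))   # recovers 0.325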

APPENDIX B

If we denote the calculated time-dependent probabilities and time-independent (Poisson) probabilities for two single-segment rupture events as P_a^t, P_b^t, P_a^p, and P_b^p, the ratios R_a = P_a^t / P_a^p and R_b = P_b^t / P_b^p are sometimes called the probability gain or loss over the average Poisson probabilities. For a multi-segment (cascade) event involving these two segments, we also define the probability gain or loss as R_{ab} = P_{ab}^t / P_{ab}^p, in which the Poisson probability P_{ab}^p is known. Since P_{ab}^p already accounts for the conditional probability of multi-segment rupture, we further assume that the cascade event is triggered by independent rupture of one of the segments A or B. So we know that R_{ab} = R_a if the cascade event starts from A and that R_{ab} = R_b if it starts from B. Assuming segment A is more likely to rupture in some future time period than segment B, then R_a > R_b, and the chance of a cascade event occurring must be smaller than the chance of A rupturing but larger than the chance of B rupturing. Therefore, R_{ab} has to be smaller than R_a but larger than R_b if R_a > R_b, and vice versa. Considering that a cascade event can start from A or B with different likelihoods, we approximate R_{ab} by weighting R_a and R_b by P_a^t and P_b^t, their probabilities of rupture, resulting in the cascade event ratio

R_{ab} = (R_a P_a^t + R_b P_b^t) / (P_a^t + P_b^t).

The physical basis for this type of weighting is that a multi-segment rupture has to start from one of the segments, and the segment with the higher probability is more likely to lead to or trigger a multi-segment event.
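
A minimal sketch of this weighted-gain approximation follows; the probabilities and single-segment gains used here are hypothetical, chosen only to show that the cascade gain falls between the two segment gains.

def cascade_gain(p_a_td, gain_a, p_b_td, gain_b):
    """Probability gain of a two-segment cascade: single-segment gains weighted by the
    segments' time-dependent probabilities (Appendix B)."""
    return (p_a_td * gain_a + p_b_td * gain_b) / (p_a_td + p_b_td)


if __name__ == "__main__":
    # Hypothetical 30-year time-dependent probabilities and single-segment gains.
    p_a, gain_a = 0.40, 1.5
    p_b, gain_b = 0.20, 1.1
    print("cascade gain R_ab = %.3f" % cascade_gain(p_a, gain_a, p_b, gain_b))  # 1.367, between R_b and R_a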

APPENDIX C

Example applications for calculating time-dependent rates

Models 1 and 3: In this section we show how the annual occurrence rates for a multi-segment rupture are calculated in Models 1 and 3. For our first example, we calculate the rate of a rupture that involves all five segments. The time-dependent 30-year probabilities for the five segments Coachella, San Bernardino, Mojave, Carrizo, and Cholame are 0.325, 0.358, 0.342, 0.442, and 0.512, assuming a lognormal distribution. The equivalent annual rates are calculated using the formula r = -ln(1-p)/t, where p is the segment time-dependent probability in t (30 years). This rate is divided by the Poissonian rate of the 2002 model and produces the probability gain for each segment. The gains for the five segments are 1.141, 1.918, 1.065, 1.690, and 1.114. The weighted gain for this 5-segment rupture is 1.384 (= (0.325x1.141 + 0.358x1.918 + 0.342x1.065 + 0.442x1.690 + 0.512x1.114)/(0.325 + 0.358 + 0.342 + 0.442 + 0.512)). The final annual rate for this rupture is the Poissonian rate (0.00355) multiplied by this gain and the 2002 model weight (0.4), which is 0.00196 (= 0.00355x1.384x0.4). For model 3, the cascading allows only 1857 and 1690 types of events, and their recurrence times are 200 and 220 years respectively, which are different from the 2002 model. We follow the same steps as in T.D. Model 1 to calculate the time-dependent annual rates for the multi-segment ruptures with the new Poissonian rates for multi-segment events. After obtaining the time-dependent annual rates for the 1857 and 1690 multi-segment ruptures, we weight each of the Weldon et al. (2004) rupture scenarios included in the model.

Model 2: In the 2002 model 2, the Poissonian rates for the five segments are different from those in Model 1. We apply these different mean recurrence times and the same elapsed times and intrinsic and parametric uncertainties and calculate time-dependent 30-year probabilities and their equivalent annual rates as we did in Model 1. These rates are 0.008260, 0.010336, 0.008908, 0.007396, and 0.011173 for the segments Coachella, San Bernardino, Mojave, Carrizo, and Cholame, respectively. The Carrizo segment in T.D. Model 2 only ruptures in 1857 type events, so the time-dependent annual rate for 1857 type ruptures is defined as the rate for the Carrizo segment (0.007396). The Cholame and Mojave segments are allowed in the 2002 model to rupture independently. The time-dependent single-segment rates for these two segments are their total segment rates, converted from their 30-year probabilities, minus the rate for 1857 type events,
or 0.003777 (= 0.011173 – 0.007396) for Cholame and 0.001512 (= 0.008908 – 0.007396) for Mojave ruptures. The time-dependent rate for Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates or 0.00826 (< 0.011173). In the 2002 model, San Bernardino segment is not allowed to rupture by itself. But now the difference between the San Bernardino segment rate (0.010336) and the rate (0.008260) for San Bernardino and Coachella segments rupturing together defines the single segment rupture on the San Bernardino segment, i.e., (0.002076 = 0.010336 – 0.008260).
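The Model 1 arithmetic above is mechanical enough to script directly. Below is a minimal Python sketch using the 30-year probabilities, per-segment gains, Poissonian rate, and model weight quoted in this appendix; the function and variable names are illustrative only.

```python
import math

def annual_rate(p, t=30.0):
    """Convert a probability p over t years to the equivalent annual rate, r = -ln(1-p)/t."""
    return -math.log(1.0 - p) / t

# 30-year time-dependent probabilities for the five southern SAF segments
# (Coachella, San Bernardino, Mojave, Carrizo, Cholame), as quoted above.
p30 = [0.325, 0.358, 0.342, 0.442, 0.512]

# Per-segment probability gains quoted above (time-dependent annual rate divided
# by the segment's Poissonian rate in the 2002 model).
gains = [1.141, 1.918, 1.065, 1.690, 1.114]

# Weighted gain for the 5-segment rupture: gains weighted by the segment probabilities.
weighted_gain = sum(p * g for p, g in zip(p30, gains)) / sum(p30)

# Final annual rate = Poissonian rate of the full rupture x weighted gain x model weight.
poisson_rate = 0.00355   # 2002-model rate of the 5-segment rupture
model_weight = 0.4
final_rate = poisson_rate * weighted_gain * model_weight

print(round(weighted_gain, 3), round(final_rate, 5))   # ~1.384 and ~0.00196
```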

REFERENCES

Abrahamson, N.A. and W.J. Silva (1997). Empirical response spectral attenuation relations for shallow crustal earthquakes, Seis. Res. Letts., v. 68, no. 1, pp. 94- 127.

Algermissen, S.T. and D.M. Perkins (1982). A probabilistic estimate of maximum acceleration in rock in the contiguous United States, U.S. Geol. Surv. Open-file Rept. 76-416.

Atkinson, G.M. and D.M. Boore (2003). Empirical ground-motion relations for subduction zone earthquakes and their application to Cascadia and other regions, Bull. Seism. Soc. Am., v. 93, pp. 1703-1709.

Boore, D.M., W.B. Joyner, and T.E. Fumal (1997). Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: a summary of recent work, Seism. Res. Letts., v. 68, pp. 128-153.

Campbell, K.W. and Y. Bozorgnia (2003). Updated near-source ground motion (attenuation) relations for the horizontal and vertical components of peak ground acceleration and acceleration response spectra, Bull. Seism. Soc. Am., v. 93, pp. 314-331.

Cramer, C.H., M.D. Petersen, T. Cao, T.R. Toppozada, and M.S. Reichle (2000). A time-dependent probabilistic seismic-hazard model for California, Bull. Seism. Soc. Am. V. 90, pp. 1-21.

DeMets, C., R.G. Gordon, D.F. Argus, and S. Stein (1990). Current plate motions, Geophys. J. Int., v. 101, pp. 425-478.

Frankel, A. (1995). Mapping seismic hazard in the Central and Eastern United States, Seism. Res. Letts., v. 66, no. 4, pp. 8-21.

Frankel, A., C. Mueller, T. Barnhard, D. Perkins, E. Leyendecker, N. Dickman, S. Hanson, and M. Hopper (1996). National seismic-hazard maps: documentation June 1996, U.S. Geol. Surv. Open-file Rept. 96-532.

Frankel, A.D., M.D. Petersen, C.S. Mueller, K.M. Haller, R.L. Wheeler, E.V. Leyendecker, R.L. Wesson, S.C. Harmsen, C.H. Cramer, D.M. Perkins, K.S. Rukstales (2002). Documentation for the 2002 update of the national seismic hazard map, U.S. Geol. Surv. Open-file Report 02-420.

Gardner, J. K., and L. Knopoff (1974). Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian?, Bull. Seism. Soc. Am., v. 64, pp. 1363-1367.

Petersen, M.D., W.A. Bryant, C.H. Cramer, T. Cao, M.S. Reichle, A.D. Frankel, J.J. Lienkaemper, P.A. McCrory, and D.P. Schwartz (1996). Probabilistic seismic hazard assessment for the state of California, California Division of Mines and Geology Open-File Rept. 96-08, U.S. Geol. Surv. Open-File Rept. 96-706.

Petersen, M.D., C.H. Cramer, M.S. Reichle, A.D. Frankel, and T.C. Hanks (2000). Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California, Bull. Seism. Soc. Am., v. 90, pp. 1117-1132.

Petersen, M.D., C.H. Cramer, and A.D. Frankel (2002). Simulations of seismic hazard for the Pacific Northwest of the United States from earthquakes associated with the Cascadia subduction zone, Pure Appl. Geophys, v. 159, pp. 2147-2168.

Sadigh, K., C.Y. Chang, J. Egan, F. Makdisi, and R. Youngs (1997). Attenuation relationships for shallow crustal earthquakes based on California strong motion data, Seism. Res. Letts., v. 68, pp. 180-189.

Spudich, P., W.B. Joyner, A.G. Lindh, D.M. Boore, B.M. Margaris, and J.B. Fletcher (1999). SEA99: A revised ground motion prediction relation for use in extensional tectonic regimes, Bull. Seism. Soc. Am., v. 89, pp. 1156-1170.

Stirling, M. and M. Petersen (2005). Comparison of intensity data with seismic hazard models in the U.S. and New Zealand, preprint.

Toppozada, T., D. Branum, M. Petersen, C. Hallstrom, C. Cramer, and M. Reichle (2000). Epicenters of and areas damaged by M≥5 California earthquakes, 1800-1999, California Division of Mines and Geology Map Sheet 49.

Weldon, R., K. Scharer, T. Fumal, and G. Biasi (2004). Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work, GSA Today, v. 14, no. 9, pp. 4-10.

Wells, D.L. and K.J. Coppersmith (1994). New empirical relationships among magnitude, rupture length, rupture width, and surface displacements, Bull. Seism. Soc. Am., v. 84, pp. 974-1002.

Working Group on California Earthquake Probabilities (1988). Probabilities of large earthquakes occurring in California on the San Andreas fault, U.S. Geol. Surv. Open-file Rept. 88-398.

Working Group on California Earthquake Probabilities (1990). Probabilities of large earthquakes in the San Francisco Bay region, California, U.S. Geol. Surv. Circ. 1053.

Working Group on California Earthquake Probabilities (1995). Seismic hazards in southern California: probable earthquakes, 1994-2024, Bull. Seism. Soc. Am., v. 85, pp. 379-439.

Working Group on California Earthquake Probabilities (1999). Earthquake probabilities in the San Francisco Bay Region: 2000 to 2030 - A summary of findings, U.S. Geol. Surv. Open-File Rept. 99-517.

Working Group on California Earthquake Probabilities (2003). Earthquake probabilities in the San Francisco Bay region: 2002-2031, U.S. Geol. Surv. Open-file Rept. 03-214.

Youngs, R.R., S.J. Chiou, W.J. Silva, and J.R. Humphrey (1997). Strong ground motion attenuation relationships for subduction zone earthquakes, Seism. Res. Letts., v. 68, no. 1, pp. 58-73.


Figure 1: Locations and names of A-faults contained in the source model.


Figure 2: Time-independent (Poisson) map for rock site condition and a 10% probability of exceedance in 30 years. This map was developed from the 2002 national seismic hazard model but also includes the new Poisson model for T.I. Model 3.


Figure 3: Time-dependent map for rock site conditions and a 10% probability of exceedance in 30 years. This map was developed by equally weighting three time-dependent models (T.D. models 1, 2, and 3).


Figure 4: Ratio of the time-dependent map (Figure 3) to the time-independent map (Figure 2) for rock site conditions and a 10% probability of exceedance in 30 years.


Figure 5: Hazard curves showing the annual frequency of exceedance and the peak ground acceleration for three sites for the time-independent (Poisson) and time-dependent models. The annual frequency of exceedance is obtained by taking the 30-year probability and calculating the equivalent annual frequency of exceedance as shown in Appendix B.


TABLE 1: Parameters used in the time-dependent analysis

Fault / segment                        T mean (yr)  T median (yr)  Sigma-P  Sigma-T  Last Event  Elapsed Time (yr)  P in 30 yrs

SOUTHERN SAN ANDREAS FAULT
Model 1:
SAF - Coachella seg.                    87    71    0.39  0.63  1690   316  0.325496
SAF - San Bernardino seg.              130   112    0.19  0.53  1812   194  0.358353
SAF - Mojave seg.                       76    56    0.20  0.80  1857   149  0.342035
SAF - Carrizo seg.                      87    74    0.29  0.58  1857   149  0.441794
SAF - Cholame seg.                      47    37    0.43  0.66  1857   149  0.512431
Model 2:
SAF - Coachella seg.                   182   149    0.39  0.63  1690   316  0.219495
SAF - San Bernardino seg.              182   158    0.19  0.53  1812   194  0.266615
SAF - Mojave seg.                      148   108    0.20  0.80  1857   149  0.234520
SAF - Carrizo seg.                     212   179    0.29  0.58  1857   149  0.198981
SAF - Cholame seg.                     138   111    0.43  0.66  1857   149  0.284790
Model 3: Same as Model 1 for single segments
SAF - Parkfield seg.                    25    23    0.16  0.38  2004     2  0.808552

ELSINORE FAULT
Whittier                               641   553    0.21  0.54   650  1356  0.080761
Elsinore - Glen Ivy seg.               340   292    0.24  0.55  1910    96  0.043533
Elsinore - Temecula seg.               240   206    0.24  0.55  1818   188  0.188134
Elsinore - Julian seg.                 340   294    0.21  0.54  1892   114  0.056206
Elsinore - Coyote Mtn. seg.            625   532    0.27  0.57  1892   114  0.007411
Laguna Salada                          337   287    0.26  0.56  1892   114  0.062880

SAN JACINTO FAULT
SJF - San Bernardino seg.              100    85    0.28  0.57  1890   116  0.412896
SJF - San Jacinto Valley seg.           83    71    0.27  0.57  1918    88  0.475706
SJF - Anza seg.                        250   212    0.29  0.58  1750   256  0.188205
SJF - Coyote Creek seg.                175   146    0.33  0.60  1892   114  0.228000
SJF - Borrego seg.                     175   148    0.29  0.58  1968    38  0.080351
SJF - Superstition Mtn. seg.           500   421    0.31  0.59  1430   576  0.098268
SJF - Superstition Hills seg.          250   212    0.29  0.58  1987    19  0.005682
Imperial                                79    66    0.35  0.61  1979    27  0.359390

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0)      501   452    0.14  0.45  1700   306  0.076498
Cascadia megathrust - top (M 9.0)      501   452    0.14  0.45  1700   306  0.076498
Cascadia megathrust - bottom (M 9.0)   501   452    0.14  0.45  1700   306  0.076498
Cascadia megathrust - old (M 9.0)      501   452    0.14  0.45  1700   306  0.076498


TABLE 2: Probabilities calculated for different time periods

Fault                                            5-years    10-years   20-years   30-years   50-years

NORTHERN SAN ANDREAS FAULT
SAF - Santa Cruz seg.                            0.003315   0.007285   0.017492   0.030944   0.065042
SAF - Peninsula seg.                             0.007566   0.015335   0.031023   0.046477   0.077429
SAF - North Coast seg. (so.)                     0.00135    0.002769   0.005622   0.008434   0.014394
SAF - North Coast seg. (no.)                     0.001522   0.003199   0.006617   0.010385   0.018332
SAF - Santa Cruz & Peninsula                     0.005979   0.012169   0.025033   0.038379   0.066354
SAF - Peninsula & North Coast (so.)              0          0          0          0          0
SAF - North Coast seg. (so. & no.)               0.005943   0.012151   0.02475    0.03756    0.06403
SAF - Santa Cruz, Peninsula & North Coast (so.)  0.0001     0.000203   0.000412   0.000624   0.001055
SAF - Peninsula & North Coast (so. & no.)        0.000277   0.000563   0.001151   0.001753   0.002989
SAF - 1906 Rupture                               0.008707   0.017626   0.035838   0.054407   0.092085
SAF - 1906 Rupture (floating)                    0.011712   0.02413    0.049251   0.074827   0.12933

HAYWARD-RODGERS CREEK
Hayward (southern)                               0.022994   0.045034   0.086287   0.12393    0.189386
Hayward (northern)                               0.026388   0.050625   0.093602   0.130553   0.190905
Hayward (so. & no.)                              0.017676   0.034412   0.065308   0.093136   0.141075
Rodgers Creek                                    0.031004   0.060478   0.115179   0.164745   0.250697
Hayward (no.)-Rodgers Creek                      0.003816   0.007431   0.014118   0.020166   0.030685
Hayward (so. & no.)-Rodgers Creek                0.002133   0.004167   0.00796    0.011421   0.0175
Hayward-Rodgers Creek (floating)                 0.001321   0.00264    0.005271   0.007894   0.013115

CALAVERAS FAULT
Calaveras (southern)                             0.054532   0.099227   0.169106   0.222278   0.300099
Calaveras (central)                              0.03191    0.060155   0.108546   0.148665   0.211439
Calaveras (so. & cent.)                          0.01114    0.021149   0.038684   0.053678   0.077973
Calaveras (northern)                             0.025715   0.049964   0.094514   0.134474   0.203198
Calaveras (cent. & no.)                          0.00064    0.00125    0.002389   0.003436   0.005289
Calaveras (so., cent. & no.)                     0.004166   0.008054   0.01517    0.021572   0.032686
Calaveras (entire floating)                      0.013752   0.02721    0.053276   0.078262   0.125224
Calaveras (so. & cent. floating)                 0.052935   0.101711   0.188368   0.262716   0.382761

CONCORD-GREEN VALLEY FAULT
Concord                                          0.009858   0.019288   0.036985   0.053301   0.082437
Green Valley (southern)                          0.004674   0.009117   0.017381   0.024932   0.038312
Concord-Green Valley (so.)                       0.003176   0.006212   0.011913   0.017178   0.026628
Green Valley (northern)                          0.012413   0.024104   0.045597   0.064948   0.098554
Green Valley (so. & no.)                         0.006387   0.012445   0.023686   0.033925   0.051985
Concord-Green Valley (entire)                    0.011831   0.023175   0.044537   0.064319   0.099852
Concord-Green Valley (floating)                  0.012023   0.023583   0.045412   0.065668   0.102079

SAN GREGORIO FAULT
San Gregorio (southern)                          0.004422   0.008785   0.017328   0.025643   0.041629
San Gregorio (northern)                          0.007545   0.014956   0.029378   0.043265   0.069491
San Gregorio (so. & no.)                         0.004749   0.009463   0.018776   0.02793    0.045732
San Gregorio (floating)                          0.003797   0.007577   0.015088   0.022533   0.037228

GREENVILLE FAULT
Greenville (southern)                            0.005894   0.011737   0.023248   0.034542   0.056504
Greenville (northern)                            0.005408   0.010775   0.021385   0.031832   0.052235
Greenville (so. & no.)                           0.002846   0.005672   0.011258   0.016761   0.027522
Greenville (floating)                            0.000791   0.001582   0.003161   0.004738   0.007881
Mt. Diablo Thrust                                0.014486   0.028646   0.056056   0.082298   0.131579

SOUTHERN SAN ANDREAS FAULT
Model 1:
SAF - Coachella seg.                             0.06465    0.124687   0.23234    0.32550    0.47644
SAF - San Bernardino seg.                        0.07155    0.13791    0.25648    0.35835    0.52097
SAF - Mojave seg.                                0.06951    0.133372   0.24619    0.34203    0.49377
SAF - Carrizo seg.                               0.09397    0.178644   0.32375    0.44179    0.61659
SAF - Cholame seg.                               0.11673    0.218408   0.38475    0.51243    0.68805
Model 2:
SAF - Coachella seg.                             0.04083    0.079849   0.15281    0.21950    0.33629
SAF - San Bernardino seg.                        0.04976    0.097313   0.18603    0.26662    0.40559
SAF - Mojave seg.                                0.04421    0.086231   0.16413    0.23452    0.35575
SAF - Carrizo seg.                               0.03490    0.069191   0.13566    0.19898    0.31523
SAF - Cholame seg.                               0.05454    0.106061   0.20062    0.28479    0.42620
SAF - Parkfield seg.                             0.00105    0.046858   0.45975    0.80855    0.98359

ELSINORE FAULT
Whittier                                         0.01397    0.027726   0.05464    0.08076    0.13073
Elsinore - Glen Ivy seg.                         0.00550    0.011715   0.02626    0.04353    0.08546
Elsinore - Temecula seg.                         0.03315    0.065619   0.12840    0.18813    0.29805
Elsinore - Julian seg.                           0.00770    0.016094   0.03490    0.05621    0.10515
Elsinore - Coyote Mtn. seg.                      0.00085    0.001844   0.00429    0.00741    0.01590
Laguna Salada                                    0.00888    0.018438   0.03950    0.06288    0.11527

SAN JACINTO FAULT
SJF - San Bernardino seg.                        0.08470    0.162433   0.29884    0.41290    0.58724
SJF - San Jacinto Valley seg.                    0.10098    0.192404   0.34915    0.47571    0.65876
SJF - Anza seg.                                  0.03391    0.066783   0.12947    0.18820    0.29452
SJF - Coyote Creek seg.                          0.04016    0.079619   0.15589    0.22800    0.35820
SJF - Borrego seg.                               0.00694    0.016449   0.04344    0.08035    0.17599
SJF - Superstition Mtn. seg.                     0.01707    0.033859   0.06661    0.09827    0.15845
SJF - Superstition Hills seg.                    0.00007    0.000278   0.00171    0.00568    0.02628
Imperial                                         0.04985    0.107736   0.23435    0.35939    0.56901

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0)                0.01239    0.024937   0.05048    0.07650    0.12949
Cascadia megathrust - top (M 9.0)                0.01239    0.024937   0.05048    0.07650    0.12949
Cascadia megathrust - bottom (M 9.0)             0.01239    0.024937   0.05048    0.07650    0.12949
Cascadia megathrust - old (M 9.0)                0.01239    0.024937   0.05048    0.07650    0.12949


Section II

A Synoptic View of Southern San Andreas Fault Paleoseismic Constraints

by Ray Weldon

The seismic hazard associated with the Southern San Andreas fault has traditionally been based on conceptual recurrence and segmentation models parameterized by the locally available paleoseismic data (previous Working Groups on California Earthquake Probabilities). While this may be the best way to infer hazard, we are exploring an alternative approach that attempts to determine hazard directly from the data without appealing to simple models, or at least to expand the range of possible models allowed by the paleoseismic data. To accomplish this goal we have gathered all of the existing paleoseismic data from the Southern San Andreas fault, including complete probability density functions that describe the range of possible ages for the events, the coseismic displacement associated with as many ruptures as possible, and the slip rate across the fault. From this data set we explore the range of possible fault behaviors by constructing rupture scenarios. To date these data have only been qualitatively interpreted. Figure 1 shows three scenarios that have been discussed by Weldon et al. (2004, GSA Today 14, pp. 4-10; and 2005, Science, reproduced here in Appendix A). Scenario (A) describes the data with highly periodic, alternating north and south Southern San Andreas ruptures, each of characteristic size. Scenario (B) attempts to explain the data with as great a variety of ruptures as possible, and (C) describes the data as a combination of long ruptures that span the entire Southern San Andreas fault, with 1812-type earthquakes in between to satisfy the paleoseismic data. Obviously, many more possibilities exist.

To explore fully, and objectively, the range of rupture scenarios consistent with the paleoseismic data, we have automated the process of generating scenarios. We link sites as one might string pearls, with first one site, then one and its neighbor, and so on, to include all adjoining site linkages (see Figure 2). We accommodate dating uncertainty by allowing a rupture to include a site even though the original record did not report an event at that time. We apply a likelihood penalty to such ruptures, but this approach keeps absent or incorrect event information at an individual site from trumping a rupture otherwise favored by adjoining paleoseismic records. Rupture likelihood also considers consistency with dating and surface-displacement evidence.

Scenarios, which we call “possible fault histories,” are constructed by drawing from the pool of all possible ruptures until all reported events from all the sites have been included. Each rupture history can thus be regarded, with greater or lesser probability, as what might have happened, given all available evidence. We assign likelihoods to rupture histories based on the combined likelihood of their contributing ruptures.


By generating thousands of histories and keeping the most likely ones, we obtain rupture chronologies, locations, and lengths that can be translated into the history of ground shaking near the San Andreas fault and evaluated for their seismic-hazard implications. Two examples are shown in Figure 3.
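The description above maps naturally onto a simple enumerate-and-sample procedure. The Python sketch below, with entirely hypothetical site names, event dates, and penalty values, illustrates the two core steps: enumerating all contiguous site linkages as candidate ruptures, and assembling weighted "possible fault histories" from them. It is a schematic of the approach under these assumptions, not the actual scenario-generator code.

```python
import itertools
import random

# Hypothetical paleoseismic sites, ordered along the fault, each with the
# approximate event dates reported in its trench record (illustrative values).
sites = ["PK", "CP", "PC", "WW", "IO"]
events = {"PK": [1857, 1690], "CP": [1857, 1680], "PC": [1857, 1812, 1690],
          "WW": [1812, 1685], "IO": [1690, 1500]}

def candidate_ruptures():
    """Enumerate every rupture spanning a contiguous run of sites ('stringing pearls')."""
    for i, j in itertools.combinations(range(len(sites) + 1), 2):
        yield sites[i:j]

def rupture_likelihood(span, date, tolerance=40):
    """Score a rupture of the given span at the given date: sites whose records lack a
    matching event within the dating tolerance incur a penalty rather than a veto."""
    score = 1.0
    for s in span:
        if not any(abs(date - d) <= tolerance for d in events[s]):
            score *= 0.2  # hypothetical penalty for a 'missing' event at this site
    return score

def sample_history():
    """Draw ruptures until every reported event at every site is covered;
    return the history and its combined likelihood."""
    uncovered = {(s, d) for s in sites for d in events[s]}
    history, likelihood = [], 1.0
    while uncovered:
        s, d = random.choice(sorted(uncovered))
        span = random.choice([sp for sp in candidate_ruptures() if s in sp])
        history.append((span, d))
        likelihood *= rupture_likelihood(span, d)
        uncovered -= {(x, d2) for x in span for d2 in events[x] if abs(d2 - d) <= 40}
    return history, likelihood

# Generate many histories and keep the most likely ones.
best = sorted((sample_history() for _ in range(1000)), key=lambda h: -h[1])[:10]
print(best[0][1])  # likelihood of the most probable sampled history
```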

Figure 1 - Three possible rupture sequences on the southern San Andreas fault. Vertical colored bars are 95% confidence ranges in age for earthquakes at the sites listed at the lower margin, and horizontal bars are rupture extent. Open boxes represent multiple-event age ranges; the individual event ages are unknown. Grey shading indicates regions and times with no data. (A) In this model, the data are interpreted in terms of north and south ruptures with substantial overlap, and the 1812 event is anomalous. (B) A random-looking distribution of event timing and rupture lengths is constructed to fit the data. (C) “Wall to wall” ruptures are used to explain the data with a minimum number of earthquakes; in this scenario additional small earthquakes, like 1812, are necessary to explain the data, and 1857 was anomalously short. Site abbreviations (see appendices for references): PK—Parkfield; LY—Las Yeguas, Young et al. (2002); CP—Carrizo Plain, integration of Liu et al. (2004), Sims (1994), Grant and Sieh (1994); FM—Frasier Mountain, Lindvall et al. (2002); 3P—Three Points, reinterpretation of Rust (1982); LR—Littlerock, reinterpretation of Schwartz and Weldon (1986); PC—Pallett Creek, Salyards et al. (1992), Biasi et al. (2002), Sieh et al. (1989); WW—Wrightwood, Biasi et al. (2002), Fumal et al. (2002b), Weldon et al. (2002); CC—Cajon Creek, Weldon and Sieh (1985); PT—Pitman Canyon, Seitz et al. (2000), G. Seitz et al. (2003, personal commun.); PL—Plunge Creek, McGill et al. (2002); BF—Burro Flats, Yule and Sieh (2001), D. Yule and K. Sieh (2003, personal commun.); TP—Thousand Palms Oasis, Fumal et al. (2002a); IO—Indio, reinterpretation of Sieh (1986), Sieh and Williams (1990), Fumal et al. (2002a); SC—Salt Creek, Williams (1989), P.L. Williams (2003, personal commun.).

Figure 2 – Left: a hypothetical set of overlapping event ages from 4 sites. All possible ruptures that include these 4 sites can be constructed by progressively linking the sites, beginning with 1 earthquake per site, up to a single earthquake that spans all 4 sites. Right: three possible results for 4 earthquakes at each of 4 sites. Each possible history is then weighted according to its consistency with the available age control, displacement per event, and fault slip rate/moment rate.


Figure 3 - Two possible earthquake histories generated by a preliminary version of our scenario generator, with the rupture lengths of 1857, 1812, and the immediately prehistoric ~1690 event shown for comparison. (Annotations in the original figure note that the most recent part of the record cannot be run until historical and C-14 data are combined, and that before about A.D. 600 there are not enough data to compare sites.)


APPENDIX A

PERSPECTIVES

GEOPHYSICS: Past and Future Earthquakes on the San Andreas Fault

Ray J. Weldon, Thomas E. Fumal, Glenn P. Biasi, Katherine M. Scharer*

The San Andreas fault is one of the most famous and--because of its proximity to large population centers in California--one of the most dangerous earthquake-generating faults on Earth. Concern about the timing, magnitude, and location of future earthquakes, combined with convenient access, have motivated more research on this fault than on any other. In recent years, an increasing number of sites along the fault have provided evidence for prehistoric earthquakes (1, 2).

Damaging earthquakes are generated by rupture that can span hundreds of kilometers on a fault. Data from many sites must therefore be integrated into "rupture scenarios"--possible histories of earthquakes that include the date, location, and size (length of fault rupture) of all earthquakes on a fault during a period of time. Recently, rupture scenarios for the southern San Andreas fault have stimulated interest in how different scenarios affect interpretations of seismic hazard and underlying models of earthquake recurrence behavior.

Large earthquakes occur infrequently on individual faults. Scientists therefore cannot test recurrence models for damaging earthquakes by waiting for a series of large earthquakes to occur or by consulting instrumental records, which span at most 100 years. Records of large earthquakes must be dug out of the geologic record to characterize earthquakes that predate the instrumental record.

Such studies tend to provide samples of the date and ground displacement at isolated sites along the ruptures, hundreds of kilometers long, caused by large paleoearthquakes. Key insights into fault recurrence behavior have been gained from site-specific data on the southern San Andreas fault (3, 4). However, measurements of the date and displacement often vary considerably between sites. Further advances in understanding the San Andreas fault will require the construction of rupture scenarios. Given the large body of data and recent advances in interpretive methodology, this goal is now within reach for the southern San Andreas fault.

Ruptures on the southern San Andreas fault. Lines are probability density functions for the dates of individual earthquakes, colored by site; the 1812 and 1857 earthquakes have exact dates. Peaks and valleys in the smoothed sum of the individual probability density functions suggest that large parts of the fault rupture every ~200 years in individual large earthquakes or series of a few earthquakes. See (2) for site locations and data sources.

To date, 56 dates of prehistoric earthquakes have been published, based on data from 12 sites on the southern 550 km of the San Andreas fault. There are also about 10 paleoseismic records for the earthquakes of 1857 and 1812. Analysis of these data (4, 5) yields probability density functions of the dates of the earthquakes (see the first figure). These date distributions provide the raw material for correlating ruptures from site to site and the first step toward constructing a history of large earthquakes.

Unfortunately, rupture scenarios based on earthquake date alone are poorly constrained and support a wide range of earthquake models. The existing data can be explained by highly periodic overlapping ruptures (see the second figure, top panel), randomly distributed ruptures (middle panel), and even repeated ruptures spanning the entire southern San Andreas fault (bottom panel). Each model implies a different level of hazard to the Los Angeles region (see the figure legend, second figure) and supports a different physical model of faulting (2).

Strong overlap of event dates (see the first figure) may occur when many sites along the fault record the same earthquake or a sequence of earthquakes occur within years or decades. Poor or no overlap may indicate earthquakes with lesser rupture extent or errors in the dating and interpretation of paleoseismic data. Given the rupture lengths of the 1812 and 1857 earthquakes (~150 and 300 km, respectively) and the lack of substantial rupture in the 148 years since 1857, most scientists doubt the possibility of frequent small ruptures on the southern San Andreas fault. Three recent developments strengthen the hypothesis that the fault breaks in relatively infrequent, large earthquakes.

Cartoon of rupture scenarios. Black boxes denote paleoearthquake dates at sites along the fault. Black horizontal bars show the extents of the 1857 and 1812 ruptures. Three scenarios accommodate all dates. (Top) Ruptures spanning the northern two-thirds of the fault (like the 1857 earthquake) alternate with shorter ruptures centered on the southern third. This model yields a conditional probability of earthquake recurrence of ~70% in the next 30 years, largely due to the long time since a southern event. (Middle) Ruptures of variable length recur randomly.

This model yields a conditional probability of ~40% in the next 30 years assuming Poisson behavior. (Bottom) Long ruptures (violet) span most of the fault, with small additional ruptures (like the 1812 earthquake) (orange) to satisfy all dates. This model yields a conditional probability of ~20% in the next 30 years, assuming quasi-periodic behavior of short and long events.


First, the relationship between a displacement observed at a site and the probability of seeing the same rupture at the next site along the fault has been quantified (6). Commonly observed displacements of 4 to 7 m (7, 8) imply rupture lengths of more than 100 km (9), much more than the distances between paleoseismic sites. Date ranges from nearby sites that overlap poorly are thus likely in error. Second, different chemical, physical, and biological fractions of materials such as peat and charcoal yield very different radiocarbon dates (10-12). Because the type of material varies between sites, overlap of dates may be imperfect even if a single rupture spans the sites. Third, careful documentation of evidence from multiple excavations (8, 10-12) shows a wide range in the quality of event evidence from excavation to excavation and site to site. Thus, the evidence for some paleoearthquakes may have been misinterpreted.

A much clearer picture of earthquakes on the southern San Andreas fault should emerge in the next 5 to 10 years. The groups of earthquake date ranges seen every ~200 years in the first figure will probably withstand this reevaluation. Some of these groups contain a single earthquake that ruptured through many sites and may have ruptured large parts of the southern San Andreas fault. Others contain multiple earthquakes at individual sites and could be multiple earthquakes with overlapping ruptures, like the 1812 and 1857 earthquakes (see the second figure). The current 148-year hiatus is probably not exceptional. However, no lull in the past 1600 years appears to have lasted more than ~200 years, and when the current hiatus ends, a substantial portion of the fault is likely to rupture, either as a single long rupture or a series of overlapping ruptures in a short time interval.

References and Notes

1. L. Grant, W. R. Lettis, Eds., Special issue on The Paleoseismology of the San Andreas Fault, Bull. Seismol. Soc. Am. 92 (2002).

2. R. J. Weldon, K. M. Scharer, T. E. Fumal, G. P. Biasi, GSA Today 14, 4 (September 2004).

3. K. Sieh, M. Stuiver, D. Brillinger, J. Geophys. Res. 94, 603 (1989).

4. G. P. Biasi, R. J. Weldon, T. E. Fumal, G. G. Seitz, Bull. Seismol. Soc. Am. 92, 2761 (2002)

5. At four further sites, no exact dates can be determined for individual earthquakes, but the data can be related to other paleoearthquakes and thus help constrain the overall rupture scenarios.

6. G. P. Biasi, R. J. Weldon, Bull. Seismol. Soc. Am., in press.

7. J. Liu, Y. Klinger, K. Sieh, C. M. Rubin, Geology 32, 649 (2004).

8. R. J. Weldon et al., Bull. Seismol. Soc. Am. 92, 2704 (2002).

9. D. L. Wells, K. J. Coppersmith, Bull. Seismol. Soc. Am. 84, 974 (1994).


10. T. E. Fumal et al., Bull. Seismol. Soc. Am. 92, 2726 (2002).

11. G. G. Seitz, thesis, University of Oregon (1999).

12. K. M. Scharer, thesis, University of Oregon (2005).

10.1126/science.1111707

R. J. Weldon and K. M. Scharer are in the Department of Geological Sciences, University of Oregon, Eugene, OR 97403, USA. T. E. Fumal is with the Earthquake Hazards Team, U.S. Geological Survey, Menlo Park, CA 94025, USA. G. P. Biasi is at the Seismological Laboratory, University of Nevada, Reno, NV 89557, USA.


A Framework for Developing a Time-Dependent Earthquake Rupture Forecast

by Edward (Ned) Field

Introduction

This document outlines an attempt to establish a simple framework that can accommodate a variety of approaches to modeling time-dependent earthquake probabilities. There is an emphasis on modularity and extensibility, so that simple, existing approaches can be applied now, yet more sophisticated approaches can also be added later. A primary goal is to relax the assumption of persistent rupture boundaries (segmentation) and to allow fault-to-fault jumps in a time-dependent framework (no solution to this problem has previously been articulated, at least not as far as the author and other members of the WGCEP are aware).

Long-Term Earthquake Rate Model

The purpose of this model is to give the long-term rate of all possible earthquake ruptures (magnitude, rupture surface, and average rake) for the entire region. Of course “all possible” might include an infinite number, so what we mean is all those at some level of discretization (and above some magnitude cutoff) that is sufficient for hazard and loss estimation. Although a tectonic region may evolve (with old faults healing and new faults being created), the concept of a long-term model is legitimate in that there will be some absolute rate of ruptures over any given time span. By “long” we simply mean long enough to capture the statistics of events relevant to the forecast duration, and short enough that the system does not significantly change.

Known faults provide a means of identifying where future ruptures will occur. Given the fault slip rate, seismogenic depths, and knowledge or assumptions about the spatial extent of ruptures and/or magnitude-frequency distributions, it is possible to solve for the relative rate of all ruptures on each fault. We can write the rate of these discrete events as:

FRf,m,r = rate of the rth rupture for the mth magnitude on the fth fault (where the possible rupture surfaces will depend on the magnitude)

Of course, known faults do not provide the location of all possible ruptures, so we need to account for off-fault seismicity as well (often referred to as “background” seismicity). Such seismicity is usually modeled with a grid of Gutenberg-Richter (GR) sources, where the a-values and perhaps b-values are spatially variable. Regardless of the magnitude-frequency distribution, we can write the rate of background seismicity as:

BRi,j,m = background rate of the mth magnitude at the ith latitude and jth longitude

An example of background seismicity from the National Seismic Hazard Mapping Program 1996 model (NSHMP, 1996) is shown in Figure 1, along with their fault ruptures.


The spatial variation in their background rates is determined from smoothed historical seismicity, although stressing rates from a deformation model could be used as well.

One problem with using gridded-seismicity models is that they don't explicitly provide a finite rupture surface for the events, as needed for seismic-hazard analysis (SHA); rather, they provide hypocentral locations (and treating ruptures as point sources underestimates hazard). One must, therefore, either construct and consider all possible finite rupture surfaces, or assign a single surface at random as was done in the NSHMP 1996 and 2002 models (NSHMP, 1996 & 2002). This highlights one of the primary advantages of fault-based models: they provide valuable information about rupture surfaces.

Figure 1. Fault (red) and background-grid (gray) rupture sources from the 1996 NSHMP model for southern California (the darkness of the grid points is proportional to the a-value of the background GR seismicity).


Relaxing The Assumption of Fixed Rupture Boundaries: Previous WGCEPs have assumed, at least for time-dependent probabilities, that faults are segmented (an example for the Hayward/Rodgers-Creek fault from the 2002 WGCEP (WGCEP, 2003) is given below in Figure 2). This means that ruptures can occur on one or perhaps more segments, but cannot occur on just part of any segment. Previous models have also precluded ruptures that jump from one fault to another, as occurred in both the 1992 M 7.2 Landers and the 2002 M 7.9 Denali earthquakes. For the latter, rupture began on the Susitna Glacier fault, jumped onto the Denali fault, and then jumped off onto the Totschunda fault. The following outlines a recipe for building a model that relaxes these assumptions. It is basically a hybrid between the models outlined by Field et al. (1999) and Andrews and Schwerer (2000), although the latter deserves most of the credit.

We start with a model of fault sections and slip rates, such as that shown in Figure 1 (although applied statewide here). Note that fault sectioning has been applied only for descriptive purposes, and should not be interpreted as rupture segmentation. The first task is to subsection these faults into smaller increments (~5 km lengths) such that further sub-sectioning would not influence SHA (from here on we refer to these ~5 km subsections as “sections”). Following Andrews and Schwerer (2000), but for a larger region and using smaller fault sections, we then define every possible earthquake rupture involving two or more contiguous sections (at least two because we consider only events that rupture the entire seismogenic thickness). We use an indicator matrix, Imi, containing 1s and 0s to denote whether the ith section is involved in the mth rupture (“1” if yes and zero otherwise). The M ruptures so defined can include those that involve fault-to-fault jumps if deemed appropriate (e.g., for faults that are separated by less than 10 km). We then use the slip rate on each section to constrain the rate of each rupture:

Σ_m I_mi u_m f_m = r_i

where fm is the desired long-term rate of the mth rupture, um is the average slip of the mth rupture (e.g., obtained from a magnitude-area relationship), and ri is the known slip rate of the ith section. This system of equations can be solved for all fm. However, it is underdetermined, so an infinite number of solutions exist. Our task now is simply to add equations in order to achieve a unique solution, or at least to significantly narrow the solution space. Again following Andrews and Schwerer (2000), we can add positivity constraints for all fm, as well as equations that represent the constraint that rates for the region as a whole must exhibit a Gutenberg-Richter distribution (taking into consideration uncertainties in the latter at the largest magnitudes). This is the extent to which Andrews and Schwerer (2000) took their model, and they concluded that additional constraints would be needed to get reliable estimates of the rate of any particular event. It is doubtful that our extension of their San Francisco Bay Area model to the entire state (including significantly smaller fault sections) will change that conclusion.

Therefore, we need to apply whatever additional constraints we can find to narrow the solution space as much as possible. One approach is to penalize any ruptures that are thought to be less probable. For example, there might be reasons to believe that ruptures do not pass certain points on a fault. Such segmentation could easily be imposed. In addition, dynamic rupture modeling might be able to support the assertion that certain fault-to-fault rupture geometries are less probable.


It remains to be seen how best to apply such constraints in the inversion, but one simple approach would be to force these rates to be some fraction of their otherwise unconstrained values. We can also add equations representing observed rupture rates from historical seismicity (e.g., Parkfield) or from paleoseismic studies at specific fault locations. Furthermore, the systematic interpretation of all southern San Andreas data by Weldon et al. (2005) will be particularly important to include. Finally, assumptions could be made (and associated constraints applied) as to the magnitude-frequency distribution of particular faults (e.g., Field et al. (1999) assumed a percentage of characteristic versus Gutenberg-Richter events on all faults, and tuned this percentage to match the regional rates). However, to the extent that fault-to-fault jumps are allowed, it becomes difficult to define exactly what “the fault” is when it comes to an assumed magnitude-frequency distribution. Also following Field et al. (1999), off-fault seismicity can be modeled with a Gutenberg-Richter distribution that makes up for the difference between the regional moment rate and the fault-model moment rate, and where the maximum magnitude is uniquely defined by matching the total regional rate of events above the lowest magnitude.

The important point is that we define all possible fault rupture events (with no a priori segment boundaries and allowing fault-to-fault jumps) and solve for the rates via a linear inversion that applies all presently available constraints (including strict segmentation if appropriate). We will thereby have a formal, tractable, mathematical framework for adding additional constraints when they become available in the future. One might be concerned that the solution space will still be large after applying all available constraints. However, this is exactly what we need to know for SHA, as the solution space will represent all viable models, which can and should be accommodated as epistemic uncertainties in the analysis. The size of this inversion may be problematic, although Tom Jordan says it's no big deal.
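As a concrete illustration of the kind of inversion described above, the following minimal Python sketch builds the indicator matrix for all contiguous two-or-more-section ruptures on a single hypothetical fault and solves for non-negative rupture rates that fit the section slip rates in a least-squares sense. The section count, slip rates, and slip-versus-length scaling are placeholders; the real problem would also append the Gutenberg-Richter, paleoseismic, and other constraints as extra rows.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical fault: 8 sections of ~5 km each, with assumed slip rates (mm/yr).
n_sec = 8
slip_rate = np.array([9.0, 9.0, 8.5, 8.0, 8.0, 7.5, 7.0, 7.0])   # r_i

# Enumerate every rupture made of two or more contiguous sections.
ruptures = [(i, j) for i in range(n_sec) for j in range(i + 2, n_sec + 1)]

def avg_slip_mm(n_sections):
    """Placeholder slip scaling: average slip (mm) grows with rupture length."""
    return 1000.0 * 0.5 * np.sqrt(n_sections * 5.0 / 50.0)       # u_m (illustrative only)

# Build the system  sum_m I_mi * u_m * f_m = r_i  (one row per section).
A = np.zeros((n_sec, len(ruptures)))
for m, (i, j) in enumerate(ruptures):
    A[i:j, m] = avg_slip_mm(j - i)

# Non-negative least squares enforces the positivity constraint f_m >= 0.
rates, residual = nnls(A, slip_rate)

for (i, j), f in zip(ruptures, rates):
    if f > 0:
        print(f"rupture spanning sections {i}-{j - 1}: rate = {f:.2e} /yr")
```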

In conclusion, the long-term model simply represents an answer to the question: Over a long period of time, what is the rate of every possible discrete rupture? There will, of course, be strong differences of opinion on what such a model should look like, especially when it comes to whether strict fault segmentation is enforced and whether and how different faults can link up to produce even larger events. However, the fact that there is some long-term rate of events cannot be disputed, especially if we specify what that long time span actually is. The question then becomes, given a viable long-term model (of which there will be many for reasons just stated), how do we make a credible time-dependent forecast for a future time span?

Rate Changes in the Long-Term Model

Just as everyone would agree that there is some rate of events over a long period of time (given by the long-term model), so too would they agree that these rates vary to some degree on shorter time scales. Because large events occur infrequently, we are only able to demonstrate statistically significant rate changes for all events above some magnitude threshold, which we take as M=5 here since this is the threshold of interest for hazard. Suppose we have a model that can predict the average rate change for M≥5 events, relative to the long-term model, over a specified time span and as a function of space:

ΔNi,j(timeSpan)


where i and j are latitude and longitude indices, and timeSpan includes both a start time and a duration. Then we can simply apply these rate changes to all ruptures in the long-term model in order to make a time-dependent forecast for that specific time span. The only slightly tricky issue is how rate changes are assigned to large fault ruptures. If we assume a uniform probability distribution of hypocenters, then we can simply apply the average ΔNi,j predicted over the entire rupture surface.

The obvious assumption here is that the change in rate of M≥5 events implies an equivalent change in the rate of larger events. This seems reasonable in that large events must start out as small events, so an increase in smaller events implies an increased probability of large-event nucleation. This is precisely the assumption made in the “empirical” model applied by the 2002 WGCEP (where all long-term rates were scaled down by an amount determined by temporal extrapolation of the post-1906 regional seismicity rate reduction). The assumption is also implicit in the time-dependent models of Stein et al. (e.g., 1997), and in the Short-Term Earthquake Probability (STEP) model of Gerstenberger et al. (2005), which uses foreshock/aftershock statistics. The wide application of this assumption therefore suggests that it is a viable approach for time-dependent earthquake forecasts.

Of course the challenge is to come up with credible models of ΔNi,j(timeSpan). Here again, there is likely to be a wide range of options (three having just been mentioned above). Fortunately we can, in principle, apply any that are available, including alarm-based predictions (where a declaration is made that an earthquake will occur in some polygon by some date), as long as the probability of being wrong is quantified. Care will be needed to make sure the ΔNi,j(timeSpan) model is consistent with the long-term model (e.g., moment rates must balance over the long term; stated another way, no double counting).
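A minimal sketch of how such a rate-change field could be applied to the long-term model is given below (Python). The grid, gain values, and rupture-surface cells are hypothetical; the key step is simply averaging ΔN over the grid cells touched by a rupture and scaling that rupture's long-term rate.

```python
import numpy as np

# Hypothetical M>=5 rate-change (gain) field on a lat/lon grid for a given time span:
# values > 1 mean elevated rates relative to the long-term model, < 1 mean suppressed.
gain_grid = np.ones((40, 60))
gain_grid[10:14, 20:30] = 1.8      # e.g., an aftershock-driven rate increase
gain_grid[25:35, 5:15] = 0.7       # e.g., a regional post-mainshock lull

def time_dependent_rate(long_term_rate, rupture_cells):
    """Scale a rupture's long-term rate by the average gain over its surface,
    assuming hypocenters are uniformly distributed over the rupture."""
    gains = [gain_grid[i, j] for (i, j) in rupture_cells]
    return long_term_rate * float(np.mean(gains))

# A hypothetical fault rupture that passes through the elevated-rate patch.
rupture_cells = [(11, c) for c in range(18, 32)]
print(time_dependent_rate(0.002, rupture_cells))
```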

For models such as STEP that predict changes that are a function of magnitude as well (e.g., by virtue of b-value changes), then rates should be modified on a magnitude-by-magnitude basis using:

ΔNi,j,m(timeSpan)

(where a subscript for magnitude has been added). The advantage here, relative to the current STEP implementation, is that the upper magnitude cutoff is not arbitrary and the largest ruptures are not treated as point sources, but rather come from the fault-based ruptures defined in the long-term model.

Some might question the wisdom of applying the time-dependent modifications of the background model described here. The answer comes down to whether, after considering the use of the final outcome and potential problems with the model, we are better off applying it than using only Poisson probabilities. For example, if the California Earthquake Authority (CEA) wants forecasts updated on a yearly basis, should we or should we not include foreshock/aftershock statistics? We obviously have to apply the approach presented here to some extent if we are to model the post-1906 seismicity lull as was done by the 2002 WGCEP.


Time-Dependent Conditional Probabilities

From the long-term model we have the rate of all possible earthquake ruptures (FRf,m,r & BRi,j,m), perhaps modified (or not) by a rate-change model ΔNi,j(timeSpan) as just described. The question here is how to modify the fault-rupture probabilities based on information such as the date of, or amount of slip in, previous events. As demonstrated by the 1988 WGCEP, computing conditional probabilities is relatively straightforward if strict segmentation is applied (ruptures are confined to specific segments, with no possibility of overlap or multi-segment ruptures). Using the average recurrence interval (perhaps determined by moment balancing), a coefficient of variation, the date of the most recent event, and an assumed distribution (e.g., lognormal), one can easily compute the conditional probability of occurrence for a given time span. Unfortunately the procedure is not so simple if one allows multi-segment ruptures (sometimes referred to as cascade events).

The WGCEP-2002 approach: The long-term model developed by the 2002 WGCEP resulted in a moment-balanced relative frequency of occurrence for each single and multi-segment rupture combination on each fault. A simplified example of the possible ruptures and their frequencies of occurrence for the Hayward/Rodgers-Creek fault is shown in Figure 2 (see the caption for details, and note that none of the simplifications influence the conclusions drawn here). Again, the frequency of each rupture has been constrained to honor the long-term moment rate of each fault section. Let's now look at how they computed conditional probabilities for each earthquake rupture. We will focus here on their Brownian Passage Time (BPT) model, but the basic conclusions drawn apply to their other two conditional probability models as well (the BPT-step and time-predictable models).

They used the BPT model to compute the conditional probability that each segment will rupture, using the long-term rupture rate for each segment (which is simply the sum of the rates of all ruptures that include that segment) and the date of the last event on the segment. They then partitioned each segment probability among the events that include that segment, along with the probability that the rupture would nucleate in that segment, to give the conditional probability of each rupture. In other words, each segment was treated as a point process. Therefore, if a segment has just ruptured by itself, for example, there should be a near-zero probability that it will rupture again soon after (according to their renewal model). However, there is nothing in their model stopping a neighboring segment from triggering a multi-segment event that re-ruptures the segment that just ruptured. In other words, one point process can be reset by a different, independent point process, which seems to violate the concept of a point process.
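For the strictly segmented case described above, the conditional probability calculation is short enough to show directly. The Python sketch below computes the probability of rupture in a coming time window from a lognormal renewal model, given a median recurrence interval, a log-space sigma, and the elapsed time since the last event. The parameter values are illustrative only and are not intended to reproduce the report's tabulated probabilities, which combine intrinsic and parametric uncertainties.

```python
from math import log, sqrt, erf

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal recurrence-interval distribution with the given
    median and log-space standard deviation."""
    if t <= 0:
        return 0.0
    z = log(t / median) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))

def conditional_probability(median, sigma, elapsed, duration):
    """Probability of rupture in the next `duration` years, given `elapsed`
    years since the last event (simple renewal-model conditioning)."""
    F = lognormal_cdf
    return (F(elapsed + duration, median, sigma) - F(elapsed, median, sigma)) / \
           (1.0 - F(elapsed, median, sigma))

# Illustrative values only (not the report's tabulated parameters):
print(conditional_probability(median=150.0, sigma=0.5, elapsed=100.0, duration=30.0))
```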


Figure 2. Example of a long-term, moment balanced rupture model for the Hayward/Rodgers-Creek fault, obtained from the WGCEP-2002 Fortran code. The image, taken from the WGCEP-2002 report, shows the segments on the left and the possible ruptures on the right. The tables below give information including the rate of each earthquake rupture and the rate that each segment ruptures. Note that this example represents a single iteration from their code, where modal values in the input file were given exclusive weight, aseismicity parameters were set to 1.0, no GR-tail seismicity was included, sigma of the characteristic magnitude-frequency distribution was set to zero, and the floating ruptures were given zero weight (specifically, the line on the input-file that specified the segmentation model that was given exclusive weight read: “0.11 0.56 0.26 0.07 0.00 0.00 0.00 0.00 0.00 0.00 model-A-modified”)

Segment Info

Name   Length (km)   Width (km)   Slip rate (mm/yr)   Rupture Rate (/yr)   Date of Last Event
HS        52.54         12              9                 3.87e-3               1868
HN        34.89         12              9                 3.95e-3               1702
RC        62.55         12              9                 4.08e-3               1740

Rupture Info

Name        Mag    Rate (/yr)
HS          7.00   1.28e-3
HN          6.82   1.02e-3
HS+HN       7.22   2.16e-3
RC          7.07   3.32e-3
HN+RC       7.27   0.32e-3
HS+HN+RC    7.46   0.44e-3
floating    6.9    0

This issue is illustrated by the Monte Carlo simulations shown in Figure 3, in which WGCEP-2002 probabilities of rupture were computed for 1-year time intervals, ruptures were allowed to occur at random according to their probabilities for that year, dates of last events were updated on each relevant segment when a rupture occurred, and probabilities were updated for the next year. This process was repeated for 10,000 1-year time steps. The simulation results show that segment ruptures occur more often than they should soon after previous ruptures, which is at odds with the model used to predict the segment probabilities in the first place. Thus, there seems to be a logical inconsistency in the WGCEP-2002 approach. If one does not allow multi-segment ruptures, then the simulated recurrence intervals match the BPT distributions exactly (as expected). However, strict segmentation is exactly what we are trying to relax. Another way to state the problem with the WGCEP-2002 approach is that the probability of one segment triggering a rupture that extends into a neighboring segment is completely independent of when that neighboring segment last ruptured.


Figure 3. The BPT distribution of recurrence intervals used to compute segment rupture probabilities (red line), as well as the distribution of segment recurrence intervals obtained by simulating ruptures according to the WGCEP-2002 methodology (gray bins). The BPT probabilities assume a coefficient of variation of 0.5 and the segment rates given in Figure 2. Note the relatively high rate of short recurrence intervals in the simulations.

An Alternative Approach: What we seem to lack is a logically consistent way to apply time-dependent, conditional probabilities when both single and various multi-segment ruptures are allowed. The first question we should ask is whether a strict segmentation model might be adequate (making conditional probability calculations trivial). Certainly the segmentation model of the 1988 WGCEP is inconsistent with the two most important historical earthquakes, namely the 1857 and 1906 San Andreas Fault (SAF) events. The only salvation for strict segmentation is if those historical events represent the only ruptures that occur on those sections of the SAF (or if other ruptures can be safely ignored). The best test of this hypothesis, at least that this author is aware of, is the paleoseismic data analysis of Weldon et al. (2005) for the southern SAF. They use both dates and amounts of slip inferred for previous events at points along the fault, along with Bayesian inference, to constrain the spatial and temporal distribution of previous events. Uncertainties inevitably allow more than one interpretation, two of which are shown in Figure 4 (the image and caption were provided by Weldon via personal communication). Figure 4A represents an interpretation that is largely consistent with a two-segment model, with 1857-type ruptures to the north and separate ruptures to the south. Figure 4B, which is also consistent with the data, is an alternative interpretation in which no systematic rupture boundaries are apparent (not even if multi-segment ruptures are acknowledged). Thus, it appears that one viable interpretation is that no meaningful, systematic rupture boundaries can be identified on the one fault that has been studied the most.


Figure 4. Rupture scenarios for the Southern San Andreas fault. Vertical bars represent the age range of paleoseismic events recognized to date, and horizontal bars represent possible ruptures. Gray shows regions/times without data. In (A), all events seen on the northern 2/3 of the fault are constrained to be as much like the 1857 AD rupture as possible, and all other sites are grouped to produce ruptures that span the southern ½ of the fault; this model is referred to as the North Bend/South Bend scenario. In (B), ruptures are constructed to be as varied as possible, while still satisfying the existing age data.

The results for the southern SAF imply that we need a logically consistent way of applying conditional probabilities not only in the case where we have multi-segment ruptures, but also in the case where no persistent rupture boundaries exist. The most general case would be one in which a variety of rupture sizes are allowed to occur anywhere along the fault (as outlined in the previous section for the long-term model). Again, the question is how to sensibly apply conditional probabilities in this situation. More specifically, given a viable interpretation of the past history of events on the fault (of which Figure 4 shows only two of perhaps thousands for the southern SAF), how do we compute a conditional probability for each possible future rupture? The challenge arises from the fact that these probabilities are not independent, as the degree to which one rupture is more likely might imply that another, somewhat overlapping rupture is less likely. The approach outlined below makes use of Bayesian methods. Before introducing the proposed solution, however, it is worth quoting from an excellent manuscript that is available on-line (D'Agostini, 2003, http://www-zeus.roma1.infn.it/~agostini/rpp/):

‘… two crucial aspects of the Bayesian approach [are]:

1) As it is used in everyday language, the term probability has the intuitive meaning of “the degree of belief that an event will occur.”

2) Probability depends on our state of knowledge, which is usually different for different people. In other words, probability is unavoidably subjective.'

The concept of subjective probabilities is not new to SHA. Indeed, the different branches of a logic tree, which represent epistemic uncertainties, explicitly embody such differences of opinion (where at most only one can be correct).


Our present understanding of what influences the timing and growth of ruptures in complex fault systems suggests that a simple model (e.g., strict segmentation with point renewal processes) will not be reliable. On the other hand, our most sophisticated physics-based models (Ward, 2000; Rundle et al., 2004) are both too simplistic and have too many free parameters to be of direct use in forecasting (although they probably constitute our best hope for the future). Society nevertheless has to make informed decisions, so we are obligated to assimilate our present knowledge (however imperfect) and make the best forecast we can, including an honest assessment of the uncertainties therein. It is hoped that the Bayesian approach outlined below constitutes a credible framework for doing just this.

(Note that in what follows it is assumed that the reader has a basic understanding of Bayesian methods; the above reference provides an excellent introduction and overview if not.)

The long-term fault model given above, FRf,m,r, gives the rate of each fault rupture over a long period of time (perhaps modified by ΔNi,j(timeSpan) if appropriate). Therefore, if we have no additional information, then the consequent Poisson probabilities, which we write as Ppois(FRf,m,r), represent the best forecast we can make. What we would like to do is improve this forecast based on additional information or data, written generically as "D". Fortunately, Bayes' theorem gives us exactly what we need to make such an improvement. Specifically:

$$P(FR_{f,m,r} \mid D) = \frac{P_{pois}(FR_{f,m,r})\,P(D \mid FR_{f,m,r})}{\sum_{f,m,r} P_{pois}(FR_{f,m,r})\,P(D \mid FR_{f,m,r})}$$

This says that the relative probability of a given rupture, FRf,m,r, is just the Poisson probability of that rupture (the prior distribution) scaled by the probability that the additional data is consistent with that rupture. One immediate issue is that the additional data (D) might not be available for all possible ruptures. However, if it's available for a sufficiently large set, such that the collective probability of these ruptures doesn't change (only their relative values), then I think the above can be applied. For example, we might be confident that the total probability of a rupture on all faults with date-of-last-event information is 0.1; application of Bayes' theorem would constitute a rational approach for modifying the relative long-term probabilities that each will be the one that occurs. If the above approach is legitimate, then the challenge becomes finding appropriate model(s) for the conditional probability (also called the likelihood):

$$P(D \mid FR_{f,m,r})$$

Again, this simply gives the probability that the data is consistent with the occurrence of that rupture (and not that the data is caused by the rupture). This likelihood is essentially used in Bayes' theorem to assign a probability gain relative to the long-term Poisson probabilities.
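As a concrete illustration of how this scaling works, the following is a minimal sketch (Python, with entirely hypothetical rates and likelihood values; it is not the WGCEP implementation) of Bayes' theorem used to renormalize long-term Poisson rupture probabilities while holding their total fixed:

```python
import numpy as np

# Hypothetical long-term rates (events/yr) for four possible ruptures on
# faults that have date-of-last-event information; values are illustrative only.
long_term_rate = np.array([0.005, 0.002, 0.010, 0.001])
time_span = 30.0  # forecast duration in years

# Poisson probability of each rupture in the time span (the prior distribution).
p_pois = 1.0 - np.exp(-long_term_rate * time_span)

# Hypothetical likelihoods P(D | FR): how consistent the additional data is
# with each rupture being the next one to occur.
likelihood = np.array([0.8, 0.1, 0.3, 0.6])

# Bayes' theorem: posterior proportional to prior times likelihood.  The total
# probability over this set of ruptures is preserved; only relative values change.
posterior = p_pois * likelihood
posterior *= p_pois.sum() / posterior.sum()

print("prior:    ", np.round(p_pois, 4))
print("posterior:", np.round(posterior, 4))
```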

For example, in the spirit of the time-dependent renewal models applied by previous WGCEPs, we might want to apply a model that captures the notion that events are less probable where one has recently occurred, and more probable where one has not (in the seismic gaps).


Looking at Figure 4, this reasoning would identify the southernmost part of the southern SAF as the most likely place for a large rupture.

Recall that in the long-term model we have defined the rupture extent of every possible earthquake (as well as its magnitude, from a magnitude-area relationship, which in turn uniquely defines the average slip). Suppose we could look in a crystal ball and know exactly which rupture would occur next, but were left with figuring out when it would occur. The following are two approaches for defining a probability distribution for the time of the next occurrence (and therefore could constitute the basis for the likelihood function in Bayes' theorem above):

Method 1 (Average Slip-Predictable Model): Here we say the best estimate of the date of the next event is when sufficient moment has accumulated since the last event(s) to match the moment, $M_o$, of the next event (the latter coming from the long-term model). If $v_i$, $t_{oi}$, and $A_i$ are the slip rate, date of last event, and area of each fault section comprising the rupture, then this best-estimate time of occurrence, $\bar{t}$, satisfies the following:

$$M_o = \mu \sum_{i=0}^{I} v_i A_i (\bar{t} - t_{oi})$$

where $\mu$ is the shear modulus. Solving for $\bar{t}$ we have:

$$\bar{t} = \frac{M_o + \mu \sum_{i=0}^{I} v_i A_i t_{oi}}{\mu \sum_{i=0}^{I} v_i A_i}$$

or

$$\bar{t} = \Delta\bar{t} + \bar{t}_o$$

where

$$\Delta\bar{t} = \frac{M_o}{\mu \sum_{i=0}^{I} v_i A_i} \qquad \text{and} \qquad \bar{t}_o = \frac{\sum_{i=0}^{I} v_i A_i t_{oi}}{\sum_{i=0}^{I} v_i A_i}.$$

$\Delta\bar{t}$ is the time needed to accumulate the required moment, and $\bar{t}_o$ is a weighted-average time of the last event(s), where the plural indicates that the date of last event, $t_{oi}$, may vary between sections.
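A minimal sketch of the method 1 calculation follows (Python; the shear modulus and the three-section rupture below are hypothetical numbers chosen only to exercise the formulas above):

```python
import numpy as np

MU = 3.0e10  # shear modulus in Pa (a common crustal value)

def method1_time(moment_next, slip_rate, area, t_last):
    """Average slip-predictable estimate of the next-event time (method 1).

    moment_next : seismic moment of the hypothesized next rupture (N*m)
    slip_rate   : slip rate of each fault section (m/yr)
    area        : area of each section (m^2)
    t_last      : date of last event on each section (yr)
    Returns (t_bar, dt_bar, t0_bar) in years.
    """
    slip_rate, area, t_last = map(np.asarray, (slip_rate, area, t_last))
    moment_rate = MU * np.sum(slip_rate * area)        # N*m accumulated per year
    dt_bar = moment_next / moment_rate                 # time to accumulate Mo
    t0_bar = np.sum(slip_rate * area * t_last) / np.sum(slip_rate * area)
    return t0_bar + dt_bar, dt_bar, t0_bar

# Illustrative three-section rupture (all numbers hypothetical).
t_bar, dt_bar, t0_bar = method1_time(
    moment_next=2.0e19,                   # roughly a M~6.8 event
    slip_rate=[0.025, 0.020, 0.030],      # 20-30 mm/yr
    area=[15e3 * 12e3] * 3,               # 15 km x 12 km sections
    t_last=[1857.0, 1857.0, 1812.0],
)
print(f"t_bar = {t_bar:.0f}, dt_bar = {dt_bar:.0f} yr, t0_bar = {t0_bar:.0f}")
```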


Of course there are both uncertainties and intrinsic variability, so we can represent $\Delta\bar{t}$ as a random variable (using a BPT model or any other reasonable distribution), giving a range of possible next-event times. However, this is not a renewal process in the sense of the exact same rupture occurring repeatedly (characteristic earthquakes), as we have made no assumptions regarding persistent rupture boundaries.

The expression for $\Delta\bar{t}$ above shows that the time of the next event depends on the size, or more specifically the amount of slip, of the next event. The implicit assumption is, therefore, that a rupture brings each fault section, on average, back to some base level of stress (as in a slip-predictable model). Thus, if we know, or are willing to hypothesize, the size of the event, then we can predict its most likely time of occurrence. There is no dependence on the amount of slip in previous event(s). This model will lead us astray if each large event does not, on average, exhaust the stress available to produce subsequent events.

Method 2 (Average Time-Predictable Model):

If we also know the amount of slip produced by the last event in each fault section ($D_i$), then we could apply the time-predictable model on a section-by-section basis. That is, the best estimate for the time of the next event on each section is:

$$t_i = \frac{D_i}{v_i} + t_{oi}$$

Then for a hypothesized rupture including multiple sections, we can define the best estimate of the rupture time for the next event as a weighted average of $t_i$ over the sections:

$$\hat{t} = \frac{\sum_{i=0}^{I} A_i \left( \frac{D_i}{v_i} + t_{oi} \right)}{\sum_{i=0}^{I} A_i}$$

or

$$\hat{t} = \Delta\hat{t} + \hat{t}_o$$

where

$$\Delta\hat{t} = \frac{\sum_{i=0}^{I} \frac{A_i D_i}{v_i}}{\sum_{i=0}^{I} A_i} \qquad \text{and} \qquad \hat{t}_o = \frac{\sum_{i=0}^{I} A_i t_{oi}}{\sum_{i=0}^{I} A_i}.$$

Note that we are using hats over the symbols here to distinguish them from the barred symbols of method 1 above. In contrast to method 1, the timing of the next event does not depend on how large it will be, but rather on how large the last event(s) were (implicitly assuming some triggering threshold for each section, as in the time-predictable model). This model will interpret a relatively low amount of slip in an event as an indication that another event should occur soon (because significant stress remains), rather than as an indication that the fault section had already been depleted.
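For comparison, here is the analogous sketch for method 2 (again Python, with the same hypothetical rupture as in the method 1 sketch and made-up last-event slips):

```python
import numpy as np

def method2_time(last_slip, slip_rate, area, t_last):
    """Average time-predictable estimate of the next-event time (method 2).

    last_slip : slip in the most recent event on each section (m)
    slip_rate : slip rate of each section (m/yr)
    area      : area of each section (m^2), used as the weighting
    t_last    : date of last event on each section (yr)
    Returns (t_hat, dt_hat, t0_hat) in years.
    """
    last_slip, slip_rate, area, t_last = map(
        np.asarray, (last_slip, slip_rate, area, t_last))
    weights = area / area.sum()
    dt_hat = np.sum(weights * last_slip / slip_rate)  # time to recover last slip
    t0_hat = np.sum(weights * t_last)                 # area-weighted last-event date
    return dt_hat + t0_hat, dt_hat, t0_hat

# Same hypothetical three-section rupture, now with assumed last-event slips.
t_hat, dt_hat, t0_hat = method2_time(
    last_slip=[3.0, 4.0, 2.5],
    slip_rate=[0.025, 0.020, 0.030],
    area=[15e3 * 12e3] * 3,
    t_last=[1857.0, 1857.0, 1812.0],
)
print(f"t_hat = {t_hat:.0f}, dt_hat = {dt_hat:.0f} yr, t0_hat = {t0_hat:.0f}")
```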

Therefore, the differing assumptions of methods 1 and 2 above are that the time of the next event depends either on the size of the next event or on the size of the last event(s), respectively (albeit both with some intrinsic variability, as modeled by a BPT-type distribution). We have made no presumptions regarding the persistence of repeated, identical ruptures, but rather have presumed knowledge of the location of the next event. We will return to how this can be used in Bayes' theorem shortly, but let's diverge for a moment to examine whether the assumptions in methods 1 and/or 2 above are testable.

The question is whether either of these predicted intervals, $\Delta\bar{t}$ or $\Delta\hat{t}$, correlates with the actual observed intervals ($t - \bar{t}_o$ or $t - \hat{t}_o$, respectively, where $t$ is the observed time), and whether the observed interval normalized by the predicted one, $(t - \bar{t}_o)/\Delta\bar{t}$ or $(t - \hat{t}_o)/\Delta\hat{t}$, exhibits something like a BPT distribution. Perhaps worldwide observations are not abundant enough to provide an adequate test of this, so for clues we turn to the physics-based "Virtual California" simulations of Rundle et al. (2004).

Virtual California is a cellular-automata-type earthquake simulator composed of 650 fault sections, each about 10 km in length. Each section is loaded according to its long-term slip rate and is allowed to rupture according to a friction law. Both static and quasi-dynamic stress interactions are accounted for between sections, giving rise to a self-organization of the statistical dynamics. In particular, the distribution of events for the entire region (California) exhibits a Gutenberg-Richter distribution. The interesting question is whether this complex interaction effectively erases any predictability hoped for from elastic-rebound-type considerations. Specifically, do methods 1 or 2 above provide any predictability with respect to Virtual California events? The results shown in Figure 5 are quite encouraging, as the distribution of events for method 1 (the average slip-predictable approach, shown in 5a) is fit well by a BPT distribution with a coefficient of variation of less than 0.2. The results for method 2 (the average time-predictable approach) are even better, with a coefficient of variation of less than 0.15. That method 2 would exhibit more predictability is not surprising, however, because Virtual California's stochastic element is applied as a random 10% over- or under-shoot of the amount of slip in each event; therefore, knowing the size of the last event is more helpful than knowing the size of the next event.
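The test itself is straightforward to script. The sketch below (Python with SciPy) uses synthetic normalized intervals drawn from a BPT, i.e., inverse Gaussian, distribution as a stand-in for actual Virtual California output (which is not reproduced here); it computes the coefficient of variation and compares a maximum-likelihood BPT fit against the exponential distribution expected under a Poisson model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for simulator output: normalized recurrence ratios (t - t0_bar)/dt_bar.
# Here they are simply drawn from a BPT (inverse Gaussian) distribution with
# mean 1 and aperiodicity 0.2, since the VC catalog itself is not included.
true_alpha = 0.2
ratios = stats.invgauss.rvs(mu=true_alpha**2, scale=1.0 / true_alpha**2,
                            size=2000, random_state=rng)

# Empirical coefficient of variation (aperiodicity) of the normalized intervals.
cov = ratios.std(ddof=1) / ratios.mean()
print(f"sample mean = {ratios.mean():.3f}, coefficient of variation = {cov:.3f}")

# Maximum-likelihood BPT fit (location fixed at zero), compared with the
# exponential distribution expected under a Poisson model.
mu_hat, loc_hat, scale_hat = stats.invgauss.fit(ratios, floc=0.0)
bpt_loglike = stats.invgauss.logpdf(ratios, mu_hat, loc_hat, scale_hat).sum()
exp_loglike = stats.expon.logpdf(ratios, scale=ratios.mean()).sum()
print(f"BPT log-likelihood = {bpt_loglike:.1f}, exponential = {exp_loglike:.1f}")
```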

These results are encouraging in that one of the most sophisticated physics-based earthquake simulators implies there is some predictability in the system. The relevant question, however, is how robust this conclusion is given all the assumptions and simplifications embodied in Virtual California. What we need is to propagate all the uncertainties in this model, as well as examine the results of other earthquake simulators, to see what predictability remains (e.g., the variability of the coefficients of variation in methods 1 and 2 above). Results obtained using the Ward (2000) earthquake simulator (not shown here) reveal predictability as well, but with significantly higher coefficients of variation (~0.8).

Figure 5. Observed versus predicted recurrence intervals from the Virtual California (VC) simulation of Rundle et al. (2004). (a) VC distribution of $(t - \bar{t}_o)/\Delta\bar{t}$ and (b) VC distribution of $(t - \hat{t}_o)/\Delta\hat{t}$; the red bins in (a) and (b) are for prediction methods 1 and 2 (the average slip- and time-predictable methods, respectively), and various PDF fits are shown as well. (c) VC rupture locations on the northern San Andreas Fault for a 3000-year time slice, showing a lack of persistent rupture boundaries. (d) VC distribution of $(t - \bar{t}_o)/\Delta\bar{t}$ after assigning a random time of occurrence to each of the VC ruptures, and (e) the corresponding time-randomized VC events on the northern SAF, revealing Poissonian behavior as expected.

Also shown in Figure 5 is a space-time diagram for VC ruptures on the northern San Andreas Fault (5c), as well as the distribution of method-1 recurrence intervals after assigning a random time of occurrence to all VC ruptures (5d, which shows an approximately exponential distribution, as expected for a Poisson model). Figure 5e shows a space-time diagram for the time-randomized events on the northern SAF. Note that the Poissonian behavior in Figure 5e exhibits much more clustering, and many more gaps in seismicity, than Figure 5c. This exemplifies how the physics in Virtual California (specifically, the frictional strength of faults) effectively regularizes the occurrence of earthquakes to avoid too much stress buildup or release. An interesting corollary question is whether the Poisson model, with its associated random clustering and gaps in event times, can be ruled out on the basis of simple fault-strength arguments.
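The contrast between panels 5c and 5e can be illustrated with a toy calculation (Python; the recurrence statistics below are invented and are not taken from Virtual California): a quasi-periodic sequence of event times has a low coefficient of variation of its inter-event times, while the same number of events placed at random in the same window has a coefficient of variation near one, with the attendant clusters and gaps.

```python
import numpy as np

rng = np.random.default_rng(0)
duration, n_events = 3000.0, 20   # a 3000-yr window, as in Figure 5c

# Quasi-periodic catalog: ~150-yr mean recurrence with modest variability.
quasi = np.cumsum(rng.normal(150.0, 25.0, n_events))
quasi = quasi[quasi < duration]

# Randomized catalog: the same number of events placed uniformly in time,
# i.e., the Poisson-model analogue of Figure 5d/5e.
random_times = np.sort(rng.uniform(0.0, duration, quasi.size))

def cov_of_intervals(times):
    intervals = np.diff(times)
    return intervals.std(ddof=1) / intervals.mean()

print(f"quasi-periodic COV  = {cov_of_intervals(quasi):.2f}")         # well below 1
print(f"time-randomized COV = {cov_of_intervals(random_times):.2f}")  # near 1
```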


Returning now to the application of Bayes’ theorem, recall that the long-term model gives the long-term (Poissonian) probability of every possible rupture. We can then use either method 1 or 2 above to compute a time-dependent conditional probability (modified by stress change calculations if desired, either as a clock change or a step in the BPT distribution), and apply this as the likelihood function in Bayes’ theorem. This will effectively increase the probability of ruptures that involve sections that are overdue, and reduce the probability of events on sections that have ruptured recently (with intermediate probabilities for ruptures that overlap both recently ruptured and overdue sections). Note that both methods 1 and 2 will maintain a balance of moment release on the faults.
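A sketch of how such a time-dependent likelihood might be computed follows (Python with SciPy; the recurrence parameters and elapsed times are hypothetical, and the BPT distribution is represented by SciPy's inverse Gaussian). The conditional probability of rupture in the next 30 years, given the time elapsed since the weighted last-event date, is larger for "overdue" section combinations and smaller for recently ruptured ones; values of this kind could serve as the likelihood P(D | FR) in the Bayes' theorem sketch above.

```python
import numpy as np
from scipy import stats

def bpt_conditional_prob(mean_recurrence, aperiodicity, time_since, duration):
    """P(rupture in next `duration` yr | no rupture in the `time_since` yr
    elapsed since the weighted last-event date), using a BPT distribution."""
    dist = stats.invgauss(mu=aperiodicity**2,
                          scale=mean_recurrence / aperiodicity**2)
    return ((dist.cdf(time_since + duration) - dist.cdf(time_since))
            / dist.sf(time_since))

# Hypothetical ruptures: an "overdue" one, a recently ruptured one, and one
# in between; all numbers are illustrative only.
for label, elapsed in [("overdue", 220.0), ("recent", 40.0), ("mid-cycle", 120.0)]:
    p = bpt_conditional_prob(mean_recurrence=150.0, aperiodicity=0.5,
                             time_since=elapsed, duration=30.0)
    print(f"{label:9s}: 30-yr conditional probability = {p:.3f}")
```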

Here’s another way to view this Bayesian solution. Suppose our task is to predict the next event on one of the faults where we have additional information. This can be considered an epistemic uncertainty in that only one next event can actually occur. For every fault rupture defined in the long-term model (now a branch on a logic tree), we can ask the following question: if we knew for sure that this is the next event to go, how would we define its probability in a given time span? Again, one approach would be to apply method 1 or 2 above. In this way we can assign a probability to each event in the long-term model (each of which, again, lies on a different branch of the logic tree). We then have the task of weighting the logic-tree branches in order to define the relative likelihood that each is the next to go. One approach would be to assign weights based on the relative rates of occurrence from the long-term model, which would lead to the same solution as outlined above using Bayes’ theorem.

Some might take issue with the assumptions implied in methods 1 and 2 above. What’s important here, however, is that a wide variety of alternative likelihood models could be defined and accommodated in the framework, such as the Bowman-type accelerating-moment-release-based predictions (i.e., ruptures in the long-term model that exhibit accelerating seismicity are more likely).

One issue is whether simulated catalogs obtained using this approach might lead to long-term behavior that differs from the long-term model (e.g., if the likelihood systematically favors some events over others). If so, is this bad, or is it actually good in that the likelihood model effectively pushes us toward more physically consistent results?

As an alternative to the Bayesian approach presented here, one might be tempted to extend the WGCEP-2002 approach to accommodate many more segments (to allow all important discrete rupture possibilities) and to solve the interacting point-process problem discussed above (e.g., with a BPT-step model that tracks stress interactions from occurring events). However, any such solution might end up as complicated as a Ward (2000) or Rundle et al. (2004) type model, which would raise the question of whether our efforts would be better spent improving the latter model types. What I have attempted to define here is a simple, justifiable framework that can accommodate a range of ideas on how to make credible time-dependent earthquake forecasts.

Again, our most advanced physics-based simulation models (e.g., Ward’s or Rundle’s) are not yet sophisticated enough to be used directly, and they also present challenges with respect to defining probabilities based on present conditions. Until these problems are solved, we need an interim rational basis for stating the degree of belief that various events might occur, and it appears that Bayes’ theorem provides precisely such a framework. Some might dislike this approach because it doesn’t provide a model of how the earth actually works, and/or because it starts with the Poisson model (as the prior distribution), which many believe is wrong in the first place. However, until someone figures out how the earth does work, the approach outlined here does provide a rational basis for incorporating existing, and perhaps physics-based, constraints in defining the probability of various earthquakes. Assuming no one finds a fatal flaw in my reasoning, I think the next step is to build such a model and simulate earthquakes from it. As I hope was demonstrated in Figure 3 above, we can learn a lot by simulating earthquakes from even statistics-based models.

References

Andrews, D. J. and E. Schwerer (2000). Probability of Rupture of Multiple Fault Segments, Bull. of the Seism. Soc. Am. 90, 1498-1506.

Field, E. H., D. D. Jackson, and J. F. Dolan (1999). A mutually consistent seismic-hazard source model for southern California, Bull. Seism. Soc. Am. 89, 559–578.

Gerstenberger, M.C., S. Wiemer, L.M. Jones, and P.A. Reasenberg (2005). Real-time forecasts of tomorrow's earthquakes in California, Nature 435, 328-331.

NSHMP (1996). http://earthquake.usgs.gov/research/hazmaps/publications/docmaps.php.

NSHMP (2002). http://earthquake.usgs.gov/research/hazmaps/publications/of02-420/OFR02-420.pdf.

Rundle, J.B., P.B. Rundle, A. Donnellan, and G. Fox (2004). Gutenberg-Richter statistics in topologically realistic system-level earthquake stress-evolution simulations, Earth Planets Space 56, 761-771.

Stein, R. S., Barka, A. A., Dieterich, J. H., (1997). Progressive failure on the North Anatolian fault since 1939 by earthquake stress triggering. Geophys. J. Int. 128, 594-604.

Ward, S.N. (2000). San Francisco Bay Area Earthquake Simulations: A Step Toward a Standard Physical Earthquake Model, Bull. of the Seism. Soc. Am. 90, 370–386.

Weldon R.J., T.E. Fumal, G.P. Biasi, and K.M. Scharer (2005). Past and Future Earthquakes on the San Andreas Fault, Science 308, 966

WGCEP (2003). http://pubs.usgs.gov/of/2003/of03-214/.