
Statistical Analyses of Great Earthquake Recurrence along the Cascadia Subduction Zone

by Ram Kulkarni, Ivan Wong, Judith Zachariasen, Chris Goldfinger, and Martin Lawrence

Abstract   Goldfinger et al. (2012) interpreted a 10,000 year old sequence of deep-sea turbidites at the Cascadia subduction zone (CSZ) as a record of clusters of plate-boundary great earthquakes separated by gaps of many hundreds of years. We performed statistical analyses on this inferred earthquake record to test the temporal clustering model and to calculate time-dependent recurrence intervals and probabilities. We used a Monte Carlo simulation to determine if the turbidite recurrence intervals follow an exponential distribution consistent with a Poisson (memoryless) process. The latter was rejected at a statistical significance level of 0.05. We performed a cluster analysis on 20 randomly simulated catalogs of 18 events (event T2 excluded), using ages with uncertainties from the turbidite dataset. Results indicate that 13 catalogs exhibit statistically significant clustering behavior, yielding a probability of clustering of 13/20 or 0.65. Most (70%) of the 20 catalogs contain two or three closed clusters (a sequence that contains the same or nearly the same number of events), and the current cluster T1–T5 appears consistently in all catalogs. Analysis of the 13 catalogs that manifest clustering indicates that the probability that at least one more event will occur in the current cluster is 0.82. Given that the current cluster may not be closed yet, the probabilities of an M 9 earthquake during the next 50 and 100 years were estimated to be 0.17 and 0.25, respectively. We also analyzed the sensitivity of results to including event T2, whose status as a full-length rupture event is in doubt. The inclusion of T2 did not change the probability of clustering behavior in the CSZ turbidite data, but did significantly reduce the probability that the current cluster would extend to one more event. Based on the statistical analysis, time-independent and time-dependent recurrence intervals were calculated.

Introduction

Goldfinger et al. (2012) observed that deep-sea cores collected along the northern and central Cascadia margin, in Cascadia Channel, Juan de Fuca Channel off Washington, Hydrate Ridge slope basin, and Astoria Fan off northern and central Oregon contain 13 post-Mazama (<7.6 ka) turbidites (Fig. 1). They inferred the turbidites to have been triggered by strong shaking during great Cascadia subduction zone (CSZ) megathrust earthquakes (moment magnitude M ∼9), which they refer to as events; each event may be recorded by one or more turbidites. All 13 events are also recorded on the Rogue Apron of southern Oregon. Other smaller local events (<M 9) are recorded in the Rogue Apron cores by additional silt or mud turbidites. In total, 19–20 laterally extensive turbidites in the last 10 ka were found along the northern Cascadia margin and are also recorded in cores off northern California, along with 22 additional smaller events (Goldfinger et al., 2012).

Knowledge of the recurrence of great CSZ earthquakes is critical for estimating the probabilistic seismic hazard in the Pacific Northwest and British Columbia and driving earthquake-hazard mitigation policies and actions. This is particularly true because we know that the most recent great CSZ earthquake occurred on 26 January 1700. Traditionally, probabilistic seismic-hazard analysis (PSHA) has been performed assuming time-independent (Poisson) earthquake processes. For example, the U.S. Geological Survey's National Seismic Hazard Maps are developed assuming a time-independent process (e.g., Petersen et al., 2008). However, time-dependent PSHAs are increasingly being performed for important and critical facilities based on knowledge of the most recent earthquake and recurrence intervals and are now being integrated into engineering practice (Wong et al., 2007; BC Hydro, unpublished manuscript, 2012, see Data and Resources). Also, probabilistic forecasts of the next M 9 CSZ earthquake have been available in the past decade (e.g., Mazzotti and Adams, 2004).

The average age of the oldest Holocene turbidite along the northern and central Cascadia margin (Table 1) is 9830 ± 180 cal yr B.P. (before 1950 A.D.), and the youngest is attributed to the 1700 A.D. earthquake that is widely recorded in the onshore marsh stratigraphic record along much of the length of the CSZ. The northern events define an average Holocene great earthquake recurrence interval of about 530 years (Goldfinger et al., 2012). Goldfinger et al. (2012) argued that the turbidite record indicates a repeating pattern of clustered Holocene earthquakes that includes four or five cycles of two to five earthquakes, each separated by unusually long intervals (Fig. 2). As will be shown in this study, temporal clustering of full-rupture earthquakes along the CSZ is a key issue in calculating time-dependent recurrence intervals.

In this paper, we performed statistical analyses of the Holocene earthquake record as interpreted by Goldfinger et al. (2012) to evaluate the CSZ recurrence of M 9 megathrust earthquakes. This paleoseismic record, which is also supported by the onshore paleoseismic record of coseismic subsidence stratigraphy and tsunami deposits, is the longest and most complete record available for the CSZ and one of the best paleoseismic chronologies available worldwide; hence the use of the Goldfinger et al. (2012) dataset and their interpretations. However, we also recognize that there is at least one alternative interpretation of the turbidite record, that of Atwater and Griggs (2012), as discussed below.

Figure 1. Cascadia margin turbidite canyons, channels, and 1999–2002 core locations. Inset of Effingham Inlet shows site of Pacific Geoscience Centre (PGC) collected piston cores. Source: Goldfinger et al. (2012).

A logic-tree approach was used to represent alternative plausible processes of earthquake recurrence for the CSZ megathrust earthquakes. The principal objectives of the statistical analyses were to: (1) estimate the level of confidence (weight) in each alternative process of earthquake recurrence; (2) estimate recurrence intervals at 5%, 50%, and 95% confidence under each alternative process; and (3) estimate the probability of the next great megathrust earthquake along the CSZ.

The following alternative processes of earthquake recurrence were considered in this analysis.

Time-Independent Process

Under this process, it is assumed that earthquake recurrence is memoryless. That is, the time elapsed since the last earthquake has no effect on the time of the next earthquake. We assumed a Poisson model for this branch of the logic tree because it is appropriate for a memoryless process and is the common choice in PSHA (e.g., McGuire, 2004).
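As a minimal illustration of how such a memoryless forecast works (not a reproduction of the paper's calculation), the probability of at least one event in a window of length t is 1 − exp(−t/μ), where μ is the mean recurrence interval; the sketch below assumes the ~530-year Holocene average quoted above.

import math

mean_recurrence = 530.0          # assumed mean recurrence interval (years)
rate = 1.0 / mean_recurrence     # Poisson rate (events per year)

for window in (50, 100):
    # For a memoryless process the time elapsed since 1700 A.D. is irrelevant;
    # only the window length matters.
    p = 1.0 - math.exp(-rate * window)
    print(f"P(at least one event in next {window} yr) = {p:.2f}")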

Table 1. Data on Ages of Cascadia Events Extracted from Goldfinger et al. (2012)

Event ID   Best Estimate   +2 Sigma Bound   −2 Sigma Bound
T1         270             94               91
           275             91               118
           269             103              90
T2*        469             79               75
           398             68               108
           473             99               103
           495             96               109
           438             81               62
           520             28               70
T3         862             124              95
           736             104              119
           822             137              179
           808             105              109
           828             84               91
           754             66               65
           810             83               93
           796             87               105
T4         1204            119              120
           1328            134              132
           1231            90               101
T5         1519            271              272
           1573            146              138
           1584            116              102
           1543            173              160
           1611            129              106
T6         2537            129              158
           2503            98               168
           2642            140              168
           2540            159              150
           2598            149              65
T7         3065            120              145
           3007            115              136
           3081            162              201
T8         3496            180              155
           3458            165              173
           3518            175              182
           3493            169              176
           3503            171              180
           3365            138              166
T9         4043            176              167
           4110            118              128
           4123            183              269
           4172            174              180
           4190            173              183
           4149            204              189
           4241            189              198
T10        4773            152              163
           4733            167              175
           4761            178              197
           4845            185              224
T11        5937            147              133
           5727            86               126
           5929            158              145
           6024            179              198
           5928            132              127
           5999            127              122
T12        6283            136              155
           6367            138              118
           6562            165              124
T13        7080            111              110
           7227            140              130
           7248            84               83
           7152            105              110
           7183            150              150
           7085            155              118
           7173            92               105
T14        7607            174              166
           7576            99               110
           7645            124              137
           7668            143              131
T15        8155            179              198
           8052            149              149
           8247            103              105
           8253            105              92
T16        8682            258              288
           8995            151              130
           8908            151              155
           8827            143              148
T17        9095            213              209
           9076            122              142
           9155            115              155
T17a       9241            279              301
           9200            105              122
T18        9993            78               230
           9679            208              189
           9873            94               199
           9741            292              315
           9837            154              186
           9773            186              244

*Base case for 18 events without event T2; sensitivity case for 19 events with event T2.


Time-Dependent Process

Under this process, it is assumed that the time elapsed since the last earthquake does affect the time to the next earthquake. We considered two alternative models under this process.

Clustered Model. It is assumed that earthquakes occur in clusters, and time intervals between clusters define gaps. To be clustered, the recurrence intervals within a cluster are significantly shorter than the gap intervals. For this model, we defined two additional alternative logic-tree branches: (1) the process is currently in a cluster and (2) the process is currently in a gap. The probability of an earthquake for each branch occurring in a specified time window depends on the time elapsed since the last earthquake and the distribution of recurrence intervals for earthquakes corresponding to that branch.

Nonclustered Model. It is assumed that earthquakes follow a time-dependent process, but do not occur in clusters. Because of the assumed time dependency, the probability of an earthquake in a specified time window would depend on the time elapsed since the last earthquake and the distribution of interevent times for all events. This is the model that is typically assumed for earthquake behavior.
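For either time-dependent branch, the conditional probability of an event in the next t years, given an elapsed time T since the last event, follows from the interevent-time distribution F as [F(T + t) − F(T)] / [1 − F(T)]. The sketch below illustrates this with a lognormal interevent-time model whose parameters are placeholders, not values fitted in this study.

from scipy import stats

# Hypothetical lognormal interevent-time model (placeholder parameters):
# median ~500 yr, lognormal shape parameter 0.5.
dist = stats.lognorm(s=0.5, scale=500.0)

elapsed = 313.0   # illustrative time since the 1700 A.D. event (years)
window = 50.0     # forecast window (years)

# Conditional probability of an event in (elapsed, elapsed + window],
# given that no event has occurred in the first `elapsed` years.
p = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
print(f"P(event within {window:.0f} yr | quiet for {elapsed:.0f} yr) = {p:.2f}")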

In the following, we adhere to this terminology: interevent time refers to the time interval between successive events, and recurrence interval refers to the average of several interevent times.

The CSZ Turbidite Record

Turbidite systems occur along the extent of the continental margin overlying the CSZ from Vancouver Island to Cape Mendocino. Goldfinger et al. (2012) collected cores from multiple continental rise channels along the length of the margin in order to identify and date turbidites emplaced in response to shaking associated with CSZ megathrust earthquakes. The methodology they used in selecting and analyzing cores was intended to ensure a complete record of turbidite events that could be distinguished as having been seismically triggered.

[Figure 2 consists of two panels plotting turbidite mass (scaled) against age in years cal B.P. for events T1–T18; in-figure annotations give cluster averages of roughly 300–560 years and gaps of roughly 720–1190 years.]

Figure 2. Interevent times and turbidite mass. (a) Bars are scaled with height representing turbidite mass (taller bars are larger turbidites). Bar widths are the 2 sigma error ranges from OxCal combined for each event. The time series suggests a history of clusters of earthquakes (mean interevent times shown) separated by gaps of ∼750–1150 years. Gaps appear to have a tendency to conclude with a large event. (b) Four-cluster model. Source: Goldfinger et al. (2012).

Goldfinger et al. (2012) analyzed channel systems so as to identify recently active turbidite pathways. They selected core sites that were likely to experience deposition, especially of fine-grained deposits rather than erosion, and to have few unconformities, the better to maximize the likelihood of creating a complete record of turbidite sedimentation (Goldfinger et al., 2012). A key element in determining that a turbidite was triggered by an earthquake of a given magnitude is correlating turbidites between different locations. Goldfinger et al. (2012) used a variety of techniques, including visual logging, P-wave velocity, gamma-ray density, magnetic susceptibility, tomography, and X-radiography, to interpret the stratigraphy of each core based on unique physical properties, such as grain size and mineralogy, and to identify key stratigraphic fingerprints for each turbidite. The fingerprints include criteria such as turbidite mass and number of fine and coarse pulses. They used these fingerprints to correlate between cores at a site and between sites over large distances, using both visual correlation and mathematical signal correlation of individual events and sequences of events. By combining the stratigraphic fingerprints with extensive radiocarbon dating and analysis of sedimentation rates and hemipelagic thickness, Goldfinger et al. (2012) were able to identify distinct large-scale turbidite events; that is, episodes of turbidite emplacement that occurred synchronously over large distances along the continental margin.

Using the results of their core analyses and correlations, Goldfinger et al. (2012) identified a correlated series of 19–20 events in the northern part of the margin, west of Vancouver Island and Washington, with additional events in the southern part of the margin, in southern Oregon and northern California (Fig. 1; Table 1). The correlation of events across large distances suggests a regional triggering mechanism. Goldfinger et al. (2012) used several different methods to distinguish a seismic trigger from other possible regional triggers such as storm waves, tsunamis, and sediment loading. Sedimentological data, such as stacking of distinct mineralogies suggesting different sources at different distances, and synchroneity of events revealed in confluence tests (Adams, 1990) across a large spatial extent support a seismic origin for the turbidites. The spatial extent of the correlated turbidites and the mass and distance from the source argue for a large, nearby earthquake source, that is, a megathrust rather than intraslab or crustal faults. Finally, there is an association of more massive turbidites with larger spatial extent and vice versa, such that the additional turbidites identified only in southern Oregon tend to be less massive as well as less extensive than the margin-wide events, consistent with a smaller earthquake source. Based on these results, Goldfinger et al. (2012) inferred that the margin-wide correlated events were triggered by ∼M 9 megathrust earthquakes rupturing the entire margin, while the additional events in southern Oregon and California were triggered by smaller earthquakes caused by rupture of only the southern part of the megathrust.

The consistency of the turbidite records across large distances suggests that the turbidites represent a relatively complete record of seismically triggered events and thus of large-magnitude megathrust earthquakes. Goldfinger et al. (2012) consider their record fairly complete for earthquakes ∼M 8 and larger. There is, however, some uncertainty in the completeness of the record. A second event shortly following a first event might not have a signature if not enough time has passed to build up a sediment load on the continental shelf/edge, leading to possible undercounting of events, and, of course, events with magnitude below a certain threshold will not trigger turbidites and thus will leave no record. However, the precautions taken by Goldfinger et al. (2012) in site selection and analysis of multiple cores at a site and the good correlation of onshore and offshore records support the conclusion that this record is relatively complete for large megathrust events in the Holocene.

To compute event ages, radiocarbon and hemipelagic ages of well-correlated beds were averaged after outlier rejection of a few reversed ages (Goldfinger et al., 2012). The prior assumption of the averaging is that the turbidite beds are correctly correlated (Goldfinger et al., 2012). Goldfinger et al. (2012) tested the temporal correlation using OxCal statistical functions to test the assumption of coevality (at the resolution of radiocarbon) with several statistical tests (Ramsey, 2001). OxCal applies two tests when combining radiocarbon data that are assumed to be coeval (from the same sample, same horizon) or have external evidence to support coevality. The tests are a standard χ2 test and a second test called Acomb. All 19 correlated turbidite beds pass these statistical tests, most by wide margins (Goldfinger et al., 2012, appendix 8). The time series overall passes additional tests of coherence from site to site (Amodel and Aoverall), also given in appendix 8 of Goldfinger et al. (2012). These tests and their significance are further described in the OxCal documentation. Whereas radiocarbon dating alone may not be able to distinguish between closely spaced events, and places only moderate constraints on individual event ages, the stratigraphic framework of bed correlations, together with coherent temporal tests that also are in agreement with land paleoseismic data, provides confidence in the age series used here.

Goldfinger et al. (2012) estimated the age of each earthquake based on dating two or more turbidites collected from different locations along the subduction zone margin (Table 1). Radiocarbon dates were obtained from sediment immediately below turbidite events (close maximum age). Goldfinger et al. (2012) attempted to remove the systematic bias to older ages that this method produces by correcting for sedimentation rate, basal erosion during turbidite emplacement, hemipelagic intervals, and reservoir effects. For each turbidite associated with an earthquake, the mean event age and the upper (+2 sigma) and lower (−2 sigma) bounds were determined based on the analysis of several factors, including the uncertainty in correlation of turbidite radiocarbon ages and in the correction factors noted previously. Table 1 shows the data compiled by Goldfinger et al. (2012), which were used in the present analysis. The +2 and −2 sigma bounds generally are not symmetric about the best estimate of event age.

The Goldfinger et al. (2012) turbidite dataset suggests a recurrence interval of about 530 years for northern events. The recurrence interval for the southern section, which also ruptures in smaller, geographically restricted earthquakes, is about 260 years. Goldfinger et al. (2012) did not recognize earthquakes restricted to the northern section and interpret all events recorded in the north to be full-rupture events, that is, to have ruptured almost all if not the full length of the plate boundary. The recurrence interval for full-rupture events obtained from turbidites is similar to that derived from onshore paleoseismic records of subsidence and tsunami inundation recorded in estuaries and coastal marshes (e.g., Atwater and Hemphill-Haley, 1997; Witter et al., 2003; Kelsey et al., 2005; Nelson et al., 2006, 2008). Goldfinger et al. (2012) compare their records with onshore records of coastal subsidence and CSZ tsunamis and find that both datasets record approximately the same number of events, although correlated events occasionally show a large difference in age, up to 100 years in mean land and mean marine ages of margin-wide events and over 200 years in mean ages at specific sites (e.g., event T3). They attribute some systematic age differences between onshore and offshore records to marine 14C reservoir variability, arguing that the likelihood of the alternative explanation, that onshore and offshore datasets are recording entirely different but closely timed earthquake events, is low. In compiling their interpretation of the age and rupture extent of CSZ earthquakes, they considered both the onshore and offshore records.

Atwater and Griggs (2012) have disputed the conclusions of Goldfinger et al. (2012) regarding the number of margin-wide ruptures. Addressing a number of assumptions in the Goldfinger et al. (2012) analysis, they conclude that the turbidite data cannot distinguish between margin-wide ruptures and a series of shorter ruptures closely spaced in time. For example, Atwater and Griggs argue that turbidity currents can die within tributary channels, affecting the total number of turbidites preserved upstream and downstream of the confluence, and thus the Adams confluence test does not prove coeval rupture at the heads of different tributaries. Consequently, they state that there remains greater uncertainty in the size of ruptures recorded in Cascadia Channel turbidites than Goldfinger et al. (2012) acknowledge. They also dispute the use by Goldfinger et al. (2012) of geophysical signatures to correlate turbidites. Goldfinger et al. (2012) propose that logs of gamma density and magnetic susceptibility act as seismograms, recording pulses of strong shaking, and that similar geophysical signatures between turbidites provide evidence of event correlation. Atwater and Griggs (2012) argue that details of strong-motion data from the 2004 Andaman and 2011 Tohoku earthquakes show distinct variability in strong-shaking characteristics along strike, rendering questionable correlations made across long distances based on these signatures.

Based on these issues, as well as questions they raise about the methods Goldfinger et al. (2012) use for determining ages based on radiocarbon dates and sedimentation rates, Atwater and Griggs (2012) conclude that the evidence for single, margin-wide events as opposed to shorter serial ruptures is not as strong as Goldfinger et al. (2012) conclude. They do not, however, provide a complete alternative interpretation of the turbidite data that can be tested, but only highlight the issues to ensure that the uncertainty is considered in developing seismic-hazard assessments. In the absence of an alternative interpretation, we restrict ourselves to using the Goldfinger et al. (2012) chronology of events.

Previous Studies of Clustering

Jurney (2002) first suggested that ruptures along the CSZ exhibited clustering based on a statistical analysis of the interevent times derived from onshore paleoseismic data. Subsequently, Goldfinger et al. (2003) recognized clustering in the turbidite data. Mazzotti and Adams (2004) also recognized CSZ earthquake clustering behavior in the turbidite data of the last 13 events but concluded that a bimodal distribution of 12 interevent times could not be distinguished from a unimodal distribution at the 95% confidence level. They stated that "on statistical grounds alone, it would require a long time series of reliable ages to prove" a clustering model. Kelsey et al. (2005) also suggested clustering of earthquakes of variable size based on 12 tsunami inundation events recorded in Bradley Lake in southern Oregon (Fig. 1). Goldfinger et al. (2012) performed several statistical tests of the clustering model, including a simple hierarchical cluster analysis, but concluded that these tests could not prove that the pattern is not random.

Earthquake clustering has been observed along other plate boundaries. Sieh et al. (1989) and Grant and Sieh (1994) suggested clustered behavior along the San Andreas fault, although Scharer et al. (2011) argued against clustering, stating that the behavior along the fault is quasi-periodic. Clustering has also been observed in the south Iceland seismic zone (Einarsson et al., 1981), around the circum-Pacific (Thatcher, 1989), along the Anatolian fault (Ambraseys, 1970), along the Nankai trough (Ando, 1975), often considered a good analog to the CSZ, in Mongolia (Chéry et al., 2001), and in the Dead Sea trough (Marco et al., 1996). In the Dead Sea trough, a 50,000-year paleoseismic record of laminated sediments (seismite beds) indicates clustering of earthquakes in 10,000-year intervals separated by periods of quiescence. Sieh et al. (2008) observed earthquake supercycles inferred from sea-level changes recorded in corals of West Sumatra. They suggest that sequences or clusters of great earthquakes have occurred about every 200 years for the past 700 years along a 700 km long section of the Sunda megathrust.

A simple model for the cause of temporal earthquake clustering is one of incomplete strain release. Heaton (1990) proposed that rupture occurs as a self-healing slip pulse that propagates so quickly along a fault that the release of all accumulated strain may be incomplete. This phenomenon could lead to multiple clustered ruptures of the fault near the end of an interseismic cycle (Grant and Sieh, 1994). Mazzotti and Adams (2004) invoke fault interaction through viscoelastic stress transfer as a hypothesis for clustering along the CSZ. In their model, a viscoelastic mantle wedge between the subducting slab and the upper plate allows stress transfer among different faults of the subduction system, including segments of the subduction zone, large crustal faults, and neighboring plate boundaries. Goldfinger et al. (2010) suggest that clustering in the CSZ may be due to variations in energy release, where some earthquakes release less while others release more energy than available from plate convergence.

Statistical Analysis of Recurrence Intervals

Although clustering seems apparent in the turbidite record, the question arises as to how statistically significant the clustering is. Given the overall consistency of the onshore and offshore records, the ability to correlate turbidite events using stratigraphy in addition to radiocarbon ages, and the longer period of record provided by the turbidites, we adopted the turbidite record as interpreted by Goldfinger et al. (2012) to provide event ages for the statistical analysis. All events considered by Goldfinger et al. (2012) to represent full-rupture megathrust earthquakes were included, with the exception of their event T2.

Event T2 is clear in the turbidite record, and Goldfinger et al. (2012) consider it to be a full rupture based on its north–south lateral extent, but it has not been identified at many onshore sites, most notably at Willapa Bay, one of the best documented land paleoseismic sites (Atwater and Hemphill-Haley, 1997). Event T2 is the only turbidite event in the past ∼3500 years recorded offshore but not at Willapa Bay. Where possible onshore correlatives to event T2 do exist—at Tofino, Ucluelet, Johns River, Discovery Bay, Netarts Bay, and Ecola Creek—they suggest a smaller event (<M 9), perhaps with minimal subsidence. The lack of onshore evidence for this event and its likely small size suggest that it may constitute one or more smaller events; Goldfinger et al. (2012) conclude that it was a small event that did not pass the threshold for recording or preserving events in many onshore locations. Because of the uncertainty that this event constitutes a single margin-wide event, we excluded it from our initial analysis. The effect of including event T2 was assessed as a part of the sensitivity analysis discussed later in this paper.

The statistical analysis of the Goldfinger et al. (2012) chronology shown in Table 1 was performed in the following steps: (1) test the hypothesis that events follow a Poisson process; (2) if there is evidence for non-Poisson behavior for events, perform cluster analysis to identify potential clusters; (3) evaluate the probability of clustering in the CSZ data; and (4) calculate recurrence intervals for each plausible process at 5%, 50%, and 95% confidence.

For the present analysis, we define clustering as multiple sequences of consecutive events in which each closed sequence contains the same or nearly the same number of events and the time between successive sequences is statistically greater than the intracluster interevent times.

The Poisson hypothesis was tested first because, if this hypothesis cannot be rejected (because of weak or inconclusive evidence to the contrary), there would be no need to perform any further investigation of a time-dependent (either clustered or nonclustered) model. If the Poisson hypothesis is rejected, it does mean that the process is time dependent, but that does not necessarily mean that events are clustered. One further needs to test the hypothesis that the events are nonclustered. If there is insufficient evidence to reject the nonclustered hypothesis, a time-dependent probability model (such as lognormal or normal) can be fitted to the data on interevent times. Only if both the Poisson hypothesis and the nonclustered hypothesis are rejected is it appropriate to explore the statistical significance of a clustered model. The burden of proof is on demonstrating the validity of a non-Poisson, clustered model with statistically significant data.

Non-Poisson unimodal probability models for interevent times (such as normal or lognormal) should not be tested before testing for clustering. Even if the total set of interevent times fits a normal distribution, the use of this pooled distribution to estimate the probability of the time to the next event would be incorrect if the high-end values of the distribution (i.e., gaps) occur systematically following each well-defined cluster of events with shorter interevent times. If events are in fact clustered, the time to the next event would strongly depend on whether the system is currently in a cluster or in a gap. The pooled distribution of all interevent times would overestimate the time to the next event if one is currently in a cluster. Conversely, the pooled distribution would underestimate the time to the next event if one is currently in a gap. If the nonclustering hypothesis is not rejected, one could use a model, such as one based on a normal (Gaussian) or lognormal distribution, that provides a good fit to the observed interevent times.

For the present analysis, we assume that the Goldfinger et al. (2012) catalog of full-rupture events, minus event T2, is complete. The following sections describe the statistical methods used, and the results obtained, in each step.

Step 1: Test the Poisson Hypothesis

The standard statistical procedure of hypothesis testing was used to test the Poisson hypothesis (Helsel and Hirsch, 2002). A brief description of each step and the results obtained by applying the step to the present analysis are presented below.


Define the Null and Alternative Hypotheses. The hypothesis that receives the benefit of the doubt in the absence of contrary data is defined to be the null hypothesis. Conversely, the hypothesis one wants to prove with the data is defined to be the alternative hypothesis. The burden of proof is on the alternative hypothesis.

For the present analysis, the Poisson process is the conventional choice for seismic-hazard analysis. It is also simple to implement. Therefore, it is reasonable to give the benefit of the doubt to a Poisson process unless there is convincing evidence to the contrary. Based on these reasons, the Poisson process was defined to be the null hypothesis and a non-Poisson process was defined to be the alternative hypothesis.

Select an Appropriate Test Statistic. If events are generated with a Poisson process, it follows that interevent times would follow an exponential distribution. For an exponential distribution, the coefficient of variation (COV) is one. Values of COV very different from one would suggest a non-Poisson behavior. For the present analysis, COV was selected as an appropriate test statistic.
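(As a one-line check of this choice: an exponential distribution with rate λ has mean 1/λ and standard deviation 1/λ, so its COV = σ/μ = (1/λ)/(1/λ) = 1.)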

Develop Sampling Distribution of Test Statistic under Null Hypothesis. If the null hypothesis was true and repeated samples of a given sample size were drawn randomly based on the null hypothesis, the test statistic would vary from one sample to the next. The resulting distribution of the test statistic is characterized in this step.

For the present analysis, Monte Carlo simulation was used to generate 1000 sets of 17 interevent times drawn randomly from an exponential distribution, and the COV for each set was calculated. The resulting distribution of the 1000 COV values is shown in Figure 3. This distribution only depends on the sample size; that is, the number of interevent times in each simulation sample. The sample size was 17 for the present analysis, because the actual data used for the analysis (Table 1) contain 17 interevent times calculated from 18 events. The mean of the 1000 COV values is one, as one would expect for recurrence generated from an exponential distribution. The range of COV is from 0.5 to 2.7, and the 1st and 5th percentile values from the distribution are 0.60 and 0.69, respectively.
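A minimal sketch of this null-distribution simulation is shown below; the random seed is arbitrary, and because the COV is scale-free, the mean of the exponential distribution does not matter.

import numpy as np

rng = np.random.default_rng(0)
n_sets, n_intervals = 1000, 17

# 1000 samples of 17 interevent times drawn from an exponential distribution.
samples = rng.exponential(scale=1.0, size=(n_sets, n_intervals))

# Sample coefficient of variation of each set of 17 interevent times.
cov_null = samples.std(axis=1, ddof=1) / samples.mean(axis=1)

print("mean COV:", round(cov_null.mean(), 2))
print("1st and 5th percentiles:", np.percentile(cov_null, [1, 5]).round(2))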

Specify an Acceptable Limit on the Probability of False Positive Error. The probability of a false positive error, denoted by α, is the probability of rejecting the null hypothesis when it happens to be true. A common limiting value of α is 5%, and we adopted this value.

Calculate the Test Statistics from the Data. In this step, the COV of the actual data on interevent times is calculated. One challenge we face in completing this step for the present analysis is the uncertainty in the data on interevent times. Uncertainty is present in at least three components in the data: (1) in the estimated turbidite age, (2) about which of the multiple turbidites identified for an event represents the true age of the event, and (3) about the completeness of the catalog of full-rupture events identified by Goldfinger et al. (2012). For the present analysis, the uncertainty in the first two components was formally analyzed using Monte Carlo simulation, as described below. The uncertainty in the third component is addressed through alternative branches on the logic tree.

The uncertainty in the turbidite age is characterized by the "+2 sigma" and "−2 sigma" bounds shown in Table 1. A simple triangular distribution was assumed for turbidite age because data were only available for the mean age and the bounds. The mode was set equal to the mean age, the upper bound was set equal to (mean + "+2 sigma"), and the lower bound was set equal to (mean − "−2 sigma"). The age of each turbidite was simulated by random sampling from the triangular distribution.

We address the uncertainty regarding which turbidite truly represents the age of the event in question with bootstrap random sampling. In this method, a randomly selected turbidite from the set of applicable turbidites is assumed to represent the true age of the event. This method assumes that each turbidite has the same chance of representing the so-called true age of a given event.

A Monte Carlo simulation is used to generate 1000 catalogs of earthquakes, with each catalog containing 18 events. The date of event T1 is fixed at 250 years for all catalogs because the event is known historically (B.P. 1950; Satake et al., 2003; Atwater et al., 2005). Based on the minimum sedimentation observed between consecutive turbidites (table 8 in Goldfinger et al., 2012), the simulated interevent time was constrained to be no less than 100 years. Table 2 illustrates key portions of the data generated for one simulated sample. Consider, for example, event T3, for which eight turbidites are associated in the Goldfinger data. The simulated ages of the eight turbidites range from 731 to 872 years in this sample (Table 2). The value of 826 years is randomly selected in this sample to represent the "true" age of this event. As noted previously, the age of event T1 is fixed at 250 years. The interevent time between T1 and T3 is, therefore, calculated to be 576 (826 − 250) years. Each simulated sample generates ages of 18 events, from which 17 interevent times are calculated. This process is repeated 1000 times to obtain 1000 catalogs of earthquakes, with each catalog providing ages of 18 events and the resulting 17 interevent times.

Figure 3. Distribution of sample COV assuming exponential distribution of interevent times (based on 1000 Monte Carlo simulation trials, each trial with 17 interevent times).
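A condensed sketch of one such simulation trial is given below, under the assumptions just described (triangular age distributions, a bootstrap choice among each event's turbidites, T1 fixed at 250 years, and a minimum 100-year separation); event_table is a hypothetical stand-in for the Table 1 data and lists only two events for brevity.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for Table 1: each event maps to tuples of
# (best_estimate, plus_2sigma, minus_2sigma), one tuple per turbidite.
event_table = {
    "T3": [(862, 124, 95), (736, 104, 119), (822, 137, 179), (808, 105, 109)],
    "T4": [(1204, 119, 120), (1328, 134, 132), (1231, 90, 101)],
    # ... events T5 through T18 would be listed here ...
}

def simulate_event_age(turbidites):
    # Bootstrap step: pick one turbidite at random to represent the event, then
    # sample its age from a triangular distribution whose mode is the best
    # estimate and whose bounds are the +/- 2 sigma limits.
    best, plus, minus = turbidites[rng.integers(len(turbidites))]
    return rng.triangular(best - minus, best, best + plus)

def simulate_catalog(event_table, event_order, t1_age=250.0, min_gap=100.0):
    ages = [t1_age]  # T1 is fixed because the 1700 A.D. event is known historically
    for event_id in event_order:
        age = simulate_event_age(event_table[event_id])
        # Enforce the assumed minimum 100-year separation between events.
        age = max(age, ages[-1] + min_gap)
        ages.append(age)
    return np.diff(ages)  # interevent times (17 of them for a full 18-event catalog)

interevent_times = simulate_catalog(event_table, event_order=["T3", "T4"])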

For each simulated catalog, 17 interevent times and a COV for the catalog are calculated. Figure 4 shows the distribution of the COV calculated from the simulated data. The mean COV is 0.5, the range is from 0.3 to 0.69, and the 95th and 99th percentile values are 0.58 and 0.62, respectively. The data-derived COV distribution in Figure 4 can be compared with that for an exponential distribution shown in Figure 3. Figure 5 shows a direct comparison of the cumulative frequency plots of the two distributions. Almost all of the data-derived COV values are below the smallest COV sampled from an exponential distribution.

Calculate the Significance Probability (p). The significance probability (p) is the probability of getting a COV value at least as extreme as that found in the sample data by chance alone if the null hypothesis was true. Low values of p suggest that the null hypothesis is unlikely to be true and should be rejected in favor of the alternative hypothesis. For the present analysis, p is less than 0.05 for each of the 1000 simulated catalogs of events.
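In terms of the two simulated distributions, this significance probability can be viewed as the fraction of null (exponential) COV values that are at least as extreme as a given catalog's COV. A sketch, assuming cov_null from the earlier snippet and a hypothetical array cov_csz holding the 1000 data-derived COVs:

import numpy as np

def p_value(cov_observed, cov_null):
    # Because the CSZ COVs fall in the lower tail of the null distribution,
    # "at least as extreme" means as small or smaller than the observed COV.
    return np.mean(cov_null <= cov_observed)

# Reject the Poisson null at alpha = 0.05 for every catalog with p < 0.05:
# rejected = np.array([p_value(c, cov_null) for c in cov_csz]) < 0.05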

Draw an Appropriate Conclusion. In this step, the significance probability, p, is compared to the limiting probability of false positive error, α. The decision rule is specified as follows:

if p < α, reject the null hypothesis in favor of the alternative hypothesis;

if p ≥ α, do not reject the null hypothesis.

Using this rule, the null hypothesis (of exponential distribution) is rejected for all 1000 simulated catalogs at α of 0.05. Thus, this analysis shows strong evidence for a nonexponential distribution of recurrence intervals, or, equivalently, for non-Poisson behavior in the generation of events.

Step 2: Perform Cluster Analysis

We emphasize that our analyses of clustering are purely statistical. We are fully aware that bringing in other geologic constraints, as suggested by Goldfinger et al. (2012) and as discussed previously, would be valuable. In the cluster analysis, each sequence of events separated by unusually long interevent times defines a cluster, and the time between successive clusters defines a gap. If a catalog has multiple clusters and each cluster contains exactly the same number of events, this would provide strong evidence of clustering. On the other hand, if the number of intracluster events is highly variable, this would suggest that the clusters are likely to be spurious; that is, they may occur randomly, and hence clustering should not be assumed.

Step 1 shows strong evidence for non-Poisson behavior of the events. Two additional conditions must be met before one can infer clustering. One condition is that the data should exhibit multiple sequences of events, each separated by unusually long time intervals. The second condition is that the clustering behavior observed in a given catalog of events should be distinguishable from spurious clusters that could occur in a catalog in which interevent times are drawn randomly from a continuous distribution such as a Gaussian distribution. We address these conditions in sequence.

First, cluster analysis is performed to assess whether events occur in clusters. The commercial software package JMP, developed by the SAS Institute, is used to perform cluster analysis. The specific method is called hierarchical clustering and is described by JMP (2005).

Table 2. Portion of One Simulated Catalog of 18 Events

Event ID  Turbidite  Best      +2     −2     Simulated      Simulated  Interevent
          Number     Estimate  Sigma  Sigma  Turbidite Age  Event Age  Time (Years)
T1        1          270       94     91     N/A            250*       576
          2          275       91     118    N/A
          3          269       103    90     N/A
T3        1          862       124    95     872            826
          2          736       104    119    731
          3          822       137    179    808
          4          808       105    109    807
          5          828       84     91     826
          6          754       66     65     755
          7          810       83     93     807
          8          796       87     105    790
T18       1          9993      78     230    9860           9860
          2          9679      208    189    9731
          3          9873      94     199    9853
          4          9741      292    315    9547
          5          9837      154    186    9763
          6          9773      186    244    9707

*Age is fixed because the event is known historically.


Hierarchical clustering is an exploratory data analysis tool to identify whether data points share similar values and hence can be grouped. The analysis starts with each point as its own cluster. At each step, the clustering process calculates the distance between successive clusters and combines the two clusters that are closest together. This combining process continues until all points are in one final cluster.

The joining of different data points into clusters is shown in the form of a tree, called a dendrogram, with the single data points as leaves, the final single cluster of all points as the trunk, and the intermediate cluster combinations as branches (Fig. 6). A table of clustering history is generated, which shows the distance bridged as a function of the number of clusters. A scree plot is also produced, in which the ordinate is the distance that was bridged to join the clusters at each step. Often there is a change in slope in the scree plot where the distance jumps up suddenly. Such a break helps in determining an appropriate number of clusters that should be assumed for the data.

The single linkage method, as described in the JMP (2005) guide, is appropriate for evaluating clustering of events in time. It defines the distance between two clusters as the minimum distance (along the time axis for this analysis) between an event in one cluster and an event in the other cluster.
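The study used JMP for this step; an equivalent open-source sketch using SciPy's hierarchical clustering (single linkage on the one-dimensional event ages) is shown below. The ages are placeholders for one simulated catalog, and the four-cluster cut mirrors the example discussed later.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder event ages (years) for one simulated 18-event catalog.
ages = np.array([250., 580., 830., 1210., 1550.,
                 2540., 3050., 3490., 4120., 4780.,
                 5930., 6400., 7150., 7620., 8150.,
                 8850., 9100., 9830.])

# Single-linkage hierarchical clustering on the time axis: the distance between
# two clusters is the minimum gap between any pair of their events.
Z = linkage(ages.reshape(-1, 1), method="single")

# The scree-plot ordinate is the merge distance at each step (third column of Z);
# a sharp jump suggests where to cut the tree.
merge_distances = Z[:, 2]

# Cut the tree into four clusters and list the event indices in each cluster.
labels = fcluster(Z, t=4, criterion="maxclust")
for k in sorted(set(labels)):
    print("cluster", k, "-> events", (np.where(labels == k)[0] + 1).tolist())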

Cluster analysis is performed for each of 20 randomly simulated catalogs, with each catalog containing 18 events. A sample size of 20 is generally considered to be adequate for estimating key statistics such as the mean and standard deviation. Two key findings of this analysis are as follows.

Six out of the 20 catalogs contain one or zero closed clusters; 8 catalogs contain two closed clusters; 4 contain three closed clusters; and 2 contain four closed clusters. No catalog contains more than four closed clusters. Thus, a majority (60%) of the catalogs contain two or three closed clusters.

The most recent cluster T1–T5 appears consistently in each of the 20 catalogs. In addition, the preceding cluster T6–T10 appears in 14 of the 20 catalogs. The appearance of the older clusters varies among the catalogs. This is possibly a reflection of the greater reliability of the estimated ages of the more recent events relative to those of the older events, or the actual variability in the rupture process. The consistent occurrence of the same recent clusters in the majority of catalogs suggests that the clustering of recent events is stable in spite of the uncertainty in the event ages.

Figure 6 shows the results of the cluster analysis for one typical catalog of events. The scree plot shows a sharp jump when the number of clusters is reduced from four to three. This suggests that a large distance is bridged when four clusters are collapsed into three. This is confirmed in the table showing the (normalized) distance between clusters (Fig. 6). Based on these results, four clusters are tentatively identified for this catalog, subject to confirmation by the statistical testing described in the next step. The four clusters identified for the catalog in Figure 6 are: cluster 1: events T1–T5; cluster 2: events T6–T10; cluster 3: events T11–T15; and cluster 4: events T16–T18.

These four clusters define the following three gaps: gap 1: between events T6 and T5; gap 2: between events T11 and T10; and gap 3: between events T16 and T15.

In cluster analysis, the decision of where to cut off the scree plot in order to identify the number of potential clusters is based on judgment. To confirm the validity of the judgments made in the present analysis, we assess whether the clusters identified for each of the 13 CSZ catalogs are in fact separated by intervals (gaps) that are statistically greater than the intracluster interevent times. An appropriate statistical procedure for this assessment is the upper prediction limit (UPL) derived from the distribution of intracluster recurrence intervals.

The UPL is the upper limit of a statistical interval calculated to include one or more observations from the same population with a specified confidence (USEPA, 1989; Gibbons, 1994). A common choice for the confidence is 95%. If each gap identified for a simulated CSZ catalog is longer than the UPL of the intracluster intervals, this would validate the cutoff points on the scree plots that are used to identify an appropriate number of plausible clusters. If, on the other hand, several gap intervals are shorter than the UPL, this would suggest that the assumed cutoff points of the scree plot are incorrect.

Figure 4. Distribution of sample COV using CSZ interevent times (based on 1000 Monte Carlo simulation trials, each trial with 17 interevent times).

Figure 5. Comparison of cumulative frequency distributions of COV. The solid and dashed lines are the COVs of the exponential and simulated CSZ event catalogs, respectively.

If intracluster recurrence intervals follow a normal distribution, then a parametric UPL is derived based on that distribution. If the assumption of a normal distribution is not appropriate, a nonparametric UPL is used, which is set equal to the maximum of the intracluster recurrence intervals.

For the present analysis, the normality of the intracluster recurrence intervals is checked using the Shapiro–Wilk test (Gilbert, 1987; Gibbons, 1994; JMP, 2005). For 10 of the 13 statistically significant catalogs, it was reasonable to assume a normal distribution for the intracluster recurrence intervals.

A 95% parametric UPL is calculated using the normal distribution. For the other three catalogs, the nonparametric UPL is used. Each gap interval is compared against the UPL.
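A sketch of this check is given below. It assumes one common form of the parametric 95% upper prediction limit for a single future observation from a normal population, mean + t(0.95, n − 1) · s · sqrt(1 + 1/n); the paper does not state which parametric formula was used, and the intracluster interevent times here are placeholders.

import numpy as np
from scipy import stats

# Placeholder intracluster interevent times (years) for one simulated catalog.
intracluster = np.array([310., 420., 360., 510., 280., 450., 390., 330.,
                         470., 360., 300., 440., 520.])
n = len(intracluster)

# Shapiro-Wilk test: a p-value above ~0.05 means normality is not rejected.
_, p_normal = stats.shapiro(intracluster)

if p_normal > 0.05:
    # Parametric 95% UPL for a single future observation (one standard form).
    t_crit = stats.t.ppf(0.95, df=n - 1)
    upl = intracluster.mean() + t_crit * intracluster.std(ddof=1) * np.sqrt(1 + 1 / n)
else:
    # Nonparametric fallback: the maximum intracluster interval.
    upl = intracluster.max()

print(f"95% UPL = {upl:.0f} years")
# A gap supports the clustering interpretation if it exceeds this UPL.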

To illustrate this procedure, consider the catalog shown previously in Figure 6. For this catalog, four clusters are identified. Figure 7 shows a histogram and a box plot of the intracluster interevent times for this catalog. The p value for the normality test is 0.17, suggesting that the assumption of a normal distribution was not unreasonable for these data. A parametric 95% UPL using the normal distribution is calculated to be 823 years. The three gap intervals for this simulated catalog are: gap 1, between events T5 and T6 = 851 years; gap 2, between events T10 and T11 = 1179 years; and gap 3, between events T15 and T16 = 998 years.

Each of the three gap intervals exceeds the UPL of 823 years, thus confirming the validity of the judgment to define the four clusters identified in Figure 6 for this catalog based on the cluster analysis.

All but 1 of the 47 gap intervals exceed the corresponding UPL of the intracluster interevent times. This finding validates the decisions made regarding where the scree plot should be cut off in order to identify an appropriate number of clusters for each of the 20 simulated CSZ catalogs. However, this finding does not necessarily mean that the clusters thus identified are distinguishable from spurious clusters that could occur by chance alone. That evaluation is made in the next step.

Step 3: Evaluate Probability of Clustering Behavior in CSZ Data

The cluster analysis helps to identify plausible clusters in each simulated catalog. It is possible, however, that spurious clusters could appear by chance alone, even in catalogs of interevent times drawn randomly from a continuous distribution such as a Gaussian distribution. The objective of this step is to evaluate whether the clustering pattern observed in each CSZ catalog in step 2 is statistically significant (i.e., nonspurious). The probability of clustering behavior in a CSZ catalog is then evaluated using the proportion of all simulated CSZ catalogs for which clustering was statistically confirmed.

A catalog drawn from the CSZ data is considered to show statistically significant clustering only if both of the following criteria are met: (1) the catalog contains at least two closed clusters identified through the cluster analysis in step 1 (as discussed below, fewer than two closed clusters can occur frequently by chance alone in catalogs of 17 randomly drawn interevent times and hence would not support the clustering hypothesis); and (2) for those catalogs containing at least two closed clusters, the standard deviation of the number of intracluster events is less than a threshold value, which is derived as described below.

The threshold value of the standard deviation is selectedfrom the lower end of the distribution of the standard devia-tions in catalogs of randomly drawn interevent times. The

Figure 6. Results of cluster analysis of one simulated CSZ cata-log of events. This is a screen shot of the JMP software output. Itincludes a dendrogram, scree plot, and clustering history. A dendro-gram is a tree diagram that lists each event and shows which clustercontains the event. The dendogram shown in this figure identifiesfour clusters and the events in each cluster are identified with acommon symbol. A scree plot is a plot of the number of clustersin reverse order (from the maximum to the minimum) on the x axisand the (statistical) distance that was bridged in that step on the yaxis. A break in the scree plot where there is a change in slope helpsto determine the number of clusters in the given data. The scree plotshown in this figure suggests a break when the number of clusters is4. The clustering history shows the details of each clustering step(the number of clusters, the minimum statistical distance bridged,and the pair of events with the minimum statistical distance).


The selection of the threshold standard deviation is made such that the probability of meeting both criteria in catalogs of randomly drawn interevent times would be less than a specified statistical significance level, p. A common choice for the significance level in a statistical test is 0.05 (Benjamin and Cornell, 1970). This significance level is the probability that the clustering behavior observed in a CSZ catalog could have occurred by chance alone in catalogs of randomly drawn interevent times. Only if the probability of getting the clustering behavior observed in a given CSZ catalog by chance alone is less than 0.05 is the catalog assumed to show statistically significant clustering. Otherwise, the evidence for clustering is considered to be not strong enough and, by default, the catalog is assumed to follow nonclustering behavior.

To estimate the threshold standard deviation, we simulate 40 catalogs with interevent times drawn randomly from a Gaussian distribution with a mean recurrence interval of 563 years and a standard deviation of 252 years. These statistics of the recurrence interval are calculated using the best estimates of the event ages in the Goldfinger et al. (2012) data.
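A minimal sketch of this simulation step is shown below. The random seed is arbitrary, and negative draws (physically meaningless interevent times) are simply clipped; the paper does not state how, or whether, such draws were handled, so that treatment is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2012)        # arbitrary seed, for reproducibility only
MEAN_RI, SD_RI = 563.0, 252.0            # recurrence-interval statistics (years)
N_CATALOGS, N_INTERVALS = 40, 17         # 18 events per catalog -> 17 interevent times

# Each row is one synthetic catalog of interevent times drawn from a Gaussian.
catalogs = rng.normal(MEAN_RI, SD_RI, size=(N_CATALOGS, N_INTERVALS))

# Guard against rare non-physical negative intervals (an assumption of this sketch).
catalogs = np.clip(catalogs, 1.0, None)
```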

The same cluster analysis that is used for the CSZ catalogs is performed on the random Gaussian catalogs. Any clustering pattern identified in these 40 catalogs is considered to be spurious because the events are drawn randomly from a Gaussian distribution. Out of 40 catalogs, 26 have zero or one closed clusters, nine have two closed clusters, three have three closed clusters, one has four closed clusters, and one has five closed clusters. As is stated above, the probability of getting a spurious clustering pattern with one or zero closed clusters is high (26/40 = 0.65). This is the basis of the first criterion that requires at least two closed clusters in a catalog before it could be considered as a candidate for clustering. The probability of meeting this criterion for the Gaussian catalogs is 14/40 = 0.35.

This probability, commonly referred to as the probability of a false positive error, is still well above the required statistical significance level of 0.05, and hence a second criterion, a threshold on the standard deviation, is necessary. To achieve an overall significance level of 0.05, the probability of meeting the second criterion in random Gaussian catalogs needs to be 0.05/0.35 = 0.143. Thus, we need a standard deviation threshold such that the probability of getting a lower value in random Gaussian catalogs with at least two closed clusters is 0.143.

To derive this threshold standard deviation, we analyze the distribution of the number of intracluster events in each of the 14 Gaussian catalogs that met the first criterion (i.e., have at least two closed clusters). The standard deviation of the number of intracluster events identified in each catalog is calculated. Note that the oldest and most recent clusters in a catalog are open clusters. That is because we do not know when the oldest cluster began and when the most recent cluster will end. For these open clusters, we only know the lower bound on the number of events. For such censored data, the standard equation for calculating the standard deviation of a dataset cannot be applied.

To calculate the standard deviation, we use the Kaplan–Meier (KM) method of survival analysis, which is designed to handle such right-censored data.

Figure 7. Calculation of upper prediction limit (UPL) for one simulated CSZ catalog of events. This is a screen shot of the JMP software output. The left side of the figure at the top shows a histogram and the fitted normal distribution. The right side of the figure shows a box plot, which displays the data distribution. The two edges of the box are the 25th and 75th percentiles of the data, the line in the middle of the box is the median (50th percentile), and the lines outside the box extend to the maximum and minimum data values. The diamond in the box shows the 95% confidence limits on the median. The summary statistics, including the quantiles, are shown next. Below the summary statistics are the parameters of the fitted normal distribution. The results of the goodness-of-fit test (specifically, the Shapiro–Wilk test) for checking normality are shown next. The "Prob < W" column shows the significance level of the normality test. If the significance level is less than 0.05, the normality assumption is rejected; otherwise, the data may be assumed to be normally distributed. This figure shows a significance level of 0.1688, suggesting that the assumption of a normal distribution is reasonable for this dataset. The last part of the figure shows the calculation of the 95% UPL assuming a normal distribution. The calculated 95% UPL for this dataset is shown to be 822.8798.


The procedure described in Helsel (2005) is followed to estimate the standard deviation when the KM method is used. If the highest count of intracluster events in a catalog is in an open cluster and this count is specified as a censored value (i.e., as a lower bound), this results in assuming infinity as the upper bound on the count of intracluster events. This, in turn, generates biased estimates of parametric statistics such as the mean and standard deviation. To avoid this bias, if the maximum count of intracluster events is greater than the count of events in all closed clusters, the maximum count in the open cluster is redefined to be a noncensored value.
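The sketch below shows one way a KM-style mean and standard deviation can be obtained for right-censored counts: the jumps of the estimated survival function give a probability mass for each observed count, from which the moments follow. It is a simplified illustration of the kind of procedure described in Helsel (2005), not the authors' exact implementation; the function name and the example cluster sizes are hypothetical.

```python
import numpy as np

def km_mean_sd(counts, is_open):
    """Kaplan-Meier-type mean and standard deviation of cluster sizes when
    counts from open clusters are only lower bounds (right-censored)."""
    counts = np.asarray(counts, dtype=float)
    censored = np.asarray(is_open, dtype=bool)

    # Bias fix described in the text: if the largest count sits in an open
    # cluster, redefine it as a noncensored (observed) value.
    imax = int(np.argmax(counts))
    if censored[imax]:
        censored = censored.copy()
        censored[imax] = False

    # Survival function evaluated at the observed (uncensored) counts.
    values = np.unique(counts[~censored])
    surv, s = {}, 1.0
    for v in values:
        at_risk = np.sum(counts >= v)
        observed = np.sum((counts == v) & ~censored)
        s *= 1.0 - observed / at_risk
        surv[v] = s

    # Probability mass = drop of the survival function at each observed count.
    prev, probs = 1.0, []
    for v in values:
        probs.append(prev - surv[v])
        prev = surv[v]
    probs = np.array(probs) / np.sum(probs)   # renormalize any residual censored mass

    mean = np.sum(values * probs)
    sd = np.sqrt(np.sum(probs * (values - mean) ** 2))
    return mean, sd

# Hypothetical cluster sizes from one simulated catalog; True marks open clusters.
print(km_mean_sd([4, 5, 3, 6, 2], [True, False, False, False, True]))
```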

The estimated standard deviation of the number of intracluster events for the 14 Gaussian catalogs ranges from 1.9 to 7 with an average of 4.3. The estimated 0.143 quantile of the set of standard deviations is 2.96. If we apply the same two criteria of statistical significance to the 40 random Gaussian catalogs, only two out of the 40 catalogs would meet both criteria. That is, only two random Gaussian catalogs have at least two closed clusters and have a standard deviation of less than 2.96. For this set of 40 random Gaussian catalogs, the probability of incorrectly declaring a catalog to have statistically significant clustering behavior would be 2/40 = 0.05, which confirms that the statistical significance level of 0.05 is achieved.

Next, the standard deviation of the number of intracluster events is calculated for each of the 14 CSZ catalogs containing at least two closed clusters. The same KM procedure that is used for the random Gaussian catalogs is used to analyze censored data in the CSZ catalogs. The estimated standard deviation of the number of intracluster events for the 14 CSZ catalogs ranges from 0 to 3.5 with an average of 1.8. The average standard deviation of the number of intracluster events for the CSZ catalogs is substantially less than that for the random Gaussian catalogs (1.8 versus 4.3). Applying the standard deviation threshold of 2.96, only 1 of the 14 qualifying CSZ catalogs has a higher standard deviation. Therefore, clustering cannot be assumed for this one catalog. However, the default hypothesis of no clustering is rejected for the other 13 catalogs (out of a total of 20 catalogs). Hence, 13 of the 20 catalogs are statistically distinguishable from spurious clusters that could occur by chance alone. The number of clusters for the different simulated catalogs is not fixed a priori; it varies based on the results of the cluster analysis of each simulated catalog. The proportion of CSZ catalogs with statistically significant clustering behavior is then estimated as 13/20 = 0.65. This is taken as the probability that the CSZ data exhibit clustering.

Step 4: Calculate Recurrence Intervals for the Plausible Recurrence Processes

The analysis in the previous steps shows strong evidence for time-dependent earthquake recurrence. However, a time-independent process (with a Poisson distribution of recurrence intervals) may also be considered, recognizing that the Goldfinger et al. (2012) data might be incomplete. Equivalent recurrence intervals are calculated for all alternative processes at 5%, 50%, and 95% confidence. An equivalent recurrence interval for an alternative process is defined as the interval that, when used in a Poisson model, would reproduce the probability of an event in the next 100 years calculated for that process. A summary of results for all processes is presented in Table 3 for the base case (18 events, without event T2) and for a sensitivity analysis case in which the standard deviation of each turbidite age is doubled. The details of the calculation for each process are provided below.

Calculation of Mean Recurrence Intervals for a Poisson Process. A Poisson process is characterized in terms of the mean recurrence interval. The mean recurrence interval is calculated for each of the 20 CSZ catalogs. The overall mean recurrence interval for the base case, averaged over the 20 catalogs, is 559 years (Table 3). For a Poisson process (which means recurrence intervals are exponentially distributed), the standard deviation of the recurrence interval is equal to the mean value. The estimate of the mean recurrence interval, which is based on a sample of 17 data points in the CSZ catalog, is subject to statistical uncertainty. The standard deviation of the mean recurrence interval is approximately estimated as the standard deviation of the recurrence interval divided by the square root of the sample size n. The estimated standard deviation of the mean recurrence is, therefore, 559/√17 = 135.7 years. The mean recurrence is assumed to follow a normal distribution with a mean of 559 years and a standard deviation of 135.7 years (Table 3). Using these statistics, the 5th, 50th, and 95th percentiles of the mean recurrence interval are calculated.
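As an arithmetic check, the percentiles reported in Table 3 for this branch can be reproduced from the quantities quoted in the text; a minimal sketch is shown below (rounding accounts for the small differences from the tabulated values).

```python
import numpy as np
from scipy import stats

mean_ri = 559.0                 # average recurrence interval over the 20 catalogs (years)
n = 17                          # interevent times per 18-event catalog
se = mean_ri / np.sqrt(n)       # ~135.7 yr (for an exponential, sd equals the mean)

# 5th, 50th, and 95th percentiles of the mean recurrence interval
print(np.round(stats.norm.ppf([0.05, 0.50, 0.95], loc=mean_ri, scale=se)))
# -> roughly [336. 559. 782.], consistent with the base-case values in Table 3
```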

Calculation of Equivalent Recurrence Intervals for Clustered Process. Recurrence intervals for this process depend on the following factors: (1) time elapsed since the last full-rupture event in the study area; (2) time window during which the probability of an event is of interest; and (3) current system state (whether the system is within a cluster or in a gap).

For the present analysis, the last full-rupture event occurred in January 1700, about 310 years ago. We adopt a time window of 100 years. A probabilistic evaluation of the current system state is made for each simulated catalog as described below.

As noted previously, the current open cluster T1–T5 of four events appears consistently in all of the CSZ catalogs. The probability that this cluster would continue for at least one more event is estimated as the conditional probability that the current cluster would contain at least five events given that it contains at least four events. Because of the censored data from open clusters, the KM method is again used to estimate the probability of exceeding a specified number of events.

For this estimation, we use the closed and open clusters from the 13 CSZ catalogs that were assessed to exhibit statistically significant clustering.


These 13 catalogs contained a total of 60 clusters, of which 34 were closed and 26 were open. The results of the KM analysis showed that the probability of getting at least four events in a cluster is 0.70 and the probability of getting at least five events in a cluster is 0.573. The probability that the current cluster will continue for at least one more event is then calculated as 0.573/0.70 = 0.82. The probability that the current cluster closed at four events (i.e., the system is currently in a gap) is then 1 − 0.82 = 0.18.
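Restated as arithmetic, using the KM exceedance probabilities quoted above, the two quantities follow directly:

```python
p_at_least_4 = 0.70    # KM probability that a cluster contains >= 4 events
p_at_least_5 = 0.573   # KM probability that a cluster contains >= 5 events

p_continue = p_at_least_5 / p_at_least_4   # P(>= 5 events | >= 4 events) ~ 0.82
p_in_gap = 1.0 - p_continue                # ~ 0.18
print(round(p_continue, 2), round(p_in_gap, 2))
```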

If the system were currently within a cluster, the probability of having an event within the next 100 years is equal to the probability that the intracluster recurrence interval is less than 410 years (the time elapsed since the last event of 310 years plus the assumed time window of 100 years). This probability is calculated from the empirical distribution of intracluster recurrence intervals for the given catalog. Continuing with the example of the catalog shown previously in Figure 6, there are 14 intracluster recurrence intervals and four of them are less than 410 years. Thus, the probability of having an event within 410 years of the last event, and hence within the next 100 years, is calculated to be 4/14 = 0.29, similar to what Goldfinger et al. (2012) estimated for the northern CSZ. An equivalent Poisson recurrence interval is then calculated that would reproduce this probability of 0.29 of another event in the next 100 years if a Poisson model were used. This equivalent recurrence interval is 100/0.29 ≈ 350 years. Equivalent recurrence intervals are calculated in this manner for all 13 simulated catalogs with statistically confirmed clustering (Table 3). The resulting 13 equivalent recurrence intervals are used to empirically estimate the 5th, 50th, and 95th percentiles. As shown in Table 3, the 50th percentile of the equivalent recurrence interval is 260 years. The probability of an event in the next 100 years, therefore, is estimated to be 100/260 ≈ 0.38.
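The equivalent interval used here is simply the time window divided by the target probability. For comparison only, an exact Poisson inversion (solving 1 − exp(−window/T) = p) gives a somewhat shorter interval; the sketch below shows both forms, with the exact variant included purely as an aside and not used in the paper's tables.

```python
import numpy as np

def equivalent_interval(p_event, window=100.0, exact=False):
    """Poisson recurrence interval that reproduces p_event over `window` years."""
    if exact:
        return -window / np.log(1.0 - p_event)   # solves 1 - exp(-window/T) = p_event
    return window / p_event                       # simple ratio used in the text

print(round(equivalent_interval(0.29)))              # ~345 yr, quoted as ~350 yr in the text
print(round(equivalent_interval(0.29, exact=True)))  # ~292 yr from the exact inversion
```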

If the system were currently in a gap, the simulated data show that the gap interval is substantially higher than 410 years for all of the 47 gaps that are identified. Consequently, the probability of another event would be practically zero and the equivalent recurrence interval would be very large. To obtain a conservative estimate of the recurrence interval, we assume that the distribution of the recurrence interval derived for the Gaussian distribution would be shifted to the right such that the 95th percentile of the recurrence interval from the Gaussian distribution is assumed to be the 5th percentile of the recurrence interval when the system is in a gap. The 50th and 95th percentiles of recurrence are then calculated by scaling up the 5th percentile using the ratios of the other Gaussian percentiles to the 5th percentile (Table 3).

Calculation of Equivalent Recurrence Intervals for Time-Dependent, Nonclustered Process. If no clustering is assumed, all 17 interevent times in each CSZ catalog can be used to calculate the probability that the recurrence interval would be less than 410 years.

Table 3
Summary of Results

                                                Base Case             Sensitivity Analysis
Scenario                                        (Without Event T2)    Case (Double Sigma)*

P (Time-independent model)                      0.05                  0.05
P (Time-dependent model)                        0.95                  0.95

Given time-independent model (assume Poisson process)
  5% of mean recurrence (years)                 336                   338
  50% of mean recurrence (years)                559                   562
  95% of mean recurrence (years)                783                   787

Given time-dependent model
  P (clustered behavior)                        0.65                  0.35
  P (nonclustered behavior)                     0.35                  0.65

Given time-dependent, clustered model
  P (currently in a cluster)                    0.82                  0.90
  P (currently in a gap)                        0.18                  0.10

Given time-dependent, clustered model and currently in a cluster
  5% of equivalent mean recurrence (years)      210                   165
  50% of equivalent mean recurrence (years)     260                   260
  95% of equivalent mean recurrence (years)     350                   411

Given time-dependent, clustered model and currently in a gap
  5% of equivalent mean recurrence (years)      856                   875
  50% of equivalent mean recurrence (years)     1012                  1038
  95% of equivalent mean recurrence (years)     1168                  1202

Given time-dependent model; no clustering (assume Gaussian model)
  5% of equivalent mean recurrence (years)      627                   637
  50% of equivalent mean recurrence (years)     741                   756
  95% of equivalent mean recurrence (years)     856                   875

*The effect of increasing the standard deviation of each turbidite age by a factor of two.


The 17 interevent times in each catalog fit a Gaussian distribution well. Because a time-dependent process is assumed, the probability that an event would occur in the next 100 years is calculated as the conditional probability that an event would occur in the next 100 years, given that 310 years have elapsed since the last earthquake. This conditional probability is calculated for each CSZ catalog. The average conditional probability over all 20 catalogs is 0.135. The corresponding equivalent mean recurrence interval is then calculated as 100/0.135 = 741 years. The average standard deviation of recurrence intervals over the 20 catalogs is 287 years. For a Gaussian distribution, the sample mean also follows a Gaussian distribution, with a mean equal to the sample mean and a standard deviation equal to the standard deviation divided by the square root of the sample size. For the present analysis, the standard deviation of the mean recurrence is calculated as 287/√17 = 70 years. Using a mean of 741 years and a standard deviation of 70 years, the 5th and 95th percentiles of the mean recurrence were then calculated (Table 3). Using the mean recurrence of 741 years, the probability of an event in the next 100 years for this branch of the logic tree is estimated to be 100/741 ≈ 0.14.
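A sketch of the conditional-probability calculation for a single Gaussian recurrence model is shown below. The mean and standard deviation are illustrative values close to the base-case catalog statistics quoted in the text; the paper averages this quantity over its 20 simulated catalogs rather than evaluating it for one distribution.

```python
from scipy import stats

def p_event_given_survival(mean, sd, elapsed=310.0, window=100.0):
    """P(event within `window` yr | no event in the `elapsed` yr since the last one)
    for a Gaussian recurrence-interval distribution."""
    F = stats.norm(loc=mean, scale=sd).cdf
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

print(round(p_event_given_survival(mean=563.0, sd=287.0), 3))  # ~0.13, close to the 0.135 average
```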

Calculation of Overall Probability of an Event in the Next 100 Years. In step 3, the probability of clustering is estimated to be 0.65. In step 4, four important probabilities were estimated. First, the probability that the current cluster will continue for at least one more event, given clustering, is estimated to be 0.82. Second, the probability of an event in the next 100 years, given clustering and given that the current cluster will continue, is estimated to be 0.38. Third, the probability of an event in the next 100 years, given clustering and given that the system is currently in a gap, is assessed to be practically zero. Fourth, given no clustering, the probability of an event in the next 100 years is estimated to be 0.14. All of these probabilities can now be combined to estimate the overall probability of an event in the next 100 years.

The inputs needed for this calculation are as follows: P(clustering) = 0.65, P(current cluster will continue given clustering) = 0.82, P(an event in the next 100 years given clustering and current cluster will continue) = 0.38, P(an event in the next 100 years given clustering and system is currently in a gap) ≈ 0, P(no clustering) = 1 − 0.65 = 0.35, and P(an event in the next 100 years given no clustering) = 0.135. The probability of a full-rupture event in the next 100 years is then calculated as (0.65 × 0.82 × 0.38) + (0.35 × 0.135) = 0.25 (Table 4). Probabilities for 30 and 50 years are also shown in Table 4.
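These inputs combine as a simple weighted (logic-tree) sum over the branches; the short sketch below reproduces the 100-year base-case value, with variable names chosen for illustration.

```python
p_cluster         = 0.65    # P(clustered recurrence behavior)
p_continue        = 0.82    # P(current cluster continues | clustering)
p_event_incluster = 0.38    # P(event in next 100 yr | clustering, cluster continues)
p_event_ingap     = 0.0     # P(event in next 100 yr | clustering, currently in a gap) ~ 0
p_event_nocluster = 0.135   # P(event in next 100 yr | no clustering)

p_100yr = (p_cluster * (p_continue * p_event_incluster
                        + (1.0 - p_continue) * p_event_ingap)
           + (1.0 - p_cluster) * p_event_nocluster)
print(round(p_100yr, 2))    # 0.25
```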

Sensitivity Analysis to Uncertainty in Turbidite Ages

Two sensitivity analysis scenarios were evaluated. The first examined the effect of including event T2, whereas the second examined the effect of the uncertainty in the turbidite ages. The results of the two scenarios are discussed below.

Effect of Including Event T2

As noted previously, event T2 is excluded from the base case because of the lack of onshore evidence for this event and its likely small size. However, Goldfinger et al. (2012) consider event T2 to be a full-rupture event. In this sensitivity analysis, we examined the effect of including T2. Using the same analysis steps as those described previously, we found that the probability that the earthquake recurrence process is clustered is 0.60. This is similar to the 0.65 probability for the base case. Thus, the effect of including T2 on the probability of clustering behavior is minimal. This is not a surprising result given that T2 fits well within the current cluster of T1–T5 and has little effect on the hypothesized gaps in the historic record. Had T2 occurred in a hypothesized gap (e.g., between T5 and T6), it could have significantly reduced the probability of clustering. However, the inclusion of T2 does have a significant effect on the probability that the current cluster would continue for at least one more event.

With event T2 included, the number of events in the current cluster becomes five (rather than four for the base case). Many of the hypothesized clusters that have at least four events contain a fifth event, but few clusters have a sixth event. As a result, the probability of another event in the current cluster becomes very small (less than 0.01) for this sensitivity analysis. For the base case (without event T2), the number of events in the current cluster is four and the estimated probability of getting another event in the current cluster is 0.82. Thus, if T2 is truly a legitimate full-rupture event, the current cluster is most likely closed at T1 (i.e., the system is currently in a gap) and the probability of another event in the next 100 years would be small (less than 0.05). Without event T2, this probability is 0.25 (Table 4).

Effect of Uncertainty in Turbidite Ages

A conservative uncertainty in the turbidite ages is expressed in terms of the (+2 sigma) and (−2 sigma) bounds shown in Table 1. A narrower range is justified based on a Bayesian combination of multiple ages (Goldfinger et al., 2012). For this sensitivity analysis, the effect of increasing the standard deviation of each turbidite age by a factor of two is evaluated. The results for this sensitivity analysis case are derived using the same methods of analysis as described previously (Table 3).

Table 4
Time-Dependent Probabilities for M 9 Earthquake

             Base Case             Sensitivity Analysis
Period       (Without Event T2)    Case (With Event T2)

30 years     0.15                  <0.02
50 years     0.17                  <0.03
100 years    0.25                  <0.05



The main impact of doubling the standard deviation of turbidite ages is on the probability of clustering in the CSZ data, which is reduced from 0.65 for the base case to 0.35 for the sensitivity analysis case (Table 3). This result shows that, as the uncertainty in the turbidite ages increases, the clustering inferred for the CSZ data becomes less distinguishable from spurious clusters that could occur by chance alone. A relatively minor impact of the sensitivity analysis is on the probability of continuing in a cluster, which increases from 0.82 to 0.90. Correspondingly, the probability that the system is currently in a gap decreases from 0.18 to 0.10.

Conclusions

The data developed by Goldfinger et al. (2012) on CSZ full-rupture earthquakes were used to statistically assess the relative credibility of alternative earthquake recurrence processes and to estimate recurrence intervals under the alternative processes. We recognize that the total reliance on the interpretations of Goldfinger et al. (2012) raises the question: would we reach the same conclusions if the turbidite data were interpreted differently by another set of eyes? Although the Goldfinger et al. (2012) study has undergone extensive review, that question cannot be answered until future interpretations become available.

The statistical analysis of the data is performed in four steps. In the first step, we test the hypothesis of a Poisson process using the standard procedure of statistical hypothesis testing. The results of this step show strong evidence for a non-Poisson process.

In the second step, we perform a cluster analysis to identify plausible clusters in the CSZ data. We used Monte Carlo simulation to generate 20 simulated catalogs, each of 18 events. The simulation is based on the best estimate of each event's turbidite age and the "+2 sigma" and "−2 sigma" bounds around that age, as reported by Goldfinger et al. (2012). For each simulated catalog, cluster analysis is performed to identify plausible clusters of events.

In the third step, the statistical significance of the plausible clusters identified in step 2 is evaluated to distinguish them from spurious clusters that could occur by chance alone. The results of this analysis demonstrate that 13 out of the 20 simulated catalogs show statistically significant clustering that can be distinguished from spurious clusters. Based on these results, we estimate the probability of clustering in the CSZ data to be 0.65.

In the fourth step, we calculate the probability of an event in the next 100 years for each alternative process. For the clustering process, the probability that the system would continue to be in a cluster for at least one more event is estimated to be 0.82. We then calculate equivalent recurrence intervals that would reproduce the calculated probabilities if used in a Poisson process.

The sensitivity of the results to including event T2, whose status as a full-rupture event is in doubt, and to doubling the standard deviation of turbidite ages was evaluated. The inclusion of event T2 does not change the probability of clustering, but does significantly reduce the probability that the current cluster would extend to one more event. Consequently, the overall probability of an event in the next 100 years is reduced from 0.25 to less than 0.05. If T2 is a full-rupture event, it is much more likely that the system is currently in a gap following the last event. Doubling the standard deviation of turbidite ages decreases the probability of clustering in the CSZ data from 0.65 to 0.35. This result shows that, as the uncertainty in the turbidite ages increases, the clustering inferred for the CSZ data becomes less distinguishable from spurious clusters that could occur by chance alone.

Data and Resources

The CSZ turbidite data are from Goldfinger et al. (2012). The cluster analysis used the software package JMP (2005). BC Hydro, 2012, Dam Safety—PSHA model, v. 2, Seismic Source Characterization (SSC) model, Engineering Report E658, unpublished report.

Acknowledgments

The approach described in this paper has been recently incorporated into a regional PSHA model for hydroelectric facilities in British Columbia. This study is supported and funded by the BC Hydro & Power Corporation (BC Hydro) and URS Corporation. Our thanks to Kofi Addo and Zeljko Cecic of BC Hydro for their support and to Marty McCann, Dean Ostenaa, Roland LaForge, and David Perkins for their valuable input. Our appreciation to Melinda Lee and Danielle Lowenthal-Savy for their assistance in the preparation of this paper and to Kate Scharer, an anonymous reviewer, and Associate Editor Kelvin Berryman for their critical reviews that greatly improved the paper.

References

Adams, J. (1990). Paleoseismicity of the Cascadia subduction zone—Evidence from turbidites off the Oregon-Washington margin, Tectonics 9, no. 4, 569–583.

Ambraseys, N. (1970). Some characteristic features of the Anatolian fault zone, Tectonophysics 9, 143–165.

Ando, M. (1975). Source mechanisms and tectonic significance of historic earthquakes along the Nankai trough, Japan, Tectonophysics 27, 119–140.

Atwater, B. F., and G. B. Griggs (2012). Deep-sea turbidites as guides to Holocene earthquake history at the Cascadia subduction zone—Alternative views for a seismic hazard workshop, U.S. Geol. Surv. Open-File Rept. 2012-1043.

Atwater, B. F., and E. Hemphill-Haley (1997). Recurrence intervals for great earthquakes of the past 3,500 years at northeastern Willapa Bay, Washington, U.S. Geol. Surv. Professional Paper 1576, 108 pp.

Atwater, B. F., S. Musumi-Rokkaku, K. Satake, Y. Tsuji, K. Ueda, and D. K. Yamaguchi (2005). The orphan tsunami of 1700—Japanese clues to a parent earthquake in North America, U.S. Geol. Surv. Professional Paper 1707, 133 pp.

Benjamin, J., and C. Cornell (1970). Probability, Statistics, and Decision for Civil Engineers, McGraw-Hill, New York, 684 pp.


Chéry, J., S. Carretier, and J. F. Ritz (2001). Postseismic stress transfer explains time clustering of large earthquakes in Mongolia, Earth Planet. Sci. Lett. 194, 277–286.

Einarsson, P., S. Bjornsson, G. Foulger, R. Stefansson, and T. Skaftadottir (1981). Seismicity pattern in the south Iceland seismic zone, in Earthquake Prediction: An International Review, D. W. Simpson and P. G. Richards (Editors), Maurice Ewing Series, Vol. 4, American Geophysical Union, 141–152.

Gibbons, R. (1994). Statistical Methods for Groundwater Monitoring, John Wiley & Sons, Inc., New York, 286 pp.

Gilbert, R. O. (1987). Statistical Methods for Environmental Pollution Monitoring, Wiley, New York, 320 pp.

Goldfinger, C., C. H. Nelson, J. E. Johnson, and Shipboard Scientific Party (2003). Deep-water turbidites as Holocene earthquake proxies: The Cascadia subduction zone and Northern San Andreas Fault systems, Ann. Geophys. 46, 1169–1194.

Goldfinger, C., C. H. Nelson, J. E. Johnson, A. E. Morey, J. Gutiérrez-Pastor, E. Karabanov, A. T. Eriksson, E. Gràcia, G. Dunhill, J. Patton, R. Enkin, A. Dallimore, T. Vallier, and the Shipboard Scientific Parties (2012). Turbidite event history: Methods and implications for Holocene paleoseismicity of the Cascadia Subduction Zone, U.S. Geol. Surv. Professional Paper 1661-F, 170 pp.

Goldfinger, C., R. Witter, G. R. Priest, K. Wang, and Y. Zhang (2010). Cascadia supercycles: Energy management of the long Cascadia earthquake series (abs.), Seismol. Res. Lett. 81, 290.

Grant, L., and K. Sieh (1994). Paleoseismic evidence of clustered earthquakes on the San Andreas fault in the Carrizo plain, California, J. Geophys. Res. 99, 6819–6841.

Heaton, T. H. (1990). Evidence for and implications of self-healing pulses of slip in earthquake rupture, Phys. Earth Planet. Int. 64, 1–20.

Helsel, D. R. (2005). Nondetects and Data Analysis, Wiley, New York, 250 pp.

Helsel, D. R., and R. M. Hirsch (2002). Statistical methods in water resources, in Hydrologic Analysis and Interpretation, Chapter A3, Book 4, U.S. Geol. Surv.

JMP (2005). Statistics and Graphics Guide, Release 6, SAS Institute, Inc.

Jurney, C. (2002). Recurrence of great earthquakes: Evidence of double periodicity along the Cascadia subduction zone (abs.), Eos Trans. 83, Fall Meeting Supplement.

Kelsey, H. M., A. R. Nelson, E. Hemphill-Haley, and R. C. Witter (2005). Tsunami history of an Oregon coastal lake reveals a 4600 yr record of great earthquakes on the Cascadia subduction zone, Geol. Soc. Am. Bull. 117, 1009–1032.

Marco, S., M. Stein, and A. Agnon (1996). Long term earthquake clustering: A 50,000-year paleoseismic record in the Dead Sea graben, J. Geophys. Res. 101, 6179–6191.

Mazzotti, S., and J. Adams (2004). Short notes: Variability of near-term probability for the next great earthquake on the Cascadia subduction zone, Bull. Seismol. Soc. Am. 94, 1954–1959.

McGuire, R. K. (2004). Seismic hazard and risk analysis, EERI Monograph MNO-10, Earthquake Engineering Research Institute.

Nelson, A. R., H. M. Kelsey, and R. C. Witter (2006). Great earthquakes of variable magnitude at the Cascadia subduction zone, Quaternary Res. 65, 354–365.

Nelson, A. R., Y. Sawai, A. E. Jennings, L. Bradley, L. Gerson, B. L. Sherrod, J. Sabean, and B. P. Horton (2008). Great-earthquake paleogeodesy and tsunamis of the past 2000 years at Alsea Bay, central Oregon coast, USA, Quaternary Sci. Rev. 27, 747–768.

Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler, R. L. Wesson, Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills, and K. S. Rukstales (2008). Documentation for the 2008 update of the United States National Seismic Hazard Maps, U.S. Geol. Surv. Open-File Rept. 2008-1128, 61 pp.

Ramsey, C. B. (2001). Development of the Radiocarbon Program OxCal, Radiocarbon 43, 355–363.

Satake, K., K. Wang, and B. F. Atwater (2003). Fault slip and seismic moment of the 1700 Cascadia earthquake inferred from Japanese tsunami descriptions, J. Geophys. Res. 108, 2535.

Scharer, K. M., G. P. Biasi, and R. J. Weldon (2011). A reevaluation of the Pallett Creek earthquake chronology based on new AMS radiocarbon dates, San Andreas fault, California, J. Geophys. Res. 116, no. B12111, doi: 10.1029/2010JB008099.

Sieh, K., D. H. Natawidjaja, A. J. Meltzner, C.-C. Shen, H. Cheng, K.-S. Li, B. W. Suwargadi, J. Galetzka, B. Philibosian, and R. L. Edwards (2008). Earthquake supercycles inferred from sea-level changes recorded in the corals of West Sumatra, Science 322, 1674–1678.

Sieh, K. E., M. Stuiver, and D. Brillinger (1989). A more precise chronology of earthquakes produced by the San Andreas fault in Southern California, J. Geophys. Res. 94, 603–623.

Thatcher, W. (1989). Earthquake recurrence and risk assessment in circum-Pacific seismic gaps, Nature 341, 432–434.

USEPA (1989). Statistical analysis of ground-water monitoring data at RCRA facilities, Interim Final Guidance, Office of Solid Waste, Washington, D.C., EPA/530-SW-89-026.

Witter, R. C., H. M. Kelsey, and E. Hemphill-Haley (2003). Great Cascadia earthquakes and tsunamis of the past 6700 years, Coquille River estuary, southern coastal Oregon, Geol. Soc. Am. Bull. 115, 1289–1306.

Wong, I. G., P. A. Thomas, and S. S. Olig (2007). Why time-dependent hazard should be incorporated into the National Hazard Maps (abs.), Seismol. Res. Lett. 78, 263.

URS Corporation
Environmental Division
1333 Broadway, Suite 800
Oakland, California 94612
(R.K.)

URS Corporation
Seismic Hazards Group
1333 Broadway, Suite 800
Oakland, California 94612
(I.W., J.Z.)

Oregon State University
College of Oceanic and Atmospheric Sciences
104 Ocean Administration Building
Corvallis, Oregon 97331
(C.G.)

BC Hydro, Generation Engineering
6911 Southpoint Drive
Burnaby, British Columbia
Canada V6K 2T3
(M.L.)

Manuscript received 23 March 2012;
Published Online 8 October 2013
