Journal articles on the topic 'Earthquake interactions and probability'

Consult the top 50 journal articles for your research on the topic 'Earthquake interactions and probability.'


1

Jones, Lucile M. "Foreshocks, aftershocks, and earthquake probabilities: Accounting for the Landers earthquake." Bulletin of the Seismological Society of America 84, no. 3 (June 1, 1994): 892–99. http://dx.doi.org/10.1785/bssa0840030892.

Abstract:
The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
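The rate competition the abstract describes can be sketched numerically: the foreshock probability is the foreshock rate divided by the total rate, and Omori-decaying aftershocks temporarily inflate the denominator. All rates and Omori parameters below are invented for illustration, not Jones's fitted values.

```python
def aftershock_rate(t_days, K=50.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t days after a mainshock."""
    return K / (t_days + c) ** p

def foreshock_probability(t_days, r_foreshock=0.01, r_background=0.5):
    """Probability that a new event near the major fault is a foreshock, when
    Omori-decaying aftershocks of a prior mainshock inflate the non-foreshock rate."""
    r_total = r_foreshock + r_background + aftershock_rate(t_days)
    return r_foreshock / r_total

# As the aftershock sequence decays, the foreshock probability recovers
# toward its aftershock-free value r_foreshock / (r_foreshock + r_background).
for t_days in (1, 10, 100):
    print(t_days, round(foreshock_probability(t_days), 5))
```

The stress-interaction effect the abstract mentions would enter as a time-dependent increase of `r_foreshock`, offsetting the aftershock-driven decrease.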
2

Tao, Zheng Ru, Xia Xin Tao, and Wei Jiang. "A Review on Long-Term Evaluation of Occurrence Probability for Subduction-Zone Earthquakes in Eastern Japan." Applied Mechanics and Materials 166-169 (May 2012): 2190–96. http://dx.doi.org/10.4028/www.scientific.net/amm.166-169.2190.

Abstract:
The approach to evaluating occurrence probability for subduction-zone earthquakes adopted in the “National Seismic Hazard Maps for Japan” is reviewed, especially for the area of the 2011 off the Pacific coast of Tohoku Earthquake (the 2011 Tohoku Earthquake for short). One problem noted is that the occurrence probability of such a large earthquake cannot be predicted from seismicity in a region as small as the Miyagi-ken-Oki area or southern Sanriku-Oki alone. It is suggested that the whole subduction zone in eastern Japan be taken into account, together with the interaction between the energy released in successive quakes. Finally, a simple test to predict the next large earthquake in the subduction zone by means of an artificial neural network is presented; the result for the years 2008–2018 suggests there may be an earthquake with magnitude up to 8.8 in the zone.
3

Tian, Dan, Yong-Jie Xu, Tong-Lei Qu, Rong-Guang Jia, Hao Zhang, and Wen-Jie Song. "A Bayesian Network Model for Rough Estimations of Casualties by Strong Earthquakes in Emergency Mode." 電腦學刊 33, no. 6 (December 2022): 083–90. http://dx.doi.org/10.53106/199115992022123306007.

Abstract:
Rough estimations in emergency mode now play an important role in key decisions for managing disasters, including search and rescue. Most studies have paid attention only to the earthquakes themselves and ignored the presence of disaster chains and hazard interactions in earthquakes. Bayesian networks are ideal tools to explore causal relationships between events and to combine prior knowledge with observed data, making them well suited to such uncertain problems. We present improvements to a Bayesian network model for estimating casualties in earthquakes. Following the development of the earthquake disaster chain described in the literature, the proposed model extracts the key events of earthquakes, considers the hazard interactions, and constructs the Bayesian networks with a scenario-based method. In the model, lifeline system damage, fires, landslides, and debris flows are integrated into the networks, and the conditional probability tables are encoded using collected cases. Validation in Netica allows simulation of the expected shaking intensity and estimation of the expected casualties from strong earthquakes in emergency mode. Compared with the literature, the method gives rough estimations closer to observed outcomes, providing important information for earthquake response. Moreover, rough estimations can be started when only the seismic intensity or a few earthquake source parameters are available.
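The scenario-based Bayesian-network idea can be illustrated with a toy two-node chain (intensity → collapse → casualty level) evaluated by enumeration; every probability below is invented, and the real model's conditional probability tables come from collected cases.

```python
# Toy chain: shaking intensity -> building collapse -> casualty level.
# Every probability below is invented for illustration.
P_intensity = {"moderate": 0.7, "strong": 0.3}
P_collapse = {"moderate": 0.05, "strong": 0.40}   # P(collapse | intensity)
P_high_cas = {True: 0.6, False: 0.02}             # P(high casualties | collapse?)

def p_high_casualties():
    """Marginal P(high casualties), by enumerating every path through the chain."""
    total = 0.0
    for intensity, p_i in P_intensity.items():
        for collapsed in (True, False):
            p_c = P_collapse[intensity] if collapsed else 1 - P_collapse[intensity]
            total += p_i * p_c * P_high_cas[collapsed]
    return total

print(round(p_high_casualties(), 4))
```

Observing evidence (e.g. a reported intensity) would restrict the outer sum to the observed state and renormalize, which is what tools such as Netica do at scale.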
4

Liu, Gang, Qinjin Fan, Weile Li, Gianvito Scaringi, Yujie Long, Jing He, and Zheng Li. "Spatio-temporal network modelling and analysis of global strong earthquakes (Mw ≥ 6.0)." Journal of the Geological Society 177, no. 5 (June 2, 2020): 883–92. http://dx.doi.org/10.1144/jgs2019-151.

Abstract:
We employ a spatio-temporal network modelling approach to identify possible relations between strong earthquakes and spatial regions worldwide. A global strong earthquake dataset containing 7736 events (Mw ≥ 6.0) from 1964 to 2018 is used. Statistical results identify power-law relationships and heavy tail phenomena in the spatial patterns of strong earthquakes. The interactions between regions follow the same law, with a few regions that may be hit by successive strong earthquakes with high probability. Also, we find that the interconnections between regions are mainly related to the succession of events in time, whereas the distribution of events is extremely inhomogeneous in space. This study provides a research prototype for the spatio-temporal analysis of global strong earthquakes, laying a foundation for obtaining insights into the network modelling approach for global strong earthquakes.
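Power-law relationships of the kind identified above are typically fitted with a maximum-likelihood (Hill) estimator. This sketch checks the estimator on synthetic data drawn by inverse-transform sampling; the sampling scheme and parameters are ours, not the paper's.

```python
import math
import random

def powerlaw_mle_alpha(samples, xmin):
    """Continuous power-law exponent by maximum likelihood (Hill estimator):
    alpha = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: draw from a known power law (alpha = 2.5) and recover it.
random.seed(0)
alpha_true, xmin = 2.5, 1.0
data = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(50_000)]
print(round(powerlaw_mle_alpha(data, xmin), 2))
```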
5

Mangira, O., E. Papadimitriou, G. Tsaklidis, and G. Vasiliadis. "SEISMIC HAZARD ASSESSMENT FOR THE CORINTH GULF AND CENTRAL IONIAN ISLANDS BY MEANS OF THE LINKED STRESS RELEASE MODEL." Bulletin of the Geological Society of Greece 50, no. 3 (July 27, 2017): 1369. http://dx.doi.org/10.12681/bgsg.11850.

Abstract:
Earthquake generation causes spatio-temporal stress changes on adjacent fault segments that can alter the occurrence probability of subsequent earthquakes on them. The interaction is investigated with the Linked Stress Release Model (LSRM), applied to fit historical data from two areas of high seismicity, the Corinth Gulf and the Central Ionian Islands. Each area is divided into two subareas based on seismotectonic features: the Corinth Gulf into a western and an eastern part, and the Central Ionian Islands into the Kefalonia and Lefkada subareas. The results establish interactions between the subareas, especially in the Central Ionian Islands, and underline the differences in tectonic structures and earthquake mechanisms between these areas. In particular, the seismicity of the Central Ionian Islands proves to be more complex and active, and thus more difficult to examine, whereas the LSRM fits the Corinth Gulf data more easily.
6

Chen, Yuxuan, Mian Liu, and Gang Luo. "Complex Temporal Patterns of Large Earthquakes: Devil’s Staircases." Bulletin of the Seismological Society of America 110, no. 3 (April 14, 2020): 1064–76. http://dx.doi.org/10.1785/0120190148.

Abstract:
Periodic or quasiperiodic earthquake recurrence on individual faults, as predicted by the elastic rebound model, is not common in nature. Instead, most earthquake sequences are complex and variable, and often show clusters of events separated by long but irregular intervals of quiescence. Such temporal patterns are especially common for large earthquakes in complex fault zones or regional and global fault networks. Mathematically described as the Devil’s Staircase, such temporal patterns are a fractal property of nonlinear complex systems, in which a change of any part (e.g., rupture of a fault or fault segment) could affect the behavior of the whole system. We found that the lengths of the quiescent intervals between clusters are inversely related to tectonic-loading rates, whereas earthquake clustering can be attributed to many factors, including earthquake-induced viscoelastic relaxation and fault interaction. Although the underlying causes of the characteristics of earthquake sequences are not fully known, we attempted to statistically characterize these sequences. We found that most earthquake sequences are burstier than the Poisson model commonly used in probabilistic seismic hazard analysis, implying a higher probability of repeating events soon after a large earthquake.
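"Burstier than Poisson" is commonly quantified with the Goh-Barabasi burstiness parameter of the interevent times. This sketch applies it to synthetic gap sequences, not to the catalogs analyzed in the paper.

```python
import random
import statistics

def burstiness(interevent_times):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu):
    B ~ 0 for a Poisson process, B -> 1 for strongly clustered sequences."""
    mu = statistics.mean(interevent_times)
    sigma = statistics.pstdev(interevent_times)
    return (sigma - mu) / (sigma + mu)

random.seed(1)
poisson_gaps = [random.expovariate(1.0) for _ in range(20_000)]
# Clusters of short gaps punctuated by occasional long quiescent intervals.
bursty_gaps = [random.expovariate(10.0) if random.random() < 0.9
               else random.expovariate(0.05) for _ in range(20_000)]
print(round(burstiness(poisson_gaps), 2), round(burstiness(bursty_gaps), 2))
```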
7

Liu, Zhang Jun, Xing Fang, Yong Wan, and Yan Fu Xing. "Probability Density Evolution Method for Stochastic Earthquake Response and Reliability Analysis of Large-Scale Aqueduct Structures." Applied Mechanics and Materials 193-194 (August 2012): 1230–33. http://dx.doi.org/10.4028/www.scientific.net/amm.193-194.1230.

Abstract:
The probability density evolution method (PDEM) is used to study the stochastic earthquake response and reliability of large-scale aqueduct structures with a variable water level in the tub, accounting for the interaction between the water and the tub boundary. The results show that the probability distribution of the stochastic seismic response of the aqueduct structure follows a regular pattern, and that the response and failure probability under stochastic earthquakes increase as the water level rises; for a fixed water level, however, the seismic response is distinctly smaller and the seismic reliability much higher once the water sloshing effect is considered. Seismic resistance calculations for large aqueduct structures should therefore account for both the variable water level and water sloshing.
8

Toda, Shinji, and Ross S. Stein. "Long- and Short-Term Stress Interaction of the 2019 Ridgecrest Sequence and Coulomb-Based Earthquake Forecasts." Bulletin of the Seismological Society of America 110, no. 4 (July 14, 2020): 1765–80. http://dx.doi.org/10.1785/0120200169.

Abstract:
We first explore a series of retrospective earthquake interactions in southern California. We find that the four Mw≥7 shocks in the past 150 yr brought the Ridgecrest fault ∼1 bar closer to failure. Examining the 34 hr time span between the Mw 6.4 and Mw 7.1 events, we calculate that the Mw 6.4 event brought the hypocentral region of the Mw 7.1 earthquake 0.7 bars closer to failure, with the Mw 7.1 event relieving most of the surrounding stress that was imparted by the first. We also find that the Mw 6.4 cross-fault aftershocks shut down when they fell under the stress shadow of the Mw 7.1. Together, the Ridgecrest mainshocks brought a 120 km long portion of the Garlock fault from 0.2 to 10 bars closer to failure. These results motivate our introduction of forecasts of future seismicity. Most attempts to forecast aftershocks use statistical decay models or Coulomb stress transfer. Statistical approaches require simplifying assumptions about the spatial distribution of aftershocks and their decay; Coulomb models make simplifying assumptions about the geometry of the surrounding faults, which we seek here to remove. We perform a rate–state implementation of the Coulomb stress change on focal mechanisms to capture fault complexity. After tuning the model through a learning period to improve its forecast ability, we make retrospective forecasts to assess the model's predictive ability. Our forecast for the next 12 months yields a 2.3% chance of an Mw≥7.5 Garlock fault rupture. If such a rupture occurred and reached within 45 km of the San Andreas, we calculate it would raise the probability of a San Andreas rupture on the Mojave section by a factor of 150. We therefore estimate the net chance of a large San Andreas earthquake in the next 12 months to be 1.15%, or about three to five times its background probability.
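The stress changes quoted above follow the standard Coulomb failure-stress relation ΔCFS = Δτ + μ′Δσn. This minimal sketch uses illustrative numbers in bars, not the values computed in the paper.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change dCFS = d_tau + mu' * d_sigma_n (bars).
    d_shear: shear stress change resolved in the receiver's slip direction;
    d_normal: normal stress change, positive when the fault is unclamped.
    Positive dCFS brings the receiver fault closer to failure."""
    return d_shear + mu_eff * d_normal

# Illustrative numbers only, not the values computed in the paper.
print(coulomb_stress_change(0.5, 0.5))    # loading: closer to failure
print(coulomb_stress_change(-0.8, 0.2))   # stress shadow: further from failure
```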
9

Yang, Haibin, Mark Quigley, and Tamarah King. "Surface slip distributions and geometric complexity of intraplate reverse-faulting earthquakes." GSA Bulletin 133, no. 9-10 (January 13, 2021): 1909–29. http://dx.doi.org/10.1130/b35809.1.

Abstract:
Earthquake ground surface ruptures provide insights into faulting mechanics and inform seismic hazard analyses. We analyze surface ruptures for 11 historical (1968–2018) moment magnitude (Mw) 4.7–6.6 reverse earthquakes in Australia using statistical techniques and compare their characteristics with magnetic, gravity, and stress trajectory data sets. Of the total combined (summative) length of all surface ruptures (∼148 km), 133 km (90%) to 145 km (98%) align with the geophysical structure in the host basement rocks. Surface rupture length (SRL), maximum displacement (MD), and probability of surface rupture at a specified Mw are high compared with equivalent Mw earthquakes globally. This is attributed to (1) a steep cratonic crustal strength gradient at shallow depths, promoting shallow hypocenters (∼1–6 km) and limiting downdip rupture widths (∼1–8.5 km), and (2) favorably aligned crustal anisotropies (e.g., bedrock foliations, faults, fault intersections) that enhanced lateral rupture propagation and/or surface displacements. Combined (modeled and observed) MDs are in the middle third of the SRL with 68% probability and in either the ≤33rd or ≥66th percentiles of SRL with 16% probability. MD occurs proximate to or directly within zones of enhanced fault geometric complexity (as evidenced from surface ruptures) in 8 of 11 earthquakes (73%). MD is approximated by 3.3 ± 1.6 (1σ) × AD (average displacement). S-transform analyses indicate that high-frequency slip maxima also coincide with fault geometric complexities, consistent with stress amplifications and enhanced slip variability due to geometric and kinematic interactions with neighboring faults. Rupture slip taper angles exhibit large variations (−90% to +380% with respect to the mean value) toward rupture termini and are steepest where ruptures terminate at obliquely oriented magnetic lineaments and/or lithology changes. Incremental slip approximates AD between the 10th and 90th percentiles of the SRL. The average static stress drop of the studied earthquakes is 4.8 ± 2.8 MPa. A surface rupture classification scheme for cratonic stable regions is presented to describe the prevailing characteristics of intraplate earthquakes across diverse crustal structural-geophysical settings. New scaling relationships and suggestions for logic tree weights are provided to enhance probabilistic fault displacement hazard analyses for bedrock-dominated intraplate continental regions.
10

Sari, Devni Prima, Dedi Rosadi, Adhitya Ronnie Effendie, and Danardono Danardono. "Discretization methods for Bayesian networks in the case of the earthquake." Bulletin of Electrical Engineering and Informatics 10, no. 1 (February 1, 2021): 299–307. http://dx.doi.org/10.11591/eei.v10i1.2007.

Abstract:
Bayesian networks are a graphical probability model that represents interactions between variables, and they have been widely applied in various fields, including disaster studies. Field data often contain a mixture of variable types, combining continuous and discrete variables. For processing with hybrid or continuous Bayesian networks, all continuous variables must be normally distributed. When normality is not satisfied, one solution is to discretize the continuous variables and then continue with discrete Bayesian networks. Discretization can be done in various ways, including equal-width, equal-frequency, and K-means binning. The combination of Bayesian networks and K-means, called the K-means Bayesian networks (KMBN) model, is a new contribution of this study. We compared the three discretization methods using a confusion matrix. On earthquake damage data, the K-means clustering method produced the highest accuracy, indicating that K-means is the best discretization method for the data used in this study.
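The three discretization methods the study compares can be sketched in a few lines each; these are generic textbook implementations for one-dimensional data, not the authors' code.

```python
def equal_width(values, k):
    """Bin index for each value using k equal-width intervals."""
    lo, hi = min(values), max(values)
    w = (hi - lo) / k
    return [min(int((v - lo) / w), k - 1) for v in values]

def equal_frequency(values, k):
    """Bin index for each value using k quantile (equal-count) intervals."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(rank * k // len(values), k - 1)
    return bins

def kmeans_1d(values, k, iters=50):
    """Plain 1-D k-means sketch; returns the cluster index of each value."""
    centers = sorted(values[:: max(1, len(values) // k)][:k])
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1, 8.9]
print(equal_width(data, 3))
print(equal_frequency(data, 3))
print(kmeans_1d(data, 3))
```

On well-separated clusters like `data`, all three methods agree; on skewed data they can differ markedly, which is what the confusion-matrix comparison in the paper measures.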
11

Fidani, Cristiano. "Transfer Entropy of West Pacific Earthquakes to Inner Van Allen Belt Electron Bursts." Entropy 24, no. 3 (March 2, 2022): 359. http://dx.doi.org/10.3390/e24030359.

Abstract:
Lithosphere-ionosphere non-linear interactions create a complex system where links between different phenomena can remain hidden. The statistical correlation between strong West Pacific earthquakes and high-energy electron bursts escaping trapped conditions was demonstrated in past works. Here, it is investigated from the point of view of information theory. Starting from the conditional probability statistical model deduced from the correlation, the Shannon entropy, the joint entropy, and the conditional entropy are calculated. Time-delayed mutual information and transfer entropy are also calculated analytically for binary events, including correlations between consecutive earthquake events and between consecutive earthquakes and electron bursts. These quantities are evaluated for the complex lithosphere-ionosphere dynamical system, although the expressions calculated from probabilities are valid for any pair of binary events. Peaks occurred at the same time delay as in the correlations, Δt = 1.5–3.5 h, as well as at a new time delay, Δt = −58.5 to −56.5 h, for the transfer entropy; the latter is linked to earthquake self-correlations. Even if the low number of self-correlated earthquakes makes this second peak insignificant in this case, separating the non-linear contribution of the transfer entropy of binary events is of interest in the study of a complex system.
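A minimal sketch of time-delayed mutual information for binary event series, one of the information quantities the paper computes analytically; the synthetic series and the lag are ours, not the earthquake/electron-burst data.

```python
import math
import random
from collections import Counter

def lagged_mutual_information(x, y, lag):
    """Time-delayed mutual information I(x_t ; y_{t+lag}) in bits,
    estimated empirically for two binary 0/1 series of equal length."""
    pairs = list(zip(x[:-lag], y[lag:])) if lag else list(zip(x, y))
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

random.seed(2)
x = [random.randint(0, 1) for _ in range(5_000)]
y = [0] + x[:-1]   # y repeats x with a delay of one step
print(round(lagged_mutual_information(x, y, 1), 3))  # large at the true lag
print(round(lagged_mutual_information(x, y, 0), 3))  # near zero at the wrong lag
```

Transfer entropy extends this by conditioning on the target's own past, which is how the paper separates self-correlation effects from genuine coupling.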
12

Zhou, Ai Hong, Ying Yuan, and Bai Qing Xu. "The Stochastic Dynamic Reliability Research on Nonlinear Pile-Soil-Structure Interaction System with Uncertain Parameters." Advanced Materials Research 243-249 (May 2011): 5764–67. http://dx.doi.org/10.4028/www.scientific.net/amr.243-249.5764.

Abstract:
According to the damage characteristics of pile-soil-structure interaction systems subjected to earthquakes, a seismic design method is put forward that uses the dual design criteria of strength and deformation and assigns the same reliability to both the pile foundation and the superstructure. The stochastic dynamic reliability of a pile-soil-structure interaction system with uncertain parameters is studied on the basis of the randomness of the earthquake, the nonlinearity of the soil material parameters, and especially the variability of the soil material parameters. The results show that the control indexes of the pile foundation and superstructure decrease as the failure probability increases, and that variation in the material parameters can leave the pile foundation structure partially unsafe.
13

Gu, Quan. "Performance and Risk Assessment of Soil-Structure Interaction Systems Based on Finite Element Reliability Methods." Mathematical Problems in Engineering 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/704804.

Abstract:
In the context of performance-based earthquake engineering, reliability methods have been of significant importance in the performance and risk assessment of structures and soil-structure interaction (SSI) systems. The finite element (FE) reliability method combines FE analysis with state-of-the-art reliability analysis methods and has been employed increasingly to estimate the probability of failure events corresponding to various hazard levels (e.g., earthquakes of various intensities). In this paper, the crucial components of FE reliability analysis are reviewed and summarized. Furthermore, recent advances in both time-invariant and time-variant reliability analysis methods for realistic nonlinear SSI systems are presented and applied to a two-dimensional, two-story building on layered soil. Various time-invariant reliability analysis methods are applied, including the first-order reliability method (FORM), the importance sampling method, and the orthogonal plane sampling (OPS) method. For time-variant reliability analysis, an upper bound of the failure probability is obtained by numerical integration of the mean outcrossing rate (MOCR), which is computed using FORM and OPS analyses. Results from the different FE reliability methods are compared in terms of accuracy and computational cost. This paper provides valuable insights for reliability-based probabilistic performance and risk assessment of SSI systems.
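Once FORM has located the design point and its reliability index β, the failure-probability estimate reduces to a standard normal tail evaluation. This is the generic FORM relation, not the paper's SSI model.

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_failure_probability(beta):
    """First-order reliability method estimate Pf = Phi(-beta), where beta is
    the reliability index (distance from the origin to the design point in
    standard normal space)."""
    return std_normal_cdf(-beta)

for beta in (1.0, 2.0, 3.0):
    print(beta, form_failure_probability(beta))
```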
14

Yousefpour, Amir, Hamid Mazidababdi Farahani, and Mohsen Ali Shayanfar. "Seismic Performance Assessment of Ordinary Moment Resisting Frame Equipped with Viscous Dampers under Pulse-Like Earthquakes." Shock and Vibration 2022 (March 31, 2022): 1–19. http://dx.doi.org/10.1155/2022/2924836.

Abstract:
In conventional structures, earthquake-resistant design relies on ductility after yielding of structural members to dissipate earthquake input energy; with dampers, the energy dissipation can instead be concentrated at predetermined points, preventing nonlinear behavior of the main members that also lie in the gravity load path. Near-fault earthquakes, however, can cause unexpected failure and severe structural damage, especially those with a pulse-like effect, and pulsed motion in near-field records can produce unusual structural behavior. Advances in vibration control systems have helped manage this behavior, since the earthquake forces are applied to the structure indirectly. To investigate this issue, conventional two-dimensional frames of 3, 8, and 12 stories are modeled and subjected to seven near-fault pulse-like and seven far-fault non-pulse-like records. The structural behavior is examined in four modes: (1) without damper and without soil-structure interaction, (2) without damper but considering soil-structure interaction, (3) with damper and considering soil-structure interaction, and (4) with damper but without soil-structure interaction. Each model is analyzed in OpenSees under incremental dynamic analysis, and fragility curves are plotted from the results. The results indicate that frame (4) reaches the failure level at a higher spectral acceleration, which means the performance of the viscous damper in reducing interstory drift is one of the main criteria for predicting damage. They also show that soil-structure interaction increases interstory drift and brings failure at lower spectral acceleration in all models. Comparing the fragility curves of the models under near-field and far-field records shows that the probability of failure under far-field (non-pulse) records is lower than under near-field (pulse-like) records.
15

Duma, G., and Y. Ruzhin. "Diurnal changes of earthquake activity and geomagnetic Sq-variations." Natural Hazards and Earth System Sciences 3, no. 3/4 (August 31, 2003): 171–77. http://dx.doi.org/10.5194/nhess-3-171-2003.

Abstract:
Statistical analyses demonstrate that the probability of earthquake occurrence in many earthquake regions depends strongly on the time of day, that is, on local time (e.g. Conrad, 1909, 1932; Shimshoni, 1971; Duma, 1997; Duma and Vilardo, 1998). This also applies to strong earthquake activity. Moreover, recent observations reveal an involvement of the regular diurnal variations of the Earth's magnetic field, commonly known as Sq-variations, in this geodynamic process of changing earthquake activity with the time of day (Duma, 1996, 1999). This article attempts to quantify the forces resulting from the interaction between the induced Sq-variation currents in the Earth's lithosphere and the regional geomagnetic field, in order to assess their influence on the tectonic stress field and on seismic activity. A reliable model is obtained, indicating that a high energy is involved in this process. The effect of Sq-induction is compared with the results of the large-scale electromagnetic experiment "Khibiny" (Velikhov, 1989), in which a giant artificial current loop was activated in the Barents Sea.
16

Singh, NP R., and Hemant Vinayak. "Seismic bridge pier analysis for pile foundation by force and displacement based approaches." Facta universitatis - series: Architecture and Civil Engineering 13, no. 2 (2015): 155–66. http://dx.doi.org/10.2298/fuace1502155s.

Abstract:
Seismic analysis of a bridge pier supported on a pile foundation requires consideration of soil-pile-structure (kinematic and inertial) interactions. This paper presents the design forces generated for bridge piers with varying height and constant diameter in medium and soft soils across earthquake probability zones, accounting for soil-pile-structure interactions through the analytical approaches developed. The results show that the difference in base shear demand between the force-based and displacement-based approaches, and between the capacity spectrum and displacement-based methods, generally decreases as the slenderness ratio of the pier increases. The base shear demand from non-linear time history analysis is found to be much higher than that from the other methods. A relationship between pier height and cross-section has been developed for different soils and seismic zones such that the base shear demands from the force-based and displacement-based methods are of the same order. The resulting slenderness ratio is such that the pile would fail as a short column in both medium and soft soil.
17

Tzanis, A., F. Vallianatos, and A. Efstathiou. "Multidimensional earthquake frequency distributions consistent with Non-Extensive statistical physics: The interdependence of magnitude, interevent time and interevent distance in North California." Bulletin of the Geological Society of Greece 47, no. 3 (December 21, 2016): 1326. http://dx.doi.org/10.12681/bgsg.10914.

Abstract:
It is now accepted that the active tectonic grain comprises a self-organized complex system; therefore its expression (seismicity) should be manifested in the temporal and spatial statistics of energy release rates and exhibit memory due to long-range interactions in a fractal-like space-time. Such attributes can be properly understood in terms of Non-Extensive Statistical Physics (NESP). In addition to energy release rates expressed by the magnitude M, measures of the temporal and spatial interactions are the time (Δt) and hypocentral distance (Δd) between consecutive events. Recent work indicated that if the distributions of M, Δt and Δd are independent, so that the joint probability p(M, Δt, Δd) factorizes as p(M) p(Δt) p(Δd), earthquake frequency is related to M, Δt and Δd by well-defined power laws consistent with NESP. The present work applies these concepts to investigate the self-organization and temporal/spatial dynamics of North Californian seismicity. The results indicate that the statistical behaviour of seismicity in this area is consistent with NESP predictions and has attributes of universality, as it holds for a very broad range of spatial, temporal and magnitude scales. They also indicate that the expression of the regional active tectonic grain comprises a mixture of processes significantly dependent on Δd, including near-field (<100 km) and far-field (>400 km) interactions.
18

Main, Ian G. "Earthquakes as critical phenomena: Implications for probabilistic seismic hazard analysis." Bulletin of the Seismological Society of America 85, no. 5 (October 1, 1995): 1299–308. http://dx.doi.org/10.1785/bssa0850051299.

Abstract:
Earthquake populations have recently been postulated to be an example of a self-organized critical (SOC) phenomenon, with fractal spatial and temporal correlations and a power-law distribution of seismic energy or moment corresponding to the Gutenberg-Richter (G-R) frequency-magnitude law. In fact, strict SOC behavior is not seen in all models and is confined to those with weak annealed (permanent) heterogeneity and an intermediate tectonic driving velocity or strain energy rate. Depending on these conditions, distributions may also occur that are subcritical, where the largest events have a reduced probability of occurrence compared to the G-R trend, or supercritical, where the largest events have an elevated probability of occurrence, leading to “characteristic” earthquakes. Here we show type examples of all three behaviors, lending support to a generalization of the Gutenberg-Richter law to a modified gamma distribution (a power law in energy or moment with an exponential tail with positive, zero, or negative argument). If earthquakes are an example of a critical phenomenon, then the a priori assumption of the G-R law in probabilistic hazard analysis is no longer valid in all cases. The appropriate distribution may also depend systematically on the size of the area, with smaller areas concentrating on individual fault segments more likely to produce a characteristic distribution. This previously unexpected effect of Euclidean zoning is an example of the degree of preconditioning inherent in some of the fundamental assumptions of seismic hazard analysis. Other assumptions, such as that of stationarity in the process over long time periods, are borne out by SOC. The assumption of a random Poisson process is firmly at odds with SOC as an avalanche process involving strong local and weaker long-range interactions between earthquakes. 
The gamma distribution for the case of subcritical behavior predicts a maximum “credible” magnitude that may be independently determined from long-term slip rates, defined as the magnitude where the contribution to the total moment or intensity is negligible though finite. This soft maximum replaces the need to independently impose a hard, though somewhat artificial, maximum in distributions such as the G-R law. The same approach can be taken for the overall seismic hazard expressed by negligible contribution to ground motion, with similar results.
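The modified gamma generalization of the G-R law described above (a power law with an exponential tail of positive, zero, or negative argument) can be sketched as follows; the exponent B and tail parameter θ are illustrative values in arbitrary units, not fits from the paper.

```python
import math

def gamma_frequency(E, B=1.7, theta=None):
    """Modified gamma frequency density n(E) ~ E**-B * exp(-E / theta).
    theta=None recovers the pure Gutenberg-Richter power law; a positive
    theta (subcritical) suppresses the largest events; a negative theta
    (supercritical) enhances them, giving "characteristic" behavior.
    E and theta are in arbitrary energy units."""
    tail = 1.0 if theta is None else math.exp(-E / theta)
    return E ** -B * tail

E_large = 100.0
gr = gamma_frequency(E_large)                # pure G-R power law
sub = gamma_frequency(E_large, theta=50.0)   # reduced large-event probability
sup = gamma_frequency(E_large, theta=-50.0)  # elevated large-event probability
print(sub < gr < sup)
```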
19

Liu, Ai Rong, and Yong Lin Pi. "Seismic Response of Long Span Continuous Rigid-Framed Steel Arch Bridge." Key Engineering Materials 763 (February 2018): 1087–94. http://dx.doi.org/10.4028/www.scientific.net/kem.763.1087.

Abstract:
This paper investigates the seismic response of Xinguang Bridge, a three-span continuous rigid-frame and steel-truss arch bridge. Earthquake excitation input is a key issue for seismic analysis. The paper uses the finite element method to study the traveling wave effect on Xinguang Bridge and its interaction with the dynamic properties of the bridge under the two-step, two-level probability condition. The seismic response of the bridge under consistent earthquake excitation is also analyzed. Comparisons show that the seismic response of the long-span bridge when the traveling wave effect is considered differs greatly from that under consistent earthquake excitation. The influence of the shear wave speed on the seismic response of the long-span continuous bridge is also explored; the shear wave speed is found to greatly affect the wave shape and magnitude of the time history of the longitudinal displacement at the crown of the main arch. It is concluded that the traveling wave effect and the shear wave speed of the ground have significant influences on the seismic response of long-span continuous rigid-frame and steel-truss arch bridges.
20

Porter, Keith, Gayle Johnson, Robert Sheppard, and Robert Bachman. "Fragility of Mechanical, Electrical, and Plumbing Equipment." Earthquake Spectra 26, no. 2 (May 2010): 451–72. http://dx.doi.org/10.1193/1.3363847.

Abstract:
A study for the Multidisciplinary Center for Earthquake Engineering Research (MCEER) provides fragility functions for 52 varieties of mechanical, electrical, and plumbing (MEP) equipment commonly found in commercial and industrial buildings. For the majority of equipment categories, the MCEER study provides multiple fragility functions, reflecting important effects of bracing, anchorage, interaction, etc. The fragility functions express the probability that the component would be rendered inoperative as a function of floor acceleration. That work did not include the evidence underlying the fragility functions. As part of the ATC-58 effort to bring second-generation performance-based earthquake engineering to professional practice, we have compiled the original MCEER specimen-level performance data into a publicly accessible database and validated many of the original fragility functions. In some cases, new fragility functions derived by ATC-58 methods show somewhat closer agreement with the raw data. Average-condition fragility functions are developed here; we will address in subsequent work the effect of potentially important, arguably crucial, performance-modifying factors such as poor anchorage and interaction.
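Fragility functions of the kind described, giving the probability that a component is rendered inoperative as a function of floor acceleration, are conventionally modeled as a lognormal CDF. A generic sketch, with a hypothetical median capacity of 1.0 g and logarithmic dispersion of 0.4 (illustration values, not MCEER or ATC-58 results):

```python
import math

def fragility(pfa, median=1.0, beta=0.4):
    """Probability a component is rendered inoperative given peak
    floor acceleration `pfa` (g), modeled as a lognormal CDF with
    median capacity `median` (g) and log standard deviation `beta`."""
    z = math.log(pfa / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(fragility(0.5))  # well below median capacity: low failure probability
print(fragility(1.0))  # at the median: probability 0.5
print(fragility(2.0))  # well above median: high failure probability
```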
21

Bakhtiari, Parham, and Khosro Bargi. "Seismic Vulnerability Assessment of High-Speed Railway Bridges Using Fragility Curves and Considering Soil-Structure Interaction." Civil and Environmental Engineering 16, no. 1 (June 1, 2020): 39–48. http://dx.doi.org/10.2478/cee-2020-0005.

Abstract:
The assessment of the seismic behavior of high-speed railway bridges is necessary because of the strategic nature of these structures. Evaluating and predicting the damage caused to bridges by earthquakes of various intensities can provide useful information for managing possible crises. One of the most useful mechanisms for estimating earthquake damage to these bridges is the development of fragility curves. Studies on fragility curves for high-speed railway bridges are limited. In this research, fragility curves are plotted for two high-speed railway bridges with different pier heights, and the performance of the two structures is compared. Each bridge was modeled separately in the SeismoStruct software. The soil-structure interaction is modeled as springs, and its effects are considered. Nonlinear models are used for the concrete and steel materials. Incremental dynamic analysis was then performed under different ground motion records. Using the obtained data, appropriate damage states were selected, and fragility curves were plotted for different performance limit states. The results showed that the damage index rises with increasing pier height, and that for a constant probability of exceedance the taller pier requires a lower spectral acceleration to reach a given performance level.
22

Awayo, Daniel Dibaba. "Seismic Fragility Analysis of Hollow Concrete Block Infilled Reinforced Concrete Buildings." International Research Journal of Innovations in Engineering and Technology 06, no. 12 (2022): 52–59. http://dx.doi.org/10.47001/irjiet/2022.612008.

Abstract:
Masonry infills are usually treated as nonstructural elements in buildings, and their interaction with the bounding frame is often ignored in the analysis and design of reinforced concrete structures. The main aim of this study is to develop seismic fragility curves showing the probability of exceeding a damage limit state for a given structure type subjected to seismic excitation. Three distinct buildings, namely seven-story, eleven-story, and sixteen-story, with a typical floor plan were proposed as the case study. Each building case is explicitly modeled as a bare frame and as an HCB-infilled model with varying percentages of infill configurations. All building models were analyzed using SeismoStruct software to assess seismic vulnerabilities. Nonlinear dynamic time-history and pushover analyses were employed to generate fragility curves, with 30 generated artificial accelerograms used in the nonlinear dynamic time-history analysis. Accordingly, nonlinear dynamic analyses of 30 building models for each case were conducted and the maximum roof displacement (ID) for each ground motion was recorded. Results showed that the bare frame has the highest probability of failure, and that building models with a larger percentage of infill configurations have a lower failure probability than slightly infilled models. These infills contribute significantly to arresting large lateral deflections, resulting in lower and more tolerable story displacements under earthquake motion and eventually reducing the structure's probability of failure at the life safety and collapse prevention limit states.
23

Robinson, Russell, Rafael Benites, and Russ Van Dissen. "Evidence for temporal clustering of large earthquakes in the wellington region from computer models of seismicity." Bulletin of the New Zealand Society for Earthquake Engineering 31, no. 1 (March 31, 1998): 24–32. http://dx.doi.org/10.5459/bnzsee.31.1.24-32.

Abstract:
Temporal clustering of large earthquakes in the Wellington region, New Zealand, has been investigated with a computer model that generates long synthetic seismicity catalogues. The model includes the elastic interactions between faults. Faults included in the model, besides the subduction thrust between the Australian and Pacific plates, are segments of the four major strike-slip faults that overlie the plate interface (Wairarapa, Wellington, Ohariu, and Wairau faults). Parameters of the model are adjusted to reproduce the geologically observed slip rates of the strike-slip faults. The seismic slip rate of the subduction thrust, which is unknown, is taken as 25% of the maximum predicted by the plate tectonic convergence rate, and its position fixed according to recent geodetic results. For comparison, the model was rerun with the elastic interactions suppressed, corresponding to the usual approach in the calculation of seismic hazard where each fault is considered in isolation. Considering earthquakes of magnitude 7.2 or more ("characteristic" events in the sense that they rupture most of a fault plane), the number of short (0-3 years) inter-event times is much higher with interactions than for the corresponding case without interactions (46% vs. 2% of all inter-event times). This reduces to 9% vs. 2% if the subduction thrust is removed from the models. Paleoseismic studies of the past seismic behaviour of the subduction thrust are clearly needed if the degree of clustering is to be tightly constrained. Although some other aspects of our model can be improved in future, we think that the probability of significant short-term clustering of large events, normally neglected in hazard studies, is very high. This has important implications for the engineering, insurance and emergency response communities.
24

Lemsara, Foudhil, Tayeb Bouzid, Djarir Yahiaoui, Belgacem Mamen, and Mohamed Saadi. "Seismic Fragility of a Single Pillar-Column Under Near and Far Fault Soil Motion with Consideration of Soil-Pile Interaction." Engineering, Technology & Applied Science Research 13, no. 1 (February 1, 2023): 9819–24. http://dx.doi.org/10.48084/etasr.5405.

Abstract:
The soil-structure interaction is a significant challenge faced by civil engineers because of its potential complexity in seismic fragility evaluation. This paper presents a seismic fragility estimation of a single pier considering different types of seismic ground motion. Furthermore, sand type, pile diameter, pier height, and mass variation were considered to estimate their effect on the seismic fragility of the concrete pier. Incremental dynamic analysis was performed using a beam on a nonlinear Winkler foundation model, comparing near- and far-fault ground motion effects. Dynamic analysis and fragility assessment of the single-pier structure showed that a low mass center produced less vulnerability of the concrete pier for both sand types under near- and far-fault ground motions. The near- and far-fault earthquake simulations at complete failure probability differed by less than 5% when 0.65 s < T1 < 1 s and 2.4 < T1/T2, but the opposite was shown when T1 < 0.5 s and 3 < T1/T2 held together.
25

Guo, Xuan, Zheyu Zhang, and ZhiQiang Chen. "Mainshock-Integrated Aftershock Vulnerability Assessment of Bridge Structures." Applied Sciences 10, no. 19 (September 29, 2020): 6843. http://dx.doi.org/10.3390/app10196843.

Abstract:
Seismic fragility analysis is often conducted to quantify the vulnerability of civil structures under earthquake excitation. In recent years, besides mainshocks, strong aftershocks have often been witnessed to induce structural damage to engineered structures, including bridges. How to accurately and straightforwardly quantify the vulnerability of bridges due to sequential mainshocks and aftershocks is essential for an efficient assessment of bridge performance. While recognizing the limitations of existing methods, this paper proposes a mainshock-integrated aftershock fragility function model, which empirically encodes the effects of mainshocks and retains the simple form of traditional fragility curves. A pile foundation-supported bridge system is modeled considering seismic soil-structure interaction to demonstrate the proposed fragility model. Numerical examples show that the resulting fragility curves incorporate the initial value for the probability of collapse of the bridge system due to a mainshock and the effects of the variable aftershocks conditional on the mainshock. Statistical analysis confirms that the proposed model fits the simulated vulnerability data (e.g., seismic intensities of aftershocks and the response demands conditional on a selected mainshock ground motion) both accurately and robustly.
26

Hu, Hongqiang, Gang Gan, Yangjuan Bao, Xiaopeng Guo, Min Xiong, Xu Han, Kepeng Chen, Linyong Cui, and Lin Wang. "Nonlinear Stochastic Seismic Response Analysis of Three-Dimensional Reinforced Concrete Piles." Buildings 13, no. 1 (December 30, 2022): 89. http://dx.doi.org/10.3390/buildings13010089.

Abstract:
A reliable assessment and design of engineering structures requires appropriate estimation and consideration of different sources of uncertainty. The randomness of seismic ground motion is one major uncertainty that needs to be considered from the perspective of performance-based earthquake engineering. To properly account for this uncertainty and its corresponding effect on pile foundations, a stochastic seismic response analytical framework based on a probability density evolution method and a stochastic ground motion model is proposed for the nonlinear stochastic seismic response analysis of pile foundations. A three-dimensional finite element model of a large-diameter reinforced concrete pile embedded in the soil foundation is first established and calibrated using full-scale lateral load test results from the literature, in which the nonlinear behavior of the soil, the soil–pile interaction, the concrete damaged plasticity, and the steel yielding of the pile are properly modeled. The calibrated three-dimensional numerical model is then employed as an illustrative example for stochastic seismic response analysis using the proposed framework. The mean, standard deviation, and probability density function of pile settlement and the dynamic reliabilities of the pile concerning various performance requirements are obtained. The settlements of the pile show great variability under the excitation of various stochastic ground motions, and the maximum mean value of the pile settlements is about 8 mm. The changing shape of the settlement probability density functions at each moment demonstrates that it is unreasonable to use assumed probabilistic distribution models to characterize the seismic responses of pile foundations during a seismic reliability analysis.
Based on the proposed method, it is found that the dynamic reliabilities of the selected reinforced concrete pile concerning four different performance levels are 0, 0.1293, 0.8247, and 1, respectively. The proposed stochastic seismic response analytical method in the present study can provide a proper and comprehensive way to quantify various uncertainties and their corresponding effects on the seismic performance of pile foundations. It can also be used to estimate the actual reliability level of pile foundations that are designed by the current codes when they are subjected to seismic loads.
27

Avgerinou, Sophia-Ekaterini, Eleni-Apostolia Anyfadi, Georgios Michas, and Filippos Vallianatos. "A Non-Extensive Statistical Physics View of the Temporal Properties of the Recent Aftershock Sequences of Strong Earthquakes in Greece." Applied Sciences 13, no. 3 (February 3, 2023): 1995. http://dx.doi.org/10.3390/app13031995.

Abstract:
Greece is one of Europe’s most seismically active areas. Seismic activity in Greece has been characterized by a series of strong earthquakes with magnitudes up to Mw = 7.0 over the last five years. In this article we focus on these strong events, namely the Mw6.0 Arkalochori (27 September 2021), the Mw6.3 Elassona (3 March 2021), the Mw7.0 Samos (30 October 2020), the Mw5.1 Parnitha (19 July 2019), the Mw6.6 Zakynthos (25 October 2018), the Mw6.5 Kos (20 July 2017) and the Mw6.1 Mytilene (12 June 2017) earthquakes. Based on the probability distributions of interevent times between successive aftershock events, we investigate the temporal evolution of their aftershock sequences. We use a statistical mechanics model developed in the framework of Non-Extensive Statistical Physics (NESP) to approach the observed distributions. NESP provides a strictly necessary generalization of Boltzmann–Gibbs statistical mechanics for complex systems with memory effects, (multi)fractal geometries, and long-range interactions. We show how NESP applies to the temporal evolution of recent aftershock sequences in Greece, and demonstrate the existence of a crossover behavior from power-law (q ≠ 1) to exponential (q = 1) scaling for longer interevent times. The observed behavior is further discussed in terms of superstatistics; in this way a stochastic mechanism with memory effects that can produce the observed scaling behavior is demonstrated. The non-extensive parameter q related to the interevent time distribution varies between 1.62 and 1.71, which suggests a system with about one degree of freedom.
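The NESP interevent-time distribution underlying this analysis is the q-exponential, which reduces to the ordinary Boltzmann-Gibbs exponential as q → 1; the crossover from power-law to exponential scaling noted in the abstract follows directly from its form. A minimal sketch (the characteristic time τ0 = 1 is an arbitrary scale, not a fitted value):

```python
import math

def q_exponential(tau, q=1.65, tau0=1.0):
    """q-exponential survival function P(>tau) used in non-extensive
    statistical physics (NESP); for q -> 1 it reduces to the
    Boltzmann-Gibbs exponential exp(-tau / tau0)."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(-tau / tau0)
    base = 1.0 - (1.0 - q) * tau / tau0
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# For q ~ 1.65 (within the 1.62-1.71 range reported in the abstract)
# the tail decays as a power law, far more slowly than the q = 1 case:
print(q_exponential(10.0, q=1.65))  # heavy power-law tail
print(q_exponential(10.0, q=1.0))   # exponential tail, much smaller
```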
28

Hill, David P., Fred Pollitz, and Christopher Newhall. "Earthquake–Volcano Interactions." Physics Today 55, no. 11 (November 2002): 41–47. http://dx.doi.org/10.1063/1.1535006.

29

Parsons, T. "Heightened Odds of Large Earthquakes Near Istanbul: An Interaction-Based Probability Calculation." Science 288, no. 5466 (April 28, 2000): 661–65. http://dx.doi.org/10.1126/science.288.5466.661.

30

Regenauer-Lieb, Klaus, Manman Hu, Christoph Schrank, Xiao Chen, Santiago Peña Clavijo, Ulrich Kelka, Ali Karrech, et al. "Cross-diffusion waves resulting from multiscale, multi-physics instabilities: theory." Solid Earth 12, no. 4 (April 16, 2021): 869–83. http://dx.doi.org/10.5194/se-12-869-2021.

Abstract:
Abstract. We propose a multiscale approach for coupling multi-physics processes across the scales. The physics is based on discrete phenomena, triggered by local thermo-hydro-mechano-chemical (THMC) instabilities, that cause cross-diffusion (quasi-soliton) acceleration waves. These waves nucleate when the overall stress field is incompatible with accelerations from local feedbacks of generalized THMC thermodynamic forces that trigger generalized thermodynamic fluxes of another kind. Cross-diffusion terms in the 4×4 THMC diffusion matrix are shown to lead to multiple diffusional P and S wave equations as coupled THMC solutions. Uncertainties in the location of meso-scale material instabilities are captured by a wave-scale correlation of probability amplitudes. Cross-diffusional waves have unusual dispersion patterns and, although they assume a solitary state, do not behave like solitons but show complex interactions when they collide. Their characteristic wavenumber and constant speed define mesoscopic internal material time–space relations entirely defined by the coefficients of the coupled THMC reaction–cross-diffusion equations. A companion paper proposes an application of the theory to earthquakes showing that excitation waves triggered by local reactions can, through an extreme effect of a cross-diffusional wave operator, lead to an energy cascade connecting large and small scales and cause solid-state turbulence.
31

Tseng, Chih-Ming, Yie-Ruey Chen, Chwen-Ming Chang, Ya-Ling Yang, Yu-Ru Chen, and Shun-Chieh Hsieh. "Statistical Analysis of the Potential of Landslides Induced by Combination between Rainfall and Earthquakes." Water 14, no. 22 (November 15, 2022): 3691. http://dx.doi.org/10.3390/w14223691.

Abstract:
This study analyzed the potential of landslides induced by the interaction between rainfall and earthquakes. Dapu Township and Alishan Township in Chiayi County, southern Taiwan, were included as study areas. From satellite images and the literature, we collected data for multiple years and time series and then used the random forest data mining algorithm for satellite image interpretation. A hazard index for the interaction between earthquakes and rainfall (IHERI) was proposed, and an index for the degree of land disturbance (IDLD) was estimated to explore the characteristics of IHERI under specific natural environmental and slope land use conditions. The results revealed that among the investigated disaster-causing factors, the degree of slope land use disturbance, the slope of the natural environment, and rainfall exerted the strongest effect on landslide occurrence. When IHERI or IDLD was higher, the probability of a landslide also increased, and under conditions of a similar IDLD, the probability of landslides increased as the IHERI value increased, and vice versa. Thus, given the interaction between rainfall and earthquakes in the study area, the effect of the degree of slope land use disturbance on landslides should not be ignored. The results of a receiver operating characteristic (ROC) curve analysis indicated that the areas under the ROC curve for landslides induced by different trigger factors were all above 0.94. The results indicate that the area in which medium–high-level landslides are induced by an interaction between rainfall and earthquakes is large.
32

Jiménez, A., A. M. Posadas, T. Hirata, and J. M. García. "Probabilistic seismic hazard maps from seismicity patterns analysis: the Iberian Peninsula case." Natural Hazards and Earth System Sciences 4, no. 3 (June 10, 2004): 407–16. http://dx.doi.org/10.5194/nhess-4-407-2004.

Abstract:
Earthquake prediction is a main topic in seismology. Here, the goal is to determine the correlation between the seismicity at a certain place at a given time and the seismicity at the same place during a following interval of time. Exact prediction is not possible, but one can ask about the causal relations between the seismic characteristics of one time interval and another in a region. In this paper, a new approach to this kind of study is presented, using tools that include cellular automata theory and Shannon's entropy. First, the catalogue is divided into time intervals, and the region into cells. The activity or inactivity of each cell at a certain time is described using an energy criterion; thus a pattern which evolves over time is obtained. The aim is to find the rules of the stochastic cellular automaton which best fit the evolution of the pattern. The neighborhood utilized is the cross template (CT). A grid search is made to choose the best model, with the mutual information between the different times as the function to be maximized. This function depends on the size of the cells, β, and on the interval of time, τ, which is considered for studying the activity of a cell. With these β and τ, a set of probabilities which characterizes the evolution rules is calculated, giving a probabilistic approach to the spatiotemporal evolution of the region. The sample catalogue for the Iberian Peninsula covers the period from 1970 to 2001. The results point out that the seismic activity must be deduced not only from the past activity at the same region but also from the surrounding activity. The strongest temporal and spatial interactions for the catalogue used are around 3.3 years and 290x165 km2, respectively; if a cell is inactive, it will remain inactive with a high probability, while an active cell has around a 60% probability of remaining active in the future.
The Probabilistic Seismic Hazard Map obtained marks the main seismically active areas (northwestern Africa), where real seismicity has occurred after the period covered by the data set. The Hurst exponent has also been studied; the value calculated is 0.48±0.02, which means that the process is inherently unpredictable. This result can be related to the inability of the obtained cellular automaton to predict sudden changes.
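The grid search described here maximizes Shannon mutual information between the cell-activity pattern at one time and the pattern one interval later. A minimal sketch of that quantity for binary (active/inactive) sequences; the example patterns are invented for illustration:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Shannon mutual information (bits) between two equally long
    sequences of discrete symbols, e.g. active/inactive (1/0) cell
    states at time t and at time t + tau."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # p_ab * n * n / (px[a] * py[b]) == p(a,b) / (p(a) * p(b))
        mi += p_ab * math.log2(p_ab * n * n / (px[a] * py[b]))
    return mi

# A pattern that perfectly determines the next one carries 1 bit here,
# while a constant pattern carries none:
x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))        # 1.0 (identical binary patterns)
print(mutual_information(x, [0] * 8))  # 0.0 (constant tells nothing)
```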
33

Lari, S., P. Frattini, and G. B. Crosta. "Local scale multiple quantitative risk assessment and uncertainty evaluation in a densely urbanised area (Brescia, Italy)." Natural Hazards and Earth System Sciences 12, no. 11 (November 20, 2012): 3387–406. http://dx.doi.org/10.5194/nhess-12-3387-2012.

Abstract:
The study of the interactions between natural and anthropogenic risks is necessary for quantitative risk assessment in areas affected by active natural processes, high population density and strong economic activities. We present a multiple quantitative risk assessment for a 420 km2 high-risk area (Brescia and surroundings, Lombardy, Northern Italy), for flood, seismic and industrial accident scenarios. Expected annual economic losses are quantified for each scenario and annual exceedance probability-loss curves are calculated. Uncertainty on the input variables is propagated by means of three different methodologies: Monte Carlo simulation, first-order second-moment, and point estimate. Expected losses calculated by means of the three approaches show similar values for the whole study area: about 64 000 000 € for earthquakes, about 10 000 000 € for floods, and about 3000 € for industrial accidents. Locally, expected losses assume quite different values if calculated with the three different approaches, with differences up to 19%. The uncertainties on the expected losses and their propagation, performed with the three methods, are compared and discussed in the paper. In some cases, uncertainty reaches significant values (up to almost 50% of the expected loss). This underlines the necessity of including uncertainty in quantitative risk assessment, especially when it is used as a support for territorial planning and decision making. The method is developed with a possible application at a regional-national scale in mind, on the basis of data available in Italy over the national territory.
34

TSUKUDA, Tameshige. "Earthquake Forecast based on Probability." Zisin (Journal of the Seismological Society of Japan. 2nd ser.) 56, no. 1 (2003): 11–20. http://dx.doi.org/10.4294/zisin1948.56.1_11.

35

Schorlemmer, D., and J. Woessner. "Probability of Detecting an Earthquake." Bulletin of the Seismological Society of America 98, no. 5 (October 1, 2008): 2103–17. http://dx.doi.org/10.1785/0120070105.

36

Cochard, A., O. Lengliné, K. J. Måløy, and R. Toussaint. "Thermally activated crack fronts propagating in pinning disorder: simultaneous brittle/creep behaviour depending on scale." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2136 (November 26, 2018): 20170399. http://dx.doi.org/10.1098/rsta.2017.0399.

Abstract:
We study theoretically the propagation of a crack front in mode I along an interface in a disordered elastic medium, with a numerical model considering a thermally activated rheology, toughness disorder and long-range elastic interactions. This model reproduces not only the large-scale dynamics of the crack front position in fast or creep loading regimes, but also the small-scale self-affine behaviour of the front. Two different scaling laws are predicted for the front morphology, with a Hurst exponent of 0.5 at small scales and a logarithmic scaling law at large scales, consistent with experiments. The prefactor of these scaling laws is expressed as a function of the temperature and of the quenched disorder characteristics. The cross-over between these regimes is expressed as a function of the quenched disorder amplitude, and is proportional to the average energy release rate and to the inverse of temperature. This model also captures the experimentally observed local velocity fluctuation probability distribution, with a high-velocity tail P(v) ∼ v^−2.6. This feature is shown to arise when the quenched disorder is sufficiently large, whereas smaller toughness fluctuations lead to a lognormal-like velocity distribution. Overall, the system is shown to obey a scaling determined by two distinct mechanisms as a function of scale: namely, the large scales display fluctuations similar to an elastic line in an annealed noise excited as the average front travels through the pinning landscape, while small scales display a balance between thresholds in possible elastic forces and quenched disorder. This article is part of the theme issue ‘Statistical physics of fracture and earthquakes’.
37

Sun, Jichun, and Tso-Chien Pan. "The probability of very large earthquakes in Sumatra." Bulletin of the Seismological Society of America 85, no. 4 (August 1, 1995): 1226–31. http://dx.doi.org/10.1785/bssa0850041226.

Abstract:
This article presents the results of a preliminary investigation into the risk of very large earthquakes in Sumatra. Data for the study were taken from the Earthquake Data Base System of the National Earthquake Information Center, U.S. Geological Survey. In determining the recurrence interval of large earthquakes, the method of Dong et al. (1984) based on the maximum entropy principle was used. If the maximum magnitude of possible earthquakes in Sumatra is assumed to be 8.75, 9.0, or unlimited, the recurrence interval of a magnitude 8.5 earthquake is found to be 430, 283, or 204 yr, respectively. For the three cases, the magnitude of an earthquake with a 10% probability of exceedance in 50 yr is determined to be 8.52, 8.64, and 8.85, respectively, on the assumption of a Poisson distribution for earthquake occurrence. The results imply that the risk of a very large earthquake is high in Sumatra, and its consequences for the distant metropolitan areas on the Malay Peninsula should be investigated in further research.
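The final step of such a calculation follows from the Poisson occurrence model: for a mean recurrence interval T, the probability of at least one event in an exposure time t is 1 − exp(−t/T), so a 10% chance of exceedance in 50 years corresponds to a return period of roughly 475 years. A quick sketch of this arithmetic:

```python
import math

def poisson_exceedance(t_years, return_period):
    """Probability of at least one occurrence in t_years for a
    Poisson process with the given mean return period (years)."""
    return 1.0 - math.exp(-t_years / return_period)

def return_period_for(p, t_years):
    """Return period whose Poisson exceedance probability over
    t_years equals p (inverse of the function above)."""
    return -t_years / math.log(1.0 - p)

# The classic "10% in 50 years" hazard level:
print(return_period_for(0.10, 50.0))  # ~474.6 years
# Consistency check: that return period gives back 10% in 50 years.
print(poisson_exceedance(50.0, return_period_for(0.10, 50.0)))
```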
38

MAEDA, Kenji, and Akio YOSHIDA. "Probability of Earthquake Occurrence Using Weibull Distribution." Zisin (Journal of the Seismological Society of Japan. 2nd ser.) 44, no. 2 (1991): 147–50. http://dx.doi.org/10.4294/zisin1948.44.2_147.

39

Schertzer, D., and S. Lovejoy. "EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes." Nonlinear Processes in Geophysics 1, no. 2/3 (September 30, 1994): 77–79. http://dx.doi.org/10.5194/npg-1-77-1994.

Abstract:
1. The conference. The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was a joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to develop confrontation between theories and experiments on scaling/multifractal behaviour of geophysical fields. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing data bases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized, as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one-day short course on multifractals featuring four lectures on a) Fundamentals of multifractals: dimension, codimensions, codimension formalism, b) Multifractal estimation techniques: (PDMS, DTM), c) Numerical simulations, Generalized Scale Invariance analysis, d) Advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S.
Lovejoy, detailed course notes were sent to participants shortly after the conference). This was followed by five days with 8 oral sessions and one poster session. Overall, there were 65 papers involving 74 authors. In general, the main topics covered are reflected in this special issue: geophysical turbulence, clouds and climate, hydrology and solid earth geophysics. In addition to AGU and EGS, the conference was supported by the International Science Foundation, the Centre Nationale de Recherche Scientifique, Meteo-France, the Department of Energy (US), the Commission of European Communities (DG XII), the Comite National Francais pour le Programme Hydrologique International, the Ministere de l'Enseignement Superieur et de la Recherche (France). We thank P. Hubert, Y. Kagan, Ph. Ladoy, A. Lazarev, S.S. Moiseev, R. Pierrehumbert, F. Schmitt and Y. Tessier, for help with the organization of the conference. However special thanks goes to A. Richter and the EGS office, B. Weaver and the AGU without whom this would have been impossible. We also thank the Institut d' Etudes Scientifiques de Cargese whose beautiful site was much appreciated, as well as the Bar des Amis whose ambiance stimulated so many discussions. 2. Tribute to L.F. Richardson With NVAG3, the European geophysical community paid tribute to Lewis Fry Richardson (1881-1953) on the 40th anniversary of his death. Richardson was one of the founding fathers of the idea of scaling and fractality, and his life reflects the European geophysical community and its history in many ways. Although many of Richardson's numerous, outstanding scientific contributions to geophysics have been recognized, perhaps his main contribution concerning the importance of scaling and cascades has still not received the attention it deserves. Richardson was the first not only to suggest numerical integration of the equations of motion of the atmosphere, but also to attempt to do so by hand, during the First World War. 
This work, as well as a presentation of a broad vision of future developments in the field, appeared in his famous, pioneering book "Weather prediction by numerical process" (1922). As a consequence of his atmospheric studies, the nondimensional number associated with fluid convective stability has been called the "Richardson number". In addition, his book presents a study of the limitations of numerical integration of these equations; it was also in this book, through a celebrated poem, that the suggestion was first made that turbulent cascades are the fundamental driving mechanism of the atmosphere. In these cascades, large eddies break up into smaller eddies in a manner which involves no characteristic scales, all the way from the planetary scale down to the viscous scale. This led to the Richardson law of turbulent diffusion (1926) and to the suggestion that particle trajectories might not be describable by smooth curves, but might instead require highly convoluted curves such as the Peano or Weierstrass (fractal) curves for their description. As a founder of the cascade and scaling theories of atmospheric dynamics, he more or less anticipated the Kolmogorov law (1941). He also used scaling ideas to invent the "Richardson dividers method" of successively increasing the resolution of fractal curves, and tested out the method on geographical boundaries (as part of his wartime studies). In the latter work he anticipated recent efforts to study scale invariance in rivers and topography. His complex life typifies some of the hardships that the European scientific community has had to face. His educational career is unusual: he received a B.A. degree in physics, mathematics, chemistry, biology and zoology at Cambridge University, and he finally obtained his Ph.D. in mathematical psychology at the age of 47 from the University of London. 
As a conscientious objector he was compelled to quit the United Kingdom Meteorological Office in 1920 when the latter was militarized by integration into the Air Ministry. He subsequently became the head of a physics department and the principal of a college. In 1940, he retired to do research on war, which was published posthumously in book form (Richardson, 1963). This latter work is testimony to the trauma caused by the two World Wars, which led some scientists, including Richardson, to use their skills in rational attempts to eradicate the source of conflict. Unfortunately, this remains an open field of research.
3. The contributions in this special issue
Perhaps the area of geophysics where scaling ideas have the longest history, and where they have made the largest impact in the last few years, is turbulence. The paper by Tsinober is an example where geometric fractal ideas are used to deduce corrections to standard dimensional analysis results for turbulence. Based on local spontaneous breaking of isotropy of turbulent flows, the fractal notion is used in order to deduce diffusion laws (anomalous with respect to the Richardson law). It is argued that this law is ubiquitous from the atmospheric boundary layer to the stratosphere. The asymptotic intermittency exponent is hypothesized to be not only finite but determined by the angular momentum flux. Schmitt et al., Chigirinskaya et al. and Lazarev et al. apply statistical multifractal notions to atmospheric turbulence. In the first, the formal analogy between multifractals and thermodynamics is exploited, in particular to confirm theoretical predictions that sample-size-dependent multifractal phase transitions occur. 
While this quantitatively explains the behavior of the most extreme turbulent events, it suggests that - contrary to the type of multifractals most commonly discussed in the literature, which are bounded - more violent (unbounded) multifractals are indeed present in the atmospheric wind field. Chigirinskaya et al. use a tropical rather than mid-latitude data set to study the extreme fluctuations from yet another angle: that of coherent structures, which, in the multifractal framework, are identified with singularities of various orders. The existence of a critical order of singularity which distinguishes violent "self-organized critical structures" was theoretically predicted ten years ago; here it is directly estimated. The second paper of this two-part series (Lazarev et al.) investigates yet another aspect of tropical atmospheric dynamics: the strong multiscaling anisotropy. Beyond the determination of universal multifractal indices and critical singularities in the vertical, this enables a comparison to be made with Chigirinskaya et al.'s horizontal results, requiring an extension of the unified scaling model of atmospheric dynamics. Other approaches to the problem of geophysical turbulence are followed in the papers by Pavlos et al., Vassiliadis et al., and Voros et al. All of them share a common assumption that a very small number of degrees of freedom (deterministic chaos) might be sufficient for characterizing/modelling the systems under consideration. Pavlos et al. consider the magnetospheric response to the solar wind, showing that scaling occurs both in real space (using spectra) and in phase space, the latter being characterized by a correlation dimension. The paper by Vassiliadis et al. follows on directly by investigating the phase space properties of power-law filtered and rectified gaussian noise; the results further quantify how low phase space correlation dimensions can occur even with (stochastic) processes having a very large number of degrees of freedom. Voros et al. 
analyze time series of geomagnetic storms and magnetosphere pulsations, also estimating their correlation dimensions and Lyapunov exponents, taking special care over the stability of the estimates. They discriminate low-dimensional events from others, which are, for instance, attributed to incoherent waves. While clouds and climate were the subject of several talks at the conference (including several contributions on multifractal clouds), Cahalan's contribution is the only one in this special issue. Addressing the fundamental problem of the relationship between horizontal cloud heterogeneity and the related radiation fields, he first summarizes some recent numerical results showing that even for comparatively thin clouds, fractal heterogeneity will significantly reduce the albedo. The model used for the distribution of cloud liquid water is the monofractal "bounded cascade" model, whose properties are also outlined. The paper by Falkovich addresses another problem concerning the general circulation: the nonlinear interaction of waves. By assuming the existence of a peak (i.e. scale break) at the inertial oscillation frequency, it is argued that, due to remarkable cancellations, the interactions between long inertio-gravity waves and Rossby waves are anomalously weak, producing a "wave condensate" of large amplitude so that wave breaking with front creation can occur. Kagan, Eneva and Hooge et al. consider fractal and multifractal behaviour in seismic events. Eneva estimates multifractal exponents of the density of micro-earthquakes induced by mining activity. The effects of sample limitations are discussed, especially in order to distinguish genuine from spurious multifractal behaviour. With the help of an analysis of the CALNET catalogue, Hooge et al. 
point out that the origin of the celebrated Gutenberg-Richter law could be related to a non-classical self-organized criticality generated by a first-order phase transition in a multifractal earthquake process. They also analyze multifractal seismic fields which are obtained by raising earthquake amplitudes to various powers and summing them on a grid. In contrast, Kagan, analyzing several earthquake catalogues, discusses the various laws associated with earthquakes. Giving theoretical and empirical arguments, he proposes an additive (monofractal) model of earthquake stress, emphasizing the relevance of (asymmetric) stable Cauchy probability distributions for describing earthquake stress distributions. This would yield a linear model for self-organized critical earthquakes.
References:
Kolmogorov, A.N.: Local structure of turbulence in an incompressible liquid for very large Reynolds number, Proc. Acad. Sci. URSS Geochem. Sect., 30, 299-303, 1941.
Perrin, J.: Les Atomes, NRF-Gallimard, Paris, 1913.
Richardson, L.F.: Weather prediction by numerical process, Cambridge Univ. Press, 1922 (republished by Dover, 1965).
Richardson, L.F.: Atmospheric diffusion shown on a distance-neighbour graph, Proc. Roy. Soc. London, A110, 709-737, 1926.
Richardson, L.F.: The problem of contiguity: an appendix to Statistics of Deadly Quarrels, General Systems Yearbook, 6, 139-187, 1963.
Schertzer, D. and Lovejoy, S.: Nonlinear Variability in Geophysics, Kluwer, 252 pp., 1991.
APA, Harvard, Vancouver, ISO, and other styles
40

Dieterich, J. H. "Earthquake simulations with time-dependent nucleation and long-range interactions." Nonlinear Processes in Geophysics 2, no. 3/4 (December 31, 1995): 109–20. http://dx.doi.org/10.5194/npg-2-109-1995.

Full text
Abstract:
Abstract. A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and dynamics of earthquake slip are introduced which permit computations to proceed in steps that are determined by the transitions from one sliding state to the next. The transition-driven time stepping and avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. Rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
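The Omori aftershock decay cited in this abstract has a standard parameterized form, the modified Omori (Omori-Utsu) law. A minimal sketch follows; the parameter values K, c and p are illustrative assumptions, not values from the paper:

```python
# Modified Omori (Omori-Utsu) law: the aftershock rate decays as a power law
# in time since the mainshock.  K, c, p below are illustrative values only.
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Aftershock rate (events per day) at t days after the mainshock."""
    return K / (c + t) ** p

# The rate falls off steeply: most aftershocks occur soon after the mainshock.
rates = [omori_rate(t) for t in (1.0, 10.0, 100.0)]
```

The power-law tail (p near 1) is what makes aftershock sequences persist far longer than an exponential decay would suggest.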
APA, Harvard, Vancouver, ISO, and other styles
41

SHINODA, Masahiro. "EARTHQUAKE DAMAGE PROBABILITY AND NATIONWIDE DAMAGE PROBABILITY MAP OF RAILWAY EMBANKMENTS." Journal of Japan Society of Civil Engineers, Ser. C (Geosphere Engineering) 78, no. 4 (2022): 287–305. http://dx.doi.org/10.2208/jscejge.78.4_287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fan, Fang, Lingling Ye, Hiroo Kanamori, and Thorne Lay. "Responding to Media Inquiries about Earthquake Triggering Interactions." Seismological Research Letters 92, no. 5 (April 14, 2021): 3035–45. http://dx.doi.org/10.1785/0220200452.

Full text
Abstract:
Abstract In the aftermath of a significant earthquake, seismologists are frequently asked questions by the media and public regarding possible interactions with recent prior events, including events at great distances away, along with prospects of larger events yet to come, both locally and remotely. For regions with substantial earthquake catalogs that provide information on the regional Gutenberg–Richter magnitude–frequency relationship, Omori temporal aftershock statistical behavior, and aftershock productivity parameters, probabilistic responses can be provided for likelihood of nearby future events of larger magnitude, as well as expected behavior of the overall aftershock sequence. However, such procedures generally involve uncertain extrapolations of parameterized equations to infrequent large events and do not provide answers to inquiries about long-range interactions, either retrospectively for interaction with prior remote large events or prospectively for interaction with future remote large events. Dynamic triggering that may be involved in such long-range interactions occurs, often with significant temporal delay, but is not well understood, making it difficult to respond to related inquiries. One approach to addressing such inquiries is to provide retrospective or prospective occurrence histories for large earthquakes based on global catalogs; while not providing quantitative understanding of any physical interaction, experience-based guidance on the (typically very low) chances of causal interactions can inform public understanding of likelihood of specific scenarios they are commonly very interested in.
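The regional Gutenberg-Richter magnitude-frequency relationship mentioned above, log10 N(M) = a - b*M, implies that catalog magnitudes above a completeness threshold Mc are exponentially distributed, which yields the classical Aki maximum-likelihood b-value estimate. A hedged sketch on a synthetic catalog (the data are simulated, not from any real catalog):

```python
import math
import random

# Under Gutenberg-Richter, magnitudes above the completeness magnitude Mc
# follow an exponential distribution with rate beta = b * ln(10).  Simulate
# a catalog with a known b-value, then recover it with Aki's maximum-
# likelihood estimator: b_hat = log10(e) / (mean(M) - Mc).
random.seed(42)
Mc, b_true = 3.0, 1.0
beta = b_true * math.log(10.0)
catalog = [Mc + random.expovariate(beta) for _ in range(50000)]

mean_mag = sum(catalog) / len(catalog)
b_hat = math.log10(math.e) / (mean_mag - Mc)
```

With a b-value in hand, the expected relative frequency of events one magnitude unit larger is simply a factor of 10**(-b), which is the kind of extrapolation (and its uncertainty for infrequent large events) the abstract cautions about.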
APA, Harvard, Vancouver, ISO, and other styles
43

Forcellini, Davide, Daniele Mina, and Hassan Karampour. "The Role of Soil Structure Interaction in the Fragility Assessment of HP/HT Unburied Subsea Pipelines." Journal of Marine Science and Engineering 10, no. 1 (January 14, 2022): 110. http://dx.doi.org/10.3390/jmse10010110.

Full text
Abstract:
Subsea high pressure/high temperature (HP/HT) pipelines may be significantly affected by the effects of soil structure interaction (SSI) when subjected to earthquakes. Numerical simulations are herein applied to assess the role of soil deformability in the seismic vulnerability of an unburied pipeline. Going beyond most contributions in the existing literature, this paper proposes a comprehensive 3D model of the system (soil + pipeline) built in OpenSees, which allows the representation of non-linear mechanisms of the soil and may realistically assess the damage induced by the mutual interaction of buckling and seismic loads. Analytical fragility curves are herein derived to evaluate the role of soil structure interaction in the assessment of the vulnerability of a benchmark HP/HT unburied subsea pipeline. The probability of exceeding selected limit states was based on the definition of credited failure criteria.
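Analytical fragility curves of the kind derived in this paper are commonly expressed as a lognormal CDF in the ground-motion intensity measure. A generic sketch, assuming nothing about the paper's actual fitted values (theta and beta below are hypothetical placeholders):

```python
import math

def fragility(im, theta=0.4, beta=0.5):
    """P(exceeding a limit state | intensity measure im), lognormal form.
    theta: median capacity (e.g. PGA in g); beta: lognormal dispersion.
    Both parameter values are illustrative placeholders, not fitted values."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability of exceedance rises monotonically with shaking intensity,
# passing through 0.5 exactly at the median capacity theta.
p_weak, p_median, p_strong = fragility(0.1), fragility(0.4), fragility(1.0)
```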
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, J. P., and Y. Xu. "A non-stationary earthquake probability assessment with the Mohr–Coulomb failure criterion." Natural Hazards and Earth System Sciences 15, no. 10 (October 23, 2015): 2401–12. http://dx.doi.org/10.5194/nhess-15-2401-2015.

Full text
Abstract:
Abstract. From theory to experience, the earthquake probability associated with an active fault should gradually increase with time since the last event. In this paper, a new non-stationary earthquake assessment motivated by and derived from the Mohr–Coulomb failure criterion is introduced. Unlike other non-stationary earthquake analyses, the new model can more clearly define and calculate the stress states between two characteristic earthquakes. In addition to the model development and the algorithms, this paper also presents an example calculation to help explain and validate the new model. On the condition of best-estimate model parameters, the example calculation shows a 7.6 % probability for the Meishan fault in central Taiwan to induce a major earthquake in the years 2015–2025; if the earthquake has not occurred by 2025, the probability increases to 8 % for 2025–2035, demonstrating that the new model calculates a non-stationary earthquake probability that varies with time, as it should.
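The time-dependence validated in this abstract (a higher probability if the earthquake has not yet occurred) is the typical behaviour of a renewal-model conditional probability, P(T <= t0+dt | T > t0) = (F(t0+dt) - F(t0)) / (1 - F(t0)). A generic sketch with a lognormal recurrence distribution; the parameters are hypothetical and unrelated to the Meishan fault or to the paper's Mohr-Coulomb model:

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_prob(t0, dt, mu, sigma):
    """P(event in (t0, t0+dt] | no event by t0) -- renewal-model form."""
    F = lambda t: lognorm_cdf(t, mu, sigma)
    return (F(t0 + dt) - F(t0)) / (1.0 - F(t0))

# Hypothetical recurrence model: median interval 100 yr, dispersion 0.5.
mu, sigma = math.log(100.0), 0.5
p_50 = conditional_prob(50.0, 10.0, mu, sigma)  # 50 yr since last event
p_60 = conditional_prob(60.0, 10.0, mu, sigma)  # 60 yr since last event
```

Here p_60 exceeds p_50: the same 10-year window carries more probability the longer the fault has gone without rupturing, mirroring the 7.6 % to 8 % increase reported in the example calculation.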
APA, Harvard, Vancouver, ISO, and other styles
45

Holliday, J. R., J. B. Rundle, K. F. Tiampo, and D. L. Turcotte. "Using earthquake intensities to forecast earthquake occurrence times." Nonlinear Processes in Geophysics 13, no. 5 (October 31, 2006): 585–93. http://dx.doi.org/10.5194/npg-13-585-2006.

Full text
Abstract:
Abstract. It is well known that earthquakes do not occur randomly in space and time. Foreshocks, aftershocks, precursory activation, and quiescence are just some of the patterns recognized by seismologists. Using the Pattern Informatics technique along with relative intensity analysis, we create a scoring method based on time-dependent relative operating characteristic diagrams and show that the occurrences of large earthquakes in California correlate with time intervals where fluctuations in small earthquakes are suppressed relative to the long-term average. We estimate a probability of less than 1% that this coincidence is due to random clustering. Furthermore, we show that the methods used to obtain these results may be applicable to other parts of the world.
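Relative operating characteristic scoring of the kind used here compares a binary forecast map against subsequently observed events via hit and false-alarm rates. A minimal sketch on toy data (the forecast and observation vectors are invented for illustration, not from the paper):

```python
def roc_point(forecast, observed):
    """Hit rate and false-alarm rate of a binary spatial forecast.
    forecast, observed: sequences of 0/1 flags, one per spatial cell."""
    hits = sum(1 for f, o in zip(forecast, observed) if f == 1 and o == 1)
    misses = sum(1 for f, o in zip(forecast, observed) if f == 0 and o == 1)
    false_alarms = sum(1 for f, o in zip(forecast, observed) if f == 1 and o == 0)
    correct_negs = sum(1 for f, o in zip(forecast, observed) if f == 0 and o == 0)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negs)
    return hit_rate, false_alarm_rate

# Toy example: 8 spatial cells, the forecast flags 3 of them as hazardous.
forecast = [1, 1, 1, 0, 0, 0, 0, 0]
observed = [1, 1, 0, 0, 1, 0, 0, 0]
h, f = roc_point(forecast, observed)
```

Sweeping the forecast threshold traces out the full ROC curve; a skillful forecast keeps the hit rate well above the false-alarm rate across thresholds.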
APA, Harvard, Vancouver, ISO, and other styles
46

Imoto, Masajiro. "Earthquake probability based on multidisciplinary observations with correlations." Earth, Planets and Space 58, no. 11 (November 2006): 1447–54. http://dx.doi.org/10.1186/bf03352643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

De Natale, Giuseppe, Anna Ferraro, and Jean Virieux. "A probability method for local earthquake focal mechanisms." Geophysical Research Letters 18, no. 4 (April 1991): 613–16. http://dx.doi.org/10.1029/91gl00829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jena, Ratiranjan, Biswajeet Pradhan, Abdullah Al-Amri, Chang Wook Lee, and Hyuck-jin Park. "Earthquake Probability Assessment for the Indian Subcontinent Using Deep Learning." Sensors 20, no. 16 (August 5, 2020): 4369. http://dx.doi.org/10.3390/s20164369.

Full text
Abstract:
Earthquake prediction is a popular topic among earth scientists; however, the task is challenging and exhibits uncertainty, making probability assessment indispensable in the current period. During the last decades, the volume of seismic data has increased exponentially, adding scalability issues to probability assessment models. Several machine learning methods, such as deep learning, have been applied to large-scale image, video, and text processing; however, they have been rarely utilized in earthquake probability assessment. Therefore, the present research leveraged advances in deep learning techniques to generate scalable earthquake probability mapping. To achieve this objective, this research used a convolutional neural network (CNN). Nine indicators, namely, proximity to faults, fault density, lithology with an amplification factor value, slope angle, elevation, magnitude density, epicenter density, distance from the epicenter, and peak ground acceleration (PGA) density, served as inputs. Meanwhile, 0 and 1 were used as outputs corresponding to non-earthquake and earthquake parameters, respectively. The proposed classification model was tested at the country level on datasets gathered to update the probability map for the Indian subcontinent using statistical measures, such as overall accuracy (OA), F1 score, recall, and precision. The OA values of the model based on the training and testing datasets were 96% and 92%, respectively. The proposed model also achieved precision, recall, and F1 score values of 0.88, 0.99, and 0.93, respectively, for the positive (earthquake) class based on the testing dataset. The model predicted two classes and observed very-high (712,375 km2) and high probability (591,240.5 km2) areas consisting of 19.8% and 16.43% of the abovementioned zones, respectively. Results indicated that the proposed model is superior to the traditional methods for earthquake probability assessment in terms of accuracy. 
Aside from facilitating the prediction of pixel values for probability assessment, the proposed model can also help urban planners and disaster managers make appropriate decisions regarding future plans and earthquake management.
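The reported scores are internally consistent: with precision 0.88 and recall 0.99 for the earthquake class, the harmonic mean reproduces the stated F1 of 0.93.

```python
# F1 is the harmonic mean of precision and recall; the two input values
# are taken directly from the abstract's testing-set results.
precision, recall = 0.88, 0.99
f1 = 2 * precision * recall / (precision + recall)  # rounds to 0.93
```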
APA, Harvard, Vancouver, ISO, and other styles
49

Nanjo, K. Z., S. Sakai, A. Kato, H. Tsuruoka, and N. Hirata. "Time-dependent earthquake probability calculations for southern Kanto after the 2011 M9.0 Tohoku earthquake." Geophysical Journal International 193, no. 2 (February 26, 2013): 914–19. http://dx.doi.org/10.1093/gji/ggt009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Rahman, Md Habibur, and Md Moyazzem Hossain. "Distribution of Earthquake Magnitude Levels in Bangladesh." Journal of Geography and Geology 11, no. 3 (September 30, 2019): 15. http://dx.doi.org/10.5539/jgg.v11n3p15.

Full text
Abstract:
Earthquakes are among the main natural hazards that seriously threaten human life and property. Different probability distributions are fitted to the earthquake magnitude levels in Bangladesh. In terms of graphical assessment and goodness-of-fit criteria, the log-normal distribution is found to be the best-fitting probability distribution for the earthquake magnitude levels in Bangladesh among the distributions considered in this study. The average earthquake magnitude level is found to be 4.67 (on the Richter scale) for the log-normal distribution, and an approximately forty-six percent chance is predicted for an earthquake magnitude in the interval four to five.
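The abstract's two figures (mean magnitude 4.67 and roughly a 46% chance of a magnitude in [4, 5]) can be checked against a log-normal model. The parameters below are back-of-envelope assumptions chosen so the mean matches the reported value; they are not the authors' fitted parameters:

```python
import math

# Hypothetical log-normal parameters: sigma is an assumption, and mu is
# chosen so that the mean, exp(mu + sigma**2 / 2), equals the reported 4.67.
sigma = 0.18
mu = math.log(4.67) - sigma ** 2 / 2

def lognorm_cdf(x):
    """CDF of the log-normal magnitude model above."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean_mag = math.exp(mu + sigma ** 2 / 2)        # 4.67 by construction
p_4_to_5 = lognorm_cdf(5.0) - lognorm_cdf(4.0)  # close to the reported ~46%
```

With this choice of sigma the interval probability lands near 0.46, consistent with the abstract's prediction; other dispersions matching the same mean would give different interval probabilities.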
APA, Harvard, Vancouver, ISO, and other styles
