Dissertations / Theses on the topic 'Stochastic ground motion model'

Consult the top 50 dissertations / theses for your research on the topic 'Stochastic ground motion model.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

1

Yenier, Emrah. "Limitations On Point-source Stochastic Simulations In Terms Of Ground-motion Models." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610308/index.pdf.

Full text
Abstract:
In this study, the limitations of point-source stochastic simulations are investigated in terms of fundamental geophysical parameters. Within this context, a total of 6000 synthetic ground motions are generated for various magnitude (5.0 ≤ Mw ≤ 7.5), source-to-site distance (less than 100 km), faulting style (shallow-dipping and strike-slip) and site class (soft, stiff and rock) bins. The simulations are performed in two main stages: (1) the acceleration time series at outcropping very hard rock sites are simulated based on the stochastic method proposed by Boore (1983, 2003), and (2) they are modified through 1-D equivalent linear site response analysis to generate the free-field motions at soft, stiff and rock sites. Thus, as part of this study, a probability-based soil profile model that considers the random variation of S-wave slowness as a function of depth is derived. The synthetic ground motions are assessed against several recent empirical ground-motion models to establish the limitations of the simulation procedure. It is believed that the outcomes of this study will realistically describe the limitations of the stochastic point-source simulation approach, which can then be employed further in studies on improving this simulation technique.
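As an illustration of the spectral shaping that underlies Boore-type stochastic point-source simulation, the sketch below windows Gaussian noise and filters it by a simplified source-path-site Fourier amplitude spectrum (Brune omega-squared source, 1/R geometric spreading, anelastic attenuation and kappa). It is only a minimal stand-in for the procedure named in the abstract: the function name, the envelope shape and all parameter values (stress drop, Q, kappa, duration) are illustrative assumptions, not those used in the thesis, and absolute amplitude calibration is deliberately omitted.

```python
import numpy as np

def boore_like_point_source(mw, r_km, dt=0.005, dur=20.0,
                            stress_bars=100.0, beta_kms=3.5,
                            rho=2.8, kappa=0.04, q0=100.0, seed=0):
    """Very simplified Boore-style stochastic point-source simulation:
    window Gaussian noise, shape its spectrum by source*path*site, invert.
    NOTE: absolute amplitude calibration (FFT normalization) is omitted."""
    rng = np.random.default_rng(seed)
    n = int(dur / dt)
    t = np.arange(n) * dt
    # 1. windowed Gaussian white noise (crude, illustrative envelope)
    noise = rng.standard_normal(n) * (t / dur) ** 0.5 * np.exp(-3.0 * t / dur)
    # 2. keep only the phase of the noise spectrum (unit amplitude)
    spec = np.fft.rfft(noise)
    spec /= np.abs(spec) + 1e-12
    f = np.fft.rfftfreq(n, dt)
    f[0] = 1e-6                                         # avoid division by zero
    # 3. target Fourier amplitude spectrum (indicative cgs-style constants)
    m0 = 10 ** (1.5 * mw + 16.05)                       # seismic moment, dyne-cm
    fc = 4.9e6 * beta_kms * (stress_bars / m0) ** (1 / 3)  # Brune corner frequency
    c = (0.55 * 2.0 * 0.707) / (4 * np.pi * rho * beta_kms ** 3 * 1e20)
    source = c * m0 * (2 * np.pi * f) ** 2 / (1 + (f / fc) ** 2)
    path = (1.0 / r_km) * np.exp(-np.pi * f * r_km / (q0 * beta_kms))
    site = np.exp(-np.pi * kappa * f)
    # 4. shape the noise spectrum and return to the time domain
    acc = np.fft.irfft(spec * source * path * site, n)
    return t, acc

t, acc = boore_like_point_source(mw=6.5, r_km=20.0)
print("peak |a| (uncalibrated units):", np.abs(acc).max())
```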
2

SCOZZESE, FABRIZIO. "AN EFFICIENT PROBABILISTIC FRAMEWORK FOR SEISMIC RISK ANALYSIS OF STRUCTURAL SYSTEMS EQUIPPED WITH LINEAR AND NONLINEAR VISCOUS DAMPERS." Doctoral thesis, Università degli Studi di Camerino, 2018. http://hdl.handle.net/11581/429547.

Full text
Abstract:
Seismic passive protection with supplemental damping devices represents an efficient strategy to produce resilient structural systems with improved seismic performance and notably reduced post-earthquake consequences. Such a strategy indeed offers several advantages with respect to the ordinary seismic design philosophy: structural damage is prevented; the safety of the occupants is ensured and the system remains operational both during and right after the earthquake; no major retrofit interventions are needed but only a post-earthquake inspection (and, if necessary, replacement) of dissipation devices is required; and a noticeable reduction of both direct and indirect outlays is achieved. However, structural systems equipped with seismic control devices (dampers) may show potentially limited robustness, since an unexpected early failure of the dampers may lead to a progressive collapse of the inherently non-ductile system. Although the most advanced international seismic codes acknowledge this issue and require dampers to have higher safety margins against failure, they only provide simplified approaches to cope with the problem, often consisting of general demand amplification rules which are not tailored to the actual needs of different device typologies and which lead to reliability levels that are not explicitly declared. The research activity carried out within this Thesis stems from the need to fill the gaps still present in the international regulatory framework, and to respond to the scarcity of specific probabilistic studies geared to characterize and understand the probabilistic seismic response of such systems up to very low failure probabilities. In particular, as a first step towards this goal, the present work aims at addressing the issue of the seismic risk of structures with fluid viscous dampers, a simple and widely used class of dissipation devices. A robust probabilistic framework has been defined for the purposes of the present work, made up of the combination of an advanced probabilistic tool for solving reliability problems, consisting of Subset Simulation (with Markov chain Monte Carlo and Metropolis-like algorithms), and a stochastic ground motion model for statistical seismic hazard characterization. The seismic performance of the system is described by means of demand hazard curves, providing the mean annual frequency of exceeding any specified threshold demand value for all the relevant global and local Engineering Demand Parameters (EDPs). A wide range of performance levels is monitored, encompassing the serviceability conditions and the ultimate limit states, up to very rare performance demand levels (with a mean annual frequency of exceedance around 10^-6) at which the seismic reliability shall be checked in order to give the system an adequate margin of safety against seismic events rarer than the design one. Some original contributions regarding the methodological approaches have been obtained by an efficient combination of the common conditional probabilistic methods (i.e., multiple-stripe and cloud analysis) with a stochastic earthquake model, in which subset simulation is exploited to efficiently generate both the seismic hazard curve and the ground motion samples for structural analysis purposes. The accuracy of the proposed strategy is assessed by comparing the achieved seismic risk estimates with those provided via Subset Simulation, the latter being taken as the reference reliability method.
Furthermore, a reliability-based optimization method is proposed as a powerful tool for investigating the sensitivity of the seismic risk to variable model parameters. Such a method proves to be particularly useful when a proper statistical characterization of the model parameters is not available. The proposed probabilistic framework is applied to a set of single-degree-of-freedom damped models to carry out an extensive parametric analysis, and to a multi-story steel building with linear and nonlinear viscous dampers for a deeper investigation. The influence of the viscous damper nonlinearity level on the seismic risk of such systems is investigated. The variability of the viscous constitutive parameters due to the tolerance allowed in the devices' quality control and production tests is also accounted for, and the consequent effects on the seismic performance are evaluated. The reliability of the simplified approaches proposed by the main international seismic codes for damper design is assessed; the main regulatory gaps are highlighted and proposals for improvement are given as well. Results from this whole probabilistic investigation contribute to the development of more reliable design procedures for seismic passive protection strategies.
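The demand hazard curves mentioned above are typically obtained by integrating conditional demand distributions (for instance, lognormal EDP|IM models fitted from multiple-stripe analyses) over the seismic hazard curve. The sketch below shows that convolution for a toy case; the hazard curve, the EDP|IM model and all numbers are illustrative assumptions and are unrelated to the damped systems studied in the thesis, which additionally relies on Subset Simulation to reach very low exceedance frequencies.

```python
import numpy as np
from scipy import stats

# Assumed/illustrative inputs: a hazard curve for the intensity measure (IM)
# and lognormal EDP|IM distributions, e.g. fitted from multiple-stripe analyses.
im = np.linspace(0.05, 2.0, 40)                 # IM grid, e.g. Sa(T1) in g
lam_im = 0.05 * np.exp(-3.0 * im)               # mean annual rate of exceeding IM
med_edp = 0.01 * im ** 1.2                      # median EDP given IM (power law)
beta_edp = 0.35                                 # lognormal dispersion of EDP|IM

def demand_hazard(edp_threshold):
    """Mean annual frequency of EDP > threshold:
    lambda(edp) = integral of P(EDP > edp | IM) * |d lambda(IM)|."""
    p_exceed = 1.0 - stats.lognorm.cdf(edp_threshold, s=beta_edp, scale=med_edp)
    d_lam = -np.gradient(lam_im, im)            # hazard curve slope (positive)
    return np.trapz(p_exceed * d_lam, im)

for edp in (0.005, 0.01, 0.02, 0.04):
    print(f"EDP > {edp:.3f}: MAF ~ {demand_hazard(edp):.2e} 1/yr")
```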
3

D'Amico, Laura. "Stochastic analysis and design of vibrating barriers under simulated ground motion processes." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/91e41bc5-dbd6-4f79-a133-fcfd5a105f3e.

Full text
Abstract:
Vibration control techniques have developed remarkably over the past thirty years. These solutions are usually employed to protect new rather than existing structures, for which most of the available control devices may be costly and invasive to install. Recently, Vibrating Barriers (ViBas) have been proposed as a solution to protect both new and existing buildings. By exploiting the Structure-Soil-Structure-Interaction (SSSI) phenomenon, the ViBa is constructed away from the structures to be protected, allowing the characteristics of the buildings to remain unaltered. The ViBa is envisaged as a vibrating mass placed into the soil, through which the control device interacts with the structures in its proximity and is therefore able to control vibrations for clusters of buildings. Up until now, the efficiency of the ViBa has only been demonstrated for simple cases of deterministic seismic input and stationary Gaussian stochastic processes. This research explores the multiple interactions between a building and a ViBa device in order to assess its performance in realistic earthquake scenarios. By means of direct stochastic methods, this research presents a methodology to design the ViBa, validated through pertinent Monte Carlo simulation. The effects of the input selection on the ViBa performance are investigated by analysis of the building-soil-ViBa system response under advanced stochastic Ground Motion Models (GMMs). In this regard, a technique is proposed to simulate earthquake ground motions in agreement with seismic codes while reproducing the non-stationarity and natural variability typical of recorded earthquakes. Initial investigations on the response of linear and non-linear structures, under the proposed ground motion and under traditional quasi-stationary and non-stationary models, have demonstrated that the choice of the ground motion has considerable influence on the study of the reliability of structures, even for the simple case of linearly behaving structures. From the analyses, the sensitivity of the distribution of the relevant response parameters (e.g. the peak displacement) to the GMMs is shown. All of the spectrum-compatible models adopted fulfil the code provisions; however, noticeable differences in the distribution of response parameters are observed. Moreover, studies on the sensitivity of structural responses to damping variation have been performed to address the significance of the GMM selection in relation to the assumptions on the structural damping. Finally, some drawbacks in the current seismic codes have also been identified. In order to establish the methodology for the design of the ViBa under stochastic excitation, the discrete formulation for building-soil-ViBa systems available in the frequency domain has been extended to the time domain. The methodology proposed in this work enables a simplified reliability assessment by defining the mean value of the maximum response displacements, first under stationary Gaussian stochastic seismic action and subsequently verified for non-stationary input. From the investigations, the effectiveness of the ViBa is demonstrated, with reductions of up to 37.80% in the mean and up to 41.49% in the 95% fractile of the peak displacements.
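A common way to build the kind of non-stationary stochastic ground motion samples referred to above is the spectral-representation method: a stationary Gaussian process is synthesized from a one-sided power spectrum with random phases and then modulated by a time envelope. The sketch below follows that generic recipe with an assumed Kanai-Tajimi-type spectrum and an assumed envelope; it is not the specific GMM or spectrum-compatibility technique proposed in the thesis.

```python
import numpy as np

def simulate_quasi_stationary(psd_func, dur=20.0, dt=0.01, fmax=25.0, seed=1):
    """Spectral-representation simulation of a zero-mean Gaussian process,
    made non-stationary in amplitude by a deterministic envelope:
    a(t) = e(t) * sum_k sqrt(2*G(f_k)*df) * cos(2*pi*f_k*t + phi_k)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, dur, dt)
    f = np.arange(0.1, fmax, 0.1)
    df = f[1] - f[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, f.size)           # random phases
    amp = np.sqrt(2.0 * psd_func(f) * df)
    a = (amp[None, :] * np.cos(2 * np.pi * t[:, None] * f[None, :]
                               + phi[None, :])).sum(axis=1)
    envelope = (t / 2.0) ** 2 * np.exp(-t / 2.0)           # illustrative shape
    return t, a * envelope / envelope.max()

# Illustrative Kanai-Tajimi-type one-sided PSD (parameter values are assumptions)
def kanai_tajimi(f, fg=2.5, zg=0.6, s0=1.0):
    r = (f / fg) ** 2
    return s0 * (1 + 4 * zg**2 * r) / ((1 - r) ** 2 + 4 * zg**2 * r)

t, acc = simulate_quasi_stationary(kanai_tajimi)
print("peak acceleration (arbitrary units):", np.abs(acc).max())
```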
4

Siebrits, Eduard. "Three-dimensional elastodynamic shear fracture propagation and ground motion simulation model." Master's thesis, University of Cape Town, 1986. http://hdl.handle.net/11427/26137.

Full text
5

Kewlani, Gaurav. "Stochastic approaches to mobility prediction, path planning and motion control for ground vehicles in uncertain environments." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55270.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 107-111).
The ability of autonomous or semi-autonomous unmanned ground vehicles (UGVs) to rapidly and accurately predict terrain negotiability, generate efficient paths online and exercise effective motion control is a critical requirement for their safety and use in unstructured environments. Most techniques and algorithms for performing these functions, however, assume precise knowledge of vehicle and/or environmental (i.e. terrain) properties. In practical applications, significant uncertainties are associated with the estimation of the vehicle and/or terrain parameters, and these uncertainties must be considered while performing the above tasks. Here, computationally inexpensive methods based on the polynomial chaos approach are studied that consider imprecise knowledge of vehicle and/or terrain parameters while analyzing UGV dynamics and mobility, evaluating safe, traceable paths to be followed and controlling the vehicle motion. Conventional Monte Carlo methods, which are relatively more computationally expensive, are also briefly studied and used as a reference for evaluating the computational efficiency and accuracy of results from the polynomial chaos-based techniques.
by Gaurav Kewlani.
S.M.
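To make the polynomial chaos versus Monte Carlo comparison concrete, the sketch below propagates a single Gaussian-distributed parameter through a toy response function with a one-dimensional Hermite polynomial chaos expansion (computed by Gauss-Hermite quadrature) and checks the resulting mean and variance against plain Monte Carlo. The response function, the parameter values and the expansion order are illustrative assumptions, not the vehicle-terrain models used in the thesis.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def response(theta):
    """Toy mobility-like response with one uncertain terrain parameter."""
    return np.exp(-0.5 * theta) + 0.1 * theta ** 2

mu, sigma, order = 1.0, 0.3, 4        # theta ~ Normal(mu, sigma^2), assumed values

# --- Polynomial chaos: project the response onto probabilists' Hermite polynomials
xi, w = hermegauss(order + 1)         # Gauss-HermiteE nodes/weights (weight e^{-x^2/2})
w = w / np.sqrt(2.0 * np.pi)          # normalize to the standard normal density
y = response(mu + sigma * xi)
coeffs = [np.sum(w * y * hermeval(xi, [0.0] * k + [1.0])) / math.factorial(k)
          for k in range(order + 1)]  # a_k = E[y He_k] / k!
pce_mean = coeffs[0]
pce_var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

# --- Plain Monte Carlo reference (needs many more model evaluations)
samples = response(np.random.default_rng(0).normal(mu, sigma, 200_000))
print("PCE  mean/var:", pce_mean, pce_var)
print("MC   mean/var:", samples.mean(), samples.var())
```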
6

Zadonina, Ekaterina. "Strong ground motion simulations and assessment of influence of model parameters on waveforms." Master's thesis, Universidade de Évora, 2010. http://hdl.handle.net/10174/21222.

Full text
Abstract:
A modelação de movimentos sísmicos intensos em campo próximo é um importante instrumento da sismologia moderna, usado nos estudos de sismologia e risco sísmico. Existem várias abordagens para calcular os movimentos do solo produzido por fontes sísmicas finitas. Neste trabalho utilizámos um algoritmo de diferenças finitas, desenvolvido para estruturas 3D e modelos cinemáticos de fonte, para calcular os movimentos da Terra em campo próximo produzidos por um evento real. Os sismogramas sintéticos e as correspondentes formas de onda registadas são quantitativamente comparadas para justificar o modelo usado. Foram também ensaiados o efeito das variações de alguns parâmetros que caracterizam a fonte e a estrutura (velocidade de ruptura, dimensão e geometria, modelo de velocidade), sobre as formas de onda. Os resultados obtidos mostraram, em geral, boa concordância entre os dados observados e sintéticos e revelam a diferente capacidade que os parâmetros envolvidos têm para influenciar as formas de onda obtidas. __ Summary: Modeling near-field ground motion is an important and helpful tool of modern seismology. It helps in studies of seismic events and mitigation of seismic hazards. Several approaches are widely used to obtain synthetic ground motion for a finite earthquake source. In our work we use a finite-difference algorithm, developed for 3D structures and kinematic source models, to compute near-field ground motions from a real moderate event with a pre-existing slip distribution model. Subsequently, synthetic seismograms are quantitatively compared with observed waveforms from near-field seismic stations in order to justify the created model. Moreover, we independently changed several source parameters (rupture velocity, source dimension and geometry) and structure parameters (velocity model) to evaluate their influence on the waveforms. Here we also applied a quantitative comparison of seismograms. The obtained results showed generally good agreement in the magnitudes of motion between observed and synthetic data, and revealed the effect of different model parameters on the waveforms.
7

Ugurhan, Beliz. "Stochastic Strong Ground Motion Simulations On North Anatolian Fault Zone And Central Italy: Validation, Limitation And Sensitivity Analyses." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612413/index.pdf.

Full text
Abstract:
Assessment of potential ground motions in seismically active regions is essential for purposes of seismic design and analysis. Peak ground motion intensity values and the frequency content of seismic excitations are required for reliable seismic design, analysis and retrofitting of structures. In regions with sparse or no strong ground motion records, ground motion simulations provide physics-based synthetic records. These simulations not only provide the earthquake engineering parameters but also give insight into the mechanisms of the earthquakes. This thesis presents strong ground motion simulations in three regions of intense seismic activity. Stochastic finite-fault simulation methodology with a dynamic corner frequency approach is applied to three case studies performed in the Düzce, L'Aquila and Erzincan regions. In the Düzce study, regional seismic source, propagation and site parameters are determined through validation of the simulations against the records. In the L'Aquila case study, in addition to the study of the regional parameters, the limitations of the method in terms of simulating directivity effects are also investigated. In the Erzincan case study, where there are very few records, the optimum model parameters are determined using a large set of simulations with an error-minimization scheme. Later, a parametric sensitivity study is performed to observe the variations in the simulation results due to small perturbations in the input parameters. The results of this study confirm that the stochastic finite-fault simulation method is an effective technique for generating realistic physics-based synthetic records of large earthquakes in near-field regions.
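For background, stochastic finite-fault simulation with a dynamic corner frequency is usually written in terms of a Brune-type subfault spectrum whose corner frequency decreases as the rupture grows. One common form (after Motazedian and Atkinson, 2005; quoted here only as orientation, not as the exact expression used in the thesis) is

```latex
f_{c}(t) = 4.9\times10^{6}\,\beta\,
           \left(\frac{\Delta\sigma}{M_{0,\mathrm{ave}}}\right)^{1/3} N_R(t)^{-1/3},
\qquad M_{0,\mathrm{ave}} = \frac{M_0}{N},
```

where β is the shear-wave velocity (km/s), Δσ the stress drop (bars), M0 the total seismic moment (dyne·cm), N the number of subfaults and N_R(t) the cumulative number of subfaults ruptured at time t; at the end of rupture the corner frequency reduces to that of the whole fault.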
8

Manko, N. N., and I. A. Lyashenko. "Stochastic Oscillations at Stick-Slip Motion in the Boundary Friction Regime." Thesis, Sumy State University, 2013. http://essuir.sumdu.edu.ua/handle/123456789/35148.

Full text
Abstract:
In this paper, the synergetic model describing the state of an ultrathin lubricant film clamped between two atomically smooth solid surfaces operating in the boundary friction mode is developed further, based on the Lorentz model for the approximation of a viscoelastic medium. In all cases, the phase portraits have been built. It has been found that increasing the temperature of the friction surfaces leads to the growth of stochasticity in the investigated system. In the phase plane the stochastic oscillation mode can be described as a strange attractor. The behavior of two different types of tribosystems was also described using the current model: the first is a system with unidirectional shear of the surfaces, and the second is a system under an alternating external effect. The obtained results agree qualitatively with known experimental data.
9

Kelekele, Liloo Didier Joel. "Mathematical model of performance measurement of defined contribution pension funds." University of the Western Cape, 2015. http://hdl.handle.net/11394/4367.

Full text
Abstract:
Magister Scientiae - MSc
The pension fund industry has become one of the drivers of today's economic activity through its substantial contribution to the financial market and through the wealth it creates. The increasing importance that pension funds have acquired in today's economy and financial market attracts special attention from investors, financial actors and pundits in the sector. Given this economic weight, a thorough analysis of the performance of different pension fund plans needs to be undertaken in order to optimise benefits. The research explores criteria and invariants that make it possible to compare the performance of different pension fund products. Pension fund companies currently compare their performance with that of others; likewise, the individual investing in a pension plan compares the different products available in the market. There exist different ways of measuring the performance of a pension fund according to its scheme. Generally, there are two main pension fund plans. The defined benefit (DB) plan is mostly preferred by pension members because it places the risk on the pension fund manager. The defined contribution (DC) plan, on the other hand, is preferred by pension fund managers because it transfers the risk to the pension fund members. One of the reasons that motivates pension fund members to enter a certain programme is that their expectation of maintaining their lifestyle after retirement is met by the pension fund strategies. This dissertation investigates the various properties and characteristics of the defined contribution pension fund plan with a minimum guarantee and a benchmark, in order to mitigate the risk that pension fund members are subject to. For the pension fund manager, the aim is to find the optimal asset allocation strategy which optimises its retribution, which is in fact a part of the surplus (the difference between the pension fund value and the guarantee) (2004) [19], and to analyse the effect of sharing between the contributor and the pension fund. From the pension fund members' perspective, it is to define an optimal guarantee as a solution to the contributor's optimisation programme. In particular, we consider the case of a pension fund company which invests in a bond, stocks and a money market account. The uncertainty in the financial market is driven by Brownian motions. Numerical simulations were performed to compare the different models.
10

Händel, Annabel [Verfasser], Frank [Akademischer Betreuer] Scherbaum, and Frank [Akademischer Betreuer] Krüger. "Ground-motion model selection and adjustment for seismic hazard analysis / Annabel Händel ; Frank Scherbaum, Frank Krüger." Potsdam : Universität Potsdam, 2018. http://d-nb.info/121840406X/34.

Full text
11

Hanowsky, Michael John. "A model to design a stochastic and dynamic ground delay program subject to non-linear cost functions." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43849.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2008.
Includes bibliographical references (p. 245-247).
When inclement weather reduces the arrival capacity of a busy metropolitan airport, it may lead to significant airborne delays. Delaying aircraft in the air consumes additional fuel, increases overall air traffic congestion, and may lead to costly flight diversions. As a result, during periods of inclement weather, the FAA may implement a Ground Delay Program (GDP) to proactively delay flights on the ground before they depart and reduce the possibility of future airborne delays. However, in order to assign ground delays to flights, a GDP must be implemented before they depart, at a time when the future airport arrival capacity may be uncertain. This dissertation discusses two analyses regarding the design of a GDP. The first analysis proposes a model that solves for the optimal assignment of ground delay to aircraft for a stochastic and dynamic forecast of the airport arrival capacity, with nonlinear delay cost functions and a capacity constraint on the airborne arrival queue. This model is applied to several hypothetical examples and, in comparison to prior models from the literature, identifies solutions with a lower total expected cost, a smaller maximum observed arrival queue, or both. The second analysis compares the salience, or importance, of various stakeholder groups to their roles in the design of a GDP in practice. Passengers, in particular, are shown to be an important but under-represented stakeholder group. A second model is proposed that solves for an assignment of ground delay that minimizes the total passenger delay cost. A comparison of these results to those of the first model shows that the total cost of delays to passengers could be reduced by more than 30% if the FAA were to directly consider the cost of delays to passengers during the design of a GDP.
by Michael J. Hanowsky.
Ph.D.
12

Rafiou, AS. "Foreign Exchange Option Valuation under Stochastic Volatility." University of the Western Cape, 2009. http://hdl.handle.net/11394/7777.

Full text
Abstract:
Magister Scientiae - MSc
Pricing options under constant volatility has been common practice for decades. Yet market data prove that volatility is a stochastic phenomenon; this is evident in longer-duration instruments, in which the volatility of the underlying asset is dynamic and unpredictable. The methods of valuing options under stochastic volatility that have been extensively published focus mainly on stock markets and on options written on a single reference asset. This work probes the effect of valuing a European call option written on a basket of currencies, under constant volatility and under stochastic volatility models. We apply a family of stochastic models to investigate the relative performance of option prices. For the valuation of the option under constant volatility, we derive a closed-form analytic solution which relaxes some of the assumptions in the Black-Scholes model. The problem of two-dimensional random diffusion of exchange rates and volatilities is treated with a present value scheme and with mean-reverting and non-mean-reverting stochastic volatility models. A multi-factor Gaussian distribution function is applied to lognormal asset dynamics sampled from a normal distribution, which we generate by the Box-Muller method and make interdependent by Cholesky factor matrix decomposition. Furthermore, a Monte Carlo simulation method is adopted to approximate a general form of numerical solution. The historical data considered date from 31 December 1997 to 30 June 2008. The basket contains ZAR as the base currency; USD, GBP, EUR and JPY are the foreign currencies.
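The Box-Muller / Cholesky / Monte Carlo chain described above can be sketched in a few lines for a constant-volatility basket call. Everything below (basket composition, volatilities, the equicorrelation matrix, strike and rate) is an illustrative assumption rather than the data or models of the thesis; it only shows how independent normals are generated, correlated and mapped to lognormal terminal rates.

```python
import numpy as np

def box_muller(n, rng):
    """Generate n standard normal variates with the Box-Muller transform."""
    u1, u2 = 1.0 - rng.random(n), rng.random(n)     # avoid log(0)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

def basket_call_mc(s0, vol, corr, weights, k, r, t, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call on a weighted currency basket,
    assuming correlated lognormal dynamics under constant volatility."""
    rng = np.random.default_rng(seed)
    d = len(s0)
    l = np.linalg.cholesky(corr)                             # Cholesky factor
    z = np.array([box_muller(n_paths, rng) for _ in range(d)])  # iid normals
    zc = l @ z                                               # correlated normals
    st = s0[:, None] * np.exp((r - 0.5 * vol[:, None] ** 2) * t
                              + vol[:, None] * np.sqrt(t) * zc)
    payoff = np.maximum(weights @ st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Illustrative 4-currency basket with equicorrelation 0.5 (assumed numbers)
s0 = np.array([1.0, 1.2, 0.9, 1.1])
vol = np.array([0.12, 0.10, 0.15, 0.11])
corr = np.full((4, 4), 0.5)
np.fill_diagonal(corr, 1.0)
weights = np.array([0.25, 0.25, 0.25, 0.25])
print("basket call ~", basket_call_mc(s0, vol, corr, weights, k=1.05, r=0.03, t=1.0))
```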
13

Xiang, Gong. "Motion Dynamics of Dropped Cylindrical Objects." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2340.

Full text
Abstract:
Dropped objects are among the top ten causes of fatalities and serious injuries in the oil and gas industry. Objects may be dropped during lifting or any other offshore operation. Health, safety and environment (HSE) concerns, as well as possible damage to structures, require the prediction of where and how a dropped object moves underwater. This study of dropped objects is subdivided into three parts. In the first part, the experimental and simulated results published by Aanesland (1987) have been successfully reproduced and validated based on a two-dimensional (2D) theory for a dropped drilling pipe model. A new three-dimensional (3D) theory is proposed to consider the effect of axial rotation on dropped cylindrical objects. The 3D method is based on a modified slender body theory for maneuvering. A numerical tool called the Dropped Objects Simulator (DROBS) has been developed based on this 3D theory. Firstly, simulated results of a dropped drilling pipe model using the 2D theory of Aanesland (1987) are compared with results from the 3D theory when the rolling frequency is zero. Good agreement is found. Further, factors that affect the trajectory, such as drop angle, normal drag coefficient, binormal drag coefficient, and rolling frequency, are systematically investigated. It is found that drop angle, normal drag coefficient, and rolling frequency are the three most critical factors determining the trajectories. In the second part, a more general three-dimensional (3D) theory is proposed to physically simulate the dynamic motion of a dropped cylindrical object underwater with different longitudinal centers of gravity (LCG). DROBS has been further developed based on this 3D theory. It is initially applied to a dropped cylinder with LCG = 0 (cylinder #1) falling from the surface of calm water. The calculated trajectories match very well with both the experimental and numerical results published in Aanesland (1987). Then DROBS is further utilized to simulate two dropped cylinders with positive LCG (cylinder #2) and negative LCG (cylinder #3) from Chu et al. (2005), respectively. The simulated results from DROBS show better agreement with the measured data than the numerical results given in Chu et al. (2005). This comparison again validates and indicates the effectiveness of the DROBS program. Finally, it is applied to investigate the effects of varying LCG on the trajectory and landing points. Therefore, the newly developed DROBS program could be used to simulate the distribution of landing points of dropped cylindrical objects, which is very valuable for risk-free zone prediction in offshore engineering. The third part investigates the dynamic motion of a dropped cylindrical object under current. A numerical procedure is developed and integrated into the Dropped Objects Simulator (DROBS). DROBS is utilized to simulate the trajectories of a cylinder when dropped into currents from different directions (incoming angles of 0°, 90°, 180° and 270°) and with different amplitudes (0 m/s to 1.0 m/s). It is found that the trajectories and landing points of dropped cylinders are greatly influenced by currents. Cylinders falling into water are modeled as a stochastic process. Therefore, the related parameters, including the orientation angle, translational velocity and rotational velocity of the cylindrical object after fully entering the water, are assumed to follow normal distributions. DROBS is further used to derive the landing point distribution of a cylinder.
The results are compared to Awotahegn (2015) based on Monte Carlo simulations. Then the Monte Carlo simulations are used for predicting the landing point distribution of dropped cylinders with drop angles from 0° to 90° under the influence of currents. The plots of the overall landing point distribution and the impact energy distribution on the seabed provide a simple way to indicate the risk-free zones for offshore operation.
14

Nguyen, Cu Ngoc. "Stochastic differential equations with long-memory input." Thesis, Queensland University of Technology, 2001.

Find full text
15

Kim, Bumsoo. "Motion control of an autonomous vehicle with loss of wheel-ground contact avoidance using dynamic model based predictive control." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58286.pdf.

Full text
16

Wesselhöfft, Niels. "Utilizing self-similar stochastic processes to model rare events in finance." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22360.

Full text
Abstract:
In der Statistik und der Mathematik ist die Normalverteilung der am meisten verbreitete, stochastische Term für die Mehrheit der statistischen Modelle. Wir zeigen, dass der entsprechende stochastische Prozess, die Brownsche Bewegung, drei entscheidende empirische Beobachtungen nicht abbildet: schwere Ränder, Langzeitabhängigkeiten und Skalierungsgesetze. Ein selbstähnlicher Prozess, der in der Lage ist Langzeitabhängigkeiten zu modellieren, ist die Gebrochene Brownsche Bewegung, welche durch die Faltung der Inkremente im Limit nicht normalverteilt sein muss. Die Inkremente der Gebrochenen Brownschen Bewegung können durch einen Parameter H, dem Hurst Exponenten, Langzeitabhängigkeiten darstellt werden. Für die Gebrochene Brownsche Bewegung müssten die Skalierungs-(Hurst-) Exponenten über die Momente verschiedener Ordnung konstant sein. Empirisch beobachten wir variierende Hölder-Exponenten, die multifraktales Verhalten implizieren. Wir erklären dieses multifraktale Verhalten durch die Änderung des alpha-stabilen Indizes der alpha-stabilen Verteilung, indem wir Filter für Saisonalitäten und Langzeitabhängigkeiten über verschiedene Zeitfrequenzen anwenden, startend bei 1-minütigen Hochfrequenzdaten. Durch die Anwendung eines Filters für die Langzeitabhängigkeit zeigen wir, dass die Residuen des stochastischen Prozesses geringer Zeitfrequenz (wöchentlich) durch die alpha-stabile Bewegung beschrieben werden können. Dies erlaubt es uns, den empirischen, hochfrequenten Datensatz auf die niederfrequente Zeitfrequenz zu skalieren. Die generierten wöchentlichen Daten aus der Frequenz-Reskalierungs-Methode (FRM) haben schwerere Ränder als der ursprüngliche, wöchentliche Prozess. Wir zeigen, dass eine Teilmenge des Datensatzes genügt, um aus Risikosicht bessere Vorhersagen für den gesamten Datensatz zu erzielen. Im Besonderen wäre die Frequenz-Reskalierungs-Methode (FRM) in der Lage gewesen, die seltenen Events der Finanzkrise 2008 zu modellieren.
Coming from a sphere of statistics and mathematics in which the Normal distribution is the dominating underlying stochastic term for the majority of models, we indicate that the relevant diffusion, the Brownian Motion, does not account for three crucial empirical observations in financial data: heavy tails, long memory and scaling laws. A self-similar process which is able to account for long-memory behavior is the Fractional Brownian Motion, which has a possible non-Gaussian limit under convolution of the increments. The increments of the Fractional Brownian Motion can exhibit long memory through a parameter H, the Hurst exponent. For the Fractional Brownian Motion this scaling (Hurst) exponent would be constant over different orders of moments, being unifractal. But empirically we observe varying Hölder exponents, a continuum of Hurst exponents, which implies multifractal behavior. We explain the multifractal behavior through the changing alpha-stable indices of the alpha-stable distributions over sampling frequencies, by applying filters for seasonality and time dependence (long memory) over different sampling frequencies, starting from high-frequency data at the one-minute level. By utilizing a filter for long memory we show that the low-sampling-frequency process, not containing the time dependence component, can be governed by the alpha-stable motion. Under the alpha-stable motion we propose a semiparametric method coined the Frequency Rescaling Methodology (FRM), which allows the filtered high-frequency data set to be rescaled to the lower sampling frequency. The data sets for, e.g., weekly data, which we obtain by rescaling high-frequency data with the Frequency Rescaling Method (FRM), are more heavy-tailed than those we observe empirically. We show that using a subset of the whole data set suffices for the FRM to obtain a better forecast in terms of risk for the whole data set. Specifically, the FRM would have been able to account for the tail events of the 2008 financial crisis.
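As background to the fractional Brownian motion and Hurst-exponent discussion, the sketch below simulates an fBm path by Cholesky factorization of its covariance function and recovers H from the scaling of increment standard deviations. The simulation method and the crude estimator are generic textbook choices, not the FRM methodology of the thesis.

```python
import numpy as np

def fbm_cholesky(n, hurst, dt=1.0, seed=0):
    """Simulate fractional Brownian motion on a regular grid by Cholesky
    factorization of its covariance Cov(B_s, B_t) = 0.5(s^2H + t^2H - |t-s|^2H)."""
    rng = np.random.default_rng(seed)
    t = dt * np.arange(1, n + 1)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_from_scaling(x, lags=range(2, 50)):
    """Crude Hurst estimate from the scaling std(x[t+k] - x[t]) ~ k^H."""
    lags = np.array(list(lags))
    sd = np.array([np.std(x[k:] - x[:-k]) for k in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(sd), 1)
    return slope

t, path = fbm_cholesky(n=1000, hurst=0.7)
print("estimated H ~", round(hurst_from_scaling(path), 2))
```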
17

Kardoš, Juraj. "Návrh systému Auto Taxi pro letoun." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234930.

Full text
Abstract:
Recent studies predict growth in the number of passengers using air transport. This trend will require the introduction of new routes, resulting in denser air traffic that will mainly strain airports in metropolitan areas. Automated control of aircraft taxiing will allow smaller separations between individual flights and will increase the arrival and departure capacity of airports. This thesis deals with the design of a model of the ground motion of a transport aircraft with respect to various operating conditions, such as the state of the runway surface under different weather and varying aircraft operating parameters (tyre pressure, landing-gear loads, etc.). Validation of the model was based on tracking the turn radius for different nose-gear steering angles. The simulation results were validated against an analytical model of the Ackermann geometry and against a Boeing specification document intended for planning aircraft movement at the airport. The results demonstrated the accuracy of the model and confirmed its possible use for real-time simulations.
18

Curtis, Andrew B. "Path Planning for Unmanned Air and Ground Vehicles in Urban Environments." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2270.pdf.

Full text
19

Bari, Md Wasiul. "Modelling of ground improvement by vertical drains in highly variable soils." Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/2593.

Full text
Abstract:
The research presented in this thesis focuses on the probabilistic modelling of soil consolidation via prefabricated vertical drains (PVDs), considering soil spatial variability. Soils are highly variable from one point to another in the ground, and since this variability is often coupled with inadequate site data, probabilistic analysis is a more rational approach to assess the behaviour of soil consolidation by PVDs. Although the fact that spatial variation of soil properties can affect soil consolidation has long been realized, the design of soil consolidation via PVDs has traditionally been carried out deterministically and thus can be misleading, because it ignores the uncertainty associated with the inherent spatial variation of soil properties. One of the major advantages of probabilistic modelling over the deterministic approach is that it can explicitly incorporate soil spatial variability in the analysis and design of a geotechnical problem and subsequently provides much physical insight into the impact of soil spatial variability on the behaviour of the problem under consideration. However, owing to the complexity of the stochastic problem, available research into consolidation of highly variable soils has been limited. The review of relevant literature has indicated that soil spatial variability in relation to ground improvement by PVDs has never previously been considered in design in a systematic, scientific manner, and little research has been done in this area. Therefore, to obtain a more realistic measure of the degree of consolidation at any specified time, the effect of soil spatial variability needs to be taken into account by employing a probabilistic modelling approach. Among several available methods of stochastic modelling, the random finite element method (RFEM), using random variable soil input properties in a Monte Carlo framework, has gained much popularity in recent years. The same approach is adopted in the present research for modelling soil spatial variability in soil consolidation by PVDs. The soil permeability, k, and volume compressibility, mv, are considered as random variables, and the variability of both k and mv is characterised statistically in terms of the mean, standard deviation, lognormal probability distribution and scale of fluctuation (SOF). The random fields of k and mv are generated using the 2D local average subdivision (LAS) method developed by Fenton and Vanmarcke (1990). The generated random fields are then used as inputs in stochastic finite element modelling of soil consolidation by PVDs. In this research, all numerical analyses are carried out using the 2D finite element computer program AFENA (Carter and Balaam 1995), in which the consolidation process of soil is treated as a coupled transient problem governed by Biot's consolidation theory (Biot 1941).
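The core random-field step described above (a lognormal property with a prescribed mean, coefficient of variation and scale of fluctuation) can be illustrated as follows. The thesis uses the 2D Local Average Subdivision method; the sketch instead uses a simple 1D Cholesky factorization of a Markovian correlation model as a stand-in, and the harmonic mean is computed only as a crude per-realization summary for a Monte Carlo loop. All numerical values are assumptions.

```python
import numpy as np

def lognormal_random_field(n, dx, mean, cov, theta, seed=0):
    """1-D stationary lognormal random field with prescribed mean, coefficient
    of variation (cov) and scale of fluctuation (theta), generated by Cholesky
    factorization of a Markovian correlation model. (The thesis uses 2-D Local
    Average Subdivision; Cholesky is used here only as a simpler stand-in.)"""
    rng = np.random.default_rng(seed)
    z = dx * np.arange(n)
    # underlying Gaussian field parameters from the lognormal moments
    sln = np.sqrt(np.log(1.0 + cov ** 2))
    mln = np.log(mean) - 0.5 * sln ** 2
    # Markovian (exponential) correlation: rho(tau) = exp(-2|tau|/theta)
    rho = np.exp(-2.0 * np.abs(z[:, None] - z[None, :]) / theta)
    g = np.linalg.cholesky(rho + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
    return z, np.exp(mln + sln * g)

# Monte Carlo over realizations of permeability k (illustrative statistics only)
effective_k = []
for i in range(500):
    z, k = lognormal_random_field(n=100, dx=0.1, mean=1e-9, cov=0.8,
                                  theta=1.0, seed=i)
    effective_k.append(len(k) / np.sum(1.0 / k))   # harmonic mean (vertical flow)
print("mean / std of effective k:", np.mean(effective_k), np.std(effective_k))
```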
20

Birgoren, Gulum. "Strong motion simulation of the 1999 earthquakes in western Turkey : Stochastic Green's Function Technique with characterized source model and phase dependent site response." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147832.

Full text
21

Rosen, Mary Ellen Furner. "Mean Square Displacement for a Discrete Centroid Model of Cell Motion and a Mathematical Analysis of Focal Adhesion Lifetimes and Their Effect on Cell Motility." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8780.

Full text
Abstract:
One of the characteristics that distinguishes living things from non-living things is motility. On the cellular level, the motility or non-motility of different types of cells can be life building, life-saving or life-threatening. A thorough study of cell motion is needed to help understand the underlying mechanisms that enhance or prohibit cell motion. We introduce a discrete centroid model of cell motion in the context of a generalized random walk. We find an approximation for the theoretical mean square displacement (MSD) that uses a subset of the state space to estimate the MSD for the entire space. We give some intuition as to why this is an unexpectedly good estimate. A lower and upper bound for the MSD is also given. We extend the centroid model to an ODE model and use it to analyze the distribution of focal adhesion (FA) lifetimes gathered from experimental data. We found that in all but one case a unimodal, non-symmetric gamma distribution is a good match for the experimental data. We use a detach-rate function in the ODE model to determine how long a FA will persist before it detaches. A detach-rate function that is dependent on both force and time produces distributions with a best fit gamma curve that closely matches the data. Using the data gathered from the matching simulations, we calculate both the cell speed and mean FA lifetime and compare them. Where available, we also compare this relationship to that of the experimental data and find that the simulation reasonably matches it in most cases. In both the simulations and experimental data, the cell speed and mean FA lifetime are related, with longer mean lifetimes being indicative of slower speeds. We suspect that one of the main predictors of cell speed for migrating cells is the distribution of the FA lifetimes.
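Two of the quantities discussed above, the mean square displacement of a random walk and a gamma fit to focal-adhesion lifetimes, can be reproduced generically as in the sketch below. The walk is a plain 2D Gaussian random walk rather than the discrete centroid model of the thesis, and the "lifetimes" are synthetic stand-in data, so the numbers carry no biological meaning.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Mean square displacement of a simple 2-D random walk (illustrative only)
n_cells, n_steps = 500, 200
steps = rng.normal(0.0, 1.0, size=(n_cells, n_steps, 2))
paths = np.cumsum(steps, axis=1)                 # positions over time
msd = (paths ** 2).sum(axis=2).mean(axis=0)      # average over "cells"
print("MSD grows ~ linearly: MSD[10], MSD[100] =", msd[9], msd[99])

# --- Fitting a gamma distribution to (synthetic) focal-adhesion lifetimes
lifetimes = rng.gamma(shape=2.5, scale=4.0, size=1000)      # stand-in data
shape, loc, scale = stats.gamma.fit(lifetimes, floc=0.0)    # MLE fit
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}")
print("mean lifetime ~", shape * scale)
```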
22

Pesee, Chatchai. "Stochastic modelling of financial processes with memory and semi-heavy tails." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16057/2/Chatchai%20Pesee%20Thesis.pdf.

Full text
Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those processes generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable to model financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classification of financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
23

Pesee, Chatchai. "Stochastic Modelling of Financial Processes with Memory and Semi-Heavy Tails." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16057/.

Full text
Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those processes generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable to model financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classification of financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
24

Aloi, Daniel Nicholas. "Development and verification of a mathematical model to investigate the effects of earth-surface-based multipath reflections at a differential global positioning system ground reference site." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175264170.

Full text
25

Dujardin, Alain. "Prédiction des mouvements du sol dus à un séisme : différences de décroissance entre petits et gros séismes et simulations large bande par fonctions de Green empiriques." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4070/document.

Full text
Abstract:
La prédiction des mouvements du sol générés par un séisme est un enjeu majeur pour la prise en compte du risque sismique. C’est l’un des objectifs du projet SIGMA dans le cadre duquel j’ai réalisé ma thèse. Celle-ci se compose de deux parties. La première se concentre sur la dépendance à la magnitude de la décroissance des paramètres des mouvements du sol avec la distance. Celle-ci est un sujet de préoccupation aussi bien pour l’utilisation des relations d’atténuation (GMPEs), que pour les méthodes basées sur l’utilisation de petits évènements en tant que fonctions de Green empiriques. Nous avons démontré qu’aux distances les plus faibles (inférieures à la longueur de la faille), l'effet de saturation dû aux dimensions de la faille est prépondérant. Aux distances plus importantes, l'effet de l’atténuation anélastique devient prépondérant. Nous avons donc montré qu’il pouvait être délicat de mélanger des données de différentes régions dans les GMPEs, et validé l’utilisation des fonctions de Green empiriques à toutes les distances. Dans la deuxième partie sont testées 3 différentes méthodes de simulations dans un contexte complexe : un code combinant une source étendue en k2 et des EGFs, un code point-source EGFs et un code stochastique. Nous avons choisi de travailler sur le séisme de magnitude Mw 5.9 (29 mai 2012) situé dans un bassin sédimentaire profond (la plaine du Po), et qui a engendré des sismogrammes souvent dominés par les ondes de surface. On y démontre que sans connaissance à priori du milieu de propagation, les méthodes basées sur des EGF permettent de reproduire les ondes de surface, les valeurs de PGA, de PGV, ainsi que les durées des signaux générés
The prediction of the ground motion generated by an earthquake is a major issue for the consideration of seismic risk. This is one of the objectives of the SIGMA project, within which I carried out my thesis. The thesis consists of two parts. The first focuses on the magnitude dependence of the decay of ground-motion parameters with distance. This is a concern both for the use of attenuation relations (GMPEs) and for methods based on the use of small events as empirical Green's functions. We have shown that at the shortest distances (less than the length of the fault), the saturation effect due to the fault size is preponderant. At larger distances, the anelastic attenuation effect becomes predominant. We have thus shown that it can be tricky to mix data from different regions in GMPEs, and we validated the use of empirical Green's functions at all distances. In the second part, three different simulation methods are tested in a complex context: a code combining an extended k² source with EGFs, a point-source code with EGFs, and a stochastic code. We chose to work on the Mw 5.9 earthquake of May 29, 2012, which occurred in a deep sedimentary basin (the Po plain) and generated seismograms often dominated by surface waves. We show that, without a priori knowledge of the propagation medium, methods based on EGFs can reproduce the surface waves, the values of PGA and PGV, and the durations of the generated signals.
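For context, the distance decay discussed in the first part is usually described, in the far-field Fourier amplitude spectrum, by the product of a geometric spreading term and an anelastic attenuation term, e.g.

```latex
A(f,R) \;\propto\; \frac{1}{R}\,
\exp\!\left(-\frac{\pi f R}{Q(f)\,\beta}\right),
```

where R is the source-to-site distance, β the shear-wave velocity and Q(f) the quality factor; the 1/R term dominates at short distances, while the exponential anelastic term takes over at large distances and high frequencies. This standard form is quoted only as background; the thesis additionally deals with finite-fault saturation at distances shorter than the fault length.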
26

Bayhan, Beyhan. "Buildings Under Recurring Near-field Earthquakes." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612424/index.pdf.

Full text
Abstract:
To the best of our knowledge, prior to this study no cast-in-place, older-type RC building had ever been subjected to near-field strong ground motions from three major earthquakes. This happened in an indirect way in Turkey over a time span of eleven years. Three identical buildings belonging to the Ministry of Public Works and Resettlement (MPWR), built to the same design templates, experienced the March 13th 1992 Erzincan earthquake in Erzincan, the November 12th 1999 Düzce earthquake in Bolu and the May 1st 2003 Bingöl earthquake in Bingöl, respectively. The ground motion sensor stations were fortuitously nearby, in an adjacent single-story building, in Bolu and Bingöl. The station in Erzincan was in a single-story building about 2 km away from the case study building, but we assume that the record applies to the building there. These three records represent the characteristics of near-field ground motions, and the distance of the sensor stations to the nearest fault trace was less than 10 km. The buildings sustained varying degrees of damage during the earthquakes, and their damage was surveyed through site investigations. Given that the damage information, input motions, design drawings and material properties of the buildings are all known, this provided an opportunity to predict the structural damage to these buildings by proper modeling using the tools of current computational performance assessment procedures. Accordingly, three-dimensional (3D) analytical models of the MPWR buildings have been developed. Bi-directional excitations have been applied to the models in nonlinear time history analyses (NTHA). The results illustrate that NTHA are capable of indicating the occurrence of shear failure in captive columns; however, they overestimate the global damage level for all buildings. The overestimation is more significant in the Erzincan case, where the building sustained a pulse-type motion without significant distress.
27

Laurendeau, Aurore. "Définition des mouvements sismiques "au rocher." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENU036/document.

Full text
Abstract:
L'objectif de cette thèse vise à améliorer la définition des vibrations (« mouvement sismique ») sur des sites « durs » (sédiments raides ou rochers) liés à des scénarios (séismes de magnitude entre 5 et 6.5, distances inférieures à 50 kilomètres) représentatifs du contexte métropolitain français. Afin de contraindre ces mouvements sismiques sur sites « durs », une base de données accélérométriques a été construite, à partir des enregistrements accélérométriques japonais K-NET et KiK-net qui ont l'avantage d'être publiques, nombreux et de grande qualité. Un modèle de prédiction des mouvements sismiques (spectre de réponse en accélération) a été conçu à partir de cette nouvelle base. La comparaison entre modèles théoriques et observations montre la dépendance des vibrations sur sites rocheux à la fois aux caractéristiques de vitesse du site (paramètre classique décrivant la vitesse des ondes S dans les 30 derniers mètres) et aux mécanismes d'atténuation hautes fréquences (un phénomène très peu étudié jusque-là). Ces résultats confirment une corrélation entre ces deux mécanismes (les sites rocheux les plus mous atténuent plus le mouvement sismique à hautes fréquences) et nous proposons un modèle de prédiction du mouvement sismique prenant en compte l'ensemble des propriétés du site (atténuation et vitesse). Les méthodes nouvelles de dimensionnement dynamiques non linéaires (à la fois géotechniques et structurelles) ne se satisfont pas des spectres de réponse mais requièrent des traces temporelles. Dans le but de générer de telles traces temporelles, la méthode stochastique non stationnaire développée antérieurement par Pousse et al. 2006 a été revisitée. Cette méthode semi-empirique nécessite de définir au préalable les distributions des indicateurs clés du mouvement sismique. Nous avons ainsi développé des modèles de prédiction empiriques pour la durée de phase forte, l'intensité d'Arias et la fréquence centrale, paramètre décrivant la variation du contenu fréquentiel au cours du temps. Les nouveaux développements de la méthode stochastique permettent de reproduire des traces temporelles sur une large bande de fréquences (0.1-50 Hz), de reproduire la non stationnarité en temps et en fréquence et la variabilité naturelle des vibrations sismiques. Cette méthode présente l'avantage d'être simple, rapide d'exécution et de considérer les bases théoriques de la sismologie (source de Brune, une enveloppe temporelle réaliste, non stationnarité et variabilité du mouvement sismique). Dans les études de génie parasismique, un nombre réduit de traces temporelles est sélectionné, et nous analysons dans une dernière partie l'impact de cette sélection sur la conservation de la variabilité naturelle des mouvements sismiques
The aim of this thesis is to improve the definition of vibrations ("seismic motion") on "hard" sites (hard soils or rocks) related to scenarios (earthquakes of magnitude between 5 and 6.5, distances less than 50 km) representative of the French metropolitan context. In order to constrain the seismic motions on "hard" sites, an accelerometric database was built from the Japanese K-NET and KiK-net recordings, which have the benefit of being public, numerous and of high quality. A ground motion prediction equation for the acceleration response spectra was developed from this new database. The comparison between theoretical models and observations shows that ground motion on rock sites depends both on the velocity characteristics of the site (the classical parameter describing the S-wave velocity over the top 30 meters) and on the high-frequency attenuation mechanisms (a phenomenon little studied up to now). These results confirm a correlation between these two mechanisms (the high-frequency seismic motion is more attenuated in the case of softer rock sites), and we propose a ground motion prediction equation taking into account all the properties of the site (attenuation and velocity). New methods of nonlinear dynamic analysis (both geotechnical and structural) are not satisfied with response spectra but require time histories. To generate such time histories, the non-stationary stochastic method previously developed by Pousse et al. (2006) has been revisited. This semi-empirical method first requires defining the distributions of key indicators of seismic motion. We have developed empirical models for predicting the duration, the Arias intensity and the central frequency, a parameter describing the variation of frequency content over time. New developments of the stochastic method make it possible to reproduce time histories over a wide frequency band (0.1-50 Hz), the non-stationarity in time and frequency, and the natural variability of seismic vibrations. This method has the advantage of being simple and fast, and of taking into account basic concepts of seismology (Brune's source, a realistic envelope function, non-stationarity and variability of seismic motion). In earthquake engineering studies, a small number of time histories is selected, and in the last part we analyze the impact of this selection on the conservation of the natural variability of ground motion.
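Two of the "key indicators" mentioned above, the Arias intensity and the significant duration, have simple standard definitions that can be computed directly from an accelerogram, as in the sketch below. The synthetic record and its envelope are arbitrary stand-ins used only to exercise the function; the central-frequency indicator of Pousse et al. (2006) is not reproduced here.

```python
import numpy as np

def arias_and_duration(acc, dt, g=9.81):
    """Arias intensity and 5-95 % significant duration of an accelerogram.
    acc in m/s^2, dt in s; Ia = pi/(2g) * integral of a(t)^2 dt."""
    cum = np.cumsum(acc ** 2) * dt
    ia = np.pi / (2.0 * g) * cum[-1]
    husid = cum / cum[-1]                         # normalized Husid plot
    t5 = dt * np.searchsorted(husid, 0.05)
    t95 = dt * np.searchsorted(husid, 0.95)
    return ia, t95 - t5

# Illustrative synthetic record: modulated white noise (not a real accelerogram)
rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0.0, 30.0, dt)
acc = rng.standard_normal(t.size) * t * np.exp(-t / 3.0)
ia, d595 = arias_and_duration(acc, dt)
print(f"Arias intensity = {ia:.2f} m/s, D5-95 = {d595:.1f} s")
```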
APA, Harvard, Vancouver, ISO, and other styles
28

Ocak, Recai Soner. "Probabilistic Seismic Hazard Assessment Of Eastern Marmara And Evaluation Of Turkish Earthquake Code Requirements." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613902/index.pdf.

Full text
Abstract:
The primary objective of this study is to evaluate the seismic hazard in the Eastern Marmara Region using improved seismic source models and enhanced ground motion prediction models through a probabilistic approach. The geometry of the fault zones (length, width, dip angle, segmentation points, etc.) is determined with the help of available fault maps and source lines traced on satellite images. The state-of-the-art rupture model proposed by the USGS Working Group in 2002 is applied to the source system. A composite recurrence model is used for all seismic sources in the region to represent the characteristic behavior of the North Anatolian Fault. New and improved global ground motion models (NGA models) are used to model the ground motion variability in this study. Previous studies, in general, used regional models or older ground motion prediction models which were updated by their developers during the NGA project. The new NGA models are improved in terms of additional prediction parameters (such as source depth, basin effects, site-dependent standard deviations), statistical approach, and a very well constrained global database. The use of NGA models reduced the epistemic uncertainty in the total hazard incorporated by regional or older models based on smaller datasets. The results of the study are presented in terms of hazard curves, deaggregation of the hazard, and uniform hazard spectra for six main locations in the region (Adapazari, Duzce, Golcuk, Izmit, Iznik, and Sapanca city centers) to provide a basis for seismic design of special structures in the area. Hazard maps of the region for rock-site conditions at the risk levels accepted by the Turkish Earthquake Code (TEC-2007) are provided to allow the user to perform site-specific hazard assessment for local site conditions and to develop a site-specific design spectrum. A comparison of the TEC-2007 design spectrum with the uniform hazard spectrum developed for the selected locations is also presented for future reference.
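For readers unfamiliar with the hazard integral behind such hazard curves, the following minimal Python sketch (not from the thesis; the toy GMPE and scenario rates are invented for illustration) shows how a mean annual frequency of exceedance is assembled from scenario occurrence rates and a lognormal ground-motion model:

```python
import numpy as np
from scipy.stats import norm

# Toy GMPE: mean ln(PGA in g) and sigma for magnitude m at distance r (illustrative only).
def gmpe_ln_pga(m, r_km):
    mean = -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0)
    sigma = 0.6
    return mean, sigma

def hazard_curve(pga_levels, mags, annual_rates, r_km):
    """Mean annual frequency of exceeding each PGA level, summed over scenarios."""
    lam = np.zeros_like(pga_levels)
    for m, rate in zip(mags, annual_rates):
        mu, sig = gmpe_ln_pga(m, r_km)
        # probability of exceedance for each PGA level given this scenario
        p_exc = 1.0 - norm.cdf((np.log(pga_levels) - mu) / sig)
        lam += rate * p_exc
    return lam

pga = np.logspace(-2, 0, 50)           # 0.01 g to 1 g
mags = np.array([5.0, 6.0, 7.0])       # magnitude bins (placeholders)
rates = np.array([0.1, 0.02, 0.004])   # annual occurrence rates per bin (placeholders)
print(hazard_curve(pga, mags, rates, r_km=20.0)[:5])
```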
APA, Harvard, Vancouver, ISO, and other styles
29

Menes, Matheus Dorival Leonardo Bombonato. "Versão discreta do modelo de elasticidade constante da variância." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16042013-151325/.

Full text
Abstract:
In this work we propose a market model based on a random discretization of Brownian motion proposed by Leão & Ohashi (2010). With this model, for any given payoff function, we develop a hedging strategy and a methodology for option pricing.
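As a generic illustration only (this is a plain Euler-Maruyama time grid, not the random discretization of Leão & Ohashi used in the thesis, and all parameter values are assumptions), a constant-elasticity-of-variance price process and a Monte Carlo payoff evaluation might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cev(s0=100.0, r=0.02, sigma=0.5, beta=0.8, T=1.0, n_steps=252, n_paths=10_000):
    """Euler-Maruyama simulation of dS = r*S dt + sigma*S**beta dW (illustrative sketch)."""
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        s = np.maximum(s + r * s * dt + sigma * s**beta * dw, 0.0)
    return s

# Monte Carlo price of a European call as a simple payoff example
s_T = simulate_cev()
K = 100.0
price = np.exp(-0.02 * 1.0) * np.mean(np.maximum(s_T - K, 0.0))
print(f"MC call price ~ {price:.2f}")
```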
APA, Harvard, Vancouver, ISO, and other styles
30

Newbury, James. "Limit order books, diffusion approximations and reflected SPDEs : from microscopic to macroscopic models." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:825d9465-842b-424b-99d0-ff4dfa9ebfc5.

Full text
Abstract:
Motivated by a zero-intelligence approach, the aim of this thesis is to unify the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks of limit order books, with a view to providing a novel yet analytically tractable description of their behaviour in a high to ultra high-frequency setting. Starting with the canonical microscopic framework, the first part of the thesis examines the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. Mathematically speaking, this amounts to establishing the weak convergence of a discrete-space process to a mesoscopic diffusion limit. This step is initially carried out in a reduced-form context, in other words, by simply looking at the best bid and ask queues, before the procedure is extended to the whole book. This subsequently leads us to the second part of the thesis, which is devoted to the transition between mesoscopic and macroscopic models of limit order books, where the general idea is to send the tick size to zero, or equivalently, to consider infinitely many price levels. The macroscopic limit is then described in terms of reflected SPDEs which typically arise in stochastic interface models. Numerical applications are finally presented, notably via the simulation of the mesoscopic and macroscopic limits, which can be used as market simulators for short-term price prediction or optimal execution strategies.
APA, Harvard, Vancouver, ISO, and other styles
31

Eliasson, Peder. "Emittance preservation and luminosity tuning in future linear colliders." Doctoral thesis, Uppsala University, Department of Physics and Astronomy, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8576.

Full text
Abstract:

The future International Linear Collider (ILC) and Compact Linear Collider (CLIC) are intended for precision measurements of phenomena discovered at the Large Hadron Collider (LHC) and also for the discovery of new physics. In order to offer optimal conditions for such experiments, the new colliders must produce very-high-luminosity collisions at energies in the TeV regime.

Emittance growth caused by imperfections in the main linacs is one of the factors limiting the luminosity of CLIC and ILC. In this thesis, various emittance preservation and luminosity tuning techniques have been tested and developed in order to meet the challenging luminosity requirements.

Beam-based alignment was shown to be insufficient for reduction of emittance growth. Emittance tuning bumps provide an additional powerful preservation tool. After initial studies of tuning bumps designed to treat certain imperfections, a general strategy for design of optimised bumps was developed. The new bumps are optimal both in terms of emittance reduction performance and convergence speed. They were clearly faster than previous bumps and reduced emittance growth by nearly two orders of magnitude both for CLIC and ILC.

Time-dependent imperfections such as ground motion and magnet vibrations also limit the performance of the colliders. This type of imperfection was studied in detail, and a new feedback system for optimal reduction of emittance growth was developed and shown to be approximately ten times more efficient than standard trajectory feedbacks.

The emittance tuning bumps require fast and accurate diagnostics. The possibility of measuring emittance using a wide laserwire was introduced and simulated with promising results. While luminosity cannot be directly measured fast enough, it was shown that a beamstrahlung tuning signal could be used for efficient optimisation of a number of collision parameters using tuning bumps in the Final Focus System.

Complete simulations of CLIC emittance tuning bumps, including static and dynamic imperfections and realistic tuning and emittance measurement procedures, showed that an emittance growth six times lower than that required may be obtained using these methods.

APA, Harvard, Vancouver, ISO, and other styles
32

Hor, Boramy. "Évaluation et réduction des conséquences des mouvements de terrains sur le bâti : approches expérimentale et numérique." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00737787.

Full text
Abstract:
The instability of underground cavities (mines, quarries, tunnels, etc.) can induce ground movements of sufficient amplitude to damage buildings and infrastructure at the surface. The traditional methods used in engineering practice to predict deformations in structures are based on the characteristics of ground movements under greenfield conditions, without taking into account the effect of the presence of structures at the surface. The objective of this thesis is, on the one hand, to predict the deformations of structures while accounting for the influence of soil-structure interaction and, on the other hand, to evaluate the performance of a protective solution (peripheral trench). This was achieved by carrying out parametric studies using two complementary approaches: an experimental approach based on a 3D small-scale physical model under normal gravity, and 3D numerical modelling with the finite element method. In particular, the effect of several geometric and mechanical parameters was investigated in the soil-structure interaction study: the position of the structure with respect to the subsidence trough, the weight of the structure, and the relative stiffness between the soil and the structure. Regarding the study of the efficiency of peripheral trenches, the effects of the position of the structure, the position of the trench with respect to the structure, and the stiffness of the trench were analysed. The results obtained led to a better understanding of the soil-structure interaction problem and showed the importance of this effect, which must be taken into account when assessing the vulnerability of buildings. The transfer of ground movements to the structure is small (less than 2.5%) in the modelled case of a rigid structure with a sliding interface. The various results also highlighted the efficiency of the peripheral trench in reducing the loads affecting the structures. The trench must be filled with a highly deformable material and, above all, placed at a distance of about one metre from the structure.
APA, Harvard, Vancouver, ISO, and other styles
33

Allez, Romain. "Chaos multiplicatif Gaussien, matrices aléatoires et applications." Phd thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00780270.

Full text
Abstract:
In this work, we are interested, on the one hand, in the theory of Gaussian multiplicative chaos introduced by Kahane in 1985 and, on the other hand, in random matrix theory, whose pioneers are Wigner, Wishart and Dyson. The first part of this manuscript contains a brief introduction to these two theories as well as a short account of the personal contributions of this manuscript. The following parts contain the texts of the published articles [1], [2], [3], [4], [5] and preprints [6], [7], [8] on these results, in which the reader will find more detailed developments.
APA, Harvard, Vancouver, ISO, and other styles
34

Dépée, Alexis. "Etude expérimentale et théorique des mécanismes microphysiques mis en jeu dans la capture des aérosols radioactifs par les nuages." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC057.

Full text
Abstract:
Atmospheric particles are a key topic in many social issues. Their presence in the atmosphere is a meteorological and climatic subject as well as a public health concern, since these particles are correlated with the increase of cardiovascular diseases. In particular, radioactive particles emitted as a result of a nuclear accident can jeopardise ecosystems for decades. The recent accident at the Fukushima Daiichi nuclear power plant in 2011 reminds us that the risk, even if extremely unlikely, exists. After a release of nuclear material into the atmosphere, nanometric particles diffuse and coagulate, while micrometric particles settle due to gravity. Intermediate-size particles, however, can be transported on a global scale, and the main mechanism involved in their scavenging comes from the interaction with clouds and their precipitation. To improve knowledge of ground contamination after such accidental releases, understanding in-cloud particle collection is therefore essential. For this purpose, a microphysical model is implemented in this work, including the microphysical mechanisms acting on particle collection by cloud droplets, notably electrostatic forces, since radionuclides are well known to become significantly charged. Laboratory measurements are then conducted with In-CASE (In-Cloud Aerosols Scavenging Experiment), a novel experiment built in this work, to compare modelling and observations, again at a microphysical scale where every parameter influencing in-cloud particle collection is controlled. Furthermore, two systems to electrically charge particles and droplets are constructed to set the electric charges carefully, while the relative humidity level is also regulated. The new results on particle collection by cloud droplets, related to electrostatic forces among other effects, are then incorporated into the convective cloud model DESCAM (Detailed SCAvenging Model). This detailed microphysical model describes a cloud from its formation to precipitation, allowing a meso-scale study of the impact of the new data on particle scavenging. Moreover, some changes are made in DESCAM to extend the study to stratiform clouds, since most French precipitation comes from stratiform systems. Finally, this work paves the way for improved modelling of atmospheric particle scavenging, including ground contamination in the crisis model used by the French Institute for Radiological Protection and Nuclear Safety.
APA, Harvard, Vancouver, ISO, and other styles
35

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
APA, Harvard, Vancouver, ISO, and other styles
36

Naguit, Muriel. "Towards Earthquake-resilient Buildings: Rupture Process & Exposure/Damage Analysis of the 2013 M7.1 Bohol Philippines Earthquake." Phd thesis, 2017. http://hdl.handle.net/1885/117284.

Full text
Abstract:
The strong ground shaking due to the Mw7.1 Bohol Philippines earthquake left a significant imprint on its built environment. Two key factors defining this event are the wide spread of seismic intensities inferred to have shaken the island and the extensive building damage. These make the Bohol earthquake an important opportunity to improve knowledge on building fragility and vulnerability. However, this requires a statistical description of building damage and a reliable source model for accurate estimation of earthquake ground motion. To this end, an extensive survey was conducted, leading to a robust description of over 25,000 damaged and undamaged structures. This comprehensive database represents a mix of construction types at various intensity levels, in both urban and rural settings. For the ground motion estimation, the geometry and slip distribution of the finite source models were based on the analysis of SAR data, aftershocks and tele-seismic waveforms. Ground motion fields were simulated and compared using two methods: stochastic modeling and a suite of ground motion prediction equations. The intensity-converted ground motions were calibrated and associated with the exposure-damage database to derive empirical fragility and vulnerability models for typical building types in Bohol. These newly derived models were used to validate the building fragility and vulnerability models already in use in the Philippines. This post-event assessment emphasizes the importance of assembling an exposure-damage database whenever damaging earthquakes occur. The sensitivity of fragility functions to ground motion inputs is also highlighted. Results indicate that the pattern of damage is best captured by the stochastic finite-fault simulation, although the Zhao et al. (2006) ground motion model registers a comparable range of ground motions. Constraints were placed on seismic building fragility and vulnerability models, which can promote more effective implementation of building regulations and construction practices and deliver credible impact forecasts.
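To illustrate the kind of empirical fragility derivation described above, here is a hedged Python sketch that fits a lognormal fragility curve to binned damage counts by maximum likelihood; the data are synthetic placeholders, not the Bohol survey database:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Lognormal fragility: P(damage | IM) = Phi((ln IM - ln theta) / beta),
# fitted to binomial damage counts per intensity-measure bin (synthetic data below).
im = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 0.8])      # e.g. PGA in g
n_total = np.array([200, 180, 150, 120, 80, 40])     # buildings surveyed per bin
n_damaged = np.array([5, 15, 40, 55, 52, 33])        # damaged buildings per bin

def neg_log_like(params):
    theta, beta = params
    p = norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)                    # avoid log(0)
    return -np.sum(n_damaged * np.log(p) + (n_total - n_damaged) * np.log(1 - p))

res = minimize(neg_log_like, x0=[0.3, 0.5], bounds=[(1e-3, 5.0), (0.05, 2.0)])
print("median capacity theta, dispersion beta:", res.x)
```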
APA, Harvard, Vancouver, ISO, and other styles
37

Huang, Jyun-Yan, and 黃雋彥. "Characteristics of Strong Ground Motion and Site Correction of Stochastic Ground Motion Simulation." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/96834653952206967550.

Full text
Abstract:
Doctoral thesis
National Central University
Department of Earth Sciences
102
This study comprises two parts. In the first part, the characteristics of strong motions are discussed. A new baseline correction scheme based on Empirical Mode Decomposition (EMD) was constructed. Strong-motion acceleration records of the 2011 great Tohoku, Japan, earthquake were corrected to reproduce coseismic displacement time histories. Comparison with high-frequency GPS records showed that the correction requires at least a nearby coseismic deformation value as a constraint to improve its accuracy. Then, the ground motions of the 2010 Darfield, New Zealand, earthquake sequence and the 1999 Chi-Chi, Taiwan, earthquake were analyzed to identify characteristics of time-recovered nonlinear site response (Aguirre and Irikura, 1997), and the Degree of Nonlinearity (DNL) was used to quantify the nonlinear site response. The results showed that the 0.5 to 10 Hz frequency band is suitable for computing the DNL value, and the approach was also applied to the 2008 Wenchuan, China, earthquake to examine the relations between DNL, site class, and liquefaction areas. In the second part, stochastic point-source simulation (Boore, 1983; Boore, 2003a) was applied to strong-motion stations of the Taiwan Strong Motion Instrumentation Program (TSMIP) in the TAP and TCU regions. Shallow, small-magnitude earthquakes were selected to construct an Empirical Transfer Function (ETF) for each strong-motion station, and the ETFs were then used for site correction of the stochastic simulations of target earthquakes. Results showed that the site correction works well, reducing errors in PGA and in the high-frequency band for simulations of shallow earthquakes; the PGA uncertainty reaches the same level as that of Ground Motion Prediction Equations (GMPEs; Jean et al., 2006; Chang et al., 2010; Lin, 2009) that include site correction. For stochastic finite-fault simulation (Beresnev and Atkinson, 1998; Motazedian and Atkinson, 2005; Boore, 2009), the ETF correction also works well and reduces errors, providing credible spectra and PGA predictions slightly better than those from the GMPEs. Regarding near-source effects in stochastic simulation, the results showed that if the fault dimensions (length and width) can be reasonably determined from geological surveys or historical earthquake investigations, the average of many random asperity-distribution models can provide credible simulations. In the future, stochastic simulations should also consider hanging-wall and directivity effects to further reduce errors. Finally, four common transfer functions for site-effect studies, namely the S-wave H/V ratio (HV), the soil-to-rock station-pair spectral ratio (HH), the microtremor H/V (MicroHV) and the ETF, were compared in the Taipei basin. The results indicated that the transfer functions from the single-station methods (S-wave H/V and microtremor H/V) agree well with each other; the single-station methods reflect the response between basement A and the surface for class B stations, but between basement B and the surface for class D and E stations. Finally, the microtremor soil-to-rock spectral ratio (MHH) was tested as an alternative transfer function for sites lacking strong-motion observations; comparison with HH indicated that the distance to the reference rock site should be less than 10 to 15 km and that the reliable frequency band extends up to 2 Hz.
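A minimal sketch of the empirical-transfer-function idea described in the abstract (an observed-to-simulated Fourier spectral ratio reused afterwards for site correction) is given below; the function names, band limits and the assumption of equal record length are illustrative choices, not the thesis implementation:

```python
import numpy as np

def fourier_amplitude(acc, dt):
    """Fourier amplitude spectrum of an acceleration record."""
    n = len(acc)
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.abs(np.fft.rfft(acc)) * dt
    return freqs, amp

def empirical_transfer_function(obs_acc, sim_acc, dt, f_lo=0.2, f_hi=10.0):
    """Spectral ratio observed/simulated over a band; assumes equal dt and length."""
    f, a_obs = fourier_amplitude(obs_acc, dt)
    _, a_sim = fourier_amplitude(sim_acc, dt)
    etf = a_obs / np.maximum(a_sim, 1e-12)
    band = (f >= f_lo) & (f <= f_hi)
    return f[band], etf[band]

# Usage idea: multiply the simulated spectrum of a target event by the ETF,
# then inverse-transform to obtain a site-corrected time history.
```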
APA, Harvard, Vancouver, ISO, and other styles
38

Vlachos, Christos. "Stochastic Characterization and Simulation of Ground Motions based on Earthquake Scenarios." Thesis, 2016. https://doi.org/10.7916/D8RB74TC.

Full text
Abstract:
A novel stochastic earthquake ground motion model is formulated in association with physically interpretable parameters that are capable of efficiently characterizing the complex evolutionary nature of the phenomenon. A multi-modal, analytical, fully non-stationary spectral version of the Kanai-Tajimi (K-T) model is introduced achieving a realistic description of the evolutionary spectral energy distribution of seismic ground motions. The functional forms describing the temporal evolution of the model parameters can efficiently model highly non-stationary power spectral characteristics. The analysis space, where the analytical forms describing the evolution of the model parameters are established, is the energy domain instead of the typical use of the time domain. This space is used in conjunction with a newly defined energy-associated amplitude modulating function. The Spectral Representation Method supports the simulation of sample ground motions realizations. A predictive stochastic model for simulation of earthquake ground motions is developed, using a user-specified earthquake scenario description as input, and resulting in fully nonstationary ground acceleration time-histories at a site of interest. The previously formed analytical non-stationary K-T ground motion model lies at the core of the developed predictive model. An extensive Californian subset of the NGA-West2 earthquake ground motion database is used to develop and calibrate the predictive stochastic model. Sample observations of the model parameters are obtained by fitting the K-T model to the database records, and their resulting marginal distributions are effectively described by simple probability models. Advanced random-effect regression models are established in the normal probabilistic space, capable of linking the stochastic K-T model parameters with the moment magnitude Mw, closest distance Rrup and average shear-wave velocity VS30 at a Californian site of interest. The included random effects take effectively into account the correlation of ground motions pertaining to the same earthquake event, and the fact that each site is expected to have its own effect on the resulting ground motion. The covariance structure of the normal K-T model parameters is next estimated, allowing finally for the complete mathematical description of the predictive stochastic model for a given earthquake scenario. The entirety of the necessary steps for the simulation of the developed predictive stochastic model is provided, resulting in the generation of any number of fully non-stationary ground acceleration time-series that are statistically consistent with the specified earthquake scenario. In an effort to assess the performance and versatility of the developed predictive stochastic model, a list of simple engineering metrics, associated with the characterization of the earthquake ground motion time-series, is studied, and results from simulated earthquake ground acceleration time-series of the developed predictive model are compared with corresponding predictions of pertinent Ground Motion Prediction Equations (GMPEs) for a variety of earthquake and local-site characteristics. The studied set of ground acceleration time-series features includes the Arias intensity IA, the significant duration T5-95 of the strong ground shaking, and the spectral-based mean period of the earthquake record Tm. The predictive stochastic model is next validated against the state-of-the-art NGA-West2 GMPE models. 
The statistics of elastic response spectra derived by ensembles of synthetic ground motions are compared with the associated response spectra as predicted by the considered NGA-West2 ground motion prediction equations for a wide spectrum of earthquake scenarios. Finally, earthquake non-linear response-history analyses are conducted for a set of representative single- and multi-degree-of-freedom hysteretic structural systems, comparing the seismically induced inelastic structural demand of the considered systems, when subjected to sets of both real strong ground motion records, and associated simulated ground acceleration time-histories as well. The comparisons are performed in terms of seismic structural demand fragility curves.
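As a simplified illustration of the modelling ingredients named above, the sketch below combines a stationary, single-mode Kanai-Tajimi spectrum with the Spectral Representation Method and a crude amplitude envelope; the thesis uses a multi-modal, fully non-stationary formulation, and all parameter values here are assumptions:

```python
import numpy as np

def kanai_tajimi_psd(omega, s0=0.01, wg=12.0, zg=0.4):
    """Stationary one-sided Kanai-Tajimi power spectral density (illustrative parameters)."""
    num = wg**4 + 4.0 * zg**2 * wg**2 * omega**2
    den = (wg**2 - omega**2)**2 + 4.0 * zg**2 * wg**2 * omega**2
    return s0 * num / den

def simulate_srm(duration=20.0, dt=0.01, w_max=100.0, n_w=2000, seed=0):
    """Sample acceleration via the Spectral Representation Method plus a simple envelope."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    w = np.linspace(w_max / n_w, w_max, n_w)
    dw = w[1] - w[0]
    phases = rng.uniform(0.0, 2.0 * np.pi, n_w)
    amps = np.sqrt(2.0 * kanai_tajimi_psd(w) * dw)
    acc = np.sum(amps[:, None] * np.cos(w[:, None] * t[None, :] + phases[:, None]), axis=0)
    envelope = (t / 2.0) ** 2 * np.exp(-t)            # crude amplitude modulation
    return t, envelope / envelope.max() * acc

t, acc = simulate_srm()
```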
APA, Harvard, Vancouver, ISO, and other styles
39

CHANG, SHUN-CHIANG, and 章順強. "The Strong Ground Motion Attenuation Parameters and Stochastic Simulation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3fa5mw.

Full text
Abstract:
Doctoral thesis
National Central University
Department of Earth Sciences
105
In this study, strong ground motion parameters (κ0, Q, Mw, M0 and Δσ) were measured from seismograms of the Taiwan Strong Motion Instrumentation Program (TSMIP) for earthquakes with local magnitudes (ML) between 3.0 and 7.1 that occurred between 1993 and 2014. An inversion technique was also used to test the input for the stochastic simulation method. First, the high-frequency decay parameter, kappa (κ), was computed by fitting the Fourier amplitude spectra at each TSMIP station. The relation between κ and hypocentral distance (Rhyp) was determined from SH waves for each individual station. The κ value at Rhyp = 0 (denoted κ0) can be used as a site parameter; the κ0 values for TSMIP stations range from 0.0185 to 0.0939 s in this study, and their distribution corresponds closely to geology and velocity structure. For instance, low κ0 values (below 0.06 s) occur in and around the Central Mountain Range and foothill region in central Taiwan, whereas high κ0 values (above 0.06 s) are observed in the alluvial areas, i.e., the Taipei basin and the Ilan plain in northern Taiwan, the Chianan plain in southwestern Taiwan, and the Longitudinal Valley in eastern Taiwan. The site-specific κ0 values from 426 stations were correlated with the average shear-wave velocity of the top 30 m of strata (VS30); the relationship can be described by κ0 = 0.163 – 0.077·ln(VS30) ± 0.053, and a high linear correlation (R2 = 0.63) was found. In the second part, the generalized inversion technique (GIT; Oth et al., 2011) was used to invert seismogenic parameters from SH waves in the frequency range 0.1 to 40 Hz (0.1 Hz interval) for the whole Taiwan region (Taiwan model) and for the southern region (regional model). The attenuation characteristics, earthquake source parameters and site amplification functions were decomposed step by step with GIT. In this study, the site amplification characteristics are referenced to horizontal-to-vertical (H/V) Fourier spectral ratios of earthquakes at a reference rock site. The three basic effects were set with the parameters of Boore (2003) to determine the remaining parameters: seismic moment (M0), corner frequency (fc), stress drop (Δσ) and Q(f). Finally, the strong ground motion parameters obtained in this study were verified with the stochastic simulation method. According to the size of the seismic source, the point-source technique (SMSIM) was used for ML < 6 events, and the finite-fault technique (EXSIM), with associated source information (fault-plane solution, slip, etc.), was used for ML ≥ 6 events. However, the definition of κ0 obtained in this study differs from that of previous studies (Boore, 2003; Boore, 2009; SMSIM and EXSIM). Two changes were therefore made: first, κ0 and site amplification were excluded from the waveforms generated by SMSIM; second, the high-frequency decay spectrum (related to κ0) obtained in this study and the site amplification function were superposed on the SMSIM simulation spectrum in the frequency domain. The regional parametric model gave the best simulation results for the SMSIM technique; when no regional parameter model is available, the Taiwan model also yields good results. When source information is available for large earthquakes, EXSIM provides better simulation results. Therefore, the parameter models calibrated in this study can be used to predict ground motion.
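The high-frequency decay measurement described above can be illustrated with a short Python sketch that fits ln A(f) ≈ const - πκf over a high-frequency band (Anderson-Hough style); the band limits and the synthetic test spectrum are assumptions:

```python
import numpy as np

def measure_kappa(freqs, fourier_amp, f1=10.0, f2=30.0):
    """Estimate kappa (s) from the high-frequency decay of a Fourier amplitude spectrum."""
    band = (freqs >= f1) & (freqs <= f2)
    slope, _ = np.polyfit(freqs[band], np.log(fourier_amp[band]), 1)
    return -slope / np.pi

# Self-check on a synthetic spectrum with a known kappa of 0.04 s.
f = np.linspace(0.5, 40.0, 500)
amp = 10.0 * np.exp(-np.pi * 0.04 * f)
print(round(measure_kappa(f, amp), 4))   # -> 0.04
```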
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, Cong-lun, and 黃琮倫. "Site Correction of Stochastic Ground Motion Simulation in Southwestern Taiwan." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/57900007194326098543.

Full text
Abstract:
Master's thesis
National Central University
Department of Earth Sciences
102
On March 17, 1906, the Meishan earthquake (ML 7.1) hit southwestern Taiwan and caused severe damage and losses (鄭世楠和葉永田, 1998). This event showed that ground motion prediction plays an important role in reducing earthquake hazard. In this study, we simulate shallow earthquake events recorded by TSMIP from 1991 to 2013 with stochastic point-source simulation (Boore, 1983; Boore, 2003a). The empirical transfer function from 0.2 Hz to 10 Hz for each station in the southwestern area is calculated by the H/H method (Borcherdt, 1970). After site correction with these empirical transfer functions for several target events, the PGA predictions show no large difference compared with the results calculated from ground motion prediction equations (GMPE; Jean et al., 2006; 張毓文, 2010). Stochastic finite-fault simulation (Beresnev and Atkinson, 1998; Motazedian and Atkinson, 2005; Boore, 2009) with empirical site correction also performs well for the March 4, 2010 Jiashiang earthquake. The result not only shows that the PGA prediction is better than that calculated from the GMPE but also provides a reliable spectrum from 0.2 Hz to 10 Hz. In future work, the earthquake azimuth and PGA level should also be considered, in addition to depth and magnitude, when calculating the empirical transfer function. The last part is a ground motion simulation of the Meishan fault with parameters provided by TEM, calculated with the strong ground motion prediction method "Recipe" (NIED, 2009). Both the PGA values and the PGA distribution agree with the GMPE results, which indicates that empirical site correction of stochastic simulation can provide good results in ground motion prediction.
APA, Harvard, Vancouver, ISO, and other styles
41

Sarica, Rabia Zeynep. "Wavelet analyses for seismic ground motion, simulation, and stochastic site response." 2005. http://www.lib.ncsu.edu/theses/available/etd-08082005-000500/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Saifuddin and 薩伏丁. "Stochastic Ground Motion Simulation with Site Correction Using Equivalent-Linear Method." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70754832194247407055.

Full text
Abstract:
Master's thesis
National Central University
Department of Earth Sciences
101
Attenuation relationships of peak ground acceleration are widely employed by engineers in seismic hazard estimation studies. This study combined the stochastic point-source method (Boore, 2003) and the equivalent-linear method (Idriss & Sun, 1992) to simulate ground motion, and compared the results with an attenuation relationship. Seismic parameters for the stochastic method were selected based on previous studies (Sokolov et al., 2006; 2009). The stochastic method was used to obtain simulated waveforms at a rock site (station TAP086) near the Wuku downhole array and at a soil site (Wuku downhole). It yielded decent results at the rock site, but underestimated motions at the soil site. Simulated waveforms at the rock- and soil-site locations were used as outcrop input motions for site correction. Site correction was performed using the equivalent-linear method. The shear-wave velocity profile (Wen et al., 1995; Wang et al., 2004), the geological soil profile of the Wuku downhole (Su et al., 1997) and the input motions were required by the equivalent-linear method to obtain simulated waveforms at the soil surface of the Wuku downhole array. Site correction was performed using outcrop input motions at different depths (30 m, engineering and geological bedrocks). Comparisons of observed against simulated peak ground acceleration (PGA) are presented in log and linear scales, while the degree of spectrum difference (DSPD) is presented only in log scale. In general, the equivalent-linear method helps correct the simulated PGA and Fourier amplitude spectrum (FAS) at the Wuku downhole, using simulated waveforms at either the rock- or soil-site location. This is shown by reduced PGA errors in log scale (σ_lnErr) and linear scale (except for the input motion using the rock-site simulated waveform at the geological bedrock in linear scale) and by slightly reduced DSPD values. For input motions at the Wuku downhole, we found a tendency that the deeper the input motion, the lower the σ_lnErr value. Comparing our simulation results with the attenuation relationship, we found that our results were slightly better at TAP086 and the Wuku downhole. We also applied the stochastic point-source and equivalent-linear methods to the Sungshan downhole site. The stochastic point-source simulation already yielded decent PGAs at the surface of the Sungshan downhole; site correction was again performed for input motions at different depths (30 m, engineering and geological bedrocks). Site correction at 30 m did not alter σ_lnErr and DSPD, while the engineering and geological bedrocks increased both values. These phenomena might be due to the site conditions of the Sungshan downhole. Although the results at the Sungshan downhole were not as good as those at the Wuku downhole, we found that the engineering and geological bedrocks yielded site corrections similar to the Wuku downhole, possibly because significant amplification is controlled by the layer above the engineering bedrock, which is the Sungshan Formation.
APA, Harvard, Vancouver, ISO, and other styles
43

Papadimitriou, Konstantinos. "Stochastic Characterization of Strong Ground Motion and Applications to Structural Response." Thesis, 1991. https://thesis.library.caltech.edu/6331/1/Papadimitriou_k_1991.pdf.

Full text
Abstract:

This study addresses the problem of characterizing strong ground motion for the purpose of computing the dynamic response of structures to earthquakes. A new probabilistic ground motion model is proposed which can act as an interface between ground motion prediction studies and structural response studies. The model is capable of capturing, with at most nine parameters, all those features of the ground acceleration history which have an important influence on the dynamic response of linear and nonlinear structures, including the amplitude and frequency content nonstationarities of the shaking. Using a Bayesian probabilistic framework, a simple and effective statistical method is developed for extracting the "optimal" model from an actual accelerogram. The proposed ground motion model can be efficiently applied in simulations as well as analytical response and reliability studies of linear and inelastic structures.

The random response of linear and nonlinear oscillators subjected to the proposed stochastic excitation is considered. The nonlinearity of the oscillator is accounted for by equivalent linearization. A formulation is developed which approximates the original lengthy expressions for the second-moment statistics of the transient response by much simpler expressions. The results provide insight into the characteristics of the nonstationary response and the effect of the ground motion nonstationarities. It is found that the temporal nonstationarity in the frequency content of the ground motion significantly influences the response of both linear and nonlinear structural models. Simulations are also used to study the sensitivity of inelastic structural response parameters to the details of the ground motion which are left "random" by the model. The results can also be used to provide a quantitative assessment of the expected structural damage associated with the ground motion described by the model.

APA, Harvard, Vancouver, ISO, and other styles
44

Megawati and 孟華蒂. "Stochastic Ground Motion Simulation with Site Correction in Ilan Area, Northeastern Taiwan." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/23888259798584119905.

Full text
Abstract:
Master's thesis
National Central University
Department of Earth Sciences
103
Seismic waveforms are controlled by three factors: source properties, path characteristics, and local site effects. The local site effect is an important factor in strong ground motion prediction. In this study, we use the stochastic point-source method to simulate ground motion (Boore, 2005). This method has been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motion (Atkinson et al., 2009). Shallow earthquake events recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1992 to 2012 are simulated with the stochastic point-source method (Boore, 1983; Boore, 2003). The earthquakes are selected with depths from 0 to 30 km and magnitudes (Mw) from 4 to 6.5. The study area is the Ilan area in northeastern Taiwan. There are 70 TSMIP stations, which, based on VS30, fall into site classes B, C, D and E. Seismic parameters for the stochastic method were selected based on previous studies (Sokolov et al., 2006; 2009). The crustal amplification is set to that of a half-space. The empirical transfer functions from 0.2 Hz to 10 Hz for each station in the Ilan area were calculated by the H/H method between observed and simulated spectra (Borcherdt, 1970). Ground motion prediction is then carried out by selecting several target events and simulating them with the stochastic point-source method for the half-space. Peak ground acceleration (PGA) is predicted after site correction with the empirical transfer functions. The simulated ground motions were then compared in the time domain (PGA) and the frequency domain (degree of spectrum difference, DSPD) to assess the goodness of the simulation. Finally, the PGA predictions were compared with an attenuation relationship and were found to be slightly better. Keywords: stochastic point-source method, site effect, empirical transfer function
APA, Harvard, Vancouver, ISO, and other styles
45

Navidi, Sara. "Site amplification model for use in ground motion prediction equations." 2012. http://hdl.handle.net/2152/19447.

Full text
Abstract:
The characteristics of earthquake shaking are affected by the local site conditions. The effects of the local soil conditions are often quantified via an amplification factor (AF), which is defined as the ratio of the ground motion at the soil surface to the ground motion at a rock site at the same location. Amplification factors can be defined for any ground motion parameter, but most commonly are assessed for acceleration response spectral values at different oscillator periods. Site amplification can be evaluated for a site by conducting seismic site response analysis, which models the wave propagation from the base rock through the site-specific soil layers to the ground surface. An alternative to site-specific seismic response analysis is site amplification models. Site amplification models are empirical equations that predict the site amplification based on general characteristics of the site. Most of the site amplification models already used in ground motion prediction equations characterize a site with two parameters: the average shear wave velocity in the top 30 m (VS30) and the depth to bedrock. However, additional site parameters influence site amplification and should be included in site amplification models. To identify the site parameters that help explain the variation in site amplification, ninety-nine manually generated velocity profiles are analyzed using seismic site response analysis. The generated profiles have the same VS30 and depth to bedrock but a different velocity structure in the top 30 m. Different site parameters are investigated to explain the variability in the computed amplification. The parameter Vratio, which is the ratio of the average shear wave velocity between 20 m and 30 m to the average shear wave velocity in the top 10 m, is identified as the site parameter that most affects the computed amplification for sites with the same VS30 and depth to bedrock. To generalize the findings from the analyses in which only the top 30 m of the velocity profile are varied, a suite of fully randomized velocity profiles are generated and site response analysis is used to compute the amplification for each site for a range of input motion intensities. The results of the site response analyses conducted on these four hundred fully randomized velocity profiles confirm the influence of Vratio on site amplification. The computed amplification factors are used to develop an empirical site amplification model that incorporates the effect of Vratio, as well as VS30 and the depth to bedrock. The empirical site amplification model includes the effects of soil nonlinearity, such that the predicted amplification is a function of the intensity of shaking. The developed model can be incorporated into the development of future ground motion prediction equations.
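The two profile parameters discussed above can be computed from a layered shear-wave velocity model as in the following sketch; the layer values are illustrative, and both VS30 and the 20-30 m / 0-10 m averages are taken as time-averaged (travel-time based) velocities:

```python
import numpy as np

def travel_time(depth_top, depth_bot, layer_tops, layer_vs):
    """Shear-wave travel time between two depths through horizontal layers."""
    tt = 0.0
    bounds = np.append(layer_tops, np.inf)
    for i, v in enumerate(layer_vs):
        top, bot = bounds[i], bounds[i + 1]
        overlap = max(0.0, min(bot, depth_bot) - max(top, depth_top))
        tt += overlap / v
    return tt

def avg_vs(z1, z2, layer_tops, layer_vs):
    """Time-averaged shear-wave velocity between depths z1 and z2."""
    return (z2 - z1) / travel_time(z1, z2, layer_tops, layer_vs)

layer_tops = np.array([0.0, 5.0, 12.0, 20.0])        # layer top depths (m), illustrative
layer_vs = np.array([180.0, 260.0, 400.0, 600.0])    # Vs of each layer (m/s), illustrative

vs30 = avg_vs(0.0, 30.0, layer_tops, layer_vs)
vratio = avg_vs(20.0, 30.0, layer_tops, layer_vs) / avg_vs(0.0, 10.0, layer_tops, layer_vs)
print(f"VS30 = {vs30:.0f} m/s, Vratio = {vratio:.2f}")
```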
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Ting-Wei, and 張庭瑋. "Verification and Programming Implementation of Random Parameters Ground Motion Correction Model." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/27fn4v.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Department of Civil Engineering, Master's Program in Civil Engineering and Disaster Prevention
106
The purpose of this study is to establish a modified strong ground motion time-history simulation model with random parameters and to develop a corresponding computer simulation program to generate the acceleration time histories required for earthquake engineering research or practical design. The model is mainly derived from the models of Li Jie and Wang Ding; it improves the Brune source displacement amplitude spectrum and displacement phase spectrum, adds initial phase parameters for the path effect, and uses a direct inverse fast Fourier transform in place of the narrow-band harmonic superposition method originally used. The modified random-parameter strong ground motion model was then verified. In this study, the verification and simulation of the proposed model were performed using the ground-surface acceleration records of four stations from the 1995 Kobe earthquake; the results were compared with the Fourier amplitude spectra and phase spectra of the true strong ground motion records, and the corresponding random parameters were obtained. Finally, the strong ground motion time histories generated by the inverse fast Fourier transform were checked against the true strong ground motion accelerations.
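A hedged sketch of the general synthesis idea (a Brune-type amplitude spectrum, random phases, and an inverse FFT) is given below; the scaling constant, attenuation term and envelope are placeholders, not the model of the thesis:

```python
import numpy as np

def brune_acc_spectrum(f, m0=1e17, fc=0.8, kappa=0.04, r_km=30.0):
    """Toy omega-squared acceleration spectrum with crude path/site decay."""
    c = 1e-20                                   # placeholder scaling constant
    source = (2 * np.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
    path = np.exp(-np.pi * kappa * f) / r_km    # simplistic attenuation + high-frequency decay
    return c * source * path

def synthesize(dt=0.01, n=4096, seed=1):
    """Build a target spectrum, attach random phases, inverse-FFT, apply an envelope."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, dt)
    amp = brune_acc_spectrum(np.maximum(f, f[1]))     # avoid f = 0
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    spec = amp * np.exp(1j * phases)
    acc = np.fft.irfft(spec, n=n) / dt
    t = np.arange(n) * dt
    envelope = (t / 3.0) * np.exp(1.0 - t / 3.0)      # simple shaping window
    return t, acc * envelope

t, acc = synthesize()
```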
APA, Harvard, Vancouver, ISO, and other styles
47

Hsiao, Yu Kai, and 蕭友凱. "Brownian Motion Stochastic Differential Equations Model and Risk Premium Confidence Interval." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/26066043928350069731.

Full text
Abstract:
Master's thesis
Aletheia University
Master's Program, Department of Statistics and Actuarial Science
98
In recent years, advances in medical technology and public health in developed countries have improved mortality, so life expectancy has been increasing; this motivates research on longevity risk. The increase in life expectancy may expose insurance companies to operational risks, such as being ill-prepared for annuity payments and medical expenses, and government social security systems face the same problem. Addressing such problems requires both an interest rate model and a mortality model; this thesis focuses only on the mortality model. At present, the mortality models used in insurance product pricing are deterministic, such as the Gompertz law and the Coale-Kisker model, based on fixed parameters and assumptions that do not consider the uncertainty of future mortality. In recent years, discrete stochastic mortality models such as the Lee-Carter model, which include a random component for the future, have been favored. Since changes in mortality should be random and continuous, we set up a continuous stochastic mortality model via stochastic differential equations driven by Brownian motion. We use HMD (Human Mortality Database) data for Japan, Taiwan, England & Wales, Sweden and the USA as examples, and apply Monte Carlo simulation to establish 95% confidence intervals, fitting and forecasting mortality and comparing the results with the Lee-Carter method.
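As a rough illustration of the approach (a drift-plus-Brownian-motion index simulated by Monte Carlo with a 95% interval), consider the sketch below; the drift and volatility values are assumptions, not estimates from the HMD data:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_paths(k0=0.0, mu=-0.8, sigma=1.2, years=30, n_paths=10_000):
    """Simulate a mortality index k_t following dk = mu dt + sigma dW with dt = 1 year."""
    dw = rng.normal(0.0, 1.0, size=(n_paths, years))
    increments = mu + sigma * dw
    return k0 + np.cumsum(increments, axis=1)

paths = simulate_paths()
lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)   # pointwise 95% interval
median = np.median(paths, axis=0)
print("year 10 interval:", round(lower[9], 2), round(median[9], 2), round(upper[9], 2))
```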
APA, Harvard, Vancouver, ISO, and other styles
48

Starke, John Orville. "Evaluation of a stochastic model for predicting ground subsidence over longwall panels." 1985. http://catalog.hathitrust.org/api/volumes/oclc/12803255.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 1985.
Typescript. Includes bibliographical references (leaves 135-139).
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Deyi. "Stochastic Modelling and Analysis for Bridges under Spatially Varying Ground Motions." Thesis, 2013. http://hdl.handle.net/10012/8038.

Full text
Abstract:
Earthquakes are undoubtedly among the greatest natural disasters, capable of inducing serious structural damage and huge losses of property and life. The resulting destructive consequences not only have made structural seismic analysis and design much more important but have driven the need for more realistic representation of ground motions, such as the inclusion of ground motion spatial variations in earthquake modelling and in the seismic analysis and design of structures. Recorded seismic ground motions exhibit spatial variations in their amplitudes and phases, and these spatial variabilities have an important effect on the responses of structures extended in space, such as long span bridges. Because of the multi-parametric nature and the complexity of the problems, the development of specific design provisions on spatial variabilities of ground motions in modern seismic codes has been impeded. Eurocode 8 is currently the only seismic standard worldwide that gives a set of detailed guidelines to explicitly tackle spatial variabilities of ground motions in bridge design, providing both a simplified design scheme and an analytical approach. However, there is a gap between the code-specified provisions in Eurocode 8 and the realistic representation of spatially varying ground motions (SVGM) and the corresponding stochastic vibration analysis (SVA) approaches. This study is devoted to bridging this gap in the modelling of SVGM and the development of SVA approaches for structures extended in space under SVGM. A complete and realistic SVGM representation approach is developed by accounting for the incoherence effect, wave-passage effect, site-response effect, ground motion nonstationarity, tridirectionality, and spectra-compatibility. This effort brings together various aspects regarding rational seismic scenario determination, comprehensive methods of accounting for varying site effects, conditional modelling of SVGM nonstationarity, and code-specified ground motion spectra-compatibility. A comprehensive, systematic, and efficient SVA methodology is derived for long span structures under tridirectional nonstationary SVGM. An absolute-response-oriented scheme of the pseudo-excitation method and an improved high-precision direct integration method are proposed to reduce the enormous computational effort of conventional nonstationary SVA. A scheme accounting for the tridirectional varying site-response effect is incorporated in the nonstationary SVA scheme systematically. The proposed highly efficient and accurate SVA approach is implemented and verified in a general finite element analysis platform to make it readily applicable in SVA of complex structures. Based on the proposed SVA approach, parametric studies of two practical long span bridges under SVGM are conducted. To account for spatial randomness and variability of soil properties in soil-structure interaction analysis of structures under SVGM, a meshfree-Galerkin approach is proposed within the Karhunen-Loeve expansion scheme for representation of spatial soil properties modelled as a random field. The meshfree shape functions are proposed as a set of basis functions in the Galerkin scheme to solve the integral equation of the Karhunen-Loeve expansion, with a proposed optimization scheme for treating the compatibility between the target and analytical covariance models. The accuracy and validity of the meshfree-Galerkin scheme are assessed and demonstrated by representation of covariance models for various homogeneous and nonhomogeneous spatial fields.
The developed modelling approaches of SVGM and the derived analytical SVA approaches can be applied to provide more refined solutions for quantitatively assessing code-specified design provisions and developing new design provisions. The proposed meshfree-Galerkin approach can be used to account for spatial randomness and variability of soil properties in soil-structure interaction analysis.
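For orientation, the following sketch shows a plain grid-based Karhunen-Loeve expansion of a 1-D random field with an exponential covariance kernel; the thesis instead solves the KL integral equation with meshfree shape functions in a Galerkin scheme, so this is only a simplified stand-in with illustrative parameters:

```python
import numpy as np

def kl_expansion(length=100.0, n_pts=200, corr_len=20.0, sigma=1.0, n_modes=10, seed=3):
    """Grid (Nystrom-like) KL expansion of a 1-D field with exponential covariance."""
    x = np.linspace(0.0, length, n_pts)
    dx = x[1] - x[0]
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Discretized Fredholm eigenproblem: (C * dx) phi = lambda phi
    eigvals, eigvecs = np.linalg.eigh(cov * dx)
    idx = np.argsort(eigvals)[::-1][:n_modes]
    lam, phi = eigvals[idx], eigvecs[:, idx] / np.sqrt(dx)    # L2-normalized modes
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_modes)
    field = phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)        # one sample realization
    return x, field, lam

x, field, lam = kl_expansion()
# Fraction of total variance (sigma^2 * length) captured by the retained modes
print("captured variance fraction:", round(float(np.sum(lam) / 100.0), 3))
```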
APA, Harvard, Vancouver, ISO, and other styles
50

Raghu, Kanth S. T. G. "Engineering Seismic Source Models And Strong Ground Motion." Thesis, 2005. http://etd.iisc.ernet.in/handle/2005/1491.

Full text
APA, Harvard, Vancouver, ISO, and other styles