
Dissertations / Theses on the topic 'Stochastic processes Mathematical models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Stochastic processes Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Cole, D. J. "Stochastic branching processes in biology." Thesis, University of Kent, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Le, Truc. "Stochastic volatility models." Monash University, School of Mathematical Sciences, 2005. http://arrow.monash.edu.au/hdl/1959.1/5181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shepherd, Tricia D. "Models for chemical processes : activated dynamics across stochastic potentials." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/27062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gagliardini, Lucia. "Chargaff symmetric stochastic processes." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8699/.

Full text
Abstract:
The purpose of modelling DNA strings is the formulation of mathematical models that generate sequences of nucleotide bases compatible with the existing genome. This thesis examines those mathematical models that preserve an important property, discovered in 1952 by the biochemist Erwin Chargaff and known today as "Chargaff's second rule". Mathematical models that account for Chargaff symmetry fall mainly into two strands: one regards the rule as a result of evolution acting on the genome, while the other assumes it to be peculiar to a primitive genome and untouched by the changes introduced by evolution. This thesis sets out to analyse a model of the second type. In particular, we took inspiration from the model defined by Sobottka and Hart. After a critical analysis and study of the authors' work, we extended the model to a wider set of cases. We used stochastic processes such as Bernoulli schemes and Markov chains to construct a possible generalisation of the structure proposed in the paper, analysing the conditions under which Chargaff's rule holds. The models examined consist of simple stationary processes or concatenations of stationary processes. The first chapter introduces some notions of biology. The second gives a critical and prospective description of the model proposed by Sobottka and Hart, introducing the formal definitions for the general case presented in the third chapter, where the theoretical apparatus of the general model is developed.
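The abstract mentions Bernoulli schemes and Markov chains as generating mechanisms. The following minimal Python sketch is an illustration only, not the thesis model: it draws an i.i.d. (Bernoulli-scheme) strand whose base probabilities are chosen symmetric by construction, then checks the mononucleotide form of Chargaff's second rule empirically; all probabilities are illustrative.

```python
import numpy as np

# Illustrative Bernoulli scheme on {A, C, G, T} with p(A) = p(T) and p(C) = p(G),
# so the mononucleotide form of Chargaff's second rule holds by construction.
rng = np.random.default_rng(0)
bases = np.array(list("ACGT"))
p = np.array([0.30, 0.20, 0.20, 0.30])            # p(A)=p(T)=0.30, p(C)=p(G)=0.20

seq = rng.choice(bases, size=100_000, p=p)
freq = {b: float((seq == b).mean()) for b in "ACGT"}
print(freq)
print(abs(freq["A"] - freq["T"]), abs(freq["C"] - freq["G"]))   # ~0 up to sampling noise
```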
APA, Harvard, Vancouver, ISO, and other styles
5

Leung, Ho-yin, and 梁浩賢. "Stochastic models for optimal control problems with applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Dongxiao. "Conditional stochastic analysis of solute transport in heterogeneous geologic media." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186553.

Full text
Abstract:
This dissertation develops an analytical-numerical approach to deterministically predict the space-time evolution of concentrations in heterogeneous geologic media conditioned on measurements of hydraulic conductivities (transmissivities) and/or hydraulic heads. Based on the new conditional Eulerian-Lagrangian transport theory by Neuman, we solve the conditional transport problem analytically at early time, and express it in pseudo-Fickian form at late time. The stochastically derived deterministic pseudo-Fickian mean concentration equation involves a conditional, space-time dependent dispersion tensor. The latter not only depends on properties of the medium and the velocity but also on the available information, and can be evaluated numerically along mean "particle" trajectories. The transport equation lends itself to accurate solution by standard Galerkin finite elements on a relatively coarse grid. This approach allows computing explicitly, without Monte Carlo simulation, the following: concentration variance/covariance (uncertainty), origin of detected contaminant and associated uncertainty, mass flow rate across a "compliance surface", cumulative mass release and travel time probability distribution across this surface, uncertainty associated with the latter, second spatial moment of conditional mean plume about its center of mass, conditional mean second spatial moment of actual plume about its center of mass, conditional covariance of plume center of mass, and effect of non-Gaussian velocity distribution. This approach can also account for uncertainty in initial mass and/or concentration when predicting the future evolution of a plume, whereas almost all existing stochastic models of solute transport assume the initial state to be known with certainty. We illustrate this approach by considering deterministic and uncertain instantaneous point and nonpoint sources in a two-dimensional domain with a mildly fluctuating, statistically homogeneous, lognormal transmissivity field. We take the unconditional mean velocity to be uniform, but allow conditioning on log transmissivity and hydraulic head data. Conditioning renders the velocity field statistically nonhomogeneous with reduced variances and correlation scales, renders the predicted plume irregular and non-Gaussian, and generally reduces both predictive dispersion and uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
7

Thompson, Mery H. "Optimum experimental designs for models with a skewed error distribution with an application to stochastic frontier models." Connect to e-thesis, 2008. http://theses.gla.ac.uk/236/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008.
Ph.D. thesis submitted to the Faculty of Information and Mathematical Sciences, Department of Statistics, 2008. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
8

Uyar, Emrah. "Routing in stochastic environments." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26554.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Erera, Alan L.; Committee Co-Chair: Savelsbergh, Martin W. P.; Committee Member: Ergun, Ozlem; Committee Member: Ferguson, Mark; Committee Member: Kleywegt, Anton J. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
9

Gong, Bo. "Numerical methods for backward stochastic differential equations with applications to stochastic optimal control." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/462.

Full text
Abstract:
The concept of the backward stochastic differential equation (BSDE) was initially introduced by Bismut when studying the stochastic optimal control problem, and it has since been applied to describe a variety of problems, particularly in finance. After the fundamental work by Pardoux and Peng, who proved the well-posedness of the nonlinear BSDE, the BSDE has been investigated intensively for both theoretical and practical purposes. In this thesis, we are concerned with a class of numerical methods for solving BSDEs, especially the one proposed by Zhao et al. For this method, the convergence theory of the semi-discrete scheme (the scheme that discretizes the equation only in time) has already been established; we shall further provide the analysis for the fully discrete scheme (the scheme that discretizes in both time and space). Moreover, using the BSDE as the adjoint equation, we shall construct a numerical method for solving the stochastic optimal control problem. We will discuss the situation when the control is deterministic as well as when the control is feedback.
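As a rough illustration of the class of methods the abstract refers to, the sketch below implements a generic explicit backward Euler scheme with least-squares Monte Carlo conditional expectations for a toy linear BSDE. It is not the scheme of Zhao et al. analysed in the thesis; the driver, terminal condition and all parameter values are assumptions chosen only so that the result can be checked against a closed-form value.

```python
import numpy as np

# Toy BSDE  dY_t = -f(Y_t, Z_t) dt + Z_t dW_t,  Y_T = g(W_T),  with f(y, z) = -r*y.
# Generic backward Euler + least-squares Monte Carlo; illustrative, not the thesis scheme.
rng = np.random.default_rng(1)
T, N, M, r = 1.0, 50, 100_000, 0.05
dt = T / N
g = lambda w: np.maximum(w, 0.0)          # terminal condition
f = lambda y, z: -r * y                   # driver

dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.cumsum(dW, axis=1)                 # W[:, n] is the Brownian path at time t_{n+1}

Y = g(W[:, -1])
for n in range(N - 1, -1, -1):
    x = W[:, n - 1] if n > 0 else np.zeros(M)          # Brownian state at time t_n
    basis = np.vander(x, 4)                             # cubic polynomial regression basis
    coefZ, *_ = np.linalg.lstsq(basis, Y * dW[:, n] / dt, rcond=None)
    Z = basis @ coefZ                                   # approximates E_n[Y_{n+1} dW_n] / dt
    coefY, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    EY = basis @ coefY                                  # approximates E_n[Y_{n+1}]
    Y = EY + f(EY, Z) * dt                              # explicit backward Euler step

# For this linear driver, Y_0 should be close to exp(-r*T) * E[g(W_T)].
print(Y.mean(), np.exp(-r * T) * g(W[:, -1]).mean())
```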
APA, Harvard, Vancouver, ISO, and other styles
10

Hashad, Atalla I. "Analysis of non-Gaussian processes using the Wiener model of discrete nonlinear systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA297343.

Full text
Abstract:
Dissertation (Ph.D. in Electrical Engineering), Naval Postgraduate School, December 1994.
"December 1994." Dissertation supervisor(s): Charles W. Therrien. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
11

Parra, Rojas César. "Intrinsic fluctuations in discrete and continuous time models." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/intrinsic-fluctuations-in-discrete-and-continuous-time-models(d7006a2b-1496-44f2-8423-1f2fa72be1a5).html.

Full text
Abstract:
This thesis explores the stochastic features of models of ecological systems in discrete and in continuous time. Our interest lies in models formulated at the microscale, from which a mesoscopic description can be derived. The stochasticity present in the models, constructed in this way, is intrinsic to the systems under consideration and stems from their finite size. We start by exploring a susceptible-infectious-recovered model for epidemic spread on a network. We are interested in the case where the connectivity, or degree, of the individuals is characterised by a very broad, or heterogeneous, distribution, and in the effects of stochasticity on the dynamics, which may depart wildly from that of a homogeneous population. The model at the mesoscale corresponds to a system of stochastic differential equations with a very large number of degrees of freedom which can be reduced to a two-dimensional model in its deterministic limit. We show how this reduction can be carried over to the stochastic case by exploiting a time-scale separation in the deterministic system and carrying out a fast-variable elimination. We use simulations to show that the temporal behaviour of the epidemic obtained from the reduced stochastic model yields reasonably good agreement with the microscopic model under the condition that the maximum allowed degree that individuals can have is not too close to the population size. This is illustrated using time series, phase diagrams and the distribution of epidemic sizes. The general mesoscopic theory used in continuous-time models has only very recently been developed for discrete-time systems in one variable. Here, we explore this one-dimensional theory and find that, in contrast to the continuous-time case, large jumps can occur between successive iterates of the process, and this translates at the mesoscale into the need for specifying `boundary' conditions everywhere outside of the system. We discuss these and how to implement them in the stochastic difference equation in order to obtain results which are consistent with the microscopic model. We then extend the theoretical framework to make it applicable to systems containing an arbitrary number of degrees of freedom. In addition, we extend a number of analytical results from the one-dimensional stochastic difference equation to arbitrary dimension, for the distribution of fluctuations around fixed points, cycles and quasi-periodic attractors of the corresponding deterministic map. We also derive new expressions, describing the autocorrelation functions of the fluctuations, as well as their power spectrum. From the latter, we characterise the appearance of noise-induced oscillations in systems of dimension greater than one, which have been previously observed in continuous-time systems and are known as quasi-cycles. Finally, we explore the ability of intrinsic noise to induce chaotic behaviour in the system for parameter values for which the deterministic map presents a non-chaotic attractor; we find that this is possible for periodic, but not for quasi-periodic, states.
APA, Harvard, Vancouver, ISO, and other styles
12

Frencl, Victor Baptista 1983. "Técnicas de filtragem utilizando processos com saltos markovianos aplicados ao rastreamento de alvos móveis." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260016.

Full text
Abstract:
Advisors: João Bosco Ribeiro do Val, Rafael Santos Mendes
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This dissertation studies the maneuvering target tracking problem through dynamic system modelling using Markovian jumps in the transitions between models, recursive stochastic filters and filtering techniques. Two types of dynamic models were studied and analysed: the constant velocity model and the constant turn model. Based on these, some variations were developed. Observation models were also studied, and the inclusion of the radial velocity in the target observations was proposed. The filters studied were the extended Kalman filter, which deals with nonlinear mathematical models, and the BLUE filter, which handles linear dynamics and observation models involving coordinate conversions. The interacting multiple model filtering technique, which involves switching between filters, and the particle filter, which is based on Monte Carlo simulations, were also studied, and some variations of these techniques were proposed. A methodology based on numerical simulations in MATLAB was developed to compare the performance of the filtering techniques proposed from these studies.
Master's degree
Automation
Master in Electrical Engineering
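The abstract above relies on standard tracking filters such as the Kalman filter. The following minimal sketch shows a generic two-dimensional constant-velocity Kalman filter of the kind such work builds on; it is not the dissertation's implementation, and all matrices and measurement values are illustrative.

```python
import numpy as np

# Generic constant-velocity Kalman filter; state is [x, y, vx, vy], position-only measurements.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)        # constant-velocity transition matrix
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)          # measurement matrix
Q = 0.01 * np.eye(4)                               # process noise covariance (illustrative)
R = 1.0 * np.eye(2)                                # measurement noise covariance (illustrative)

x = np.zeros(4)
P = 10.0 * np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([2.1, 1.1]), np.array([2.9, 1.4])]:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
print(x)   # filtered position and velocity estimate
```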
APA, Harvard, Vancouver, ISO, and other styles
13

Chao, Chon Ip. "The simulations of Levy processes and stochastic volatility models." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b2130012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Breen, Barbara J. "Computational nonlinear dynamics: monostable stochastic resonance and a bursting neuron model." Diss., Available online, Georgia Institute of Technology, 2004, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180036/unrestricted/breen%5Fbarbara%5Fj%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Nouri, Suhila Lynn. "Expected maximum drawdowns under constant and stochastic volatility." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-151319/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Eberz-Wagner, Dorothea M. "Discrete growth models." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Britton, Matthew Scott. "Stochastic task scheduling in time-critical information delivery systems." Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phb8629.pdf.

Full text
Abstract:
"January 2003" Includes bibliographical references (leaves 120-129) Presents performance analyses of dynamic, stochastic task scheduling policies for a real- time-communications system where tasks lose value as they are delayed in the system.
APA, Harvard, Vancouver, ISO, and other styles
18

Ortiz, Olga L. "Stochastic inventory control with partial demand observability." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22551.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Co-Chair: Alan L. Erera; Committee Co-Chair: Chelsea C. White III; Committee Member: Julie Swann; Committee Member: Paul Griffin; Committee Member: Soumen Ghosh.
APA, Harvard, Vancouver, ISO, and other styles
19

Dyson, Louise. "Mathematical models of cranial neural crest cell migration." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:66955fb9-691f-4d27-ad26-39bb2b089c64.

Full text
Abstract:
From the developing embryo to the evacuation of football stadiums, the migration and movement of populations of individuals is a vital part of human life. Such movement often occurs in crowded conditions, where the space occupied by each individual impacts on the freedom of others. This thesis aims to analyse and understand the effects of occupied volume (volume exclusion) on the movement of the individual and the population. We consider, as a motivating system, the rearrangement of individuals required to turn a clump of cells into a functioning embryo. Specifically, we consider the migration of cranial neural crest cells in the developing chick embryo. Working closely with experimental collaborators we construct a hybrid model of the system, consisting of a continuum chemoattractant and individual-based cell description and find that multiple cell phenotypes are required for successful migration. In the crowded environment of the migratory system, volume exclusion is highly important and significantly enhances the speed of cell migration in our model, whilst reducing the numbers of individuals that can enter the domain. The developed model is used to make experimental predictions, that are tested in vivo, using cycles of modelling and experimental work to give greater insight into the biological system. Our formulated model is computational, and is thus difficult to analyse whilst considering different parameter regimes. The second part of the thesis is driven by the wish to systematically analyse our model. As such, it concentrates on developing new techniques to derive continuum equations from diffusive and chemotactic individual-based and hybrid models in one and two spatial dimensions with the incorporation of volume exclusion. We demonstrate the accuracy of our techniques under different parameter regimes and using different mechanisms of movement. In particular, we show that our derived continuum equations almost always compare better to data averaged over multiple simulations than the equivalent equations without volume exclusion. Thus we establish that volume exclusion has a substantial effect on the evolution of a migrating population.
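The role of volume exclusion described above can be illustrated with a very small individual-based simulation. The sketch below (not the thesis's hybrid chemoattractant-cell model; every parameter is illustrative) moves agents on a one-dimensional lattice and aborts any move into an occupied site.

```python
import numpy as np

# Excluded-volume random walk on a 1-D lattice: an attempted hop is aborted
# if the target site is already occupied. Illustrative parameters only.
rng = np.random.default_rng(2)
L, n_agents, n_steps = 200, 40, 50_000
pos = rng.choice(L // 4, size=n_agents, replace=False)   # start crowded at the left end
occupied = np.zeros(L, dtype=bool)
occupied[pos] = True

for _ in range(n_steps):
    i = rng.integers(n_agents)                       # pick an agent at random
    target = pos[i] + rng.choice([-1, 1])
    if 0 <= target < L and not occupied[target]:     # excluded volume: blocked if occupied
        occupied[pos[i]] = False
        pos[i] = target
        occupied[target] = True

# Crowding slows the spread of the group compared with independent random walkers.
print(pos.mean(), pos.std())
```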
APA, Harvard, Vancouver, ISO, and other styles
20

Ysusi, Mendoza Carla Mariana. "Estimation of the variation of prices using high-frequency financial data." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:1b520271-2a63-428d-b5a0-e7e9c4afdc66.

Full text
Abstract:
When high-frequency data is available, realised variance and realised absolute variation can be calculated from intra-day prices. In the context of a stochastic volatility model, realised variance and realised absolute variation can estimate the integrated variance and the integrated spot volatility respectively. A central limit theory enables us to do filtering and smoothing using model-based and model-free approaches in order to improve the precision of these estimators. When the log-price process involves a finite activity jump process, realised variance estimates the quadratic variation of both continuous and jump components. Other consistent estimators of integrated variance can be constructed on the basis of realised multipower variation, i.e., realised bipower, tripower and quadpower variation. These objects are robust to jumps in the log-price process. Therefore, given adequate asymptotic assumptions, the difference between realised multipower variation and realised variance can provide a tool to test for jumps in the process. Realised variance becomes biased in the presence of market microstructure effect, meanwhile realised bipower, tripower and quadpower variation are more robust in such a situation. Nevertheless there is always a trade-off between bias and variance; bias is due to market microstructure noise when sampling at high frequencies and variance is due to the asymptotic assumptions when sampling at low frequencies. By subsampling and averaging realised multipower variation this effect can be reduced, thereby allowing for calculations with higher frequencies.
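The estimators named in the abstract are straightforward to compute from intra-day returns. The following sketch, using simulated returns with a single artificial jump and illustrative parameters, computes realised variance and realised bipower variation and shows why their difference can signal jumps.

```python
import numpy as np

# Realised variance (RV) and realised bipower variation (BV) from simulated intra-day
# log-returns; a single jump is added so that RV picks it up while BV stays robust.
rng = np.random.default_rng(3)
n, sigma = 288, 0.01                      # e.g. 5-minute returns over one trading day
r = rng.normal(0.0, sigma / np.sqrt(n), size=n)
r[150] += 0.02                            # one artificial jump

rv = np.sum(r**2)                                           # realised variance
bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))   # realised bipower variation

# RV estimates integrated variance plus squared jumps; BV is (asymptotically) robust to
# jumps, so RV - BV can be used as the basis of a jump test, as the abstract describes.
print(rv, bv, rv - bv)
```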
APA, Harvard, Vancouver, ISO, and other styles
21

Merino, Fernández Raúl. "Option Price Decomposition for Local and Stochastic Volatility Jump Diffusion Models." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/671682.

Full text
Abstract:
In this thesis, an option price decomposition for local and stochastic volatility jump diffusion models is studied. On the one hand, we generalise and extend the Alòs decomposition to be used in a wide variety of models such as a general stochastic volatility model, a stochastic volatility jump diffusion model with finite activity or a rough volatility model. Furthermore, we note that in the case of local volatility models, specifically, spot-dependent models, a new decomposition formula must be used to obtain good numerical results. In particular, we study the CEV model. On the other hand, we observe that the approximation formula can be improved by using the decomposition formula recursively. Using this decomposition method, the call price can be transformed into a Taylor type formula containing an infinite series with stochastic terms. New approximation formulae are obtained in the Heston model case, finding better approximations.
APA, Harvard, Vancouver, ISO, and other styles
22

Bruna, Maria. "Excluded-volume effects in stochastic models of diffusion." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:020c2d3e-5fef-478c-9861-553cd310daf5.

Full text
Abstract:
Stochastic models describing how interacting individuals give rise to collective behaviour have become a widely used tool across disciplines, ranging from biology to physics to social sciences. Continuum population-level models based on partial differential equations for the population density can be a very useful tool (when, for large systems, particle-based models become computationally intractable), but the challenge is to predict the correct macroscopic description of the key attributes at the particle level (such as interactions between individuals and evolution rules). In this thesis we consider the simple class of models consisting of diffusive particles with short-range interactions. It is relevant to many applications, such as colloidal systems and granular gases, and also for more complex systems such as diffusion through ion channels, biological cell populations and animal swarms. To derive the macroscopic model of such systems, previous studies have used ad hoc closure approximations, often generating errors. Instead, we provide a new systematic method based on matched asymptotic expansions to establish the link between the individual- and the population-level models. We begin by deriving the population-level model of a system of identical Brownian hard spheres. The result is a nonlinear diffusion equation for the one-particle density function with excluded-volume effects enhancing the overall collective diffusion rate. We then expand this core problem in several directions. First, for a system with two types of particles (two species) we obtain a nonlinear cross-diffusion model. This model captures both alternative notions of diffusion, the collective diffusion and the self-diffusion, and can be used to study diffusion through obstacles. Second, we study the diffusion of finite-size particles through confined domains such as a narrow channel or a Hele–Shaw cell. In this case the macroscopic model depends on a confinement parameter and interpolates between severe confinement (e.g., single-file diffusion in the narrow channel case) and an unconfined situation. Finally, the analysis for diffusive soft spheres, particles with soft-core repulsive potentials, yields an interaction-dependent non-linear term in the diffusion equation.
APA, Harvard, Vancouver, ISO, and other styles
23

Tsujimoto, Tsunehiro. "Calibration of the chaotic interest rate model." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/2568.

Full text
Abstract:
In this thesis we establish a relationship between the Potential Approach to interest rates and the Market Models. This relationship allows us to derive the dynamics of forward LIBOR rates and forward swap rates by modelling the state price density. It means that we are able to secure the arbitrage-free condition and positive interest rate feature when we model the volatility drifts of those dynamics. On the other hand, we develop the Potential Approach, particularly the Hughston-Rafailidis Chaotic Interest Rate Model. The early argument enables us to infer that the Chaos Models belong to the Stochastic Volatility Market Models. In particular, we propose One-variable Chaos Models with the application of exponential polynomials. This maintains the generality of the Chaos Models and performs well for yield curves comparing with the Nelson-Siegel Form and the Svensson Form. Moreover, we calibrate the One-variable Chaos Model to European Caplets and European Swaptions. We show that the One-variable Chaos Models can reproduce the humped shape of the term structure of caplet volatility and also the volatility smile/skew curve. The calibration errors are small compared with the Lognormal Forward LIBOR Model, the SABR Model, traditional Short Rate Models, and other models under the Potential Approach. After the calibration, we introduce some new interest rate models under the Potential Approach. In particular, we suggest a new framework where the volatility drifts can be indirectly modelled from the short rate via the state price density.
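The abstract uses the Nelson-Siegel form as a yield-curve benchmark. A minimal sketch of that form is given below; the parameter values are illustrative, not fitted to any data set.

```python
import numpy as np

# The Nelson-Siegel yield-curve form used as a benchmark in the abstract.
def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Zero-coupon yield at maturity tau (years)."""
    x = tau / lam
    loading = (1 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

maturities = np.array([0.25, 1, 2, 5, 10, 30], dtype=float)
print(nelson_siegel(maturities, beta0=0.04, beta1=-0.02, beta2=0.01, lam=2.0))
```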
APA, Harvard, Vancouver, ISO, and other styles
24

Franz, Benjamin. "Recent modelling frameworks for systems of interacting particles." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:ac76d159-4cdd-40c9-b378-6ea1faf48aed.

Full text
Abstract:
In this thesis we study three different modelling frameworks for biological systems of dispersal and combinations thereof. The three frameworks involved are individual-based models, group-level models in the form of partial differential equations (PDEs) and robot swarms. In the first two chapters of the thesis, we present ways of coupling individual based models with PDEs in so-called hybrid models, with the aim of achieving improved performance of simulations. Two classes of such hybrid models are discussed that allow an efficient simulation of multi-species systems of dispersal with reactions, but involve individual resolution for certain species and in certain parts of a computational domain if desired. We generally consider two types of example systems: bacterial chemotaxis and reaction-diffusion systems, and present results in the respective application area as well as general methods. The third chapter of this thesis introduces swarm robotic experiments as an additional tool to study systems of dispersal. In general, those experiments can be used to mimic animal behaviour and to study the impact of local interactions on the group-level dynamics. We concentrate on a target finding problem for groups of robots. We present how PDE descriptions can be adjusted to incorporate the finite turning times observed in the robotic system and that the adjusted models match well with experimental data. In the fourth and last chapter, we consider interactions between robots in the form of hard-sphere collisions and again derive adjusted PDE descriptions. We show that collisions have a significant impact on the speed with which the group spreads across a domain. Throughout these two chapters, we apply a combination of experiments, individual-based simulations and PDE descriptions to improve our understanding of interactions in systems of dispersal.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Wen-Kai. "Application of stochastic differential games and real option theory in environmental economics." Thesis, University of St Andrews, 2009. http://hdl.handle.net/10023/893.

Full text
Abstract:
This thesis presents several problems based on papers written jointly by the author and Dr. Christian-Oliver Ewald. Firstly, the author extends the model presented by Fershtman and Nitzan (1991), which studies a deterministic differential public good game. Two types of volatility are considered. In the first case the volatility of the diffusion term is dependent on the current level of public good, while in the second case the volatility is dependent on the current rate of public good provision by the agents. The result in the latter case is qualitatively different from the first one. These results are discussed in detail, along with numerical examples. Secondly, two existing lines of research in game theoretic studies of fisheries are combined and extended. The first line of research is the inclusion of the aspect of predation and the consideration of multi-species fisheries within classical game theoretic fishery models. The second line of research includes continuous time and uncertainty. This thesis considers a two species fishery game and compares the results of this with several cases. Thirdly, a model of a fishery is developed in which the dynamic of the unharvested fish population is given by the stochastic logistic growth equation and it is assumed that the fishery harvests the fish population following a constant effort strategy. Explicit formulas for optimal fishing effort are derived in problems considered and the effects of uncertainty, risk aversion and mean reversion speed on fishing efforts are investigated. Fourthly, a Dixit and Pindyck type irreversible investment problem in continuous time is solved, using the assumption that the project value follows a Cox-Ingersoll- Ross process. This solution differs from the two classical cases of geometric Brownian motion and geometric mean reversion and these differences are examined. The aim is to find the optimal stopping time, which can be applied to the problem of extracting resources.
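The fishery model described above (stochastic logistic growth harvested at constant effort) can be simulated directly. The sketch below is a generic Euler-Maruyama discretisation with illustrative parameter values, not the thesis's calibrated model.

```python
import numpy as np

# Euler-Maruyama simulation of stochastic logistic growth under constant-effort harvesting:
#   dX = (r X (1 - X/K) - q E X) dt + sigma X dW.   Parameters are illustrative.
rng = np.random.default_rng(4)
r, K, q, E, sigma = 1.0, 1.0, 0.5, 0.6, 0.2
T, N = 50.0, 5000
dt = T / N

x = np.empty(N + 1)
x[0] = 0.5
for n in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))
    drift = r * x[n] * (1 - x[n] / K) - q * E * x[n]
    x[n + 1] = max(x[n] + drift * dt + sigma * x[n] * dW, 0.0)

harvest_rate = q * E * x.mean()       # long-run average harvest under this effort
print(x[-1], harvest_rate)
```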
APA, Harvard, Vancouver, ISO, and other styles
26

Chipindirwi, Simbarashe. "Analysis of a simple gene expression model." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Chemistry and Biochemistry, c2012, 2012. http://hdl.handle.net/10133/3251.

Full text
Abstract:
Gene expression is random owing to the low copy numbers of molecules in a living cell, and the best way to study it is with a stochastic method, specifically the chemical master equation. The method is used here to derive analytically the invariant probability distributions, and expressions for the moments and noise strength, for a simple gene model without feedback. Sensitivity analysis, emphasizing particularly the dependence of the probability distributions, the moments, and the noise strength on the parameters, is carried out using Metabolic Control Analysis, which uses control coefficients that measure the response of observables when parameters change. Bifurcation analysis is also carried out. The results show that the number of mRNA molecules follows a hypergeometric probability distribution, and that noise decreases as the number of these molecules increases. Metabolic Control Analysis was successfully extended to genetic control mechanisms, with the obtained control coefficients satisfying a summation theorem. The system undergoes stochastic bifurcations as parameters change.
xii, 86 leaves : ill. ; 29 cm
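A stochastic treatment of this kind is often illustrated with the simplest birth-death gene-expression model. The sketch below runs a Gillespie simulation of constant-rate mRNA production with linear degradation; it is a generic example rather than the thesis's specific model, and the rate constants are illustrative.

```python
import numpy as np

# Gillespie simulation of the simplest birth-death gene-expression model:
# mRNA produced at constant rate k and degraded at rate g per molecule.
rng = np.random.default_rng(5)
k, g = 2.0, 0.1                      # production and degradation rate constants
t, t_end, m = 0.0, 5000.0, 0
states, dwell = [], []

while t < t_end:
    a1, a2 = k, g * m                # reaction propensities
    a0 = a1 + a2
    tau = rng.exponential(1.0 / a0)
    states.append(m)
    dwell.append(tau)                # time spent in the current state
    t += tau
    m += 1 if rng.random() < a1 / a0 else -1

states, dwell = np.array(states, float), np.array(dwell)
mean = np.average(states, weights=dwell)
var = np.average((states - mean) ** 2, weights=dwell)
# For this birth-death model the stationary law is Poisson(k/g), so the Fano factor is ~1.
print(mean, var, var / mean)
```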
APA, Harvard, Vancouver, ISO, and other styles
27

Rosser, Gabriel A. "Mathematical modelling and analysis of aspects of bacterial motility." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:1af98367-aa2f-4af3-9344-8c361311b553.

Full text
Abstract:
The motile behaviour of bacteria underlies many important aspects of their actions, including pathogenicity, foraging efficiency, and ability to form biofilms. In this thesis, we apply mathematical modelling and analysis to various aspects of the planktonic motility of flagellated bacteria, guided by experimental observations. We use data obtained by tracking free-swimming Rhodobacter sphaeroides under a microscope, taking advantage of the availability of a large dataset acquired using a recently developed, high-throughput protocol. A novel analysis method using a hidden Markov model for the identification of reorientation phases in the tracks is described. This is assessed and compared with an established method using a computational simulation study, which shows that the new method has a reduced error rate and less systematic bias. We proceed to apply the novel analysis method to experimental tracks, demonstrating that we are able to successfully identify reorientations and record the angle changes of each reorientation phase. The analysis pipeline developed here is an important proof of concept, demonstrating a rapid and cost-effective protocol for the investigation of myriad aspects of the motility of microorganisms. In addition, we use mathematical modelling and computational simulations to investigate the effect that the microscope sampling rate has on the observed tracking data. This is an important, but often overlooked aspect of experimental design, which affects the observed data in a complex manner. Finally, we examine the role of rotational diffusion in bacterial motility, testing various models against the analysed data. This provides strong evidence that R. sphaeroides undergoes some form of active reorientation, in contrast to the mainstream belief that the process is passive.
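The hidden-Markov-model idea described above can be sketched in a few lines: a two-state model (run versus reorientation) decoded with the Viterbi algorithm from a toy speed series. This is only an illustration of the approach, not the thesis's model; the emission model, transition probabilities and data are all assumptions.

```python
import numpy as np

# Two-state HMM (run vs. reorientation) decoded with the Viterbi algorithm
# from a toy speed series. All values are illustrative.
speeds = np.array([5.1, 4.8, 5.0, 1.2, 0.9, 1.1, 4.9, 5.2])

states = ["run", "reorient"]
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1],       # P(next state | current state)
                    [0.3, 0.7]])
means, sd = np.array([5.0, 1.0]), 0.5  # Gaussian emission model for speed

def log_emit(x):
    return -0.5 * ((x - means) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Viterbi recursion
V = log_start + log_emit(speeds[0])
back = []
for x in speeds[1:]:
    scores = V[:, None] + log_trans          # scores[i, j]: come from state i, go to state j
    back.append(scores.argmax(axis=0))
    V = scores.max(axis=0) + log_emit(x)

path = [int(V.argmax())]
for b in reversed(back):
    path.append(int(b[path[-1]]))
print([states[s] for s in reversed(path)])   # decoded run / reorientation phases
```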
APA, Harvard, Vancouver, ISO, and other styles
28

Samaranayaka, Ari. "Environmental stochasticity and density dependence in animal population models." University of Otago. Department of Mathematics & Statistics, 2006. http://adt.otago.ac.nz./public/adt-NZDU20060907.114616.

Full text
Abstract:
Biological management of populations plays an indispensable role in all areas of population biology. In deciding between possible management options, one of the most important pieces of information required by population managers is the likely population status under possible management actions. Population dynamic models are the basic tool used in deriving this information. These models elucidate the complex processes underlying the population dynamics, and address the possible consequences/merits of management actions. These models are needed to guide the population towards desired/chosen management goals, and therefore allow managers to make informed decisions between alternative management actions. The reliability that can be placed on inferences drawn from a model about the fate of a population is undoubtedly dependent on how realistically the model represents the dynamic process of the population. The realistic representation of population characteristics in models has proved to be somewhat of a thorn in the side of population biologists. This thesis focuses in particular on ways to represent environmental stochasticity and density dependence in population models. Various approaches that are used in building environmental stochasticity into population models are reviewed. The most common approach represents the environmental variation by changes to demographic parameters that are assumed to follow a simple statistical distribution. For this purpose, a distribution is often selected on the basis of expert opinion, previous practice, and convenience. This thesis assesses the effect of this subjective choice of distribution on the model predictions, and develops some objective criteria for that selection based on ecological and statistical acceptability. The more commonly used distributions are compared as to their suitability, and some recommendations are made. Density dependence is usually represented in population models by specifying one or more of the vital rates as a function of population density. For a number of reasons, a population-specific function cannot usually be selected based on data. The thesis develops some ecologically-motivated criteria for identifying possible function(s) that could be used for a given population by matching functional properties to population characteristics when they are known. It also identifies a series of properties that should be present in a general function which could be suitable for modelling a population when relevant population characteristics are unknown. The suitability of functions that are commonly chosen for such purposes is assessed on this basis. I also evaluate the effect of the choice of a function on the resulting population trajectories. The case where the density dependence of one demographic rate is influenced by the density dependence of another is considered in some detail, as in some situations it can be modelled with little information in a relatively function-insensitive way. The findings of this research will help in embedding characteristics of animal populations into population dynamics models more realistically. Even though the findings are presented in the context of slow-growing long-lived animal populations, they are more generally applicable in all areas of biological management.
APA, Harvard, Vancouver, ISO, and other styles
29

Zararsiz, Zarife. "On an epidemic model given by a stochastic differential equation." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-5747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

North, Ben. "Learning dynamical models for visual tracking." Thesis, University of Oxford, 1998. http://ora.ox.ac.uk/objects/uuid:6ed12552-4c30-4d80-88ef-7245be2d8fb8.

Full text
Abstract:
Using some form of dynamical model in a visual tracking system is a well-known method for increasing robustness and indeed performance in general. Often, quite simple models are used and can be effective, but prior knowledge of the likely motion of the tracking target can often be exploited by using a specially-tailored model. Specifying such a model by hand, while possible, is a time-consuming and error-prone process. Much more desirable is for an automated system to learn a model from training data. A dynamical model learnt in this manner can also be a source of useful information in its own right, and a set of dynamical models can provide discriminatory power for use in classification problems. Methods exist to perform such learning, but are limited in that they assume the availability of 'ground truth' data. In a visual tracking system, this is rarely the case. A learning system must work from visual data alone, and this thesis develops methods for learning dynamical models while explicitly taking account of the nature of the training data --- they are noisy measurements. The algorithms are developed within two tracking frameworks. The Kalman filter is a simple and fast approach, applicable where the visual clutter is limited. The recently-developed Condensation algorithm is capable of tracking in more demanding situations, and can also employ a wider range of dynamical models than the Kalman filter, for instance multi-mode models. The success of the learning algorithms is demonstrated experimentally. When using a Kalman filter, the dynamical models learnt using the algorithms presented here produce better tracking when compared with those learnt using current methods. Learning directly from training data gathered using Condensation is an entirely new technique, and experiments show that many aspects of a multi-mode system can be successfully identified using very little prior information. Significant computational effort is required by the implementation of the methods, and there is scope for improvement in this regard. Other possibilities for future work include investigation of the strong links this work has with learning problems in other areas. Most notable is the study of the 'graphical models' commonly used in expert systems, where the ideas presented here promise to give insight and perhaps lead to new techniques.
APA, Harvard, Vancouver, ISO, and other styles
31

Nassar, Hiba. "Regularized Calibration of Jump-Diffusion Option Pricing Models." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-9063.

Full text
Abstract:
An important issue in finance is model calibration. The calibration problem is the inverse of the option pricing problem. Calibration is performed on a set of option prices generated from a given exponential Lévy model. By numerical examples, it is shown that the usual formulation of the inverse problem via Non-linear Least Squares is an ill-posed problem. To achieve well-posedness of the problem, some regularization is needed. Therefore a regularization method based on relative entropy is applied.
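The structure of such a regularised calibration can be sketched as a penalised least-squares fit. The toy example below uses a quadratic penalty as a stand-in for the relative-entropy term and a made-up price map, so every ingredient (model, data, weights) is an assumption; it only illustrates how the penalty stabilises the ill-posed fit.

```python
import numpy as np
from scipy.optimize import minimize

# Regularised calibration sketch: least-squares misfit to "market" prices plus a
# penalty pulling the parameters toward a prior model. Everything here is illustrative.
rng = np.random.default_rng(6)
strikes = np.linspace(80, 120, 9)
theta_true = np.array([0.2, -0.001])
model_price = lambda theta, k: theta[0] * np.ones_like(k) + theta[1] * (k - 100.0) ** 2
market = model_price(theta_true, strikes) + rng.normal(0, 0.005, strikes.size)

theta_prior = np.array([0.25, 0.0])
alpha = 0.1                                        # regularisation weight

def objective(theta):
    misfit = np.sum((model_price(theta, strikes) - market) ** 2)
    penalty = np.sum((theta - theta_prior) ** 2)   # quadratic stand-in for relative entropy
    return misfit + alpha * penalty

res = minimize(objective, theta_prior)
print(res.x)    # calibrated parameters, pulled toward the prior by the penalty
```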
APA, Harvard, Vancouver, ISO, and other styles
32

Janssen, Arend. "Order book models, signatures and numerical approximations of rough differential equations." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:264e96b3-f449-401b-8768-337acab59cab.

Full text
Abstract:
We construct a mathematical model of an order driven market where traders can submit limit orders and market orders to buy and sell securities. We adapt the notion of no free lunch of Harrison and Kreps and Jouini and Kallal to our setting and we prove a no-arbitrage theorem for the model of the order driven market. Furthermore, we compute signatures of order books of different financial markets. Signatures, i.e. the full sequence of definite iterated integrals of a path, are one of the fundamental elements of the theory of rough paths. The theory of rough paths provides a framework to describe the evolution of dynamical systems that are driven by rough signals, including rough paths based on Brownian motion and fractional Brownian motion (see the work of Lyons). We show how we can obtain the solution of a polynomial differential equation and its (truncated) signature from the signature of the driving signal and the initial value. We also present and analyse an ODE method for the numerical solution of rough differential equations. We derive error estimates and we prove that it achieves the same rate of convergence as the corresponding higher order Euler schemes studied by Davie and Friz and Victoir. At the same time, it enhances stability. The method has been implemented for the case of polynomial vector fields as part of the CoRoPa software package which is available at http://coropa.sourceforge.net. We describe both the algorithm and the implementation and we show by giving examples how it can be used to compute the pathwise solution of stochastic rough differential equations driven by Brownian rough paths and fractional Brownian rough paths.
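The signature terms mentioned above are iterated integrals of the path, and for piecewise-linear paths the first two levels can be computed exactly segment by segment. The sketch below does this for a toy two-dimensional path; the path itself is illustrative, and this is not the thesis's order-book code.

```python
import numpy as np

# Level-1 and level-2 signature terms (iterated integrals) of a piecewise-linear path.
path = np.array([[0.0, 0.0],
                 [1.0, 0.5],
                 [1.5, 2.0],
                 [3.0, 2.5]])          # illustrative path in R^2, one row per time point

d = path.shape[1]
S1 = np.zeros(d)                        # level 1: the total increment
S2 = np.zeros((d, d))                   # level 2: iterated integrals over s < t
for dx in np.diff(path, axis=0):
    S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)   # exact update for a linear segment
    S1 += dx

print(S1)
print(S2)                               # the antisymmetric part of S2 is the Lévy area
```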
APA, Harvard, Vancouver, ISO, and other styles
33

Frencl, Victor Baptista 1983. "Estudo da dinâmica de indivíduos para rastreamento multi-alvo utilizando conjuntos aleatórios finitos." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260839.

Full text
Abstract:
Advisor: João Bosco Ribeiro do Val
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The target tracking problem is handled in different ways in the literature, either by developing more efficient mathematical models to reproduce the dynamics of movements, or by building stochastic filters that perform state estimation, such as of position and velocity. When the targets of interest are many individuals in motion, the literature lacks specific studies. Thus, the main objective of this thesis is to deepen the knowledge of individual tracking. In this scenario, there is a large and variable number of targets, which may arise spontaneously, group together or separate, in addition to false alarms immersed in the measurements. The theory of Random Finite Sets was studied, whose mathematical treatment is through the so-called Multi-Target Calculus. Stochastic filters were also studied from this point of view, the PHD and GM-PHD filters being the main ones. After establishing this theoretical basis, three proposals based on this problem were presented: motion models for individuals, a simulator of individual trajectories, and dynamic models for stochastic filtering. The first proposal consists of building probabilistic motion profiles for each individual. The second involves the creation of a trajectory simulator for individuals that is as faithful as possible to the real movements of a person, in scenarios with variations of terrain ranked by locomotion difficulty. Finally, the third proposal aims to create a dynamic model, combined and modified from models found in the literature, to be inserted into the stochastic filtering process. In the end, several tests and simulations were performed to test the filters' performance and to analyse the behaviour of the proposed mathematical models and probabilistic motion profiles.
Doctorate
Automation
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
34

Salimi-Khorshidi, Gholamreza. "Statistical models for neuroimaging meta-analytic inference." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:40a10327-7f36-42e7-8120-ae04bd8be1d4.

Full text
Abstract:
A statistical meta-analysis combines the results of several studies that address a set of related research hypotheses, thus increasing the power and reliability of the inference. Meta-analytic methods are over 50 years old and play an important role in science, pooling evidence from many trials to provide answers that any one trial would have insufficient samples to address. On the other hand, the number of neuroimaging studies is growing dramatically, with many of these publications containing conflicting results, or being based on only a small number of subjects. Hence there has been increasing interest in using meta-analysis methods to find consistent results for a specific functional task, or for predicting the results of a study that has not been performed directly. The current state of neuroimaging meta-analysis is limited to coordinate-based meta-analysis (CBMA), i.e., using only the coordinates of activation peaks that are reported by a group of studies, in order to "localize" the brain regions that respond to a certain type of stimulus. This class of meta-analysis suffers from a series of problems and hence cannot produce results as accurate as desired. In this research, we describe the problems from which existing CBMA methods suffer and introduce a hierarchical mixed-effects image-based meta-analysis (IBMA) solution that incorporates the sufficient statistics (i.e., voxel-wise effect size and its associated uncertainty) from each study. In order to improve the statistical-inference stage of our proposed IBMA method, we introduce a nonparametric technique that is capable of adjusting such an inference for spatial nonstationarity. Given that, in common practice, neuroimaging studies rarely provide the full image data, in an attempt to improve the existing CBMA techniques we introduce a fully automatic model-based approach that employs Gaussian-process regression (GPR) for estimating the meta-analytic statistic image from its corresponding sparse and noisy observations (i.e., the collected foci). To conclude, we introduce a new way to approach neuroimaging meta-analysis that enables the analysis to result in information such as "functional connectivity" and networks of the brain regions' interactions, rather than just localizing the functions.
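Gaussian-process regression, the tool named in the abstract, can be sketched in a few lines of linear algebra. The one-dimensional toy example below uses a squared-exponential kernel and made-up observations; it illustrates the generic GPR posterior, not the thesis's meta-analytic implementation.

```python
import numpy as np

# Gaussian-process regression with a squared-exponential kernel on toy 1-D data.
def rbf(a, b, length=1.0, amp=1.0):
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

x_obs = np.array([-3.0, -1.0, 0.5, 2.0])        # observed locations (e.g. reported foci)
y_obs = np.array([0.2, 1.1, 1.8, 0.4])          # noisy effect estimates
noise = 0.1
x_new = np.linspace(-4, 4, 9)

K = rbf(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
K_s = rbf(x_new, x_obs)
alpha = np.linalg.solve(K, y_obs)
mean = K_s @ alpha                               # posterior mean at new locations
cov = rbf(x_new, x_new) - K_s @ np.linalg.solve(K, K_s.T)
print(mean)
print(np.sqrt(np.diag(cov)))                     # posterior standard deviation
```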
APA, Harvard, Vancouver, ISO, and other styles
35

Silveira, Graciele Paraguaia 1982. "Métodos numéricos integrados à lógica Fuzzy e método estocástico para solução de EDP's = uma aplicação à dengue." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307567.

Full text
Abstract:
Advisors: Laécio Carvalho de Barros, Laércio Luis Vendite
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In this work we propose an integrated mathematical model of the SIR type (Susceptible, Infected and Recovered) to study the spatial and temporal evolution of dengue disease. The model consists of a system of partial differential equations whose numerical solutions were obtained by an explicit high-order hybrid scheme that incorporates fuzzy logic and a stochastic process. For the spatial discretization, we used a WENO-5 scheme (Weighted Essentially Non-Oscillatory, fifth order) for the non-smooth regions of the domain and high-order centred finite difference schemes for the smooth regions. In addition, a lifting scheme was constructed to classify regions as smooth or non-smooth. For the time evolution, we chose a third-order TVD (Total Variation Diminishing) Runge-Kutta method. The uncertain parameters related to the behaviour of Aedes aegypti were estimated by Fuzzy Rule-Based Systems. Such parameters depend on population habits, mosquito breeding sites, blood for the maturation of the eggs, and rainfall events. The rainfall variable has stochastic dependence on the sampled values and, for this reason, we chose a Markov chain method (order 2) to estimate the rainfall. Information on the behaviour of the disease and the conditions for the proliferation of the vector in the southern region of the city of Campinas was obtained from the Municipal Health Department, the Agronomic Institute (IAC) and epiGeo (Laboratory for Spatial Analysis of Epidemiological Data) of the Medical Sciences Faculty of UNICAMP. Simulations of various situations were performed to obtain scenarios regarding the spread of the disease, taking into account characteristics of the region studied. Finally, a model of the Takagi-Sugeno type, with fuzzy rules whose outputs are PDEs, was designed to analyse the dengue risk over the domain, based on a map of relative risk developed by researchers at epiGeo.
Doctorate
Applied Mathematics
Doctor in Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
36

McBryde, Emma Sue. "Mathematical and statistical modelling of infectious diseases in hospitals." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16330/.

Full text
Abstract:
Antibiotic resistant pathogens, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE), are an increasing burden on healthcare systems. Hospital acquired infections with these organisms lead to higher morbidity and mortality compared with the sensitive strains of the same species, and both VRE and MRSA are on the rise worldwide, including in Australian hospitals. Emerging community infectious diseases are also having an impact on hospitals. The Severe Acute Respiratory Syndrome virus (SARS-CoV) was noted for its propensity to spread throughout hospitals, and was contained largely through social distancing interventions including hospital isolation. A detailed understanding of the transmission of these and other emerging pathogens is crucial for their containment. The statistical inference and mathematical models used in this thesis aim to improve understanding of pathogen transmission by estimating the transmission rates of contagions and predicting the impact of interventions. Datasets used for these studies come from the Princess Alexandra Hospital in Brisbane, Australia and Shanxi province, mainland China. Epidemiological data on infection outbreaks are challenging to analyse due to the censored nature of infection transmission events. Most datasets record the time of symptom onset, but the transmission time is not observable. There are many ways of managing censored data; in this study we use Bayesian inference, with transmission times incorporated into the augmented dataset as latent variables. Hospital infection surveillance data is often much less detailed than data collected for epidemiological studies, often consisting of serial incidence or prevalence of patient colonisation with a resistant pathogen without individual patient event histories. Despite the lack of detailed data, transmission characteristics can be inferred from such a dataset using structured Hidden Markov Models (HMMs). Each new transmission in an epidemic increases the infection pressure on those remaining susceptible, hence infection outbreak data are serially dependent. Statistical methods that assume independence of infection events are misleading and prone to over-estimating the impact of infection control interventions. Structured mathematical models that include transmission pressure are essential. Mathematical models can also give insights into the potential impact of interventions. The complex interaction of different infection control strategies, and their likely impact on transmission, can be predicted using mathematical models. This dissertation uses modified or novel mathematical models that are specific to the pathogen and dataset being analysed. The first study estimates MRSA transmission in an Intensive Care Unit, using a structured four-compartment model, Bayesian inference and piecewise hazard methods. The model predicts the impact of interventions, such as changes to staff/patient ratios, ward size and decolonisation. A comparison of results of the stochastic and deterministic models is made and reasons for differences are given. The second study constructs a Hidden Markov Model to describe longitudinal data on weekly VRE prevalence. Transmission is assumed to be either patient-to-patient cross-transmission or sporadic (independent of cross-transmission), and parameters for each mode of acquisition are estimated from the data. The third study develops a new model with a compartment representing an environmental reservoir. Parameters for the model are gathered from literature sources and the implications of the environmental reservoir are explored. The fourth study uses a modified Susceptible-Exposed-Infectious-Removed (SEIR) model to analyse data from a SARS outbreak in Shanxi province, China. Infectivity is determined before and after interventions, as well as separately for hospitalised and community symptomatic SARS cases. Model diagnostics including sensitivity analysis, model comparison and bootstrapping are implemented.
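As a hedged illustration of the kind of transmission structure described above (cross-transmission at a rate proportional to the number of colonised patients plus a sporadic background rate), the following minimal sketch simulates weekly colonisation counts in a single ward; all parameter values and the function name simulate_ward are assumptions for the example, not the thesis models.

    import numpy as np

    def simulate_ward(n_beds=20, weeks=52, beta=0.01, nu=0.005,
                      discharge=0.2, import_prev=0.05, seed=1):
        """Weekly colonised counts in a ward of n_beds patients.

        beta: per-colonised-patient cross-transmission probability per week
        nu:   sporadic (background) acquisition probability per week
        discharge: weekly probability a patient is replaced by a new admission
        import_prev: probability a new admission is already colonised
        """
        rng = np.random.default_rng(seed)
        colonised = rng.random(n_beds) < import_prev  # initial colonisation status
        counts = []
        for _ in range(weeks):
            c = colonised.sum()
            # acquisition pressure combines cross-transmission and sporadic sources
            p_acq = 1.0 - (1.0 - beta) ** c * (1.0 - nu)
            new_acq = (~colonised) & (rng.random(n_beds) < p_acq)
            colonised = colonised | new_acq
            # discharged patients are replaced by admissions with import prevalence
            replaced = rng.random(n_beds) < discharge
            colonised[replaced] = rng.random(replaced.sum()) < import_prev
            counts.append(int(colonised.sum()))
        return counts

    print(simulate_ward())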
APA, Harvard, Vancouver, ISO, and other styles
37

McBryde, Emma Sue. "Mathematical and statistical modelling of infectious diseases in hospitals." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16330/1/Emma_McBryde_Thesis.pdf.

Full text
Abstract:
Antibiotic resistant pathogens, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE), are an increasing burden on healthcare systems. Hospital acquired infections with these organisms lead to higher morbidity and mortality compared with the sensitive strains of the same species, and both VRE and MRSA are on the rise worldwide, including in Australian hospitals. Emerging community infectious diseases are also having an impact on hospitals. The Severe Acute Respiratory Syndrome virus (SARS-CoV) was noted for its propensity to spread throughout hospitals, and was contained largely through social distancing interventions including hospital isolation. A detailed understanding of the transmission of these and other emerging pathogens is crucial for their containment. The statistical inference and mathematical models used in this thesis aim to improve understanding of pathogen transmission by estimating the transmission rates of contagions and predicting the impact of interventions. Datasets used for these studies come from the Princess Alexandra Hospital in Brisbane, Australia and Shanxi province, mainland China. Epidemiological data on infection outbreaks are challenging to analyse due to the censored nature of infection transmission events. Most datasets record the time of symptom onset, but the transmission time is not observable. There are many ways of managing censored data; in this study we use Bayesian inference, with transmission times incorporated into the augmented dataset as latent variables. Hospital infection surveillance data is often much less detailed than data collected for epidemiological studies, often consisting of serial incidence or prevalence of patient colonisation with a resistant pathogen without individual patient event histories. Despite the lack of detailed data, transmission characteristics can be inferred from such a dataset using structured Hidden Markov Models (HMMs). Each new transmission in an epidemic increases the infection pressure on those remaining susceptible, hence infection outbreak data are serially dependent. Statistical methods that assume independence of infection events are misleading and prone to over-estimating the impact of infection control interventions. Structured mathematical models that include transmission pressure are essential. Mathematical models can also give insights into the potential impact of interventions. The complex interaction of different infection control strategies, and their likely impact on transmission, can be predicted using mathematical models. This dissertation uses modified or novel mathematical models that are specific to the pathogen and dataset being analysed. The first study estimates MRSA transmission in an Intensive Care Unit, using a structured four-compartment model, Bayesian inference and piecewise hazard methods. The model predicts the impact of interventions, such as changes to staff/patient ratios, ward size and decolonisation. A comparison of results of the stochastic and deterministic models is made and reasons for differences are given. The second study constructs a Hidden Markov Model to describe longitudinal data on weekly VRE prevalence. Transmission is assumed to be either patient-to-patient cross-transmission or sporadic (independent of cross-transmission), and parameters for each mode of acquisition are estimated from the data. The third study develops a new model with a compartment representing an environmental reservoir. Parameters for the model are gathered from literature sources and the implications of the environmental reservoir are explored. The fourth study uses a modified Susceptible-Exposed-Infectious-Removed (SEIR) model to analyse data from a SARS outbreak in Shanxi province, China. Infectivity is determined before and after interventions, as well as separately for hospitalised and community symptomatic SARS cases. Model diagnostics including sensitivity analysis, model comparison and bootstrapping are implemented.
APA, Harvard, Vancouver, ISO, and other styles
38

Burgain, Pierrick Antoine. "On the control of airport departure operations." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37261.

Full text
Abstract:
This thesis is focused on airport departure operations; its objective is to assign a value to surface surveillance information within a collaborative framework. The research develops a cooperative concept that improves the control of departure operations at busy airports and evaluates its merit using a classical and widely accepted airport departure model. The research then assumes departure operations are collaboratively controlled and develops a stochastic model of taxi operations on the airport surface. Finally, this study investigates the effect of feeding back different levels of surface surveillance information to the departure control process. More specifically, it examines the environmental and operational impact of aircraft surface location information on the taxi clearance process. Benefits are evaluated by measuring and comparing engine emissions for given runway utilization rates.
APA, Harvard, Vancouver, ISO, and other styles
39

Van, Zyl Verena Helen. "Searching for histogram patterns due to macroscopic fluctuations in financial time series." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/3078.

Full text
Abstract:
Thesis (MComm (Business Management))--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: This study aims to investigate whether the phenomena found by Shnoll et al. when applying histogram pattern analysis techniques to stochastic processes from chemistry and physics are also present in financial time series, particularly exchange rate and index data. The phenomena are related to the fine structure of non-smoothed frequency distributions drawn from statistically insufficient samples of changes and their patterns in time. Shnoll et al. use the notion of macroscopic fluctuations to explain the behaviour of sequences of histograms. Histogram patterns in time adhere to several laws that could not be detected when using time series analysis methods. In this study general approaches are reviewed that may be used to model financial markets and the volatility of price processes in particular. Special emphasis is placed on the modelling of high-frequency data sets and exchange rate data. Following previous studies of the Shnoll phenomena from other fields, different steps of the histogram sequence analysis are carried out to determine whether the findings of Shnoll et al. could also be applied to financial market data. The findings of this thesis widen the understanding of time-varying volatility and can aid in financial risk measurement and management. Outcomes of the study include an investigation of time series characteristics in terms of the formation of discrete states, the detection of the near zone effect as proclaimed by Shnoll et al., the periodic recurrence of histogram shapes as well as the synchronous variation in data sets measured in the same time intervals.
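As a hedged sketch of the histogram-sequence idea mentioned above - building non-smoothed histograms from short consecutive windows of changes and comparing their shapes - the code below uses a naive Pearson-correlation similarity; the window length, bin count and similarity criterion are invented for illustration and are not Shnoll's procedure.

    import numpy as np

    def histogram_sequence(changes, window=60, bins=20):
        """Split a series of changes into consecutive windows and build one
        non-smoothed histogram (fixed common bin edges) per window."""
        changes = np.asarray(changes, dtype=float)
        edges = np.linspace(changes.min(), changes.max(), bins + 1)
        n_windows = len(changes) // window
        hists = [np.histogram(changes[i * window:(i + 1) * window], bins=edges)[0]
                 for i in range(n_windows)]
        return np.array(hists)

    def shape_similarity(h1, h2):
        """Naive similarity between two histogram shapes (Pearson correlation)."""
        return np.corrcoef(h1, h2)[0, 1]

    # Example on synthetic 'changes'
    rng = np.random.default_rng(0)
    hists = histogram_sequence(rng.standard_normal(6000))
    print(shape_similarity(hists[0], hists[1]))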
APA, Harvard, Vancouver, ISO, and other styles
40

Abramowicz, Konrad. "Numerical analysis for random processes and fields and related design problems." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46156.

Full text
Abstract:
In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing the performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D). In Paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed. In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields and, for the fields from this class, we provide exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided. In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity. In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain the estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.
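As a hedged, generic illustration of stratified Monte Carlo quadrature over the unit hypercube (a sketch under our own assumptions, not the estimator analysed in Paper C), the snippet below draws one uniform point in each cell of a regular grid and averages the function values.

    import numpy as np

    def stratified_mc_quadrature(f, n_per_dim=32, dim=2, seed=0):
        """Stratified Monte Carlo estimate of the integral of f over [0, 1]^dim.

        One point is sampled uniformly inside each cell of a regular grid with
        n_per_dim strata per coordinate, and the cell values are averaged.
        """
        rng = np.random.default_rng(seed)
        # lower-left corners of all grid cells
        edges = np.linspace(0.0, 1.0, n_per_dim, endpoint=False)
        corners = np.stack(np.meshgrid(*([edges] * dim), indexing="ij"), axis=-1)
        corners = corners.reshape(-1, dim)
        width = 1.0 / n_per_dim
        points = corners + width * rng.random(corners.shape)
        values = np.apply_along_axis(f, 1, points)
        # equal cell volumes width**dim make the plain mean the integral estimate
        return values.mean()

    # Example: integral of exp(x + y) over the unit square (exact value (e - 1)^2)
    est = stratified_mc_quadrature(lambda p: np.exp(p.sum()))
    print(est, (np.e - 1) ** 2)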
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Heping. "Condition-based maintenance policies for multi-component systems considering stochastic dependences." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0030/document.

Full text
Abstract:
Nowadays, industrial systems are increasingly complex, both in terms of their logical structure and of the various dependences (economic, stochastic and structural) between their components, which can influence maintenance optimization. Condition-based maintenance, which manages maintenance activities according to monitoring information, has received a lot of attention in recent years, but stochastic dependences are rarely used in the decision-making process. Therefore, this thesis aims to propose condition-based maintenance policies that take economic and stochastic dependences into account for multi-component systems. In terms of economic dependence, the proposed policies are designed to favour opportunities to group maintenance actions. A decision rule is established that allows maintenance grouping with different inspection periods. The stochastic dependence caused by a common degradation part is modelled by Lévy copulas. Condition-based maintenance policies are proposed to take advantage of the stochastic dependence. Our work shows the necessity of taking economic and stochastic dependences into account in maintenance decision-making. Numerical results confirm the advantage of our policies compared with other policies existing in the literature
Nowadays, industrial systems contain numerous components, so they become more and more complex with regard to their logical structures as well as the various dependences (economic, stochastic and structural) between components. The dependences between components have an impact on maintenance optimization as well as on reliability analysis. Condition-based maintenance, which enables maintenance activities to be managed based on information collected through monitoring, has gained a lot of attention over recent years, but stochastic dependences are rarely used in the decision-making process. Therefore, this thesis is devoted to proposing condition-based maintenance policies which take advantage of both economic and stochastic dependences for multi-component systems. In terms of economic dependence, the proposed maintenance policies are designed to be maximally effective in providing opportunities for maintenance grouping. A decision rule is established to permit maintenance grouping with different inspection periods. Stochastic dependence due to a common degradation part is modelled by Lévy and nested Lévy copulas. Condition-based maintenance policies with a non-periodic inspection scheme are proposed to make use of stochastic dependence. Our studies show the necessity of taking account of both economic and stochastic dependences in maintenance decisions. Numerical experiments confirm the advantages of our maintenance policies when compared with other existing policies in the literature
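As a hedged, single-component illustration of a condition-based maintenance rule (a deliberately simplified sketch, not the multi-component grouping policies of the thesis), the code below simulates a gamma degradation process inspected periodically, with preventive replacement above a threshold and corrective replacement once the failure level is exceeded; all rates, thresholds and costs are invented for the example.

    import numpy as np

    def simulate_cbm(horizon=1000.0, dt_inspect=10.0, shape_rate=0.5, scale=1.0,
                     pm_threshold=15.0, failure_level=20.0,
                     cost_inspect=1.0, cost_pm=10.0, cost_cm=50.0, seed=0):
        """Average cost rate of a periodic-inspection CBM policy.

        Degradation accumulates as a gamma process (increments Gamma(shape_rate*dt, scale)).
        At each inspection: corrective replacement if the failure level is exceeded,
        preventive replacement if the level is above pm_threshold.
        """
        rng = np.random.default_rng(seed)
        t, level, cost = 0.0, 0.0, 0.0
        while t < horizon:
            t += dt_inspect
            level += rng.gamma(shape_rate * dt_inspect, scale)
            cost += cost_inspect
            if level >= failure_level:      # failure detected at inspection
                cost += cost_cm
                level = 0.0
            elif level >= pm_threshold:     # preventive replacement
                cost += cost_pm
                level = 0.0
        return cost / horizon

    print(simulate_cbm())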
APA, Harvard, Vancouver, ISO, and other styles
42

Calcraft, Peter James. "Two-pore channels and NAADP-dependent calcium signalling." Thesis, St Andrews, 2010. http://hdl.handle.net/10023/888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Shukron, Ofir. "Modélisation et analyse de modèles de polymères aléatoirement réticulé et application à l’organisation et à la dynamique de la chromatine." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE063/document.

Full text
Abstract:
In this thesis we study the relationship between chromatin conformation and dynamics using a class of randomly cross-linked (RCL) polymer models. RCL models account for the variability of chromatin conformation across a cell population. We use tools such as statistics, stochastic processes, numerical simulations and polymer physics to derive properties of RCL polymers at equilibrium and in transient regimes. We then use these properties to elucidate the dynamic organization of chromatin for various scales and biological conditions. In the first part of this work, we develop a general method to construct RCL polymers directly from experimental data, namely chromosome capture (CC) data. We show that persistent long-range connections between topologically associating domains (TADs) affect the transient encounter time between TADs in the process of X chromosome inactivation. We further show that the variability of the anomalous exponents - measured in single particle trajectories (SPT) - is a direct consequence of the heterogeneity of the cross-link positions. In the second part, we use RCL polymers to study the local reorganization of the genome at double-strand DNA breaks (DSBs). The number of connectors in the RCL polymer model is calibrated from SPTs measured before and after the DSB. We found that a moderate loss of connectors around DSB sites significantly affects the first encounter time of the two broken ends in the process of repairing a DSB. We show how a cross-linked genomic micro-environment can confine the ends of a break, thus preventing the two strands from drifting apart. In the third part we derive an analytical expression for the transient and equilibrium properties of the RCL polymer model representing a single TAD region. The resulting expressions are then used to extract the mean number of connections in TADs from the CC data, using a simple curve-fitting procedure. We then derive the formula for the mean first encounter time (MFET) between two monomers of an RCL polymer. The MFET is a key time for processes such as gene regulation and the repair of DNA damage. In the last part, we generalize the analytical RCL model to account for several TADs of different sizes as well as intra-TAD and inter-TAD connectivities. We study the dynamics of TAD reorganization during successive stages of cell differentiation from CC data. We find a non-negligible effect of inter-TAD connectivity on chromatin dynamics. We further find a synchronous compaction and decompaction of TADs across the different stages
In this dissertation we study the relationship between chromatin conformation and dynamics using a class of randomly cross-linked (RCL) polymer models. The RCL models account for the variability in chromatin conformation over a cell population. We use tools from statistics, stochastic processes, numerical simulations and polymer physics to derive the steady-state and transient properties of the RCL polymer, and use them to elucidate the dynamic reorganization of the chromatin for various scales and biological conditions. In the first part of this dissertation work, we develop a general method to construct the RCL polymer directly from chromosomal capture (CC) data. We show that persistent long-range connections between topologically associating domains (TADs) affect transient encounter times within TADs in the process of X chromosome inactivation. We further show that the variability in anomalous exponents, measured in single particle trajectories (SPT), is a direct consequence of the heterogeneity of cross-link positions. In the second part, we use the RCL polymer to study local genome reorganization around double-strand DNA breaks (DSBs). We calibrate the number of connectors in the RCL model using SPT data acquired before and after a DSB. We find that a conservative loss of connectors around DSB sites significantly affects first encounter times of the broken ends in the process of DSB repair. We show how a cross-linked genomic micro-environment can confine the two broken ends of a DSB and keep them from drifting apart. In the third part, we derive analytical expressions for the steady-state and transient properties of the RCL model representing a single TAD region. The derived expressions are then used to extract the mean number of cross-links in TADs from the CC data by a simple curve-fitting procedure. We further derive a formula for the mean first encounter time (MFET) between any two monomers of the RCL polymer. The MFET is a key time in processes such as gene regulation. In the last part, we generalize the analytical RCL model to account for multiple TADs with variable sizes and intra- and inter-TAD connectivity. We study the dynamic reorganization of TADs throughout successive stages of cell differentiation from the CC data. We find a non-negligible effect of inter-TAD connectivity on the dynamics of the chromatin. We further find a synchronous compaction and decompaction of TADs during differentiation
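As a hedged illustrative sketch of the randomly cross-linked polymer idea (an overdamped Rouse-type chain with extra springs between randomly chosen monomer pairs; all parameters are invented and this is not the calibrated RCL model of the thesis):

    import numpy as np

    def simulate_rcl_polymer(n_monomers=100, n_crosslinks=20, n_steps=5000,
                             dt=0.01, k_spring=1.0, noise=1.0, seed=0):
        """Overdamped dynamics of a 3D Rouse chain with random extra cross-links."""
        rng = np.random.default_rng(seed)
        pos = rng.standard_normal((n_monomers, 3))
        # backbone bonds plus randomly chosen cross-linked monomer pairs
        bonds = [(i, i + 1) for i in range(n_monomers - 1)]
        extra = rng.choice(n_monomers, size=(n_crosslinks, 2), replace=True)
        bonds += [tuple(sorted(p)) for p in extra if p[0] != p[1]]
        for _ in range(n_steps):
            force = np.zeros_like(pos)
            for i, j in bonds:                      # harmonic springs
                d = pos[j] - pos[i]
                force[i] += k_spring * d
                force[j] -= k_spring * d
            pos += force * dt + np.sqrt(2 * noise * dt) * rng.standard_normal(pos.shape)
        return pos

    pos = simulate_rcl_polymer()
    print(np.linalg.norm(pos[0] - pos[-1]))   # end-to-end distance of one realisation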
APA, Harvard, Vancouver, ISO, and other styles
44

Silvestre, Bezerra Manoel Ivanildo 1961. "Proposta de um método sub-ótimo para estimação espectral do modelo ARMA." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261228.

Full text
Abstract:
Advisor: Yuzo Iano
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: In this work a new separate (sub-optimal) estimation method is proposed for the spectral ARMA process (model). The sub-optimal methods use the Yule-Walker equations and the least squares method for the AR estimates, and generally Durbin's method for the MA estimates. Given that the AR and MA parameters have already been estimated, in the proposed method a new AR filtering of the signal of interest is carried out using the estimates of the MA part. From this new estimated signal, new estimates of the AR and MA parts of the ARMA process are determined, and the estimate of the power spectral density is then obtained. The results depend strongly on the spectrum of interest and on the parameterization used, but in general the results provided were very good. A study describing the main parametric spectral estimation methods for ARMA processes is also carried out in this work. These methods are compared by measuring their accuracy through the relative error and the average coefficient of variation of the parameter estimates
Abstract: This work proposes a new separate (sub-optimal) estimation method for the spectral ARMA process (model). The sub-optimal methods use the Yule-Walker equations and the least squares method for the AR estimates, and usually Durbin's method for the MA estimates. Once the AR and MA parameters have been estimated, the proposed method performs a new AR filtering of the signal of interest using the estimates of the MA part. From this new estimated signal, new AR and MA estimates of the parts of the ARMA process are obtained, and then the power spectral density is estimated. The results depend greatly on the spectrum of interest and on the parameterization used in the process, but generally the final results were very good. A study describing the main parametric spectral estimation methods for ARMA processes is also performed in this work. These methods are compared by measuring their accuracy through the relative error and the average coefficient of variation of the parameter estimates
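As a hedged illustration of one ingredient mentioned above - the Yule-Walker equations for the AR part - the snippet below estimates AR coefficients from sample autocovariances; it is a textbook sketch, not the sub-optimal ARMA procedure proposed in the thesis, and the function name yule_walker_ar is ours.

    import numpy as np

    def yule_walker_ar(x, order):
        """Estimate AR(order) coefficients a_1..a_p from data x via Yule-Walker.

        Model convention: x[t] = a_1 x[t-1] + ... + a_p x[t-p] + e[t].
        """
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # biased sample autocovariances r[0..order]
        r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1 : order + 1])
        sigma2 = r[0] - np.dot(a, r[1 : order + 1])  # innovation variance
        return a, sigma2

    # Example: recover the coefficients of a simulated AR(2) process
    rng = np.random.default_rng(0)
    true_a = np.array([0.6, -0.3])
    x = np.zeros(5000)
    for t in range(2, len(x)):
        x[t] = true_a @ x[t - 2 : t][::-1] + rng.standard_normal()
    print(yule_walker_ar(x, 2))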
Doctorate
Telecommunications and Telematics
Doctor in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
45

Angoshtari, Bahman. "Stochastic modeling and methods for portfolio management in cointegrated markets." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1ae9236c-4bf0-4d9b-a694-f08e1b8713c0.

Full text
Abstract:
In this thesis we study the utility maximization problem for assets whose prices are cointegrated, which arises from the investment practice of convergence trading and its special forms, pairs trading and spread trading. The major theme in the first two chapters of the thesis is to investigate the assumption of market-neutrality of the optimal convergence trading strategies, which is a ubiquitous assumption taken by practitioners and academics alike. This assumption lacks a theoretical justification and, to the best of our knowledge, the only relevant study is Liu and Timmermann (2013), which implies that the optimal convergence strategies are, in general, not market-neutral. We start by considering a minimalistic pairs-trading scenario with two cointegrated stocks and solve the Merton investment problem with power and logarithmic utilities. We pay special attention to when/if the stochastic control problem is well-posed, which is overlooked in the study done by Liu and Timmermann (2013). In particular, we show that the problem is ill-posed if and only if the agent's risk-aversion is less than a constant which is an explicit function of the market parameters. This condition, in turn, yields the necessary and sufficient condition for well-posedness of the Merton problem for all possible values of the agent's risk-aversion. The resulting well-posedness condition is surprisingly strict and, in particular, is equivalent to assuming the optimal investment strategy in the stocks to be market-neutral. Furthermore, it is shown that the well-posedness condition is equivalent to applying Novikov's condition to the market price of risk, which is a ubiquitous sufficient condition for imposing absence of arbitrage. To the best of our knowledge, these are the only theoretical results supporting the assumption of market-neutrality of convergence trading strategies. We then generalise the results to the more realistic setting of multiple cointegrated assets, assuming risk factors that affect the asset returns, and general utility functions for the investor's preferences. In the process of generalising the bivariate results, we also obtain some well-posedness conditions for matrix Riccati differential equations which are, to the best of our knowledge, new. In the last chapter, we set up and justify a Merton problem that is related to spread trading with two futures assets, assuming proportional transaction costs. The model possesses three characteristics whose combination makes it different from the existing literature on proportional transaction costs: 1) finite time horizon, 2) multiple risky assets, 3) stochastic opportunity set. We introduce the HJB equation and provide rigorous arguments showing that the corresponding value function is the viscosity solution of the HJB equation. We end the chapter by devising a numerical scheme, based on the penalty method of Forsyth and Vetzal (2002), to approximate the viscosity solution of the HJB equation.
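As a hedged aside on the setting described above, the sketch below simulates two cointegrated log-price series whose difference mean-reverts as an Ornstein-Uhlenbeck spread, the basic situation behind pairs trading; the parameterization is illustrative and not the model used in the thesis.

    import numpy as np

    def simulate_cointegrated_pair(n_steps=2000, dt=1/252, kappa=5.0, sigma_s=0.2,
                                   mu=0.05, sigma_m=0.15, seed=0):
        """Simulate log-prices (p1, p2) whose difference is an OU (mean-reverting) spread."""
        rng = np.random.default_rng(seed)
        spread = np.zeros(n_steps)        # OU spread: d s = -kappa s dt + sigma_s dW
        common = np.zeros(n_steps)        # common random-walk trend with drift mu
        for t in range(1, n_steps):
            spread[t] = spread[t-1] - kappa * spread[t-1] * dt \
                        + sigma_s * np.sqrt(dt) * rng.standard_normal()
            common[t] = common[t-1] + mu * dt \
                        + sigma_m * np.sqrt(dt) * rng.standard_normal()
        p1 = common + 0.5 * spread
        p2 = common - 0.5 * spread
        return p1, p2

    p1, p2 = simulate_cointegrated_pair()
    print(np.std(p1 - p2))   # the spread stays tight while each price wanders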
APA, Harvard, Vancouver, ISO, and other styles
46

Stilgenbauer, Patrik [Verfasser]. "The Stochastic Analysis of Fiber Lay-Down Models : An Interplay between Pure and Applied Mathematics involving Langevin Processes on Manifolds, Ergodicity for Degenerate Kolmogorov Equations and Hypocoercivity [Elektronische Ressource] / Patrik Stilgenbauer." München : Verlag Dr. Hut, 2014. http://d-nb.info/1050331729/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Shao, Haimei. "Price discovery in the U.S. bond market trading strategies and the cost of liquidity." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5032.

Full text
Abstract:
The world bond market is nearly twice as large as the equity market. The goal of this dissertation is to study the dynamics of bond prices. Among liquidity risk, interest rate risk and default risk, this dissertation focuses on liquidity risk and trading strategy. Within the mathematical framework of stochastic control, we model price setting in U.S. bond markets where dealers have multiple instruments to smooth inventory imbalances. The difficulty in obtaining the optimal trading strategy is that the optimal strategy and the value function depend on each other, and the corresponding HJB equation is nonlinear. To solve this problem, we derive an approximate optimal explicit trading strategy. The result shows that this trading strategy is better than the benchmark central symmetric trading strategy.
ID: 029809224; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 101-103).
Ph.D.
Doctorate
Mathematics
Sciences
APA, Harvard, Vancouver, ISO, and other styles
48

Gobira, Diogo Barboza. "Precificação de derivativos exóticos no mercado de petróleo." reponame:Repositório Institucional do BNDES, 2014. http://web.bndes.gov.br/bib/jspui/handle/1408/7023.

Full text
Abstract:
Bibliography: p. 109-111
Master's dissertation - Instituto Nacional de Matemática Pura e Aplicada, Rio de Janeiro, 2014.
We study the pricing of exotic options in the markets for oil and its derivative products. We begin with an exploratory analysis of the data, revisiting their statistical properties and stylized facts related to volatilities and correlations. Supported by the results of this analysis, we present some of the main forward models for commodities and a broad set of deterministic volatility structures, as well as the corresponding calibration methods, for which we run tests with real data. To improve the performance of these models in pricing the volatility smile, we reformulate the Heston stochastic volatility model to handle one or multiple forward curves, allowing its use in pricing contracts written on multiple commodities. We calibrate and test these models on real data from the oil, gasoline and gas markets, and confirm their superiority over deterministic volatility models. To support the pricing of exotic options and OTC contracts, we revisit, from theoretical and practical points of view, topics such as Monte Carlo simulation, numerical solutions of SDEs and American exercise. Finally, by means of a battery of numerical simulations, we show how the models can be used to price exotic options that typically occur in commodity markets, such as calendar spread options, crack spread options and Asian options.
We study the pricing of exotic options in the markets for oil and its derivative products. We begin with an exploratory analysis of the data, revisiting statistical properties and stylized facts related to the volatilities and correlations. Based on these results, we present some of the main commodity forward models and a wide range of deterministic volatility structures, as well as their calibration methods, for which we ran tests with real market data. To improve the performance of such models in pricing the volatility smile, we reformulate the Heston stochastic volatility model to cope with one or multiple forward curves together, allowing its use for the pricing of multi-commodity based contracts. We calibrate and test such models for the oil, gasoline and natural gas markets, confirming their superiority over deterministic volatility models. To support the pricing of exotic options and OTC contracts, we also revisit, from the theoretical and practical points of view, tools and issues such as Monte Carlo simulation, numerical solutions to SDEs and American exercise. Finally, through a battery of numerical simulations, we show how the presented models can be used to price typical exotic options occurring in commodity markets, such as calendar spread options, crack spread options and Asian options.
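As a hedged illustration of Monte Carlo pricing of an arithmetic-average Asian option (a plain Black-Scholes sketch with a discretely monitored average, not the Lévy-model estimator or the calibrated Heston setup discussed in the dissertation):

    import numpy as np

    def asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.3, maturity=1.0,
                      n_steps=50, n_paths=100_000, seed=0):
        """Monte Carlo price of an arithmetic-average Asian call under Black-Scholes.

        The underlying path is approximated on n_steps equally spaced monitoring dates.
        """
        rng = np.random.default_rng(seed)
        dt = maturity / n_steps
        z = rng.standard_normal((n_paths, n_steps))
        log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        log_paths = np.cumsum(log_increments, axis=1)
        paths = s0 * np.exp(log_paths)                 # prices at monitoring dates
        averages = paths.mean(axis=1)                  # arithmetic average per path
        payoffs = np.maximum(averages - strike, 0.0)
        price = np.exp(-r * maturity) * payoffs.mean()
        stderr = np.exp(-r * maturity) * payoffs.std(ddof=1) / np.sqrt(n_paths)
        return price, stderr

    print(asian_call_mc())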
APA, Harvard, Vancouver, ISO, and other styles
49

Zhu, Wenjin. "Maintenance of monitored systems with multiple deterioration mechanisms in dynamic environments : application to wind turbines." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0005/document.

Full text
Abstract:
The work presented contributes to the stochastic modelling of the maintenance of single- or multi-component systems with multiple deterioration and failure modes in a dynamic environment. In this framework, the contributions concern, on the one hand, the modelling of the failure processes and, on the other hand, the proposal of maintenance decision structures integrating the different types of on-line monitoring information available on the system (measured or reconstructed deterioration state, state of the environment, ...) and the development of the associated mathematical evaluation models. The deterioration and failure models proposed for single-component systems account for multiple deterioration sources (shocks and gradual deterioration) and integrate the effects of the environment on the degradation. For multi-component systems, the emphasis is on competing risks, independent or dependent, and on the integration of the environment. The maintenance models developed are adapted to the proposed deterioration models and make it possible to take into account the contribution of each deterioration source in the maintenance decision, to integrate indirect monitoring information in the decision, or to combine several types of maintenance actions. In each case, we show how the developed models address the maintenance problems of wind turbines and wind farms
The thesis contributes to stochastic maintenance modeling of single- or multi-component deteriorating systems with several failure modes evolving in a dynamic environment. On the one hand, failure process modeling is addressed and, on the other hand, the thesis proposes maintenance decision rules taking into account available on-line monitoring information (system state, deterioration level, environmental conditions ...) and develops mathematical models to measure the performance of these decision rules. In the framework of single-component systems, the proposed deterioration and failure models take into account several deterioration causes (shocks and wear) and also the impact of environmental conditions on the deterioration. For multi-component systems, competing risk models are considered and the dependencies and the impact of the environmental conditions are also studied. The proposed maintenance models are suitable for the deterioration models and make it possible to consider different deterioration causes and to analyze the impact of monitoring on the performance of the maintenance policies. For each case, the interest and applicability of the models are analyzed through the example of wind turbine and wind turbine farm maintenance
APA, Harvard, Vancouver, ISO, and other styles
50

Yamazato, Makoto. "Non-life Insurance Mathematics." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96535.

Full text
Abstract:
In this work we describe the basic facts of non-life insurance and then explain risk processes. In particular, we will explain in detail the asymptotic behavior of the probability that an insurance product may end up in ruin during its lifetime. As expected, the behavior of such asymptotic probability will be highly dependent on the tail distribution of each claim.
In this article we describe the basic concepts related to non-life insurance and then explain risk processes. In particular, we treat in detail the asymptotic behavior of the probability that a product is declared in ruin. As one might expect, this behavior over the horizon depends on the tail of the claim distribution.
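As a hedged illustration of the ruin probabilities discussed above, the following sketch estimates a finite-horizon ruin probability for a classical Cramér-Lundberg surplus process by simulation; all parameter values are made up for the example.

    import numpy as np

    def ruin_probability_mc(u=10.0, premium_rate=1.2, claim_rate=1.0,
                            mean_claim=1.0, horizon=100.0, n_sim=10_000, seed=0):
        """Estimate P(ruin before `horizon`) for a Cramér-Lundberg surplus process.

        Surplus: R(t) = u + premium_rate * t - (sum of exponential claim sizes),
        with claims arriving according to a Poisson process of intensity claim_rate.
        Ruin can only occur at claim instants, so only those times are checked.
        """
        rng = np.random.default_rng(seed)
        ruined = 0
        for _ in range(n_sim):
            t, claims_total = 0.0, 0.0
            while True:
                t += rng.exponential(1.0 / claim_rate)        # next claim arrival
                if t > horizon:
                    break
                claims_total += rng.exponential(mean_claim)   # claim size
                if u + premium_rate * t - claims_total < 0.0: # surplus after the claim
                    ruined += 1
                    break
        return ruined / n_sim

    print(ruin_probability_mc())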
APA, Harvard, Vancouver, ISO, and other styles