To see the other types of publications on this topic, follow the link: Discretization of stochastic integrals.

Dissertations / Theses on the topic 'Discretization of stochastic integrals'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 47 dissertations / theses for your research on the topic 'Discretization of stochastic integrals.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Pokalyuk, Stanislav [Verfasser], and Christian [Akademischer Betreuer] Bender. "Discretization of backward stochastic Volterra integral equations / Stanislav Pokalyuk. Betreuer: Christian Bender." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2012. http://d-nb.info/1052338488/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pei, Yuchen. "Robinson-Schensted algorithms and quantum stochastic double product integrals." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/74169/.

Full text
Abstract:
This thesis is divided into two parts. In the first part (Chapters 1, 2, 3) various Robinson-Schensted (RS) algorithms are discussed. An introduction to the classical RS algorithm is presented, including the symmetry property, and the result that, when taking a random word, the algorithm Doob h-transforms the kernel from the Pieri rule of the Schur functions h [O'C03a]. This is followed by the extension to a q-weighted version that has a branching structure, which can alternatively be viewed as a randomisation of the classical algorithm. The q-weighted RS algorithm is related to the q-Whittaker functions in the same way as the classical algorithm is to the Schur functions. That is, when taking a random word, the algorithm Doob h-transforms the Hamiltonian of the quantum Toda lattice, where h are the q-Whittaker functions. Moreover, it can also be applied to model the q-totally asymmetric simple exclusion process introduced in [SW98]. Furthermore, the q-RS algorithm also enjoys a symmetry property analogous to that of the classical algorithm. This is proved by extending Fomin's growth diagram technique [Fom79, Fom88, Fom94, Fom95], which covers a family of so-called branching insertion algorithms, including the row insertion proposed in [BP13]. In the second part (Chapters 4, 5) we work with quantum stochastic analysis. First we introduce the basic elements of quantum stochastic analysis, including the quantum probability space, the momentum and position Brownian motions [CH77], and the relation between rotations and angular momenta via second quantisation, which is generalised to a family of rotation-like operators [HP15a]. Then we discuss a family of unitary quantum causal stochastic double product integrals E, which are expected to be the second quantisation of the continuous limit W of a discrete double product of the aforementioned rotation-like operators. In one special case the operator E is related to the quantum Lévy stochastic area, while in another case it is related to the quantum 2-d Bessel process. The explicit formula for the kernel of W is obtained by enumerating linear extensions of partial orderings related to a path model, and the combinatorial aspect is closely related to generalisations of the Catalan numbers and the Dyck paths. Furthermore, W is shown to be unitary using integrals of the Bessel functions.
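For orientation, the deterministic bumping procedure at the heart of the classical RS algorithm is only a few lines; the q-weighted and branching variants studied in the thesis randomise where each letter settles. A minimal sketch (P-tableau only, for words over positive integers):

```python
import bisect

def rs_insert(tableau, x):
    """One step of Schensted row insertion of the letter x into a tableau."""
    for row in tableau:
        i = bisect.bisect_right(row, x)   # leftmost entry strictly greater than x
        if i == len(row):
            row.append(x)                 # x is weakly largest: settle at the row end
            return tableau
        row[i], x = x, row[i]             # bump the displaced entry to the next row
    tableau.append([x])                   # start a new row at the bottom
    return tableau

def p_tableau(word):
    """P-tableau of a word under the classical RS correspondence."""
    t = []
    for letter in word:
        rs_insert(t, letter)
    return t

print(p_tableau([3, 1, 2, 4, 1]))  # [[1, 1, 4], [2], [3]]
```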
APA, Harvard, Vancouver, ISO, and other styles
3

Brooks, Martin George. "Quantum spectral stochastic integrals and levy flows in Fock space." Thesis, Nottingham Trent University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Song, Yukun. "Stochastic Integrals with Respect to Tempered $\alpha$-Stable Levy Process." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1501506513936836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gross, Joshua. "An exploration of stochastic models." Kansas State University, 2014. http://hdl.handle.net/2097/17656.

Full text
Abstract:
Master of Science
Department of Mathematics
Nathan Albin
The term stochastic is defined as having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely. A stochastic model attempts to estimate outcomes while allowing a random variation in one or more inputs over time. These models are used across a number of fields, from gene expression in biology to stock, asset, and insurance analysis in finance. In this thesis, we will build up the basic probability theory required to make an "optimal estimate", as well as construct the stochastic integral. This information will then allow us to introduce stochastic differential equations, along with our overall model. We will conclude with the "optimal estimator", the Kalman filter, along with an example of its application.
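Since the thesis culminates in the Kalman filter, a minimal scalar sketch may help fix ideas: a random walk observed in Gaussian noise, with illustrative variances not taken from the thesis.

```python
import numpy as np

def kalman_1d(ys, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k, y_k = x_k + v_k,
    with Var(w_k) = q and Var(v_k) = r."""
    m, p, out = m0, p0, []
    for y in ys:
        p = p + q                 # predict: random-walk state, mean unchanged
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update: blend prediction and observation
        p = (1 - k) * p
        out.append(m)
    return np.array(out)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.1 ** 0.5, 200))   # latent random walk
y = x + rng.normal(0, 0.5 ** 0.5, 200)          # noisy observations
est = kalman_1d(y)
print("RMSE filtered:", np.sqrt(np.mean((est - x) ** 2)))
print("RMSE raw:     ", np.sqrt(np.mean((y - x) ** 2)))
```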
APA, Harvard, Vancouver, ISO, and other styles
6

Jones, Matthew O. "Spatial Service Systems Modelled as Stochastic Integrals of Marked Point Processes." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7174.

Full text
Abstract:
We characterize the equilibrium behavior of a class of stochastic particle systems, where particles (representing customers, jobs, animals, molecules, etc.) enter a space randomly through time, interact, and eventually leave. The results are useful for analyzing the dynamics of randomly evolving systems including spatial service systems, species populations, and chemical reactions. Such models with interactions arise in the study of species competitions and systems where customers compete for service (such as wireless networks). The models we develop are space-time measure-valued Markov processes. Specifically, particles enter a space according to a space-time Poisson process and are assigned independent and identically distributed attributes. The attributes may determine their movement in the space, and whenever a new particle arrives, it randomly deletes particles from the system according to their attributes. Our main result establishes that spatial Poisson processes are natural temporal limits for a large class of particle systems. Other results include the probability distributions of the sojourn times of particles in the systems, and probabilities of numbers of customers in spatial polling systems without Poisson limits.
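A hedged sketch of the simplest non-interacting special case (an M/M/infinity-type system in space): particles arrive on the unit square by a space-time Poisson process, carry i.i.d. exponential sojourn attributes, and leave, so the equilibrium configuration is spatial Poisson, in line with the Poisson-limit theme above. Rates here are illustrative.

```python
import numpy as np

# Space-time Poisson arrivals on [0,1]^2 at rate lam per unit time,
# i.i.d. Exp(mu) sojourns, no interaction.  In equilibrium the number of
# particles present is Poisson with mean lam / mu.
rng = np.random.default_rng(0)
lam, mu, t_end = 50.0, 2.0, 100.0

n_arrivals = rng.poisson(lam * t_end)
arrival_times = rng.uniform(0.0, t_end, n_arrivals)
sojourns = rng.exponential(1.0 / mu, n_arrivals)
present = (arrival_times + sojourns) > t_end          # still in system at t_end
positions = rng.uniform(0.0, 1.0, (n_arrivals, 2))[present]

print("particles present:", len(positions), "| theoretical mean:", lam / mu)
```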
APA, Harvard, Vancouver, ISO, and other styles
7

Kuwada, Kazumasa. "On large deviations for current-valued processes induced from stochastic line integrals." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Leoff, Elisabeth [Verfasser]. "Stochastic Filtering in Regime-Switching Models: Econometric Properties, Discretization and Convergence / Elisabeth Leoff." München : Verlag Dr. Hut, 2017. http://d-nb.info/1126297348/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Geiss, Stefan. "On quantitative approximation of stochastic integrals with respect to the geometric Brownian motion." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/1774/1/document.pdf.

Full text
Abstract:
We approximate stochastic integrals with respect to the geometric Brownian motion by stochastic integrals over discretized integrands, where deterministic, but not necessarily equidistant, time nets are used. This corresponds to the approximation of a continuously adjusted portfolio by a discretely adjusted one. We compute the approximation orders of European options in the Black-Scholes model with respect to L_2, and the approximation order of the standard European call and put options with respect to an appropriate BMO space, which gives information about the cost process of the discretely adjusted portfolio. (author's abstract)
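The effect described above is easy to reproduce numerically. The sketch below (illustrative parameters, r = 0) estimates the L_2 hedging error of a discretely adjusted delta hedge for a European call in the Black-Scholes model, once on an equidistant net and once on a deterministic net refined near maturity.

```python
import numpy as np
from scipy.stats import norm

S0, K, T, sigma = 1.0, 1.0, 1.0, 0.2   # illustrative parameters, r = 0

def call_delta(s, t):
    tau = np.maximum(T - t, 1e-12)
    d1 = (np.log(s / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def l2_hedge_error(net, n_paths=20000, seed=1):
    """L2 error of the discretely adjusted delta hedge over a given time net."""
    rng = np.random.default_rng(seed)
    s = np.full(n_paths, S0)
    gains = np.zeros(n_paths)
    for t0, t1 in zip(net[:-1], net[1:]):
        dw = rng.normal(0.0, np.sqrt(t1 - t0), n_paths)
        s_new = s * np.exp(-0.5 * sigma**2 * (t1 - t0) + sigma * dw)
        gains += call_delta(s, t0) * (s_new - s)   # position held fixed on [t0, t1)
        s = s_new
    d1 = (np.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    price = S0 * norm.cdf(d1) - K * norm.cdf(d1 - sigma * np.sqrt(T))
    err = np.maximum(s - K, 0.0) - price - gains
    return np.sqrt(np.mean(err**2))

n = 50
equidistant = np.linspace(0.0, T, n + 1)
refined = T * (1.0 - (1.0 - np.linspace(0.0, 1.0, n + 1)) ** 2)  # denser near T
print("equidistant net:", l2_hedge_error(equidistant))
print("refined net:    ", l2_hedge_error(refined))
```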
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
10

Yeadon, Cyrus. "Approximating solutions of backward doubly stochastic differential equations with measurable coefficients using a time discretization scheme." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/20643.

Full text
Abstract:
It has been shown that backward doubly stochastic differential equations (BDSDEs) provide a probabilistic representation for a certain class of nonlinear parabolic stochastic partial differential equations (SPDEs). It has also been shown that the solution of a BDSDE with Lipschitz coefficients can be approximated by first discretizing time and then calculating a sequence of conditional expectations. Given fixed points in time and space, this approximation has been shown to converge in mean square. In this thesis, we investigate the approximation of solutions of BDSDEs with coefficients that are measurable in time and space using a time discretization scheme with a view towards applications to SPDEs. To achieve this, we require the underlying forward diffusion to have smooth coefficients and we consider convergence in a norm which includes a weighted spatial integral. This combination of smoother forward coefficients and weaker norm allows the use of an equivalence of norms result which is key to our approach. We additionally take a brief look at the approximation of solutions of a class of infinite horizon BDSDEs with a view towards approximating stationary solutions of SPDEs. Whilst we remain agnostic with regards to the implementation of our discretization scheme, our scheme should be amenable to a Monte Carlo simulation based approach. If this is the case, we propose that in addition to being attractive from a performance perspective in higher dimensions, such an approach has a potential advantage when considering measurable coefficients. Specifically, since we only discretize time and effectively rely on simulations of the underlying forward diffusion to explore space, we are potentially less vulnerable to systematically overestimating or underestimating the effects of coefficients with spatial discontinuities than alternative approaches such as finite difference or finite element schemes that do discretize space. Another advantage of the BDSDE approach is that it is possible to derive an upper bound on the error of our method for a fairly broad class of conditions in a single analysis. Furthermore, our conditions seem more general in some respects than is typically considered in the SPDE literature.
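As a rough, hypothetical illustration of the "discretize time, then compute a sequence of conditional expectations" idea via Monte Carlo, here is a least-squares regression sketch for the plain-BSDE backbone only; the backward (doubly stochastic) integral of a BDSDE, and the measurable-coefficient analysis that is the thesis's actual contribution, are omitted. Driver and terminal condition are illustrative.

```python
import numpy as np

# Backward time-stepping for Y_t = g(X_T) + int_t^T f(Y_s) ds - int_t^T Z_s dW_s
# with X a Brownian motion; conditional expectations are approximated by
# degree-3 polynomial least-squares regression on X_k (Longstaff-Schwartz style).
rng = np.random.default_rng(0)
n_steps, n_paths, T = 20, 100000, 1.0
dt = T / n_steps

g = lambda x: np.maximum(x, 0.0)   # terminal condition (assumed for illustration)
f = lambda y: -0.5 * y             # driver (assumed for illustration)

dw = rng.normal(0.0, np.sqrt(dt), (n_steps, n_paths))
x = np.vstack([np.zeros(n_paths), np.cumsum(dw, axis=0)])  # forward paths

y = g(x[-1])
for k in range(n_steps - 1, -1, -1):
    basis = np.vander(x[k], 4)                       # regression basis in X_k
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    cond_exp = basis @ coef                          # approx of E[Y_{k+1} | X_k]
    y = cond_exp + f(cond_exp) * dt                  # explicit backward Euler step
print("Y_0 approx:", y.mean())
```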
APA, Harvard, Vancouver, ISO, and other styles
11

Blöthner, Florian [Verfasser]. "Non-Uniform Semi-Discretization of Linear Stochastic Partial Differential Equations in R / Florian Blöthner." München : Verlag Dr. Hut, 2019. http://d-nb.info/1181514207/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Xiling. "On numerical approximations for stochastic differential equations." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28931.

Full text
Abstract:
This thesis consists of several problems concerning numerical approximations for stochastic differential equations, and is divided into three parts. The first one is on the integrability and asymptotic stability with respect to a certain class of Lyapunov functions, and the preservation of the comparison theorem, for explicit numerical schemes. In general, those properties of the original equation can be lost after discretisation, but it will be shown that by some suitable modification of the Euler scheme they can be preserved to some extent while the strong convergence rate is maintained. The second part focuses on the approximation of iterated stochastic integrals, which is the essential ingredient for the construction of higher-order approximations. The coupling method is adopted for that purpose, which aims at finding a random variable whose law is easy to generate and is close to the target distribution. The last topic is motivated by the simulation of equations driven by Lévy processes, for which the main difficulty is to generalise some coupling results for the one-dimensional central limit theorem to the multi-dimensional case.
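A standard example of such a modification of the Euler scheme is taming, which keeps the explicit scheme from exploding under a superlinearly growing drift; it is sketched below for dX = (X - X^3) dt + dW, without claiming this is the exact modification constructed in the thesis.

```python
import numpy as np

def tamed_euler(x0, T, n, n_paths, seed=0):
    """Tamed Euler for dX = (X - X^3) dt + dW: the drift increment is divided
    by (1 + dt*|b(x)|), which prevents explosion of the explicit scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n):
        b = x - x**3
        x = x + b * dt / (1.0 + dt * np.abs(b)) + rng.normal(0, np.sqrt(dt), n_paths)
    return x

x = tamed_euler(x0=2.0, T=5.0, n=500, n_paths=10000)
print("second moment at T:", np.mean(x**2))  # stays bounded, unlike plain Euler
```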
APA, Harvard, Vancouver, ISO, and other styles
13

Yam, Sheung Chi Phillip. "Analytical and topological aspects of signatures." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:87892930-f329-4431-bcdc-bf32b0b1a7c6.

Full text
Abstract:
In both the physical and social sciences, we often use controlled differential equations to model continuously evolving systems, describing how a response y relates to another process x called the control. For regular controls x the unique existence of the response y is guaranteed, while via the classical approach this is never the case for non-smooth controls. Moreover, uniform closeness of controls need not imply closeness of their corresponding responses. The theory of rough paths provides a solution to both concerns. Since its creation, rough path theory has enjoyed a fruitful development and found wide applications in stochastic analysis. In particular, it provides an effective method to study the irregularity of curves and its geometric consequences in relation to the integration of differential forms. In Chapter 2, we demonstrate the power of rough path theory in classical complex analysis by showing the rough path nature of the boundaries of a class of Hölder domains; as an immediate application, we extend the classical Gauss-Green theorem. Until recently, there has been only limited research on applications of the theory of rough paths to high-dimensional geometry. It is clear to us that many geometric objects, in some sense appearing as solids, are actually comprised of filaments. Chapter 3 includes two basic results in the theory of rough paths which motivate the later development of the thesis. In Chapters 4 and 5, we identify a sensible way to do geometric calculus via those filaments (more precisely, space-filling rough paths) in dimension 3. In recent joint work, Hambly and Lyons have shown that every rectifiable path can be completely characterized, up to tree-like deformation, by an algebraic object called the signature, the tensor of all iterated integrals of the path. Tree-like deformations of the path do not change its topological features. For instance, the number of times a planar loop of finite length winds around a point (not lying on the path) is unaltered if one deforms the path in tree-like ways. It should therefore be plausible to extract this topological information from the signature of the loop, since the signature is a complete algebraic invariant. In Chapter 6, we express the winding number of a nice loop (respectively, the linking number of a pair of nice loops) as a linear functional of the signature of the loop (respectively, the signatures of the pair of loops).
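The Chapter 6 result expresses the winding number as a linear functional of the signature; an elementary numerical counterpart recovers it from a sampled loop by accumulating increments of the angle one-form dtheta = (x dy - y dx)/(x^2 + y^2), the integrand which that functional repackages. A sketch:

```python
import numpy as np

def winding_number(xs, ys):
    """Winding number of a closed polygonal loop (xs, ys) around the origin,
    computed by summing increments of the angle one-form."""
    theta = np.arctan2(ys, xs)
    dtheta = np.diff(theta)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi   # unwrap branch-cut jumps
    return dtheta.sum() / (2 * np.pi)

t = np.linspace(0.0, 2 * np.pi, 1001)
print(winding_number(np.cos(3 * t), np.sin(3 * t)))   # ~ 3.0
print(winding_number(2 + np.cos(t), np.sin(t)))       # ~ 0.0 (origin outside)
```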
APA, Harvard, Vancouver, ISO, and other styles
14

Best, Jörg Thomas [Verfasser], Angelika [Akademischer Betreuer] May, and Marcus C. [Akademischer Betreuer] Christiansen. "Examination of the closedness of spaces of stochastic integrals / Jörg Thomas Best ; Angelika May, Marcus Christiansen." Oldenburg : BIS der Universität Oldenburg, 2020. http://nbn-resolving.de/urn:nbn:de:gbv:715-oops-47370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Best, Jörg Thomas [Verfasser], Angelika [Akademischer Betreuer] May, and Marcus [Akademischer Betreuer] Christiansen. "Examination of the closedness of spaces of stochastic integrals / Jörg Thomas Best ; Angelika May, Marcus Christiansen." Oldenburg : BIS der Universität Oldenburg, 2020. http://d-nb.info/1215293542/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Matthews, Charles. "Error in the invariant measure of numerical discretization schemes for canonical sampling of molecular dynamics." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8949.

Full text
Abstract:
Molecular dynamics (MD) computations aim to simulate materials at the atomic level by approximating molecular interactions classically, relying on the Born-Oppenheimer approximation and semi-empirical potential energy functions as an alternative to solving the difficult time-dependent Schrödinger equation. An approximate solution is obtained by discretization in time, with an appropriate algorithm used to advance the state of the system between successive timesteps. Modern MD simulations simulate complex systems with as many as a trillion individual atoms in three spatial dimensions. Many applications use MD to compute ensemble averages of molecular systems at constant temperature. Langevin dynamics approximates the effects of weakly coupling an external energy reservoir to a system of interest by adding the stochastic Ornstein-Uhlenbeck process to the system momenta, where the resulting trajectories are ergodic with respect to the canonical (Boltzmann-Gibbs) distribution. By solving the resulting stochastic differential equations (SDEs), we can compute trajectories that sample the accessible states of a system at a constant temperature by evolving the dynamics in time. The complexity of the classical potential energy function requires the use of efficient discretization schemes to evolve the dynamics. In this thesis we provide a systematic evaluation of splitting-based methods for the integration of Langevin dynamics. We focus on the weak properties of methods for configurational sampling in MD, given as the accuracy of averages computed via numerical discretization. Our emphasis is on the application of discretization algorithms to high performance computing (HPC) simulations of a wide variety of phenomena, where configurational sampling is the goal. Our first contribution is to give a framework for the analysis of stochastic splitting methods in the spirit of backward error analysis, which provides, in certain cases, explicit formulae required to correct the errors in observed averages. A second contribution of this thesis is the investigation of the performance of schemes in the overdamped limit of Langevin dynamics (Brownian or Smoluchowski dynamics), showing the inconsistency of some numerical schemes in this limit. A new method is given that is second-order accurate (in law) but requires only one force evaluation per timestep. Finally we compare the performance of our derived schemes against those in common use in MD codes, by comparing the observed errors introduced by each algorithm when sampling a solvated alanine dipeptide molecule, based on our implementation of the schemes in state-of-the-art molecular simulation software. One scheme is found to give exceptional results for the computed averages of functions purely of position.
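A representative splitting scheme of the kind evaluated in the thesis is the BAOAB ordering of half kicks (B), half drifts (A) and an exact Ornstein-Uhlenbeck solve (O); a minimal one-dimensional sketch with an illustrative double-well potential follows (unit mass; parameters are not those of the thesis).

```python
import numpy as np

def baoab(grad_U, x, p, dt, gamma, beta, n_steps, seed=0):
    """BAOAB splitting for Langevin dynamics (unit mass):
    dX = P dt, dP = -grad_U(X) dt - gamma*P dt + sqrt(2*gamma/beta) dW."""
    rng = np.random.default_rng(seed)
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1.0 - c1**2) / beta)   # exact OU: stationary Var(P) = 1/beta
    xs = np.empty(n_steps)
    for k in range(n_steps):
        p -= 0.5 * dt * grad_U(x)        # B: half kick
        x += 0.5 * dt * p                # A: half drift
        p = c1 * p + c2 * rng.normal()   # O: exact Ornstein-Uhlenbeck solve
        x += 0.5 * dt * p                # A: half drift
        p -= 0.5 * dt * grad_U(x)        # B: half kick
        xs[k] = x
    return xs

xs = baoab(grad_U=lambda x: x**3 - x, x=0.0, p=0.0,
           dt=0.1, gamma=1.0, beta=1.0, n_steps=200000)
print("<x^2> estimate:", np.mean(xs[1000:]**2))   # configurational average
```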
APA, Harvard, Vancouver, ISO, and other styles
17

Hoyt, Pamela J. "Discretization and learning of Bayesian Networks using stochastic search, with application to Base Realignment and Closure (BRAC)." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3141.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2008.
Vita: p. 183. Thesis director: Kathryn B. Laskey. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 7, 2008). Includes bibliographical references (p. 168-182). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
18

Pedjeu, Jean-Claude. "Multi-time Scales Stochastic Dynamic Processes: Modeling, Methods, Algorithms, Analysis, and Applications." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4383.

Full text
Abstract:
By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model is formulated, leading to a system of multi-time scale stochastic differential equations. The classical Picard-Lindelöf successive approximations scheme is extended to the model validation problem, namely the existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed-form solutions of both linear and nonlinear multi-time scale stochastic differential equations. To illustrate the scope of the ideas and the presented results, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are exhibited. Without loss of generality, the modeling and analysis of three-time-scale fractional stochastic differential equations is followed by the development of a numerical algorithm for multi-time scale dynamic equations. The development of the numerical algorithm is based on the idea of numerical integration in the context of the notion of multi-time scale integration. The multi-time scale approach is then applied to the study of higher order stochastic differential equations (HOSDE). This study utilizes the variation of constants technique to develop a method for finding closed-form solution processes of classes of HOSDE. Then the probability distribution of the solution processes in the context of the second order equations is investigated.
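The multi-time-scale calculus developed in the thesis is more general than the following toy, but as a plainly swapped-in illustration of numerical integration when dynamics evolve on separated scales, here is Euler-Maruyama for a hypothetical slow-fast system; note that the explicit step must resolve the fastest scale (dt much smaller than eps).

```python
import numpy as np

# Hypothetical slow-fast system, eps separating the time scales:
#   dX = -Y dt + dW1                      (slow component)
#   dY = -(Y/eps) dt + dW2 / sqrt(eps)    (fast Ornstein-Uhlenbeck)
rng = np.random.default_rng(0)
eps, T, n = 1e-2, 10.0, 100000
dt = T / n                                # dt = 1e-4 << eps, needed for stability
x, y = 0.0, 0.0
for _ in range(n):
    dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
    x, y = x - y * dt + dw1, y - (y / eps) * dt + dw2 / np.sqrt(eps)
print("slow component:", x, "| fast component:", y)
```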
APA, Harvard, Vancouver, ISO, and other styles
19

Tamayo, Palau José María. "Multilevel adaptive cross approximation and direct evaluation method for fast and accurate discretization of electromagnetic integral equations." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/6952.

Full text
Abstract:
The Method of Moments (MoM) has been widely used during the last decades for the discretization and the solution of integral equation formulations appearing in several electromagnetic antenna and scattering problems. The most utilized of these formulations are the Electric Field Integral Equation (EFIE), the Magnetic Field Integral Equation (MFIE) and the Combined Field Integral Equation (CFIE), which is a linear combination of the other two.
The MFIE and CFIE formulations are only valid for closed objects and need to deal with the integration of kernels with singularities of higher order than those of the EFIE. The lack of efficient and accurate techniques for the computation of these singular integrals has led to inaccuracies in the results. Consequently, their use has been mainly restricted to academic purposes, even though they have a much better convergence rate when solved iteratively, due to their excellent condition number.
In general, the main drawback of the MoM is the costly construction, storage and solution of the unavoidably dense linear system, which grows with the electrical size of the object to analyze. Consequently, a wide range of fast methods have been developed for its compression and solution. Most of them, though, are absolutely dependent on the kernel of the integral equation, calling for a complete re-formulation, when possible, for each new kernel.
This thesis dissertation presents new approaches to accelerate or increase the accuracy of integral equations discretized by the Method of Moments (MoM) in computational electromagnetics.
Firstly, a novel fast iterative solver, the Multilevel Adaptive Cross Approximation (MLACA), has been developed for accelerating the solution of the MoM linear system. In the quest for a general-purpose scheme, the MLACA is a method independent of the kernel of the integral equation and is purely algebraic. It improves both efficiency and compression rate with respect to the previously existing single-level version, the ACA. Therefore, it represents an excellent alternative for the solution of the MoM system of large-scale electromagnetic problems.
Secondly, the direct evaluation method, which has proved to be the main reference in terms of efficiency and accuracy, is extended to overcome the computation of the challenging 4-D hyper-singular integrals arising in the Magnetic Field Integral Equation (MFIE) and Combined Field Integral Equation (CFIE) formulations. The maximum affordable accuracy (machine precision) is obtained in a more than reasonable computation time, surpassing any other existing technique in the literature.
Thirdly, the aforementioned hyper-singular integrals become near-singular when the discretized elements are very closely placed but not touching. It is shown how traditional integration rules also fail to converge in this case, and a possible solution, based on more sophisticated integration rules such as the Double Exponential and the Gauss-Laguerre, is proposed.
Finally, an effort to facilitate the usability of any antenna simulation software based on the MoM has led to the development of a general mathematical model of an excitation port in the discretized space. With this new model, it is no longer necessary to adapt the mesh edges to the port.
APA, Harvard, Vancouver, ISO, and other styles
20

Schachermayer, Walter, and Werner Schachinger. "Is there a predictable criterion for mutual singularity of two probability measures on a filtered space?" SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/1600/1/document.pdf.

Full text
Abstract:
The theme of providing predictable criteria for absolute continuity and for mutual singularity of two density processes on a filtered probability space is extensively studied, e.g., in the monograph by J. Jacod and A. N. Shiryaev [JS]. While the issue of absolute continuity is settled there in full generality, for the issue of mutual singularity one technical difficulty remained open ([JS], p. 210): "We do not know whether it is possible to derive a predictable criterion (necessary and sufficient condition) for "P'T..." (expression not representable in this abstract). It turns out that to this question raised in [JS], which we also chose as the title of this note, there are two answers: on the negative side we give an easy example showing that in general the answer is no, even when we use a rather wide interpretation of the concept of "predictable criterion". The difficulty comes from the fact that the density process of a probability measure P with respect to another measure P' may suddenly jump to zero. On the positive side we can characterize the set where P' becomes singular with respect to P - provided this does not happen suddenly but rather in a continuous way - as the set where the Hellinger process diverges, which certainly is a "predictable criterion". This theorem extends results in the book of J. Jacod and A. N. Shiryaev [JS]. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
21

Jones, Paul. "Unitary double products as implementors of Bogolubov transformations." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/14306.

Full text
Abstract:
This thesis is about double product integrals with pseudo rotational generator, and aims to exhibit them as unitary implementors of Bogolubov transformations. We further introduce these concepts in this abstract and describe their roles in the thesis's chapters. The notion of product integral (simple product integral, not double) is not a new one, but is unfamiliar to many a mathematician. Product integrals were first investigated by Volterra in the nineteenth century. Though often regarded as merely a notation for solutions of differential equations, they provide a priori a multiplicative analogue of the additive integration theories of Riemann, Stieltjes and Lebesgue. See Slavik [2007] for a historical overview of the subject. Extensions of the theory of product integrals to multiplicative versions of Ito and especially quantum Ito calculus were first studied by Hudson, Ion and Parthasarathy in the 1980s (Hudson et al. [1982]). The first development of double product integrals was a theory of an algebraic kind by Hudson and Pulmannova, motivated by the study of the solution of the quantum Yang-Baxter equation through the construction of quantum groups; see Hudson and Pulmannova [2005]. This was a purely algebraic theory based on formal power series in a formal parameter. However, there also exists a developing analytic theory of double product integrals, and this thesis contributes to that analytic theory. The first papers in that direction are Hudson [2005b] and Hudson and Jones [2012]. Other motivations include a quantum extension of Girsanov's theorem and hence a quantum version of the Black-Scholes model in finance. Double product integrals may also provide a general model for causal interactions in noisy environments in quantum physics. From a different direction, "causal" double products (see Hudson [2005b]) have become of interest in connection with quantum versions of the Lévy area, and in particular the quantum Lévy area formula (Hudson [2011] and Chen and Hudson [2013]) for its characteristic function. There is a close association of causal double products with the double products of rectangular type (Hudson and Jones [2012], p. 3). For this reason it is of interest to study "forward-forward" rectangular double products. In the first chapter we give the notation used in the following chapters, introduce some simple double products, and show heuristically that they are the solution of two different quantum stochastic differential equations. For each example the order in which the products are taken is shown to be unimportant; either calculation gives the same answer. This is in fact a consequence of the so-called multiplicative Fubini theorem (Hudson and Pulmannova [2005]). In Chapter two we formally introduce the notion of product integral as a solution of two particular quantum stochastic differential equations. In Chapter three we introduce the Fock representation of the canonical commutation relations and discuss the Stone-von Neumann uniqueness theorem. We define the notion of Bogolubov transformation (often called a symplectic automorphism, see Parthasarathy [1992] for example), the implementation of these transformations by an implementor (a unitary operator), and introduce Shale's theorem, which will be relevant to the following chapters. For an alternative coverage of Shale's theorem, symplectic automorphisms and their implementors see Derezinski [2003]. In Chapter four we study double product integrals of the pseudo rotational type.
This is in contrast to double product integrals of the rotational type that have been studied in Hudson and Jones [2012] and Hudson [2005b]. The notation of the product integral is suggestive of a natural discretisation scheme in which the infinitesimals are replaced by discrete increments, i.e. discretised creation and annihilation operators of quantum mechanics. Because of a weak commutativity condition between the discretised creation and annihilation operators corresponding to different subintervals of R, the order of the factors of the product is unimportant (Hudson [2005a]), and hence the discrete product is well defined; we call this result the discrete multiplicative Fubini theorem. It is also the case that the order in which the products are taken in the continuous (non-discretised) case does not matter (Hudson [2005a], Hudson and Jones [2012]). The resulting discrete double product is shown to be the implementor (a unitary operator) of a Bogolubov transformation acting on discretised creation and annihilation operators (Bogolubov transformations are invertible real linear operators on a Hilbert space that preserve the imaginary part of the inner product, but here we may regard them equivalently as linear transformations acting directly on creation and annihilation operators while preserving adjointness and commutation relations). Unitary operators on the same Hilbert space form a subgroup of the group of Bogolubov transformations. Essentially, Bogolubov transformations are used to construct new canonical pairs from old ones (in the literature Bogolubov transformations are often called symplectic automorphisms). The aforementioned Bogolubov transformation (acting on the discretised creation and annihilation operators) can be embedded into the space L2(R+) ⊕ L2(R+), and limits can be taken, resulting in a limiting Bogolubov transformation on L2(R+) ⊕ L2(R+). It has also been shown that the resulting family of Bogolubov transformations has three important properties, namely bi-evolution, shift covariance and time-reversal covariance; see Hudson [2007] for a detailed description of these properties. Subsequently we show rigorously that this transformation really is a Bogolubov transformation. We remark that these transformations are Hilbert-Schmidt perturbations of the identity map and satisfy the criterion specified by Shale's theorem. By Shale's theorem we then know that each Bogolubov transformation is implemented in the Fock representation of the CCR. We also compute the constituent kernels of the integral operators making up the Hilbert-Schmidt operators involved in the Bogolubov transformations, and show that the order in which the approximating discrete products are taken has no bearing on the final Bogolubov transformation obtained by the limiting procedure, as would be expected from the multiplicative Fubini theorem. In Chapter five we generalise the canonical form of the double product studied in Chapter four by the use of gauge transformations. We show that all the theory of Chapter four carries over to these generalised double product integrals, because there is a unitary equivalence between the Bogolubov transformation obtained from the generalised canonical form of the double product and the corresponding original one. In Chapter six we make progress towards showing that a system of implementors of this family of Bogolubov transformations can be found which inherits properties of the original family, such as being a bi-evolution and being covariant under shifts.
We make use of Shale's theorem (Parthasarathy [1992] and Derezinski [2003]). More specifically, Shale's theorem ensures that each Bogolubov transformation of our system is implemented by a unitary operator which is unique up to multiplication by a scalar of modulus 1. We expect that there is a unique system of implementors which is a bi-evolution, shift covariant, and time-reversal covariant (i.e. which inherits the properties of the corresponding system of Bogolubov transformations). This is partly ongoing research. We also expect the implementor of the Bogolubov transformation to be the original double product. In Evans [1988], Evans showed that the implementor of a Bogolubov transformation in the simple product case is indeed the simple product. Given more time it might be possible to adapt Evans' result to the double product case.
APA, Harvard, Vancouver, ISO, and other styles
22

Stazhynski, Uladzislau. "Discrétisation de processus à des temps d’arrêt et Quantification d'incertitude pour des algorithmes stochastiques." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX088/document.

Full text
Abstract:
This thesis consists of two parts which study two separate subjects. Chapters 1-4 are devoted to the problem of discretization of processes at stopping times. In Chapter 1 we study the optimal discretization error of stochastic integrals driven by a multidimensional continuous Brownian semimartingale. In this setting we establish a pathwise lower bound for the renormalized quadratic variation of the error, and we provide a sequence of discretization stopping times which is asymptotically optimal. The latter is defined as hitting times of random ellipsoids by the semimartingale at hand. In comparison with previously available results, we allow a quite large class of semimartingales and we prove that the asymptotic lower bound is attainable. In Chapter 2 we study the model-adaptive optimal discretization error of stochastic integrals. In Chapter 1 the construction of the optimal strategy involved knowledge of the diffusion coefficient of the semimartingale under study. In this work we provide a model-adaptive asymptotically optimal discretization strategy that does not require any prior knowledge about the model. In Chapter 3 we study the convergence in distribution of renormalized discretization errors of Ito processes for a concrete and general class of random discretization grids given by stopping times. Previous works on the subject only treat the case of dimension 1. Moreover, they either focus on particular cases of grids, or provide results under quite abstract assumptions with an implicitly specified limit distribution. In contrast, we provide the limit distribution explicitly, in a tractable form in terms of the underlying model. The results hold both for multidimensional processes and general multidimensional error terms. In Chapter 4 we study the problem of parametric inference for diffusions based on observations at random stopping times. We work in the asymptotic framework of high frequency data over a fixed horizon. Previous works on the subject consider only deterministic, strongly predictable or random observation times independent of the process, and do not cover our setting. Under mild assumptions we construct a consistent sequence of estimators for a large class of stopping time observation grids. Further we carry out the asymptotic analysis of the estimation error and establish a Central Limit Theorem (CLT) with a mixed Gaussian limit. In addition, in the case of a 1-dimensional parameter, for any sequence of estimators verifying the CLT conditions without bias, we prove a uniform a.s. lower bound on the asymptotic variance, and show that this bound is sharp. In Chapters 5-6 we study the problem of uncertainty quantification for stochastic approximation limits. In Chapter 5 we analyze the uncertainty quantification for the limit of a Stochastic Approximation (SA) algorithm. In our setup, this limit is defined as the zero of a function given by an expectation. The expectation is taken w.r.t. a random variable for which the model is assumed to depend on an uncertain parameter. We consider the SA limit as a function of this parameter. We introduce the so-called Uncertainty for SA (USA) algorithm, an SA algorithm in increasing dimension for computing the basis coefficients of a chaos expansion of this function on an orthogonal basis of a suitable Hilbert space. The almost-sure and Lp convergences of USA, in the Hilbert space, are established under mild, tractable conditions.
In Chapter 6 we analyse the L2-convergence rate of the USA algorithm designed in Chapter 5. The analysis is non-trivial due to the infinite dimensionality of the procedure. Moreover, our setting is not covered by previous works on infinite dimensional SA. The obtained rate depends non-trivially on the model and the design parameters of the algorithm. Its knowledge enables optimization of the dimension growth speed in the USA algorithm, which is the key factor of its efficient performance.
APA, Harvard, Vancouver, ISO, and other styles
23

Castrequini, Rafael Andretto 1984. "Teoria de rough paths via integração algebrica." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306323.

Full text
Abstract:
Advisor: Pedro Jose Catuogno
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
We introduce p-rough path theory following M. Gubinelli's approach, known as algebraic integration. Throughout this master's thesis, we are concerned only with the case where 1
Stochastic systems
Master in Mathematics
APA, Harvard, Vancouver, ISO, and other styles
24

Cai, Jiatu. "Méthodes asymptotiques en contrôle stochastique et applications à la finance." Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC338.

Full text
Abstract:
In this thesis, we study several mathematical finance problems related to the presence of market imperfections. Our main approach for solving them is to establish a relevant asymptotic framework in which explicit approximate solutions can be obtained for the associated control problems. In the first part of this thesis, we are interested in the pricing and hedging of European options. We first consider the question of determining the optimal rebalancing dates for a replicating portfolio in the presence of a drift in the underlying dynamics. We show that in this situation, it is possible to generate positive returns while hedging the option, and we describe a rebalancing strategy which is asymptotically optimal for a mean-variance type criterion. Then we propose an asymptotic framework for options risk management under proportional transaction costs. Inspired by Leland's approach, we develop an alternative way to build hedging portfolios enabling us to minimize hedging errors. The second part of this manuscript is devoted to the issue of tracking a stochastic target. The agent aims at staying close to the target while minimizing tracking efforts. In a small-costs asymptotics, we establish a lower bound for the value function associated to this optimization problem. This bound is interpreted in terms of the ergodic control of Brownian motion. We also provide numerous examples for which the lower bound is explicit and attained by a strategy that we describe. In the last part of this thesis, we focus on the problem of consumption-investment with capital gains taxes. We first obtain an asymptotic expansion of the associated value function, which we interpret in a probabilistic way. Then, in the case of a market with regime-switching and for an investor with recursive utility of Epstein-Zin type, we solve the problem explicitly by providing a closed-form consumption-investment strategy. Finally, we study the joint impact of transaction costs and capital gains taxes. We provide a system of corrector equations which enables us to unify the results in [ST13] and [CD13].
APA, Harvard, Vancouver, ISO, and other styles
25

Saadat, Sajedeh, and Timo Kudljakov. "Deterministic Quadrature Formulae for the Black–Scholes Model." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54612.

Full text
Abstract:
There exist many numerical methods for the numerical solution of systems of stochastic differential equations. We choose the method of deterministic quadrature formulae proposed by Müller-Gronbach and Yaroslavtseva in 2016. The idea is to apply a simplified version of the cubature in Wiener space. We explain the method and check how well it works in the simplest case of the classical Black-Scholes model.
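In the same deterministic spirit, though much simpler than the Müller-Gronbach-Yaroslavtseva construction itself, Gauss-Hermite quadrature replaces Monte Carlo draws by deterministic nodes when computing a Black-Scholes expectation; a sketch with illustrative parameters:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite rule
from math import log, sqrt, exp, erf

S0, K, T, r, sigma = 100.0, 110.0, 1.0, 0.05, 0.2

nodes, weights = hermegauss(40)            # weight function exp(-z^2 / 2)
weights = weights / np.sqrt(2.0 * np.pi)   # so that sum(weights * f(nodes)) ~ E[f(Z)]
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * nodes)
price = np.exp(-r * T) * np.sum(weights * np.maximum(ST - K, 0.0))
print("quadrature price:", price)

# closed-form Black-Scholes for comparison
N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
print("closed form:     ", S0 * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T)))
```

Because the call payoff has a kink, plain Gauss-Hermite converges only slowly in the number of nodes; handling such non-smooth functionals efficiently is what cubature-type constructions are designed for.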
APA, Harvard, Vancouver, ISO, and other styles
26

Tranchida, Julien. "Multiscale description of dynamical processes in magnetic media : from atomistic models to mesoscopic stochastic processes." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR4027/document.

Full text
Abstract:
Detailed magnetic properties of solids can be regarded as the result of the interaction between three subsystems: the effective spins, which will be our focus in this thesis, the electrons, and the crystalline lattice. These three subsystems exchange energy in many ways, in particular through relaxation processes. The nature of these processes remains extremely hard to understand, and even harder to simulate. A practical approach for performing such simulations involves adapting Langevin's description of random processes to the collective dynamics of the spins, usually called the magnetization dynamics. It consists in describing the complicated interactions between the subsystems by the effective interactions of the subsystem of interest, the spins, with a thermal bath, for which only the probability density is of relevance. This approach allows us to interpret the results of atomistic spin dynamics simulations in appropriate macroscopic terms. After presenting the numerical implementation of this methodology, a typical study of a magnetic device based on superparamagnetic iron monolayers is presented as an example. The results are compared to experimental data and allow us to validate the atomistic spin dynamics simulations.
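A minimal sketch of the kind of thermostatted spin update underlying atomistic spin dynamics: a stochastic Landau-Lifshitz-Gilbert equation for a single classical spin, integrated with a Heun scheme and renormalized to unit length. The equation form, noise coupling and parameters here are illustrative assumptions, not the implementation of the thesis.

```python
import numpy as np

# Heun (stochastic midpoint) integration of a stochastic Landau-Lifshitz-
# Gilbert equation for one classical spin s (|s| = 1) in a field B:
#   ds = -s x (B + b(t)) dt - alpha * s x (s x B) dt,
# with b a white-noise field of strength D (Langevin thermostat; illustrative).
rng = np.random.default_rng(0)
alpha, D, dt, n = 0.1, 0.05, 1e-3, 50000
B = np.array([0.0, 0.0, 1.0])

def drift(s):
    return -np.cross(s, B) - alpha * np.cross(s, np.cross(s, B))

s = np.array([1.0, 0.0, 0.0])
mz = 0.0
for _ in range(n):
    b = rng.normal(0.0, np.sqrt(2.0 * D / dt), 3)   # white-noise field sample
    f = lambda u: drift(u) - np.cross(u, b)
    s_pred = s + f(s) * dt                           # Euler predictor
    s = s + 0.5 * (f(s) + f(s_pred)) * dt            # Heun corrector
    s /= np.linalg.norm(s)                           # renormalize |s| = 1
    mz += s[2]
print("<s_z> estimate:", mz / n)
```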
APA, Harvard, Vancouver, ISO, and other styles
27

Brandi, Rafael Bruno da Silva. "Métodos de análise da função de custo futuro em problemas convexos: aplicação nas metodologias de programação dinâmica estocástica e dual estocástica." Universidade Federal de Juiz de Fora, 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/2256.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
The Brazilian National Grid (BNG) presents peculiar characteristics due to its huge territorial dimensions and the predominance of hydro generation. As the water inflows to these plants are stochastic and the large reservoirs provide the system with a pluriannual regularization capacity, the use of hydro resources must be planned in an accurate manner over a long planning period. The long-term operation planning (LTOP) problem is therefore generally solved by a chain of computational models covering a period of 5 to 10 years ahead with monthly discretization, such that the primary model of this chain is based on the Stochastic Dual Dynamic Programming (SDDP) technique. The main contribution of this thesis is to propose improvements to the Stochastic Dynamic Programming techniques usually applied to LTOP problems. In the fashion of an iterative cut selection, it first proposes an LTOP solution model that uses an efficient state-space discretization for Stochastic Dynamic Programming (SDP), called ESDP. The proposed SDP model has a well-defined convergence criterion, such that including the CVaR risk measure does not significantly hinder the convergence analysis. Due to the lack of good upper-bound estimators in SDDP when CVaR is included, additional issues are encountered in defining a convergence criterion; so, based on the ESDP convergence analysis, a new criterion for SDDP convergence is proposed that can be used regardless of the CVaR representation and adds no extra computational burden. Moreover, the proposed convergence criterion for SDDP admits a more detailed description, in which forward paths can be individually assessed and then discarded when they no longer contribute decisively to convergence, reducing computational time, or replaced by new series within a more selective resampling of the scenarios used in SDDP. Based on the aggregate reservoir representation, the proposed SDDP convergence methods and the ESDP were applied to LTOP problems related to the BNG. Results show improvements in the SDDP-based technique and the effectiveness of the proposed convergence criterion for SDDP when CVaR is used.
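For readers unfamiliar with the risk measure discussed above, a minimal sample-based CVaR estimator in the Rockafellar-Uryasev form, together with the convex expectation/CVaR combination commonly used in risk-averse SDDP objectives, might look as follows (the weight lam is a hypothetical parameter, not taken from the thesis):

    import numpy as np

    def cvar(costs, alpha=0.95):
        # Sample CVaR_alpha of a cost distribution: roughly, the average of the
        # worst (1 - alpha) fraction of outcomes (Rockafellar-Uryasev form).
        c = np.asarray(costs, dtype=float)
        var = np.quantile(c, alpha)                    # Value-at-Risk threshold
        return var + np.mean(np.maximum(c - var, 0.0)) / (1.0 - alpha)

    def risk_adjusted(costs, lam=0.5, alpha=0.95):
        # Convex combination of expected cost and CVaR, a common
        # risk-averse objective in SDDP-type models.
        return (1 - lam) * np.mean(costs) + lam * cvar(costs, alpha)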
APA, Harvard, Vancouver, ISO, and other styles
28

Bouayed, Mohamed Amine. "Modélisation stochastique par éléments finis en géomécanique." Vandoeuvre-les-Nancy, INPL, 1997. http://www.theses.fr/1997INPL087N.

Full text
Abstract:
This work concerns the application of the stochastic finite element method to geomechanical problems and the evaluation of its usefulness for the engineer. We first recall various elements of probability theory and attempt to clarify certain specific points concerning the description of geotechnical media by means of random variables and stochastic fields. After a brief review of the traditional finite element method, we analyze two aspects of the stochastic finite element method (SFEM): the first deals with discretization techniques for random fields, and the second with the probabilistic techniques used to formulate the SFEM. Four methods are presented (Monte Carlo, Rosenblueth, perturbations, and first-order second-moment with its variants: the numerical methods of Evans and of polynomial ratios). We then describe the software developed within the framework of this thesis. Examples ranging from the simplest (elementary solids) to the most complex (dams) are treated with this software in order to illustrate the typical results given by the method, and the interpretation of these results is discussed. In the last part we address the application of the SFEM to nonlinear analyses. This application is illustrated by the analysis of two geotechnical structures whose materials follow the Duncan-Kondner rheological law. We show that the SFEM allows a systematic evaluation of the importance of the various parameters of the chosen constitutive model. We finally present our conclusions concerning the always delicate interpretation of the results given by the method and its usefulness for the engineer.
APA, Harvard, Vancouver, ISO, and other styles
29

Trstanova, Zofia. "Mathematical and algorithmic analysis of modified Langevin dynamics." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM054/document.

Full text
Abstract:
In statistical physics, the macroscopic information of interest for the systems under consideration can be inferred from averages over microscopic configurations distributed according to probability measures µ characterizing the thermodynamic state of the system. Due to the high dimensionality of the system (which is proportional to the number of particles), these configurations are most often sampled using trajectories of stochastic differential equations or Markov chains ergodic for the probability measure µ, which describes a system at constant temperature. One popular stochastic process allowing to sample this measure is the Langevin dynamics. In practice, the Langevin dynamics cannot be integrated analytically; its solution is therefore approximated with a numerical scheme. The numerical analysis of such discretization schemes is by now well understood when the kinetic energy is the standard quadratic one. One important limitation of the estimators of ergodic averages is their possibly large statistical error. Under certain assumptions on the potential and kinetic energy, a central limit theorem can be shown to hold. The asymptotic variance may be large due to the metastability of the Langevin process, which occurs as soon as the probability measure µ is multimodal. In this thesis, we consider the discretization of modified Langevin dynamics which improve the sampling of the Boltzmann-Gibbs distribution by introducing a more general kinetic energy function U instead of the standard quadratic one. We have in fact two situations in mind: (a) Adaptively Restrained (AR) Langevin dynamics, where the kinetic energy vanishes for small momenta, while it agrees with the standard kinetic energy for large momenta. The interest of this dynamics is that particles with low energy are restrained. The computational gain follows from the fact that the interactions between restrained particles need not be updated. Due to the separability of the position and momenta marginals of the distribution, the averages of observables which depend on the position variable are equal to the ones computed with the standard Langevin dynamics. The efficiency of this method lies in the trade-off between the computational gain and the asymptotic variance of ergodic averages, which may increase compared to the standard dynamics since there are a priori more correlations in time due to restrained particles. Moreover, since the kinetic energy vanishes on some open set, the associated Langevin dynamics fails to be hypoelliptic. In fact, a first task of this thesis is to prove that the Langevin dynamics with such a modified kinetic energy is ergodic. The next step is to present a mathematical analysis of the asymptotic variance of the AR-Langevin dynamics. In order to complement the analysis of this method, we estimate the algorithmic speed-up of the cost of a single iteration as a function of the parameters of the dynamics. (b) We also consider Langevin dynamics with kinetic energies growing more than quadratically at infinity, in an attempt to reduce metastability. The extra freedom provided by the choice of the kinetic energy should be used in order to reduce the metastability of the dynamics. In this thesis, we explore the choice of the kinetic energy and we demonstrate, on a simple low-dimensional example, an improved convergence of ergodic averages. An issue with the situations we consider is the stability of the discretized schemes. In order to obtain a weakly consistent method of order 2 (which is no longer trivial for a general kinetic energy), we rely on recently developed Metropolis schemes.
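Schematically, the modified dynamics replaces the quadratic kinetic energy by a general U(p). A minimal Euler-Maruyama sketch in one dimension, with a hard momentum cutoff standing in for the smooth adaptively-restrained kinetic energy (our simplification; the thesis itself relies on more careful, e.g. Metropolized, schemes):

    import numpy as np

    # One-dimensional Langevin dynamics with a general kinetic energy U(p):
    #   dq = U'(p) dt,   dp = -V'(q) dt - gamma * U'(p) dt + sqrt(2*gamma/beta) dW.
    def Uprime(p, pmin=0.5):
        # Schematic AR-like kinetic energy gradient: zero for small momenta,
        # standard (U'(p) = p) for large momenta.
        return np.where(np.abs(p) < pmin, 0.0, p)

    def Vprime(q):
        return q**3 - q      # double-well potential V(q) = q^4/4 - q^2/2

    def em_step(q, p, dt, rng, gamma=1.0, beta=1.0):
        noise = np.sqrt(2.0 * gamma / beta * dt) * rng.standard_normal()
        q_new = q + Uprime(p) * dt
        p_new = p - Vprime(q) * dt - gamma * Uprime(p) * dt + noise
        return q_new, p_new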
APA, Harvard, Vancouver, ISO, and other styles
30

Rey, Clément. "Étude et modélisation des équations différentielles stochastiques." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1177/document.

Full text
Abstract:
The development of technology and computer science in the last decades has led to the emergence of numerical methods for the approximation of Stochastic Differential Equations (SDEs) and for the estimation of their parameters. This thesis treats both of these aspects and focuses in particular on the effectiveness of those methods. The first part is devoted to the approximation of SDEs by numerical schemes, while the second part deals with the estimation of the parameters of the Wishart process. First, we study approximation schemes for SDEs defined on a time grid of size $n$. We say that the scheme $X^n$ converges weakly to the diffusion $X$ with order $h \in \mathbb{N}$ if for every $T>0$, $\vert \mathbb{E}[f(X_T)-f(X_T^n)] \vert \leqslant C_f/n^h$. Until now, except in some particular cases (the Euler and Ninomiya-Victoir schemes), research on this topic requires that $C_f$ depend on the supremum norm of $f$ as well as on its derivatives; in other words, $C_f = C \sum_{\vert \alpha \vert \leqslant q} \Vert \partial_{\alpha} f \Vert_{\infty}$. Our goal is to show that if the scheme converges weakly with order $h$ for such a $C_f$, then, under non-degeneracy and regularity assumptions on the coefficients, one can obtain the same result with $C_f = C \Vert f \Vert_{\infty}$. We are thus able to estimate $\mathbb{E}[f(X_T)]$ for a bounded and measurable function $f$; we then say that the scheme converges in total variation to the diffusion with rate $h$. We also prove that the density of $X_T^n$ and its derivatives converge toward those of $X_T$. The proof of these results relies on a variant of the Malliavin calculus based on the random variables used in the scheme. The great benefit of our approach is that it does not treat the case of one particular scheme: our result applies to both the Euler ($h=1$) and Ninomiya-Victoir ($h=2$) schemes, as well as to a generic set of schemes. Furthermore, the random variables used in these schemes do not have an imposed distribution law but belong to a set of laws, which leads us to consider our result as an invariance principle as well. We also illustrate this result for a third weak order scheme for one-dimensional SDEs. The second part of this thesis deals with SDE parameter estimation. More specifically, we study the Maximum Likelihood Estimator (MLE) of the parameters that appear in the matrix Wishart model. This process is the multi-dimensional version of the Cox-Ingersoll-Ross (CIR) process; its specificity lies in the square root term which appears in the diffusion coefficient. Using these processes, it is possible to generalize the Heston model to the case of a local covariance. This thesis provides the construction of the MLE of the Wishart parameters, together with the speed of convergence and the limit laws for the ergodic case and for some non-ergodic cases. In order to obtain those results, we use various methods, namely ergodic theorems, time-change methods, and the study of the joint Laplace transform of the Wishart process together with its average process. Moreover, in this latter study, we extend the domain of definition of this joint Laplace transform.
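The weak-order definition above can be checked empirically. A minimal sketch for the Euler scheme ($h=1$) on geometric Brownian motion with $f(x)=x^2$, where the exact expectation is known in closed form (our test setup, not the thesis's):

    import numpy as np

    # Empirical check of |E[f(X_T)] - E[f(X_T^n)]| <= C_f / n^h with h = 1
    # for the Euler scheme on GBM: dX = mu*X dt + sigma*X dW, where
    # E[X_T^2] = X0^2 * exp((2*mu + sigma^2) * T) exactly.
    rng = np.random.default_rng(0)
    mu, sigma, X0, T, M = 0.1, 0.5, 1.0, 1.0, 1_000_000
    exact = X0**2 * np.exp((2 * mu + sigma**2) * T)

    for n in (4, 8, 16, 32):
        dt = T / n
        X = np.full(M, X0)
        for _ in range(n):
            X += mu * X * dt + sigma * X * np.sqrt(dt) * rng.standard_normal(M)
        # the weak error roughly halves when n doubles (order 1),
        # up to the residual Monte Carlo noise of the estimator
        print(n, abs(np.mean(X**2) - exact))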
APA, Harvard, Vancouver, ISO, and other styles
31

Tryoen, Julie. "Méthodes de Galerkin stochastiques adaptatives pour la propagation d'incertitudes paramétriques dans les modèles hyperboliques." Phd thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00795322.

Full text
Abstract:
We consider stochastic Galerkin methods for hyperbolic systems involving uncertain input data with known distribution laws parameterized by random variables. We are interested in problems where a shock appears almost surely in finite time; in this case, the solution can develop discontinuities in both the spatial and stochastic domains. We use a Finite Volume scheme for the spatial discretization and a Galerkin projection based on a piecewise polynomial approximation for the stochastic discretization. We propose a Roe-type solver with an entropy corrector for the Galerkin system, using an original technique to approximate the absolute value of the Roe matrix and an adaptation of the entropy corrector of Dubois and Mehlmann. The proposed method remains costly because a very fine stochastic discretization is needed to represent the solution in the vicinity of discontinuities, so adaptive strategies are necessary. As the discontinuities are localized in space and evolve in time, we propose stochastic representations depending on space and time. We formulate this methodology in a multi-resolution context based on the concept of binary trees for describing the stochastic discretization; the adaptive enrichment and pruning steps are carried out using multi-resolution analysis criteria. In the multidimensional case, an anisotropic version of the adaptive procedure is proposed. The methodology is evaluated on the system of Euler equations in a shock tube and on the Burgers equation in one and two stochastic dimensions.
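As a point of reference for the deterministic building block, a first-order finite-volume update for Burgers' equation might look as follows; we use the simple local Lax-Friedrichs (Rusanov) flux here, whereas the thesis applies a Roe-type solver with an entropy corrector to the full stochastic Galerkin system:

    import numpy as np

    # One explicit finite-volume step for Burgers' equation u_t + (u^2/2)_x = 0
    # with the local Lax-Friedrichs (Rusanov) interface flux.
    # Stability requires the CFL condition dt <= dx / max|u|.
    def burgers_step(u, dx, dt):
        f = 0.5 * u**2
        uL, uR = u[:-1], u[1:]                   # states on each interface side
        a = np.maximum(np.abs(uL), np.abs(uR))   # local wave-speed bound
        flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (uR - uL)
        unew = u.copy()
        unew[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        return unew                              # boundary cells kept fixed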
APA, Harvard, Vancouver, ISO, and other styles
32

Pustějovský, Michal. "Optimalizace teplotního pole s fázovou přeměnou." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232173.

Full text
Abstract:
This thesis deals with the modelling of continuous casting of steel, a steel-manufacturing process that has achieved a dominant position both in the Czech Republic and worldwide. The cast bar considered here has a circular cross-section, a shape rarely studied in academic works. The first part of the thesis focuses on creating a numerical model of the thermal field, using the finite difference method in cylindrical coordinates. This model is then employed in the optimization part, which represents a control problem for an abrupt step change of the casting speed. The main goal is to find out whether the computation of the numerical model and the optimization can both be parallelized using spatial decomposition. To achieve that, the Progressive Hedging Algorithm from the field of stochastic optimization has been used.
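The cylindrical-coordinate building block of such a model can be sketched as follows: one explicit finite-difference step for the radial heat equation on a circular cross-section (phase change, casting speed, and realistic boundary conditions omitted; this is our illustration, not the thesis's code):

    import numpy as np

    # One explicit finite-difference step for dT/dt = alpha * (T_rr + T_r / r)
    # on 0 <= r <= R. Stability requires roughly dt <= dr**2 / (4 * alpha).
    def radial_heat_step(T, dr, dt, alpha):
        Tn = T.copy()
        r = np.arange(len(T)) * dr
        # interior nodes: central differences for T_rr and T_r
        Tn[1:-1] = T[1:-1] + alpha * dt * (
            (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
            + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
        )
        # axis r = 0: by symmetry T_r = 0 and dT/dt = 2*alpha*T_rr there
        Tn[0] = T[0] + 4 * alpha * dt * (T[1] - T[0]) / dr**2
        return Tn   # Tn[-1] is left to the surface boundary condition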
APA, Harvard, Vancouver, ISO, and other styles
33

Fauth, Alexis. "Contributions à la modélisation des données financières à hautes fréquences." Thesis, Paris 1, 2014. http://www.theses.fr/2014PA010019.

Full text
Abstract:
This thesis was carried out within the company Invivoo. The main objective was to find investment strategies with high gain and low risk; the research work was mainly driven by the latter point. In this sense, we sought to generalize a model faithful to the reality of financial markets, both for low- and high-frequency data and, at very high frequency, variation by variation.
APA, Harvard, Vancouver, ISO, and other styles
34

Hamdi, Tarek. "Calcul stochastique commutatif et non-commutatif : théorie et application." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2015/document.

Full text
Abstract:
My PhD work is composed of two distinct parts: the first is devoted to the discrete-time stochastic analysis of obtuse random walks, while the second is linked to free probability. In the first part, we present a construction of the stochastic integral of predictable square-integrable processes and of the associated multiple stochastic integrals of symmetric functions on $\mathbb{N}^n$ ($n \geq 1$) with respect to a family of $d$-dimensional normal martingales. This construction allows the chaotic representation property to be studied in discrete time and leads to a construction of the gradient and divergence operators on the corresponding Wiener chaos. [...] of a nonlinear PDE, while the second is combinatorial in nature. In a second step, we revisit the description of the spectral measure of the radial part of the Brownian motion on $GL(d,\mathbb{C})$ as $d \to +\infty$. Precisely, let $(Z^{(d)}_t)_{t \geq 0}$ be a Brownian motion on $GL(d,\mathbb{C})$ and consider $\nu_t$ the limit of the distribution of $(Z^{(d)}_{t/d})^{\star} Z^{(d)}_{t/d}$ with respect to $\mathbb{E} \times \mathrm{tr}$. Biane proved that this measure is absolutely continuous with respect to the Lebesgue measure and that its support is compact in $\mathbb{R}_+$. Our contribution consists in reproving Biane's result starting from an integral representation of the sequence of moments over a Jordan curve around the origin, using simple tools of real and complex analysis.
APA, Harvard, Vancouver, ISO, and other styles
35

Bauzet, Caroline. "Etude d'équations aux dérivées partielles stochastiques." Thesis, Pau, 2013. http://www.theses.fr/2013PAUU3007/document.

Full text
Abstract:
This thesis deals with the mathematical field of stochastic nonlinear partial differential equations' analysis. We are interested in parabolic and hyperbolic PDEs stochastically perturbed in the Itô sense: randomness is introduced by adding a stochastic integral (an Itô integral), which may or may not depend on the solution; we then talk about a multiplicative or an additive noise. The presence of the random variable does not allow us to systematically apply the classical tools of PDE analysis. Our aim is to adapt known techniques of the deterministic setting to nonlinear stochastic PDE analysis by proposing alternative methods. The results obtained are described in the five chapters of this thesis. In Chapter I, we investigate a stochastic perturbation of the Barenblatt equations. By using an implicit time discretization, we establish the existence and uniqueness of the solution in the additive case; thanks to the properties of such a solution, we are able to extend this result to the multiplicative noise using a fixed-point theorem. In Chapter II, we consider a class of stochastic equations of Barenblatt type in an abstract frame, generalizing the results of Chapter I. In Chapter III, we study the Cauchy problem for a stochastic conservation law. We show the existence of a solution via an artificial viscosity method; the compactness arguments are based on Young measure theory, and uniqueness is proved by an adaptation of the Kruzhkov doubling-of-variables technique. In Chapter IV, we are interested in the Dirichlet problem for the stochastic conservation law studied in Chapter III; the remarkable point is the use of the Kruzhkov semi-entropies to show the uniqueness of the solution. In Chapter V, we introduce a splitting method to propose a numerical approach to the problem studied in Chapter IV, followed by some simulations of the stochastic Burgers equation in the one-dimensional case.
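Schematically, a Lie splitting step of the kind described for Chapter V alternates a deterministic conservation-law update with a stochastic increment. A minimal sketch for a stochastic Burgers equation with additive noise (the discretization choices below are ours, not the thesis's):

    import numpy as np

    # Lie splitting for du + (u^2/2)_x dt = eps * dW_t: one deterministic
    # Lax-Friedrichs-type step, then one additive noise increment. Space-time
    # white noise is mimicked by independent cell-wise Gaussians whose
    # variance dt/dx matches a cell-averaged increment (illustration only).
    def split_step(u, dx, dt, eps, rng):
        # (1) deterministic conservation-law step
        f = 0.5 * u**2
        a = np.abs(u).max() + 1e-12              # global wave-speed bound
        flux = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (u[1:] - u[:-1])
        u = u.copy()
        u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        # (2) stochastic step: add the noise increment on interior cells
        u[1:-1] += eps * np.sqrt(dt / dx) * rng.standard_normal(u.size - 2)
        return u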
APA, Harvard, Vancouver, ISO, and other styles
36

"Successive discretization procedures for stochastic programming with recourse." Massachusetts Institute of Technology, Operations Research Center, 1985. http://hdl.handle.net/1721.1/5296.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lawi, Stéphan. "Solvable integrals of stochastic processes and q-deformed processes /." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=94718&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Cho, Nhansook. "Weak convergence of stochastic integrals and stochastic differential equations driven by martingale measure and its applications." 1994. http://catalog.hathitrust.org/api/volumes/oclc/31493948.html.

Full text
Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1994.
Typescript. Description based on print version record. Includes bibliographical references (leaves 142-144).
APA, Harvard, Vancouver, ISO, and other styles
39

Bonnet, Frederic D. R. "Option pricing using path integrals." 2010. http://hdl.handle.net/2440/56951.

Full text
Abstract:
It is well established that stock market volatility has a memory of the past; moreover, volatility correlations are found to be long-ranged. As a consequence, volatility cannot in general be characterized by a single correlation time. Recent empirical work suggests that the volatility correlation functions of various assets actually decay as a power law. It is also well established that the distribution functions for the returns do not obey a Gaussian distribution, but follow more the type of distributions that incorporate what are commonly known as fat-tailed distributions. As a result, if one is to model the evolution of the stock price, the stock market or any financial derivative, then standard Brownian motion models are inaccurate: one must take into account the results obtained from empirical studies and work with models that include the realistic features observed on the market. In this thesis we show that it is possible to derive the path integral for a non-Gaussian option pricing model that can capture fat tails. However, we find that the path integral technique can only be used on a very small set of problems, as a number of situations of interest are shown to be intractable.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2010
APA, Harvard, Vancouver, ISO, and other styles
40

(6368468), Daesung Kim. "Stability for functional and geometric inequalities and a stochastic representation of fractional integrals and nonlocal operators." Thesis, 2019.

Find full text
Abstract:
The dissertation consists of two research topics.

The first research direction is to study the stability of functional and geometric inequalities. The stability problem is to estimate the deficit of a functional or geometric inequality in terms of the distance from the class of optimizers, or in terms of a functional that identifies the optimizers. In particular, we investigate the logarithmic Sobolev inequality, the Beckner-Hirschman inequality (the entropic uncertainty principle), and isoperimetric-type inequalities for the expected lifetime of Brownian motion.

The second topic of the thesis is a stochastic representation of fractional integrals and nonlocal operators. We extend the Hardy-Littlewood-Sobolev inequality to symmetric Markov semigroups. To this end, we construct a stochastic representation of the fractional integral using the background radiation process. The inequality follows from a new inequality for the fractional Littlewood-Paley square function. We also prove the Hardy-Stein identity for non-symmetric pure jump Lévy processes and the L^p boundedness of a certain class of Fourier multiplier operators arising from non-symmetric pure jump Lévy processes. The proof is based on Itô's formula for general jump processes and the symmetrization of Lévy processes.
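For reference, the classical statement that the thesis extends to symmetric Markov semigroups is the Hardy-Littlewood-Sobolev inequality for the fractional integral (Riesz potential) on R^n:

    % Fractional integral (Riesz potential), up to a normalizing constant,
    % and the classical Hardy-Littlewood-Sobolev inequality:
    \[
      I_\alpha f(x) = \int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{\,n-\alpha}} \, dy,
      \qquad 0 < \alpha < n,
    \]
    \[
      \| I_\alpha f \|_{L^q(\mathbb{R}^n)} \le C_{n,p,\alpha} \, \| f \|_{L^p(\mathbb{R}^n)},
      \qquad 1 < p < q < \infty, \quad \frac{1}{q} = \frac{1}{p} - \frac{\alpha}{n}.
    \]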
APA, Harvard, Vancouver, ISO, and other styles
41

Keeler, Holger Paul. "Stochastic routing models in sensor networks." 2010. http://repository.unimelb.edu.au/10187/8529.

Full text
Abstract:
Sensor networks are an evolving technology that promises numerous applications. The random and dynamic structure of sensor networks has motivated the suggestion of greedy data-routing algorithms.
In this thesis stochastic models are developed to study the advancement of messages under greedy routing in sensor networks. A model framework that is based on homogeneous spatial Poisson processes is formulated and examined to give a better understanding of the stochastic dependencies arising in the system. The effects of the model assumptions and the inherent dependencies are discussed and analyzed. A simple power-saving sleep scheme is included, and its effects on the local node density are addressed to reveal that it reduces one of the dependencies in the model.
Single hop expressions describing the advancement of messages are derived, and asymptotic expressions for the hop length moments are obtained. Expressions for the distribution of the multihop advancement of messages are derived. These expressions involve high-dimensional integrals, which are evaluated with quasi-Monte Carlo integration methods. An importance sampling function is derived to speed up the quasi-Monte Carlo methods. The subsequent results agree extremely well with those obtained via routing simulations. A renewal process model is proposed to model multihop advancements, and is justified under certain assumptions.
The model framework is extended by incorporating a spatially dependent density, which is inversely proportional to the sink distance. The aim of this extension is to demonstrate that an inhomogeneous Poisson process can be used to model a sensor network with spatially dependent node density. Elliptic integrals and asymptotic approximations are used to describe the random behaviour of hops. The final model extension entails including random transmission radii, the effects of which are discussed and analyzed. The thesis is concluded by giving future research tasks and directions.
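A minimal Monte Carlo version of the single-hop advancement quantity studied here: drop a Poisson number of points uniformly in the transmission disc and let the greedy relay be the neighbour with maximal progress toward a distant sink (all parameter values below are ours):

    import numpy as np

    # Single-hop advancement under greedy routing: nodes form a homogeneous
    # Poisson process of intensity lam; within the transmission radius rho,
    # the relay maximizing progress toward a sink far away in +x is chosen.
    rng = np.random.default_rng(1)
    lam, rho, trials = 5.0, 1.0, 10_000
    adv = []
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * rho**2)    # number of points in the disc
        if n == 0:
            continue                             # empty disc: routing dead end
        r = rho * np.sqrt(rng.random(n))         # uniform points in the disc
        th = 2 * np.pi * rng.random(n)
        x = r * np.cos(th)                       # progress component of each node
        if x.max() > 0:
            adv.append(x.max())                  # greedy: maximal forward progress
    print(np.mean(adv))   # mean hop advancement, given a forward neighbour exists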
APA, Harvard, Vancouver, ISO, and other styles
42

Psaros, Andriopoulos Apostolos. "Sparse representations and quadratic approximations in path integral techniques for stochastic response analysis of diverse systems/structures." Thesis, 2019. https://doi.org/10.7916/d8-xcxx-my55.

Full text
Abstract:
Uncertainty propagation in engineering mechanics and dynamics is a highly challenging problem that requires development of analytical/numerical techniques for determining the stochastic response of complex engineering systems. In this regard, although Monte Carlo simulation (MCS) has been the most versatile technique for addressing the above problem, it can become computationally daunting when faced with high-dimensional systems or with computing very low probability events. Thus, there is a demand for pursuing more computationally efficient methodologies. Recently, a Wiener path integral (WPI) technique, whose origins can be found in theoretical physics, has been developed in the field of engineering dynamics for determining the response transition probability density function (PDF) of nonlinear oscillators subject to non-white, non-Gaussian and non-stationary excitation processes. In the present work, the Wiener path integral technique is enhanced, extended and generalized with respect to three main aspects; namely, versatility, computational efficiency and accuracy. Specifically, the need for increasingly sophisticated modeling of excitations has led recently to the utilization of fractional calculus, which can be construed as a generalization of classical calculus. Motivated by the above developments, the WPI technique is extended herein to account for stochastic excitations modeled via fractional-order filters. To this aim, relying on a variational formulation and on the most probable path approximation yields a deterministic fractional boundary value problem to be solved numerically for obtaining the oscillator joint response PDF. Further, appropriate multi-dimensional bases are constructed for approximating, in a computationally efficient manner, the non-stationary joint response PDF. In this regard, two distinct approaches are pursued. The first employs expansions based on Kronecker products of bases (e.g., wavelets), while the second utilizes representations based on positive definite functions. Next, the localization capabilities of the WPI technique are exploited for determining PDF points in the joint space-time domain to be used for evaluating the expansion coefficients at a relatively low computational cost. Subsequently, compressive sampling procedures are employed in conjunction with group sparsity concepts and appropriate optimization algorithms for decreasing even further the associated computational cost. It is shown that the herein developed enhancement renders the technique capable of treating readily relatively high-dimensional stochastic systems. More importantly, it is shown that this enhancement in computational efficiency becomes more prevalent as the number of stochastic dimensions increases; thus, rendering the herein proposed sparse representation approach indispensable, especially for high-dimensional systems. Next, a quadratic approximation of the WPI is developed for enhancing the accuracy degree of the technique. Concisely, following a functional series expansion, higher-order terms are accounted for, which is equivalent to considering not only the most probable path but also fluctuations around it. These fluctuations are incorporated into a state-dependent factor by which the exponential part of each PDF value is multiplied. This localization of the state-dependent factor yields superior accuracy as compared to the standard most probable path WPI approximation where the factor is constant and state-invariant. 
An additional advantage relates to efficient structural reliability assessment, and in particular, to direct estimation of low probability events (e.g., failure probabilities), without possessing the complete transition PDF. Overall, the developments in this thesis render the WPI technique a potent tool for determining, in a reliable manner and with a minimal computational cost, the stochastic response of nonlinear oscillators subject to an extended range of excitation processes. Several numerical examples, pertaining to both nonlinear dynamical systems subject to external excitations and to a special class of engineering mechanics problems with stochastic media properties, are considered for demonstrating the reliability of the developed techniques. In all cases, the degree of accuracy and the computational efficiency exhibited are assessed by comparisons with pertinent MCS data.
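The variational core of the most probable path approximation can be stated compactly. For a scalar diffusion with unit diffusion coefficient, the generic Onsager-Machlup-type form reads (a schematic statement; the thesis treats far more general oscillators and excitations):

    % Most-probable-path approximation behind WPI-type techniques, written
    % schematically for a scalar diffusion dX_t = a(X_t, t) dt + dW_t:
    \[
      p(x_f, t_f \mid x_i, t_i) \approx C \, e^{-S[x_{\mathrm{mp}}]},
      \qquad
      S[x] = \int_{t_i}^{t_f} \tfrac{1}{2} \bigl( \dot{x}(t) - a(x(t), t) \bigr)^2 \, dt,
    \]
    % where x_mp minimizes S over paths with x(t_i) = x_i and x(t_f) = x_f;
    % the quadratic WPI approximation additionally accounts for Gaussian
    % fluctuations around x_mp.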
APA, Harvard, Vancouver, ISO, and other styles
43

Deng, Jian. "Stochastic collocation methods for aeroelastic system with uncertainty." Master's thesis, 2009. http://hdl.handle.net/10048/557.

Full text
Abstract:
Thesis (M. Sc.)--University of Alberta, 2009.
Title from pdf file main screen (viewed on Sept. 3, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics, Department of Mathematical and Statistical Sciences, University of Alberta." Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
44

"On the rate at which a homogeneous diffusion approaches a limit : an application of the large deviation theory of certain stochastic integrals." Laboratory for Information and Decision Systems, MIT], 1985. http://hdl.handle.net/1721.1/2884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Barbuto, Pedro Marzagão. "LSMC for pricing American options under the Heston model." Master's thesis, 2013. http://hdl.handle.net/10071/6899.

Full text
Abstract:
The purpose of this thesis is to price American-style options using the Least Squares Monte Carlo method proposed by Longstaff and Schwartz (2001), combined with the well-known Heston model (1993). Regarding the discretization of the Heston model, three of the most important methods are tested: the Full Truncation Euler scheme proposed by Lord et al. (2008), and the Truncated Gaussian and Quadratic Exponential schemes suggested by Andersen (2008).
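A single step of the Full Truncation Euler scheme of Lord et al. (2008) can be sketched as follows: only the positive part of the variance enters the drift and diffusion, so the variance may become negative but is never used negative. The LSMC layer then regresses continuation values on paths generated this way (the function below is our sketch, not the thesis's code):

    import numpy as np

    # One Full Truncation Euler step for the Heston model:
    #   dv = kappa*(theta - v) dt + xi*sqrt(v) dW_v,  corr(dW_s, dW_v) = rho,
    # using v^+ = max(v, 0) in both drift and diffusion, and a log-Euler
    # step for the asset price.
    def heston_fte_step(s, v, dt, kappa, theta, xi, r, rho, rng):
        vplus = np.maximum(v, 0.0)
        z1 = rng.standard_normal(np.shape(v))
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(np.shape(v))
        s_new = s * np.exp((r - 0.5 * vplus) * dt + np.sqrt(vplus * dt) * z1)
        v_new = v + kappa * (theta - vplus) * dt + xi * np.sqrt(vplus * dt) * z2
        return s_new, v_new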
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Z., Y. Chen, Yakun Guo, X. Zhang, and S. Du. "Element failure probability of soil slope under consideration of random groundwater level." 2021. http://hdl.handle.net/10454/18421.

Full text
Abstract:
Yes
The instability of soil slopes is directly related to both the shear parameters of the soil material and the groundwater level, both of which usually involve some uncertainty. In this study, a novel method, the element failure probability (EFP) method, is proposed to analyse the failure of soil slopes. Based on upper bound theory, finite element discretization, and stochastic programming theory, an upper bound stochastic programming model is established by simultaneously considering the randomness of the shear parameters and of the groundwater level to analyse the reliability of slopes. The model is then solved by using the Monte Carlo method based on the random shear parameters and groundwater levels. Finally, a formula is derived for the element failure probability based on the safety factors and velocity fields of the upper bound method. The probability of a slope failure can be calculated by using the safety factor, and the distribution of failure regions in space can be determined by using the location information of the elements. The proposed method is validated on a classic example. This study has theoretical value for further research attempting to advance the application of plastic limit analysis to slope reliability.
National Natural Science Foundation of China (grant no. 51564026), the Research Foundation of Kunming University of Science and Technology (grant no. KKSY201904006) and the Key Laboratory of Rock Mechanics and Geohazards of Zhejiang Province (grant no. ZJRM-2018-Z-02).
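As a drastically simplified stand-in for the element-level analysis, the Monte Carlo logic of a failure probability with random shear parameters and a random water table can be illustrated with the classical infinite-slope model (all distributions and values below are illustrative assumptions, not the paper's):

    import numpy as np

    # Monte Carlo failure probability for an infinite slope with slope-parallel
    # seepage: FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] /
    #              [gamma * z * sin(beta) * cos(beta)],  u = gamma_w * hw * cos^2(beta).
    rng = np.random.default_rng(2)
    N = 100_000
    beta = np.radians(30.0)                  # slope angle
    z, gamma, gamma_w = 5.0, 19.0, 9.81      # depth (m), unit weights (kN/m^3)
    c = rng.normal(10.0, 2.0, N)             # cohesion (kPa), random
    phi = np.radians(rng.normal(30.0, 3.0, N))   # friction angle, random
    hw = rng.uniform(0.0, z, N)              # water height above the slip surface

    u = gamma_w * hw * np.cos(beta)**2       # pore pressure on the slip plane
    fs = (c + (gamma * z * np.cos(beta)**2 - u) * np.tan(phi)) / (
          gamma * z * np.sin(beta) * np.cos(beta))
    print((fs < 1.0).mean())                 # estimated probability of failure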
APA, Harvard, Vancouver, ISO, and other styles
47

(7483880), Zihe Zhou. "Optimizing Reflected Brownian Motion: A Numerical Study." Thesis, 2019.

Find full text
Abstract:
This thesis focuses on the optimization of a generic objective function based on reflected Brownian motion (RBM). We investigate several approaches, including a partial differential equation approach, where we write the objective function in terms of a Hamilton-Jacobi-Bellman equation using the dynamic programming principle, and a gradient descent approach, where we use two different gradient estimators. We provide extensive numerical results for the gradient descent approach and discuss the difficulties and future study opportunities for this problem.
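In the spirit of the gradient descent approach, a sketch that tunes the drift of a reflected Brownian motion by finite-difference gradients with common random numbers (the objective, parameters, and names are all ours, not the thesis's estimators):

    import numpy as np

    # Tune the drift theta of an RBM (Euler steps with reflection at 0) to
    # minimize E[(X_T - target)^2], with common-random-numbers central
    # finite differences for the gradient.
    def terminal_rbm(theta, dW, dt, x0=1.0):
        x = np.full(dW.shape[0], x0)
        for k in range(dW.shape[1]):
            x = np.maximum(x - theta * dt + dW[:, k], 0.0)  # reflect at 0
        return x

    rng = np.random.default_rng(3)
    paths, steps, dt, h, lr, target = 20_000, 100, 0.01, 1e-2, 0.5, 0.5
    theta = 0.0
    for _ in range(30):
        dW = np.sqrt(dt) * rng.standard_normal((paths, steps))   # shared noise
        obj = lambda th: np.mean((terminal_rbm(th, dW, dt) - target) ** 2)
        grad = (obj(theta + h) - obj(theta - h)) / (2 * h)
        theta -= lr * grad
    print(theta)   # drift steering E[(X_T - target)^2] toward its minimum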
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography