Theses / dissertations on the topic "Processus stochastiques en grande dimension"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 32 best theses / dissertations for your research on the topic "Processus stochastiques en grande dimension".
Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, if it is available in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.
Godichon-Baggioni, Antoine. "Algorithmes stochastiques pour la statistique robuste en grande dimension". Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS053/document.
This thesis focuses on stochastic algorithms in high dimension and their application to robust statistics. In what follows, the expression "high dimension" may be used when the size of the studied sample is large or when the variables under consideration take values in high-dimensional spaces (not necessarily finite-dimensional). To analyze this kind of data, it can be interesting to consider algorithms which are fast, which do not need to store all the data, and which allow the estimates to be updated easily. In large samples of high-dimensional data, outlier detection is often complicated. Nevertheless, these outliers, even if they are few, can strongly disturb simple indicators like the mean and the covariance. We focus on robust estimates, which are not overly sensitive to outliers. In the first part, we are interested in the recursive estimation of the geometric median, a robust location indicator which can therefore be preferred to the mean when part of the studied data is contaminated. For this purpose, we introduce a Robbins-Monro algorithm as well as its averaged version, before building non-asymptotic confidence balls for these estimates and exhibiting their $L^{p}$ and almost sure rates of convergence. In the second part, we focus on the estimation of the Median Covariation Matrix (MCM), a robust dispersion indicator linked to the geometric median. Furthermore, if the studied variable has a symmetric law, this indicator has the same eigenvectors as the covariance matrix. This last property makes the MCM of real interest, especially for robust principal component analysis. We therefore introduce a recursive algorithm which enables us to estimate simultaneously the geometric median, the MCM, and its $q$ main eigenvectors. We first establish the strong consistency of the estimators of the MCM, before exhibiting their rates of convergence in quadratic mean. In the third part, in the light of the work on the estimates of the median and of the Median Covariation Matrix, we exhibit the almost sure and $L^{p}$ rates of convergence of averaged stochastic gradient algorithms in Hilbert spaces, under less restrictive assumptions than in the literature. Two applications in robust statistics follow: estimation of geometric quantiles and robust logistic regression. In the last part, we aim to fit a sphere to a noisy point cloud spread around a complete or truncated sphere. More precisely, we consider a random variable with a truncated spherical distribution, and we want to estimate its center as well as its radius. To this end, we introduce a projected stochastic gradient algorithm and its averaged version. We establish the strong consistency of these estimators as well as their rates of convergence in quadratic mean. Finally, the asymptotic normality of the averaged algorithm is given.
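As a concrete illustration of the averaged Robbins-Monro recursion for the geometric median described in this abstract, here is a minimal sketch (our own, not the thesis code); the step-size schedule gamma_n = c n^(-3/4), the warm start, and the toy contaminated sample are assumptions chosen for illustration:

```python
import numpy as np

def averaged_geometric_median(samples, c=1.0, alpha=0.75):
    """Averaged Robbins-Monro recursion for the geometric median:
    each observation pushes the iterate by gamma_n along the unit
    direction towards it; the running (Polyak-Ruppert) average of
    the iterates is returned."""
    m = np.median(samples[:10], axis=0)   # robust warm start (assumption)
    m_bar = m.copy()
    for n, x in enumerate(samples, start=1):
        gamma = c / n ** alpha            # step size, alpha in (1/2, 1)
        diff = x - m
        norm = np.linalg.norm(diff)
        if norm > 0:                      # unit gradient of the median criterion
            m = m + gamma * diff / norm
        m_bar = m_bar + (m - m_bar) / (n + 1)
    return m_bar

# 10% of the points are outliers at 50: the median stays near 0,
# while the empirical mean is dragged to about 5.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (900, 5)), rng.normal(50, 1, (100, 5))])
rng.shuffle(data)
print(averaged_geometric_median(data))
```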
Daw, Ibrahima. "Principe de grandes déviations pour la famille des mesures invariantes associées à des processus de diffusion en dimension infinie". Rouen, 1998. http://www.theses.fr/1998ROUES039.
Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension". Electronic Thesis or Diss., Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235.
These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have proven useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), which are two different approaches to supervised classification in the high-dimensional setting, where the number of features is much larger than the number of observations. Continuing this line of work, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications, in finance and classification, are conducted to illustrate the efficiency of our methods. The second part studies the L_p,0-regularization for group variable selection (Chapter 5). Using a DC approximation of the L_p,0-norm, we show that the approximate problem is equivalent to the original problem for suitable parameters. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and in the estimation of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.
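The DCA scheme this abstract relies on is simple to state: split the objective as a difference of convex functions, linearize the concave part with a subgradient, and solve the resulting convex subproblem. A generic minimal sketch (ours, not the thesis code); the toy DC instance and its closed-form subproblem are assumptions chosen so the update is explicit:

```python
import numpy as np

def dca(grad_h, argmin_g_linear, x0, n_iter=50):
    """Generic DC Algorithm for minimizing f(x) = g(x) - h(x) with g, h
    convex: take y_k in the subdifferential of h at x_k, then solve
    x_{k+1} = argmin_x g(x) - <y_k, x>."""
    x = x0
    for _ in range(n_iter):
        y = grad_h(x)              # linearize the concave part -h
        x = argmin_g_linear(y)     # solve the convex subproblem
    return x

# Toy DC program: minimize 0.5*||x - a||^2 - lam*||x||_1, i.e.
# g(x) = 0.5*||x - a||^2 and h(x) = lam*||x||_1; the subproblem has the
# closed form x = a + y with y = lam * sign(x_k).
a, lam = np.array([0.3, -2.0, 0.05]), 0.5
x_crit = dca(grad_h=lambda x: lam * np.sign(x),
             argmin_g_linear=lambda y: a + y,
             x0=a.copy())
print(x_crit)   # a critical point of the DC program
```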
Langrené, Nicolas. "Méthodes numériques probabilistes en grande dimension pour le contrôle stochastique et problèmes de valorisation sur les marchés d'électricité". PhD thesis, Université Paris-Diderot - Paris VII, 2014. http://tel.archives-ouvertes.fr/tel-00957948.
Bastide, Dorinel-Marian. "Handling derivatives risks with XVAs in a one-period network model". Electronic Thesis or Diss., Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM027.
Finance regulators require banking institutions to be able to conduct regular scenario analyses to assess their resistance to various shocks (stress tests) on their exposures, in particular towards clearing houses (CCPs), to which they are largely exposed, by applying market shocks to capture market risk and economic shocks leading some financial players to bankruptcy (the default state) to reflect both credit and counterparty risks. By interposing itself between financial actors, one of the main purposes of a CCP is to limit counterparty risk due to contractual payment failures caused by one or several defaults among the engaged parties. CCPs also facilitate the various financial flows of trading activities even in the event of default of one or more of their members, by re-arranging certain positions and allocating any loss that could materialize following these defaults to the surviving members. To develop a relevant view of risks and ensure effective capital-steering tools, it is essential for banks to be able to comprehensively understand the losses and liquidity needs caused by these various shocks within these financial networks, as well as the underlying mechanisms. This thesis project aims at tackling modelling issues to answer those different needs, which are at the heart of risk management practices for banks in cleared environments. We begin by defining a one-period static model reflecting the heterogeneous market positions and possible joint defaults of multiple financial players, members of CCPs and other financial participants, to identify the different costs, known as XVAs, generated by both clearing and bilateral activities, with explicit formulas for these costs. Various use cases of this modelling framework are illustrated with stress-test exercises on financial networks, from a member's point of view or for the novation of the portfolios of defaulted CCP members to other surviving members. Fat-tailed distributions are favoured to generate portfolio losses and defaults, with the application of very large-dimension Monte Carlo methods along with numerical uncertainty quantification. We also expand on the novation aspects of portfolios of defaulted members and the associated transfers of XVA costs. These novations can be carried out either on marketplaces (exchanges) or by the CCPs themselves, by identifying the optimal buyers or by conducting auctions of defaulted positions with dedicated economic equilibrium problems. Defaults of members on several common CCPs also lead to the formulation and resolution of multidimensional optimization problems of risk transfer, which are introduced in this thesis.
Pommier, David. "Méthodes numériques sur des grilles sparse appliquées à l'évaluation d'options en finance". Paris 6, 2008. http://www.theses.fr/2008PA066499.
In this work, we present some numerical methods to approximate partial differential equations (PDEs) or partial integro-differential equations (PIDEs) commonly arising in finance. This thesis is split into three parts. The first one deals with the study of sparse grid techniques. In an introductory chapter, we present the construction of sparse grid spaces and give some approximation properties. The second chapter is devoted to the presentation of a numerical algorithm to solve PDEs on these spaces; it gives us the opportunity to clarify the finite difference method on sparse grids by looking at it as a collocation method. We make a few remarks on the practical implementation. The second part of the thesis is devoted to the application of sparse grid techniques to mathematical finance. We consider two practical problems. In the first one, we consider a European vanilla contract with a multivariate generalisation of the one-dimensional Ornstein-Uhlenbeck-based stochastic volatility model. A relevant generalisation is to assume that the underlying asset is driven by a jump process, which leads to a PIDE. Due to the curse of dimensionality, standard deterministic methods are not competitive with Monte Carlo methods. We discuss sparse grid finite difference methods for solving the PIDE arising in this model up to dimension 4. In the second problem, we consider a basket option on several assets (five in our example) in the Black & Scholes model. We discuss Galerkin methods in a sparse tensor product space constructed with wavelets. The last part of the thesis is concerned with a posteriori error estimates in the energy norm for the numerical solutions of parabolic obstacle problems, allowing space/time mesh adaptive refinement. These estimates are based on a posteriori error indicators which can be computed from the solution of the discrete problem. We present the indicators for the variational inequality obtained in the context of the pricing of an American option on a two-dimensional basket using the Black & Scholes model. All these techniques are illustrated by numerical examples.
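The gain from sparse grids invoked here is easy to quantify: a full tensor grid of mesh width 2^(-n) in d dimensions has (2^n + 1)^d points, while a sparse grid of the same level has only O(2^n n^(d-1)). A short sketch of the point counts (interior points, under one common convention; an illustration, not the thesis code):

```python
from math import comb

def full_grid_points(n, d):
    """Full tensor-product grid of mesh width 2^-n in each direction."""
    return (2 ** n + 1) ** d

def sparse_grid_points(n, d):
    """Interior points of the standard sparse grid of level n in d
    dimensions: sum_{i=0}^{n-1} 2^i * C(d-1+i, d-1)."""
    return sum(2 ** i * comb(d - 1 + i, d - 1) for i in range(n))

for d in (1, 2, 4, 8):
    print(d, full_grid_points(8, d), sparse_grid_points(8, d))
# the full grid explodes with d, the sparse grid grows mildly
```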
Carpentier, Alexandra. "De l'échantillonage optimal en grande et petite dimension". Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10041/document.
During my PhD, I had the chance to learn and work under the great supervision of my advisor Rémi (Munos) in two fields that are of particular interest to me. These domains are Bandit Theory and Compressed Sensing. While studying these domains I came to the conclusion that they are connected if one looks at them through the prism of optimal sampling. Both of these fields are concerned with strategies on how to sample the space in an efficient way: Bandit Theory in low dimension, and Compressed Sensing in high dimension. In this dissertation, I present most of the work my co-authors and I produced during the three years of my PhD.
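On the bandit side, the canonical optimal-sampling rule is an upper confidence bound. A minimal UCB1 sketch (our illustration; the Bernoulli arms are an assumption) shows how pulls concentrate on the best arm:

```python
import numpy as np

def ucb1(arm_means, horizon, rng):
    """UCB1: pull each arm once, then always pull the arm maximizing
    empirical mean + sqrt(2 log t / n_pulls)."""
    k = len(arm_means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        if t < k:
            a = t                                    # initialization round
        else:
            bonus = np.sqrt(2 * np.log(t + 1) / counts)
            a = int(np.argmax(sums / counts + bonus))
        counts[a] += 1
        sums[a] += rng.binomial(1, arm_means[a])     # Bernoulli reward
    return counts

rng = np.random.default_rng(0)
print(ucb1([0.2, 0.5, 0.7], 5000, rng))  # most pulls go to the 0.7 arm
```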
Fabre, Jean-Pierre. "Suites mélangeantes de mesures aléatoires : estimation fonctionnelle et inégalités de grande déviation". Montpellier 2, 1998. http://www.theses.fr/1998MON20098.
Lounici, Karim. "Estimation Statistique En Grande Dimension, Parcimonie et Inégalités D'Oracle". PhD thesis, Université Paris-Diderot - Paris VII, 2009. http://tel.archives-ouvertes.fr/tel-00435917.
Texto completo da fonteEtoré, Pierre. "Approximation de processus de diffusion à coefficients discontinus en dimension un et applications à la simulation". Nancy 1, 2006. https://tel.archives-ouvertes.fr/tel-00136282.
In this thesis, numerical schemes for processes X generated by operators with discontinuous coefficients are studied. A first scheme for the one-dimensional case uses Stochastic Differential Equations with Local Time: indeed, in dimension one, the processes X are solutions of such equations. We construct a grid on the real line that is transformed by a proper bijection into a uniform grid of step h. This bijection also transforms X into some process Y that behaves locally like a Skew Brownian Motion (SBM). We know the transition probabilities of the SBM on a uniform grid, and the average time it spends on each of its cells. A random walk can then be built that converges to X at rate h^{1/2}. A second, more general scheme is proposed, still in dimension one. A non-uniform grid on the real line is given, whose cells have a size proportional to h. Both the transition probabilities of X on this grid and the average time it spends on each of its cells can be related to the solutions of proper elliptic PDE problems, using the Feynman-Kac formula. A time-space random walk can then be built that converges to X, again at rate h^{1/2}. Next, some directions to adapt this approach to the two-dimensional case are given. Finally, numerical examples illustrate the studied schemes.
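The discrete picture behind the first scheme is a walk that is symmetric everywhere except at the discontinuity, where the step is biased; this is the random-walk analogue of the Skew Brownian Motion. A minimal sketch (ours, with an assumed skewness beta; the actual thesis scheme also rescales time cell by cell):

```python
import numpy as np

def skew_walk_endpoint(n_steps, beta, h, rng):
    """Random walk on the grid h*Z: symmetric steps away from 0, but at
    the interface 0 it steps right with probability (1 + beta)/2."""
    x = 0.0
    for _ in range(n_steps):
        p = (1 + beta) / 2 if abs(x) < h / 2 else 0.5
        x += h if rng.random() < p else -h
    return x

# Sanity check: for a Skew BM started at 0, P(X_t > 0) = (1 + beta)/2.
rng = np.random.default_rng(1)
ends = [skew_walk_endpoint(2000, beta=0.5, h=0.02, rng=rng)
        for _ in range(1000)]
print(np.mean(np.array(ends) > 0))   # close to 0.75
```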
Poignard, Benjamin. "Approches nouvelles des modèles GARCH multivariés en grande dimension". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED010/document.
This document contributes to high-dimensional statistics for multivariate GARCH processes. First, the author proposes a new dynamic, called vine-GARCH, for correlation processes parameterized by an undirected graph called a vine. The proposed approach directly specifies positive definite matrices and fosters parsimony. The author provides results for the existence and uniqueness of the stationary solution of the vine-GARCH model and studies its asymptotic properties. He then proposes a general framework for penalized M-estimators with dependent processes and focuses on the asymptotic properties of the adaptive Sparse Group Lasso regularizer. The high-dimensional setting is studied by considering a number of parameters that diverges with the sample size. The asymptotic properties are illustrated through simulation experiments. Finally, the author proposes to foster sparsity for multivariate variance-covariance matrix processes within the latter framework. To do so, the multivariate ARCH family is considered and the corresponding parameterizations are estimated through penalized ordinary least squares procedures.
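The Sparse Group Lasso regularizer studied here has a convenient proximal operator: entrywise soft-thresholding followed by blockwise group shrinkage. A small sketch (our illustration; the groups and weights are assumptions):

```python
import numpy as np

def prox_sparse_group_lasso(x, groups, lam1, lam2):
    """Prox of lam1*||x||_1 + lam2*sum_g ||x_g||_2: elementwise
    soft-thresholding, then block soft-thresholding of each group."""
    z = np.sign(x) * np.maximum(np.abs(x) - lam1, 0.0)   # l1 part
    out = np.zeros_like(z)
    for g in groups:                                     # group part
        ng = np.linalg.norm(z[g])
        if ng > lam2:
            out[g] = (1.0 - lam2 / ng) * z[g]
    return out

x = np.array([3.0, -0.2, 0.1, 2.5, -2.0, 0.05])
print(prox_sparse_group_lasso(x, [[0, 1, 2], [3, 4, 5]], lam1=0.3, lam2=0.5))
# small entries and weak groups are zeroed, strong ones are shrunk
```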
Bun, Joël. "Application de la théorie des matrices aléatoires pour les statistiques en grande dimension". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS245/document.
Nowadays, it is easy to collect a lot of quantitative or qualitative data in many different fields. This access to new data has brought new challenges in data processing, and there are now many numerical tools to exploit very large databases. From a theoretical standpoint, this framework calls for new or refined results to deal with this amount of data. Indeed, it appears that most results of classical multivariate statistics become inaccurate in this era of "Big Data". The aim of this thesis is twofold: the first is to understand theoretically the so-called curse of dimensionality, which describes phenomena arising in high-dimensional spaces; then, we show how these tools can be used to extract signals that are consistent with the dimension of the problem. We study the statistics of the eigenvalues, and especially the eigenvectors, of large symmetric random matrices. We highlight that some universal properties of these eigenvectors can be extracted, and that these help us construct estimators that are optimal, observable and consistent with the high-dimensional framework.
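A classical random-matrix recipe in the spirit of this abstract is eigenvalue clipping: for standardized data with aspect ratio q = p/n, sample-covariance eigenvalues below the Marchenko-Pastur edge (1 + sqrt(q))^2 are indistinguishable from noise. A sketch of that classical estimator (our illustration, not the refined rotationally invariant estimators developed in the thesis):

```python
import numpy as np

def clipped_covariance(X):
    """Eigenvalue clipping: keep the sample eigenvectors, replace
    eigenvalues below the Marchenko-Pastur upper edge by their average.
    Assumes standardized, mean-zero columns."""
    n, p = X.shape
    C = X.T @ X / n                        # sample covariance
    vals, vecs = np.linalg.eigh(C)
    edge = (1 + np.sqrt(p / n)) ** 2       # MP edge for unit variance
    noise = vals < edge
    if noise.any():
        vals = vals.copy()
        vals[noise] = vals[noise].mean()   # flatten the noise bulk
    return vecs @ np.diag(vals) @ vecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 250))        # pure noise, q = 0.5
print(np.round(np.linalg.eigvalsh(clipped_covariance(X))[-3:], 2))
```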
Renault, Vincent. "Contrôle optimal de modèles de neurones déterministes et stochastiques, en dimension finie et infinie. Application au contrôle de la dynamique neuronale par l'Optogénétique". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066471/document.
The aim of this thesis is to propose different mathematical neuron models that take Optogenetics into account, and to study their optimal control. We first define a controlled version of finite-dimensional, deterministic, conductance-based neuron models. We study a minimal-time problem for a single-input affine control system and study its singular extremals. We implement a direct method to observe the optimal trajectories and controls. The optogenetic control appears as a new way to assess the capability of conductance-based models to reproduce the characteristics of the membrane potential dynamics observed experimentally. We then define an infinite-dimensional stochastic model to take into account the stochastic nature of ion channel mechanisms and the propagation of action potentials along the axon. It is a controlled piecewise deterministic Markov process (PDMP) taking values in a Hilbert space. We define a large class of infinite-dimensional controlled PDMPs and prove that these processes are strongly Markovian. We address a finite-time optimal control problem. We study the Markov decision process (MDP) embedded in the PDMP and show the equivalence of the two control problems. We give sufficient conditions for the existence of an optimal control for the MDP, and thus for the initial PDMP as well. The theoretical framework is large enough to consider several modifications of the infinite-dimensional stochastic optogenetic model. Finally, we study the extension of the model to a reflexive Banach space, and then, in a particular case, to a nonreflexive Banach space.
Hagendorf, Christian. "Evolutions de Schramm-Loewner et théories conformes : deux exemples de systèmes désordonnés de basse dimension". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00422366.
The second part is devoted to the study of two examples of low-dimensional disordered systems. On the one hand, we establish the localization and spectral properties of a one-dimensional random Hamiltonian which interpolates between the Halperin model and the disordered supersymmetric model. A link with one-dimensional diffusion in a random potential allows us to study how Sinai's ultra-slow dynamics is modified in the presence of absorbers. On the other hand, we analyze the RNA glass transition for random sequences using the Lässig-Wiese-David field theory. Applying it to RNA subjected to an external force leads to a prediction of the force-extension characteristic for heterogeneous sequences. The study of the glassy phase leads us to consider a combinatorial hierarchical model for which we determine the exact exponents and scaling laws, as well as the finite-size corrections.
Hejblum, Boris. "Analyse intégrative de données de grande dimension appliquée à la recherche vaccinale". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0049/document.
Gene expression data is recognized as high-dimensional data that needs specific statistical tools for its analysis. But in the context of vaccine trials, other measures, such as flow-cytometry measurements, are also high-dimensional. In addition, such measurements are often repeated over time. This work is built on the idea that using the maximum of available information, by modeling prior knowledge and integrating all data at hand, will improve the inference and the interpretation of biological results from high-dimensional data. First, we present an original methodological development, Time-course Gene Set Analysis (TcGSA), for the analysis of longitudinal gene expression data, taking into account prior biological knowledge in the form of predefined gene sets. Second, we describe two integrative analyses of two different vaccine studies. The first study reveals lower expression of inflammatory pathways consistently associated with lower viral rebound following an HIV therapeutic vaccine. The second study highlights the role of a testosterone-mediated group of genes linked to lipid metabolism in sex differences in the immunological response to a flu vaccine. Finally, we introduce a new model-based clustering approach for the automated treatment of cell populations from flow-cytometry data, namely a Dirichlet process mixture of skew t-distributions, with a sequential posterior approximation strategy for dealing with repeated measurements. Hence, the automatic recognition of the cell populations could allow a practical improvement of the daily work of immunologists as well as a better interpretation of gene expression data after taking into account the frequency of all cell populations.
Ismail, Boussaad. "Contribution à la conception robuste de réseaux électriques de grande dimension au moyen des métaheuristiques d’optimisation". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1024.
Like many systems, an electrical power grid must contend with failures which, given its high connectivity, can spread to entire regions: this is referred to as a blackout (an avalanche phenomenon), i.e. with large-scale consequences. The size of power grids and their complexity make these locally emergent phenomena difficult to grasp. A number of existing works are based on extensive use of statistical physics tools: the adaptation of percolation methods and of self-organized-criticality systems provides practical tools to describe the statistical and topological properties of a network. Metaheuristic optimization tools, particularly particle swarm optimization (PSO) and genetic algorithms (GA), proved to be the cornerstone of this work and helped to define operational structures. Work developed in this area is still emerging. This thesis contributes in several ways. First of all, we take advantage of optimization techniques to better "stiffen" a power grid by coupling its topology with the maintenance of voltages at the nodes of the network using FACTS (Flexible Alternative Current Transmission System) devices. In the optimal FACTS location problem, the objective is to determine the optimal allocation of reactive power, in relation to the location and optimal sizing of the FACTS devices, in order to improve the performance of the power grid. A set of questions is then discussed: where to place FACTS devices in the network, how many, with what power, of what type(s), and at what price? In this thesis, all these questions are modeled and discussed from the point of view of optimal power by applying, first, standard particle swarm optimization, and then by proposing a novel particle swarm optimizer (alpha-SLPOS) and a local search (alpha-LLS). These two algorithms are inspired by the basic concept of PSO and by stable distributions (alpha-stable laws). Moreover, the scope of the project defined by the @RiskTeam at Alstom Grid requires the use of several techniques (from physics, statistics, etc.) for particular purposes, including the alpha-stable parameter estimation problem. Facing the failure of existing methods for estimating the parameters of alpha-stable laws for alpha < 0.6, we propose a novel semi-parametric estimator for this family of probability distributions, using a metaheuristic to solve the underlying optimization problem. Finally, at the end of the thesis, a decision-support tool is designed for an internal team of Alstom Grid to optimize the internal topology of a wind farm.
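Since PSO is the workhorse of this thesis, here is a minimal global-best PSO sketch (our illustration; the sphere objective stands in for the FACTS placement problem, which it is not):

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: each velocity blends inertia, attraction to the
    particle's own best position, and attraction to the swarm's best."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, (n_particles, d))
    v = np.zeros((n_particles, d))
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles in the box
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, float(f(g))

sphere = lambda z: float(np.sum(z ** 2))    # stand-in objective
print(pso(sphere, np.full(10, -5.0), np.full(10, 5.0)))
```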
Mourareau, Stéphane. "Gaussian geometry and tools for compressed sensing". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30074/document.
This thesis falls within the context of high-dimensional data analysis. More specifically, the purpose is to study the possible application of some Gaussian tools to prove classical results on matrices with Gaussian entries and to extend existing test procedures for Gaussian linear models. In the first part, we focus on Gaussian matrices. Our aim is to prove, using a Kac-Rice formula for Gaussian processes, that such a matrix satisfies, with overwhelming probability, the Null Space Property (NSP) and the Restricted Isometry Property (RIP). Moreover, we derive phase transition graphs depending on the classical parameters of sparse regression, namely the number of observations, the number of predictors and the level of sparsity. In the second part, we deal with global null testing for Gaussian linear models, with application to Compressed Sensing. Following recent works of Taylor, Loftus and Tibshirani, we propose a test for the global null hypothesis in the lasso case and discuss its power. Furthermore, we generalize these methods to Gaussian processes, to include, for instance, the super-resolution case. In the third part, we present some applications of the Rice formula to compute the cumulative distribution function of the maximum of a Gaussian process, and derive corresponding numerical routines to investigate the efficiency of classical approximations. Finally, we consider the asymptotic behaviour of the number of crossings of a differentiable Gaussian process at a given level u over a time interval [0,T].
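For reference, the crossing counts mentioned at the end rest on Rice's formula, a special case of the Kac-Rice formula used throughout the thesis: for a zero-mean stationary Gaussian process with variance lambda_0 and second spectral moment lambda_2 (the variance of the derivative), the expected number of up-crossings of the level u on [0,T] is

```latex
\[
  \mathbb{E}\bigl[U_u([0,T])\bigr]
  = \frac{T}{2\pi}\,\sqrt{\frac{\lambda_2}{\lambda_0}}\,
    \exp\!\left(-\frac{u^{2}}{2\lambda_0}\right).
\]
```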
Yang, Xiaochuan. "Etude dimensionnelle de la régularité de processus de diffusion à sauts". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1073/document.
In this dissertation, we study various dimension properties of the regularity of jump diffusion processes, solutions of a class of stochastic differential equations with jumps. In particular, we describe the fluctuation of the Hölder regularity of these processes and that of the local dimensions of the associated occupation measure by computing their multifractal spectra. The Hausdorff dimensions of the range and the graph of these processes are also calculated. In the last chapter, we use a new notion of "large scale" dimension in order to describe the asymptotics of the sojourn set of a Brownian motion under moving boundaries.
Lorang, Gérard. "Trois exemples d'étude de processus stochastiques : 1) un théorème de Schilder pour des fonctionnelles browniennes non régulières : 2) étude d'une fonctionnelle liée au pont de Bessel : 3) régularité Besov des trajectoires du processus intégral de Skorohod". Nancy 1, 1994. http://www.theses.fr/1994NAN10055.
Texto completo da fontePhi, Tien Cuong. "Décomposition de Kalikow pour des processus de comptage à intensité stochastique". Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4029.
The goal of this thesis is to construct algorithms able to simulate the activity of a neural network. The activity of the neural network can be modeled by the spike train of each neuron, represented by a multivariate point process. Most of the known approaches to simulating point processes encounter difficulties when the underlying network is large. In this thesis, we propose new algorithms using a new type of Kalikow decomposition. In particular, we present an algorithm to simulate the behavior of one neuron embedded in an infinite neural network without simulating the whole network. We focus on mathematically proving that our algorithm returns the right point processes and on studying its stopping condition. Then, a constructive proof shows that this new decomposition holds for various point processes. Finally, we propose algorithms that can be parallelized and that enable us to simulate a hundred thousand neurons in a complete interaction graph on a laptop computer. Most notably, the complexity of this algorithm seems linear with respect to the number of simulated neurons.
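For comparison, the classical way to simulate a point process with stochastic intensity is Ogata's thinning. A minimal sketch for a univariate Hawkes process (our illustration; the exponential kernel and its parameters are assumptions, and this is not the Kalikow-based algorithm of the thesis, whose point is precisely to avoid simulating the whole network):

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Hawkes intensity lambda(t) = mu + alpha * sum_i exp(-beta (t - t_i))."""
    return mu + alpha * np.sum(np.exp(-beta * (t - np.asarray(events))))

def hawkes_thinning(mu, alpha, beta, T, seed=0):
    """Ogata's thinning: draw candidates from a dominating Poisson
    process and accept with probability lambda(t)/lambda_bar; valid
    because the exponential kernel only decays between events."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)  # dominates ahead
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)                             # accepted point

pts = hawkes_thinning(mu=1.0, alpha=0.5, beta=1.0, T=100.0)
print(len(pts))   # mean rate is mu / (1 - alpha/beta) = 2 per unit time
```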
Ouzina, Mostafa. "Théorème du support en théorie du filtrage non-linéaire". Rouen, 1998. http://www.theses.fr/1998ROUES029.
Texto completo da fonteBӑrbos, Andrei-Cristian. "Efficient high-dimension gaussian sampling based on matrix splitting : application to bayesian Inversion". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0002/document.
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises, for example, in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10^6 to 10^9. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions to tackle this problem have already been proposed, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation. Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead, it targets an approximate distribution. However, we are able to control how far the approximate distribution is from the required one by means of a single tuning parameter. We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most cases. Let us note that the performance of our algorithm depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm enables us to obtain good results, whereas the Hogwild algorithm fails to converge. Let us note that for small values of the tuning parameter our algorithm fails to converge as well. Notwithstanding, a suitably chosen value of the tuning parameter enables our proposed sampler to converge and to deliver good results.
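The single-site Gibbs baseline that this thesis and the Hogwild algorithm start from is easy to state for a Gaussian with precision matrix Q and potential vector b (so the mean is Q^{-1}b): each coordinate is resampled from its Gaussian full conditional. A minimal sketch (ours; the 3x3 example is an assumption):

```python
import numpy as np

def gibbs_gaussian(Q, b, n_sweeps, seed=0):
    """Componentwise Gibbs sampler targeting N(Q^{-1} b, Q^{-1}):
    x_i | rest ~ N((b_i - sum_{j != i} Q_ij x_j) / Q_ii, 1 / Q_ii)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(n_sweeps):
        for i in range(len(b)):
            r = b[i] - Q[i] @ x + Q[i, i] * x[i]   # excludes the j = i term
            x[i] = r / Q[i, i] + rng.standard_normal() / np.sqrt(Q[i, i])
    return x

Q = np.array([[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]])
b = np.array([1.0, 0.0, -1.0])
draws = np.array([gibbs_gaussian(Q, b, 200, seed=s) for s in range(500)])
print(draws.mean(axis=0), np.linalg.solve(Q, b))   # the two should agree
```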
Bettinelli, Jérémie. "Limite d'échelle de cartes aléatoires en genre quelconque". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00638065.
Llau, Antoine. "Méthodes de simulation du comportement mécanique non linéaire des grandes structures en béton armé et précontraint : condensation adaptative en contexte aléatoire et représentation des hétérogénéités". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAI029/document.
Large-scale concrete and reinforced concrete structures, and in particular containment buildings, may undergo localized cracking when they age or endure strong loadings (a LOCA, for instance). In order to optimize maintenance actions, a predictive model of concrete damage is required. This phenomenon takes place at a rather small material scale, and a predictive model requires a refined mesh and a nonlinear constitutive law. This type of modelling cannot be applied directly to a large-scale civil engineering structure, as the computational load would be too heavy for existing machines. A simulation method is proposed to focus the computational effort on the areas of interest (damaged parts) of the structure while eliminating the undamaged areas. It aims at using the available computing power for the characterization of crack properties in particular. This approach uses Guyan's static condensation technique to reduce the elastic areas to a set of boundary conditions applied to the areas of interest. When the system evolves, a set of criteria allows elastic areas to be promoted on the fly to areas of interest if damage appears. This adaptive condensation technique reduces the dimension of a nonlinear problem without degrading the quality of the results when compared to a full reference simulation. However, a classical modelling does not allow one to take into account the various unknowns which impact the structural behaviour: mechanical properties, geometry, loading, etc. In order to better characterize this behaviour while taking into account the various uncertainties, the proposed adaptive condensation method is coupled with a stochastic collocation approach. Each deterministic simulation required for the characterization of the uncertainties on the structural quantities of interest is therefore reduced, and the pre-processing steps necessary for the condensation technique are also reduced using a second collocation. The proposed approach produces, at a reduced computational cost, the probability density functions of the quantities of interest of a large structure. The proposed calculation strategies give access, at the local scale, to a modelling finer than what would be applicable to the full structure. In order to improve the representativeness at this scale, the three-dimensional effects of the heterogeneities must be taken into account. In the civil and nuclear engineering field, one of the main issues is the modelling of prestressing tendons, usually modelled in one dimension. A new approach is proposed, which uses a 1D mesh and model to build a volume equivalent to the tendon and redistribute the forces and stiffnesses in the concrete. It combines the representativeness of a fully conforming 3D modelling of the tendon when the mesh is refined with the ease of use of 1D approaches. The applicability of the proposed methodologies to a large-scale civil engineering structure is evaluated using a numerical model of a 1/3 mock-up of a double-wall containment building of a PWR 1300 MWe nuclear reactor.
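Guyan's static condensation, the core reduction step above, eliminates the slave degrees of freedom s and keeps the masters m through K_red = K_mm - K_ms K_ss^{-1} K_sm (exact in statics). A small sketch (ours; the spring-chain stiffness matrix is an assumed toy example):

```python
import numpy as np

def guyan_condense(K, keep):
    """Static condensation of a symmetric stiffness matrix K onto the
    retained (master) dofs `keep`: K_mm - K_ms K_ss^{-1} K_sm."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kmm = K[np.ix_(keep, keep)]
    Kms = K[np.ix_(keep, drop)]
    Kss = K[np.ix_(drop, drop)]
    return Kmm - Kms @ np.linalg.solve(Kss, Kms.T)

# 4-dof spring chain condensed onto its two end dofs.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
print(guyan_condense(K, keep=[0, 3]))
```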
Jin, Xiong. "Construction et analyse multifractale de fonctions aléatoires et de leurs graphes". Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00841501.
Nguepedja Nankep, Mac jugal. "Modélisation stochastique de systèmes biologiques multi-échelles et inhomogènes en espace". Thesis, Rennes, École normale supérieure, 2018. http://www.theses.fr/2018ENSR0012/document.
The growing need for precise predictions of complex systems leads to the introduction of stronger mathematical models, taking into account an increasing number of parameters in addition to time: space, stochasticity, and scales of dynamics. Combining these parameters gives rise to spatial (or spatially inhomogeneous) multiscale stochastic models. However, such models are difficult to study, and their simulation is extremely time consuming, making them hard to use. Still, their analysis has allowed the development of powerful tools for one-scale models, among which are the law of large numbers (LLN) and the central limit theorem (CLT), and, subsequently, the derivation of simpler models and accelerated algorithms. In that deduction process, so-called hybrid models and algorithms have arisen in the multiscale case, but without any prior rigorous analysis. The question of hybrid approximation then arises, and its consistency is a particularly important motivation of this PhD thesis. In 2012, criteria for hybrid approximations of some homogeneous gene regulation network models were established by Crudu, Debussche, Muller and Radulescu. The aim of this PhD thesis is to complete their work and then generalize it to a spatial framework. We have developed and simplified different models. They are all time-continuous pure jump Markov processes. The approach identifies the conditions allowing, on the one hand, deterministic approximations by solutions of evolution equations of reaction-advection-diffusion type and, on the other hand, hybrid approximations by hybrid stochastic processes. In the field of biochemical reaction networks, we establish a CLT corresponding to a hybrid approximation of a simplified homogeneous model (due to Crudu et al.). Then an LLN is obtained for a spatial model with two time scales. Afterwards, a hybrid approximation is established for a spatial model with two time-space scales. Finally, the asymptotic behaviours in large population and in long time are presented for a model of cholera epidemics, through an LLN followed by the upper bound on compact sets, in the context of a corresponding large deviation principle (LDP). Interesting future works would be, among others, to study other spatial geometries, to generalize the CLT, to complete the LDP estimates, and to study complex systems from other fields.
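The pure jump Markov processes in question are classically simulated by Gillespie's algorithm, whose large-population limit is precisely the deterministic (LLN) regime discussed above. A minimal sketch (ours; the birth-death rates are an assumed toy model):

```python
import numpy as np

def gillespie(x0, stoich, rates, T, seed=0):
    """Gillespie SSA for a pure jump Markov process: exponential waiting
    time with the total rate, then a jump picked proportionally to the
    propensities."""
    rng = np.random.default_rng(seed)
    x, t, path = np.array(x0, float), 0.0, [(0.0, list(x0))]
    while t < T:
        a = np.array([r(x) for r in rates])    # reaction propensities
        if a.sum() == 0:
            break                              # absorbed, no more jumps
        t += rng.exponential(1.0 / a.sum())
        j = rng.choice(len(a), p=a / a.sum())
        x = x + stoich[j]
        path.append((t, list(x)))
    return path

# Birth-death chain: 0 -> X at rate 10, X -> 0 at rate x; mean ~ 10.
path = gillespie([0], stoich=[np.array([1]), np.array([-1])],
                 rates=[lambda x: 10.0, lambda x: float(x[0])], T=50.0)
print(path[-1])
```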
Hoock, Jean-Baptiste. "Contributions to Simulation-based High-dimensional Sequential Decision Making". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00912338.
Luu, Duy tung. "Exponential weighted aggregation : oracle inequalities and algorithms". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC234/document.
In many areas of statistics, including signal and image processing, high-dimensional estimation is an important task for recovering an object of interest. However, in the overwhelming majority of cases, the recovery problem is ill-posed. Fortunately, even if the ambient dimension of the object to be restored (signal, image, video) is very large, its intrinsic "complexity" is generally small. This prior information can be introduced through two approaches: (i) penalization (very popular) and (ii) aggregation by exponential weighting (EWA). The penalized approach aims at finding an estimator that minimizes a data loss function penalized by a term promoting objects of low (simple) complexity. The EWA combines a family of pre-estimators, each associated with a weight exponentially promoting the same objects of low complexity. This manuscript consists of two parts: a theoretical part and an algorithmic part. In the theoretical part, we first propose the EWA with a new family of priors promoting analysis-group sparse signals, whose performance is guaranteed by oracle inequalities. Next, we analyze the penalized estimator and the EWA, with a general prior promoting simple objects, in a unified framework for establishing theoretical guarantees. Two types of guarantees are established: (i) prediction oracle inequalities, and (ii) estimation bounds. We exemplify them for particular cases, some of which have been studied in the literature. In the algorithmic part, we propose an implementation of these estimators by combining Monte Carlo simulation (Langevin diffusion process) and proximal splitting algorithms, and show their convergence guarantees. Several numerical experiments illustrate our theoretical guarantees and our algorithms.
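In its simplest finite-family form, aggregation by exponential weighting is a softmax over losses at temperature beta. A toy sketch (ours; the candidates and losses are assumptions, and the thesis actually works with continuous priors sampled by Langevin dynamics rather than a finite family):

```python
import numpy as np

def ewa(estimates, losses, beta):
    """Exponential weighted aggregation: weight_j proportional to
    exp(-loss_j / beta), then average the pre-estimators."""
    w = np.exp(-(losses - losses.min()) / beta)   # shift for stability
    w /= w.sum()
    return w @ estimates, w

estimates = np.array([[0.9, 0.1], [1.1, -0.1], [5.0, 5.0]])
losses = np.array([1.0, 1.2, 40.0])               # e.g. held-out RSS
agg, w = ewa(estimates, losses, beta=1.0)
print(np.round(agg, 3), np.round(w, 3))  # the bad candidate gets ~0 weight
```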
Bussy, Simon. "Introduction of high-dimensional interpretable machine learning models and their applications". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS488.
This dissertation focuses on the introduction of new interpretable machine learning methods in a high-dimensional setting. We first developed the C-mix, a mixture model of censored durations that automatically detects subgroups based on the risk that the event under study occurs early; then the binarsity penalty, combining a weighted total variation penalty with a linear constraint per block, which applies to one-hot encodings of continuous features; and finally the binacox model, which uses the binarsity penalty within a Cox model to automatically detect cut-points in continuous features. For each method, theoretical properties are established: algorithm convergence, non-asymptotic oracle inequalities, and comparison studies with state-of-the-art methods are carried out on both simulated and real data. All proposed methods give good results in terms of prediction performance, computing time, and interpretability.
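The encoding on which the binarsity penalty acts is a quantile binning of each continuous feature followed by one-hot encoding, with the penalty then applied blockwise. A minimal sketch of that encoding step (ours; the bin count is an assumption):

```python
import numpy as np

def one_hot_bins(x, n_bins=10):
    """Cut a continuous feature at its empirical quantiles and one-hot
    encode the resulting bins (one block per original feature)."""
    cuts = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    idx = np.searchsorted(cuts, x)          # bin index of each sample
    out = np.zeros((len(x), n_bins))
    out[np.arange(len(x)), idx] = 1.0
    return out

rng = np.random.default_rng(0)
X = one_hot_bins(rng.normal(size=1000), n_bins=5)
print(X.sum(axis=0))   # bins are roughly balanced by construction
```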
Stephenson, Robin. "Divers aspects des arbres aléatoires : des arbres de fragmentation aux cartes planaires infinies". Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090024/document.
We study three problems related to discrete and continuous random trees. First, we carry out a general study of self-similar fragmentation trees, extending some results established by Haas and Miermont in 2006, in particular by computing the Hausdorff dimension of these trees under some Malthusian hypotheses. We then work on a particular sequence of k-ary growing trees, defined recursively with a method similar to Rémy's algorithm from 1985. We show that the size of the tree obtained at the n-th step is of order n^(1/k) and, after renormalization, we prove that the sequence converges to a fragmentation tree. We also study embeddings of the limiting trees as k varies. In the last chapter, we show the local convergence in distribution of critical multi-type Galton-Watson trees conditioned to have a large number of vertices of a fixed type. We then apply this result to the world of random planar maps, obtaining that large critical Boltzmann-distributed maps converge locally in distribution to an infinite planar map.
Lestoille, Nicolas. "Stochastic model of high-speed train dynamics for the prediction of long-time evolution of the track irregularities". Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1094/document.
Railway tracks are subjected to more and more constraints, because the number of high-speed trains using the high-speed lines, the trains' speed, and the trains' load keep increasing. These solicitations contribute to producing track irregularities. In return, track irregularities influence the train dynamic responses, inducing a degradation of comfort. To guarantee good comfort conditions in the train, railway companies perform maintenance operations on the track, which are very costly. Consequently, railway companies have a great interest in predicting the long-time evolution of the track irregularities for a given track portion, in order to anticipate the start of maintenance operations, and therefore to reduce maintenance costs and improve running conditions. In this thesis, the long-time evolution of a given track portion is analyzed through a vector-valued indicator of the train dynamics. For this given track portion, a local stochastic model of the track irregularities is constructed using a global stochastic model of the track irregularities and big data made up of experimental measurements of the track irregularities performed by a measuring train. This local stochastic model takes into account the variability of the track irregularities and allows realizations of the track irregularities to be generated at each long time. After validating the computational model of the train dynamics, the train dynamic responses on the measured track portion are numerically simulated using the local stochastic model of the track irregularities. A vector-valued random dynamic indicator is defined to characterize the train dynamic responses on the given track portion. This dynamic indicator is constructed such that it takes into account the model uncertainties in the train dynamics computational model. For the identification of the stochastic model of track irregularities and the characterization of the model uncertainties, advanced stochastic methods such as polynomial chaos expansion and multivariate maximum likelihood are applied to non-Gaussian and non-stationary random fields. Finally, a stochastic predictive model is proposed for predicting the statistical quantities of the random dynamic indicator, which allows the need for track maintenance to be anticipated. This model is constructed using the results of the train dynamics simulation and consists of a non-stationary Kalman-filter-type model with a non-Gaussian initial condition. The proposed model is validated using experimental data from the French railway network for high-speed trains.
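The predictive layer described above builds on the classical Kalman predict/update recursion. A generic sketch of one step (ours; all matrices and the scalar example are assumptions, not the thesis's non-stationary, non-Gaussian model):

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One Kalman filter step. x, P: state estimate and covariance;
    y: new observation; F, H: transition/observation matrices;
    Q, R: process and observation noise covariances."""
    x_pred = F @ x                            # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)     # update with the innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Scalar random-walk example (hypothetical numbers).
x, P = np.array([0.0]), np.eye(1)
for y in (0.9, 1.1, 1.4):
    x, P = kalman_step(x, P, np.array([y]), np.eye(1), np.eye(1),
                       0.01 * np.eye(1), 0.1 * np.eye(1))
print(x, P)
```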