Dissertations / Theses on the topic 'Stochastic simulation algorithms'

Consult the top 50 dissertations / theses for your research on the topic 'Stochastic simulation algorithms.'

1

Hu, Liujia. "Convergent algorithms in simulation optimization." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54883.

Full text
Abstract:
It is frequently the case that deterministic optimization models could be made more practical by explicitly incorporating uncertainty. The resulting stochastic optimization problems are in general more difficult to solve than their deterministic counterparts, because the objective function cannot be evaluated exactly and/or because there is no explicit relation between the objective function and the corresponding decision variables. This thesis develops random search algorithms for solving optimization problems with continuous decision variables when the objective function values can be estimated with some noise via simulation. Our algorithms will maintain a set of sampled solutions, and use simulation results at these solutions to guide the search for better solutions. In the first part of the thesis, we propose an Adaptive Search with Resampling and Discarding (ASRD) approach for solving continuous stochastic optimization problems. Our ASRD approach is a framework for designing provably convergent algorithms that are adaptive both in seeking new solutions and in keeping or discarding already sampled solutions. The framework is an improvement over the Adaptive Search with Resampling (ASR) method of Andradottir and Prudius in that it spends less effort on inferior solutions (the ASR method does not discard already sampled solutions). We present conditions under which the ASRD method is convergent almost surely and carry out numerical studies aimed at comparing the algorithms. Moreover, we show that whether it is beneficial to resample or not depends on the problem, and analyze when resampling is desirable. Our numerical results show that the ASRD approach makes substantial improvements on ASR, especially for difficult problems with large numbers of local optima. In traditional simulation optimization problems, noise is only involved in the objective functions. However, many real world problems involve stochastic constraints. 
Such problems are more difficult to solve because of the added uncertainty about feasibility. The second part of the thesis presents an Adaptive Search with Discarding and Penalization (ASDP) method for solving continuous simulation optimization problems involving stochastic constraints. Rather than addressing feasibility separately, ASDP utilizes the penalty function method from deterministic optimization to convert the original problem into a series of simulation optimization problems without stochastic constraints. We present conditions under which the ASDP algorithm converges almost surely from inside the feasible region, and under which it converges to the optimal solution but without feasibility guarantee. We also conduct numerical studies aimed at assessing the efficiency and tradeoff under the two different convergence modes. Finally, in the third part of the thesis, we propose a random search method named Gaussian Search with Resampling and Discarding (GSRD) for solving simulation optimization problems with continuous decision spaces. The method combines the ASRD framework with a sampling distribution based on a Gaussian process that not only utilizes the current best estimate of the optimal solution but also learns from past sampled solutions and their objective function observations. We prove that our GSRD algorithm converges almost surely, and carry out numerical studies aimed at studying the effects of utilizing the Gaussian sampling strategy. Our numerical results show that the GSRD framework performs well when the underlying objective function is multi-modal. However, it takes much longer to sample solutions, especially in higher dimensions.
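The ASRD and ASR frameworks themselves are specified in the thesis; purely as an illustration of the underlying idea (sample candidate solutions, average repeated noisy evaluations, keep the best estimate), a minimal noisy random search can be sketched as follows. The function names, parameter values, and test objective are illustrative, not the author's:

```python
import random

def noisy_random_search(f_noisy, lower, upper, n_iter, n_reps=10, rng=random.Random(1)):
    """Minimise a noisy objective over a box by uniform sampling; each candidate
    is estimated by averaging n_reps noisy evaluations (a crude stand-in for the
    resampling idea described above)."""
    best_x, best_est = None, float('inf')
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        est = sum(f_noisy(x) for _ in range(n_reps)) / n_reps
        if est < best_est:
            best_x, best_est = x, est
    return best_x, best_est

# Illustrative noisy quadratic with true optimum at (1, 1)
noise = random.Random(2)
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2 + noise.gauss(0, 0.1)
x_star, v = noisy_random_search(f, [-5, -5], [5, 5], n_iter=2000)
```

Unlike this sketch, ASRD adaptively decides where to sample next and whether to keep or discard already sampled solutions, which is where its convergence guarantees come from.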
APA, Harvard, Vancouver, ISO, and other styles
2

Qureshi, Sumaira Ejaz. "Comparative study of simulation algorithms in mapping spaces of uncertainty /." St. Lucia, Qld, 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16450.pdf.

Full text
3

MOSCA, ETTORE. "Membrane systems and stochastic simulation algorithms for the modelling of biological systems." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19296.

Full text
Abstract:
Membrane computing is a branch of computer science born with the introduction of membrane systems (or P systems) in a seminal paper by Gh. Paun. Membrane systems are computing devices inspired by the structure and functioning of living cells, as well as by the way cells are organised in tissues and higher-order structures. The aim of membrane computing is to abstract computing ideas and models from these products of natural evolution. A typical membrane system is composed of a number of regions surrounded by membranes; regions contain multisets of objects (molecules) and rules (cellular processes) that specify how objects are rewritten and moved among regions. Although the initial primary goal of membrane systems concerned computability theory, their properties, such as compartmentalisation, modularity, scalability/extensibility, understandability, programmability and discreteness, promoted their use for an important task of current scientific research: the modelling of biological systems (the topic “systems biology, including modelling of complex systems” now appears explicitly in the Seventh Framework Programme of the European Community for research, technological development and demonstration activities). To accomplish this task, some features of membrane systems (such as nondeterminism and maximal parallelism) have to be mitigated, while other properties have to be considered (e.g. a description of the time evolution of the modelled system) to ensure the accuracy of the results obtained with the models. Many approaches for the modelling and simulation of biological systems exist and can be classified according to features such as continuous/discrete, deterministic/stochastic, macroscopic/mesoscopic/microscopic, predictive/explorative, quantitative/qualitative, and so on. 
Recently, stochastic methods have gained more attention, since many biological processes, such as gene transcription and translation into proteins, are controlled by noisy mechanisms. Within the branch of modelling focused on the molecular level and dealing with systems of biochemical processes (e.g. a signalling or metabolic pathway inside a living cell), an important class of stochastic simulation methods is the one inspired by Gillespie's stochastic simulation algorithm (SSA). This method provides exact numerical realisations of the stochastic process defined by the chemical master equation. A series of methods (e.g. the next reaction method, tau leaping, the next subvolume method) and software tools (StochKit and MesoRD) belonging to this class were developed for the modelling and simulation of homogeneous and/or reaction-diffusion (mesoscopic) systems. The tau-DPP approach is a stochastic approach that couples the expressive power of a membrane system (more precisely, of Dynamical Probabilistic P systems, or DPPs) with a modified version of the tau leaping method in order to quantitatively describe the evolution of multi-compartmental systems in time. Both current membrane system variants and the stochastic methods inspired by the SSA neglect some properties of living cells, such as molecular crowding or the presence of membrane potential differences. Thus, the current versions of these formalisms and computational methods do not allow the modelling and simulation of the biological processes in which these features play an essential role. A common task in the field of stochastic simulation (mainly based on numerical rather than analytical solutions) is the repetition of a large number of simulations. This activity is required, for example, to characterise the dynamics of the modelled system, and by some parameter estimation or sensitivity analysis algorithms. 
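Gillespie's direct-method SSA, the common ancestor of the methods listed above, can be sketched in a few lines of Python. The dimerisation example is a toy for illustration, not a model from the thesis:

```python
import math
import random

def gillespie_ssa(state, reactions, t_max, rng=random.Random(0)):
    """Direct-method SSA. `state` maps species to counts; each reaction is a
    (propensity_fn, change_dict) pair. Returns the trajectory [(t, state), ...]."""
    t, traj = 0.0, [(0.0, dict(state))]
    while t < t_max:
        props = [a(state) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                                 # nothing can fire: frozen
            break
        t += -math.log(1.0 - rng.random()) / a0       # exponential waiting time
        r, acc = rng.random() * a0, 0.0               # pick reaction j w.p. a_j/a0
        for a_j, (_, change) in zip(props, reactions):
            acc += a_j
            if r < acc:
                for species, delta in change.items():
                    state[species] += delta
                break
        traj.append((t, dict(state)))
    return traj

# Toy dimerisation 2A -> B with stochastic rate constant c (illustrative values)
c = 0.01
reactions = [(lambda s: c * s['A'] * (s['A'] - 1) / 2.0, {'A': -2, 'B': +1})]
traj = gillespie_ssa({'A': 100, 'B': 0}, reactions, t_max=50.0)
```

Tau leaping and the tau-DPP approach trade this one-reaction-at-a-time exactness for speed by firing many reactions per (carefully chosen) time step.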
In this thesis we extend the tau-DPP approach, taking into account additional properties of living cells in order to expand the modelling (and simulation) capabilities of tau-DPPs to a broader set of scenarios. Within this scope, we also exploit the main European grid computing platform to compute stochastic simulations, developing a framework specific to this purpose that is able to manage a large number of simulations of stochastic models. In our formalism, we considered the explicit modelling of the volume occupation of both objects (molecules) and membranes (compartments), as mandated by the mutual impenetrability of molecules. As a consequence, the dynamics of the system is affected by the availability of free space. In living cells, for example, molecular crowding has important effects such as anomalous diffusion, variation of reaction rates and spatial segregation, which have significant consequences on the dynamics of cellular processes. At a theoretical level, we demonstrated that the explicit consideration of the volume occupation of objects and membranes (and its consequences on the system's evolution) does not reduce the computational universality of membrane systems. We achieved this aim by showing that it is possible to simulate a deterministic Turing machine, and that the volume required by the membrane systems carrying out this task is a linear function of the space required by the Turing machine. After this, we presented a novel version of both the membrane systems (designated Stau-DPPs) and the stochastic simulation algorithm (the Stau-DPP algorithm) that considers the mutual impenetrability of molecules. In addition, we made the communication of objects independent of the system's structure in order to obtain a strong expressive power. 
After showing that the Stau-DPP algorithm can accurately reproduce particle diffusion (in a comparison with the heat equation), we presented two test cases to illustrate that Stau-DPPs can effectively capture some effects of crowding, namely the reduction of the particle diffusion rate and the increase of the reaction rate, considering a bidimensional discrete space domain. We also presented a test case to illustrate that the strong expressive power of Stau-DPPs allows the modelling and simulation (by means of the Stau-DPP algorithm) of processes taking place in structured environments; more precisely, we modelled and simulated the diffusion of molecules enhanced by the presence of a structure resembling a microtubule (a sort of “railway” for intracellular trafficking) in living cells. Subsequently, we further extended Stau-DPPs and the respective evolution algorithm to explicitly consider the membrane potential difference and its effect on charged particles and on voltage-gated channel (VGC, a particular type of membrane protein) state transitions. In fact, the membrane potential difference exhibited by biological membranes plays a crucial role in many cellular processes (e.g. the action potential and synaptic signalling cascades). As for the Stau-DPPs, we presented novel versions of both the membrane systems (designated EStau-DPPs) and the stochastic simulation algorithm (the EStau-DPP algorithm) that capture these additional properties. In order to describe the probability of charged-particle diffusion in a discrete space domain, we defined a propensity function starting from the deterministic, continuous description of charged-particle diffusion due to an electric potential gradient. 
We showed by means of a focused test case that a model of ion diffusion between two regions, in which the number of ions is maintained at two different constant values and an electric potential difference is present, correctly reaches the expected state predicted by the Nernst equation. To describe the probability of transition between two VGC states, we derived a propensity function taking into consideration the Boltzmann-Maxwell distribution. We considered a model describing the state transitions of a VGC and showed that the model predictions are in close agreement with experimental data collected from the literature. Lastly, we presented the framework to manage a large number of stochastic simulations on a grid computing platform. While creating this framework, we considered the parameter sweep application (PSA) approach, in which an application is run a large number of times with different parameter values. We ran a set of PSAs concerning the simulation of a stochastic bacterial chemotaxis model and the computation of the difference between the dynamics of one of its components (as a consequence of model parameter variation) and a reference dynamics of the same component. We then used this set of PSAs to evaluate the performance of the grid infrastructure of the EGEE project (Enabling Grids for E-sciencE). On the one hand, the EGEE grid proved to be a useful solution for the distribution of PSAs concerning the stochastic simulation of biochemical systems. The platform demonstrated its efficiency in the context of our middle-size test, and considering that the more intensive the computation, the better the infrastructure scales, grid computing can be a suitable technology for the large-scale analysis of biological models. On the other hand, the use of a distributed file system, the granularity of the jobs and the heterogeneity of the resources can present issues. 
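The Nernst-equation check described above is easy to reproduce numerically. The concentrations below are generic textbook values for potassium at body temperature, not those used in the thesis model:

```python
import math

R = 8.314       # gas constant, J/(mol K)
F = 96485.0     # Faraday constant, C/mol

def nernst_potential(c_out, c_in, z=1, T=310.15):
    """Equilibrium (Nernst) potential in volts for an ion of valence z,
    given outside/inside concentrations in the same units."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative K+ gradient: 5 mM outside, 140 mM inside, 37 degrees C
E_K = nernst_potential(5.0, 140.0)   # roughly -0.089 V, i.e. about -89 mV
```

In a simulation such as the one described, the steady-state potential difference between the two regions is compared against this analytical value.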
In conclusion, in this thesis we extended previous membrane system variants and stochastic simulation methods for the analysis of biological systems, and exploited grid computing for large-scale stochastic simulations. Stau-DPPs and EStau-DPPs (and their respective algorithms to calculate the temporal evolution) increase the set of biological systems that can be investigated in silico in the context of the stochastic methods inspired by the SSA. In fact, compared with their precursor approach (tau-DPPs), Stau-DPPs allow the stochastic and discrete analysis of crowded systems and structured geometries, while EStau-DPPs also take into account some electric properties (the membrane electric potential and its consequences), enabling, for example, the modelling of cellular signalling systems influenced by the membrane potential. In the future, we plan to improve both the formalisations and the algorithms presented in this thesis. For example, Stau-DPPs cannot model and simulate objects bigger than a single compartment, which would be convenient for the analysis of big crowding agents in a finely discretised space domain; EStau-DPPs, for their part, are currently limited to the modelling of systems composed of two compartments separated by a boundary that can be assumed to act as a capacitor (e.g. biological membranes). Moreover, we plan to optimise the parallel (MPI) implementations of both the Stau-DPP and EStau-DPP algorithms, which are presently based on a one-to-one relationship between processes and compartments, a limiting factor for the simulation of discrete spaces composed of a high number of compartments. Lastly, as grid computing proved to be a useful approach for handling a large number of simulations, we plan to develop a solution to handle the simulations required in the context of sensitivity analysis.
4

Xu, Guanglei. "Adiabatic processes, noise, and stochastic algorithms for quantum computing and quantum simulation." Thesis, University of Strathclyde, 2018. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=30919.

Full text
Abstract:
Rapid developments in experiments provide promising platforms for realising quantum computation and quantum simulation. This, in turn, opens new possibilities for developing useful quantum algorithms and explaining complex many-body physics. The advantages of quantum computation have been demonstrated in a small range of subjects, but the potential applications of quantum algorithms for solving complex classical problems are still under investigation. A deeper understanding of complex many-body systems can lead to quantum simulation of systems that are inaccessible by other means. This thesis studies several topics in quantum computation and quantum simulation. The first is improving a quantum algorithm in adiabatic quantum computing, which can be used to solve classical problems such as combinatorial optimisation and simulated annealing. We reach a new bound on the time cost of the algorithm, which has the potential to achieve a speed-up over standard adiabatic quantum computing. The second topic is understanding amplitude noise in optical lattices in the context of adiabatic state preparation and the thermalisation of the energy introduced to the system. We identify regimes where introducing a certain type of noise in experiments would improve the final fidelity of adiabatic state preparation, and demonstrate the robustness of the state preparation to imperfect noise implementations. We also discuss the competition between heating and dephasing effects, the energy introduced by non-adiabaticity and heating, and the thermalisation of the system after an application of amplitude noise on the lattice. The third topic is designing quantum algorithms to solve classical problems in fluid dynamics. We develop a quantum algorithm based around phase estimation that can be tailored to specific fluid dynamics problems, and demonstrate a quantum speed-up over classical Monte Carlo methods. 
This builds a new bridge between quantum physics and fluid dynamics engineering; it can be used to estimate the potential impact of quantum computers and provides feedback on the requirements for implementing quantum algorithms on quantum devices.
5

Park, Chuljin. "Discrete optimization via simulation with stochastic constraints." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49088.

Full text
Abstract:
In this thesis, we first develop a new method called the penalty function with memory (PFM). PFM consists of a penalty parameter and a measure of constraint violation, and it converts a discrete optimization via simulation (DOvS) problem with stochastic constraints into a series of DOvS problems without stochastic constraints. PFM determines the penalty of a visited solution based on past results of feasibility checks on that solution. Specifically, assuming a minimization problem, the penalty parameter of PFM, namely the penalty sequence, diverges to infinity for an infeasible solution but converges to zero almost surely for any strictly feasible solution under certain conditions. For a feasible solution located on the boundary of the feasible and infeasible regions, the sequence converges to zero either with high probability or almost surely. As a result, a DOvS algorithm combined with PFM performs well even when optimal solutions are tight or nearly tight. Second, we design an optimal water quality monitoring network for river systems. The problem is to find the optimal locations of a finite number of monitoring devices, minimizing the expected detection time of a contaminant spill event while guaranteeing good detection reliability. When uncertainties in spill and rain events are considered, both the expected detection time and the detection reliability need to be estimated by stochastic simulation. This problem is formulated as a stochastic DOvS problem with the objective of minimizing expected detection time and with a stochastic constraint on detection reliability, and it is solved by a DOvS algorithm combined with PFM. Finally, we improve PFM by combining it with an approximate budget allocation procedure. We revise an existing optimal budget allocation procedure so that it can handle active constraints and satisfy the conditions necessary for the convergence of PFM.
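The penalty-function conversion that PFM builds on can be sketched generically. The quadratic penalty, the fixed penalty values, and the grid search standing in for a DOvS algorithm are illustrative simplifications, not the PFM scheme itself (whose penalty sequence adapts to past feasibility checks):

```python
def penalised_objective(f, constraints, penalty):
    """Convert min f(x) s.t. g_i(x) <= 0 into an unconstrained objective by
    adding penalty * sum(max(0, g_i(x))^2) over the violated constraints."""
    def h(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + penalty * violation
    return h

# Illustrative problem: minimise x^2 subject to x >= 2, i.e. g(x) = 2 - x <= 0
f = lambda x: x * x
g = lambda x: 2.0 - x
for penalty in (1.0, 10.0, 1000.0):
    h = penalised_objective(f, [g], penalty)
    # a grid search stands in for the DOvS algorithm driving the search
    x_best = min((i / 100.0 for i in range(-500, 501)), key=h)
```

As the penalty grows, the unconstrained minimiser approaches the constrained optimum x = 2, which is exactly the "tight" boundary case that motivates PFM's adaptive penalty sequence.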
6

Yarmolskyy, Oleksandr. "Využití distribuovaných a stochastických algoritmů v síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-370918.

Full text
Abstract:
This thesis deals with distributed and stochastic algorithms, including testing of their convergence in networks. The theoretical part briefly describes the above-mentioned algorithms, including their classification, problems, advantages, and disadvantages. Two distributed algorithms and two stochastic algorithms are then chosen. The practical part compares their speed of convergence on various network topologies in Matlab.
7

Zhang, Chao Ph D. Massachusetts Institute of Technology. "Computationally efficient offline demand calibration algorithms for large-scale stochastic traffic simulation models." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120639.

Full text
Abstract:
Thesis: Ph. D. in Transportation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 168-181).
This thesis introduces computationally efficient, robust, and scalable calibration algorithms for large-scale stochastic transportation simulators. Unlike traditional "black-box" calibration algorithms, a macroscopic analytical network model is embedded within a metamodel simulation-based optimization (SO) framework. The computational efficiency is achieved through the analytical network model, which provides the algorithm with low-fidelity, analytical, differentiable, problem-specific structural information and can be evaluated efficiently. The thesis starts with the calibration of low-dimensional behavioral and supply parameters; it then addresses a challenging high-dimensional origin-destination (OD) demand matrix calibration problem, and finally enhances the OD demand calibration by taking advantage of additional high-resolution traffic data. The proposed general calibration framework is suitable for a broad class of calibration problems and has the flexibility to be extended to incorporate emerging data sources. The proposed algorithms are first validated on synthetic networks and then tested through a case study of a large-scale real-world network with 24,335 links and 11,345 nodes in the metropolitan area of Berlin, Germany. The case studies indicate that the proposed calibration algorithms are computationally efficient, improve the quality of solutions, and are robust both to the initial conditions and to the stochasticity of the simulator, under a tight computational budget. Compared to a traditional "black-box" method, the proposed method improves the computational efficiency by an average of 30%, as measured by total computational runtime, and simultaneously yields an average 70% improvement in the quality of solutions, as measured by its objective function estimates, for the OD demand calibration. Moreover, the addition of intersection turning flows further enhances performance by improving the fit to field data by an average of 20% (resp. 
14%), as measured by the root mean square normalized (RMSN) errors of traffic counts (resp. intersection turning flows).
by Chao Zhang.
Ph. D. in Transportation
8

Chen, Si. "Design of Energy Storage Controls Using Genetic Algorithms for Stochastic Problems." UKnowledge, 2015. http://uknowledge.uky.edu/ece_etds/80.

Full text
Abstract:
A successful power system in military applications (warships, aircraft, armored vehicles, etc.) must operate acceptably under a wide range of conditions involving different loading configurations; it must maintain war-fighting ability and recover quickly and stably after being damaged. The introduction of energy storage into the power system of an electric warship integrated engineering plant (IEP) may increase the availability and survivability of electrical power under these conditions. Herein, the problem of energy storage control is addressed in terms of maximizing average performance. A notional medium-voltage dc system is used as the system model in the study. A linear programming model is used to simulate the power system, and two sets of states, mission states and damage states, are formulated to simulate the stochastic scenarios with which the IEP may be confronted. A genetic algorithm is applied to the design of the IEP to find optimized energy storage control parameters. Using this algorithm, the maximum average performance of the power system is found.
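As an illustration of the approach, a minimal real-coded genetic algorithm (elitist truncation selection, uniform crossover, Gaussian mutation) can be sketched as follows. The operators, parameter values, and the stand-in "average performance over scenarios" fitness are illustrative, not the thesis design:

```python
import random

def genetic_search(fitness, dim, pop_size=30, gens=40, rng=random.Random(3)):
    """Minimal real-coded GA over [0, 1]^dim: keep the best half of each
    generation, refill with uniform-crossover children plus Gaussian mutation."""
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(1.0, max(0.0, x + rng.gauss(0, 0.05))) for x in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical stand-in for "average performance over stochastic scenarios":
scenarios = [0.2, 0.5, 0.8]          # illustrative scenario weights
fit = lambda x: -sum((x[0] - s) ** 2 for s in scenarios) / len(scenarios)
best = genetic_search(fit, dim=1)    # this fitness peaks at x[0] = 0.5
```

In the thesis setting, evaluating the fitness of a chromosome of control parameters would instead mean running the linear programming power-system model over the mission and damage states.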
9

Shang, Xiaocheng. "Extended stochastic dynamics : theory, algorithms, and applications in multiscale modelling and data science." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20422.

Full text
Abstract:
This thesis addresses the sampling problem in a high-dimensional space, i.e., the computation of averages with respect to a defined probability density that is a function of many variables. Such sampling problems arise in many application areas, including molecular dynamics, multiscale models, and Bayesian sampling techniques used in emerging machine learning applications. Of particular interest are thermostat techniques, in the setting of a stochastic-dynamical system, that preserve the canonical Gibbs ensemble defined by an exponentiated energy function. In this thesis we explore theory, algorithms, and numerous applications in this setting. We begin by comparing numerical methods for particle-based models. The class of methods considered includes dissipative particle dynamics (DPD) as well as a newly proposed stochastic pairwise Nosé-Hoover-Langevin (PNHL) method. Splitting methods are developed and studied in terms of their thermodynamic accuracy, two-point correlation functions, and convergence. When computational efficiency is measured by the ratio of thermodynamic accuracy to CPU time, we report significant advantages in simulation for the PNHL method compared to popular alternative schemes in the low-friction regime, without degradation of convergence rate. We propose a pairwise adaptive Langevin (PAdL) thermostat that fully captures the dynamics of DPD and thus can be directly applied in the setting of momentum-conserving simulation. These methods are potentially valuable for nonequilibrium simulation of physical systems. We again report substantial improvements in both equilibrium and nonequilibrium simulations compared to popular schemes in the literature. We also discuss the proper treatment of the Lees-Edwards boundary conditions, an essential part of modelling shear flow. We also study numerical methods for sampling probability measures in high dimension where the underlying model is only approximately identified with a gradient system. 
These methods are important in multiscale modelling and in the design of new machine learning algorithms for inference and parameterization for large datasets, challenges which are increasingly important in "big data" applications. In addition to providing a more comprehensive discussion of the foundations of these methods, we propose a new numerical method for the adaptive Langevin/stochastic gradient Nosé-Hoover thermostat that achieves a dramatic improvement in numerical efficiency over the most popular stochastic gradient methods reported in the literature. We demonstrate that the newly established method inherits a superconvergence property (fourth order convergence to the invariant measure for configurational quantities) recently demonstrated in the setting of Langevin dynamics. Furthermore, we propose a covariance-controlled adaptive Langevin (CCAdL) thermostat that can effectively dissipate parameter-dependent noise while maintaining a desired target distribution. The proposed method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
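All of these thermostats target the canonical Gibbs density proportional to exp(-U(q)/kT). As a baseline sketch (not one of the thesis's methods such as PNHL, PAdL, or CCAdL), the simplest scheme in this family, overdamped Langevin dynamics discretised with an Euler-Maruyama step, looks like this:

```python
import math
import random

def overdamped_langevin(grad_U, q0, beta=1.0, dt=1e-2, n_steps=100_000,
                        rng=random.Random(4)):
    """Euler-Maruyama discretisation of dq = -grad U(q) dt + sqrt(2/beta) dW,
    whose invariant density is proportional to exp(-beta * U(q))."""
    q, samples = q0, []
    sigma = math.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        q = q - grad_U(q) * dt + sigma * rng.gauss(0.0, 1.0)
        samples.append(q)
    return samples

# Harmonic potential U(q) = q^2 / 2: the target is a standard Gaussian
samples = overdamped_langevin(lambda q: q, q0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The more elaborate thermostats discussed above improve on this baseline in discretisation accuracy and in how they handle momenta, pairwise friction, and gradient noise.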
10

Egilmez, Gokhan. "Stochastic Cellular Manufacturing System Design and Control." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1354351909.

Full text
11

Hessami, Mohammad Hessam. "Modélisation multi-échelle et hybride des maladies contagieuses : vers le développement de nouveaux outils de simulation pour contrôler les épidémies." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS036/document.

Full text
Abstract:
Theoretical studies in epidemiology mainly use differential equations to study (or even attempt to predict) the infectious processes of contagious diseases, often under unrealistic assumptions (e.g. spatially homogeneous populations). These models, however, are not well suited to studying epidemiological processes at different scales, and they are not effective at correctly predicting epidemics. Such models should, in particular, be linked to the social and spatial structure of populations. In this thesis, we propose a set of new models in which different levels of spatiality (for example the local structure of the population, in particular group dynamics, the spatial distribution of individuals in the environment, the role of resistant individuals, etc.) are taken into account to explain and predict how transmissible diseases develop and spread at different scales, even at the scale of large populations. Moreover, the way the models we developed are parametrised allows them to be connected together so as to describe the epidemiological process both at a large scale (the population of a big city, a country ...) and with accuracy in limited areas (office buildings, schools). We first succeeded in including the notion of groups in the differential-equation systems of SIR (Susceptible, Infected, Recovered) models by rewriting the population dynamics in the manner of enzymatic reactions with non-competitive inhibition: groups (a form of complex) form with different compositions of S, I and R individuals, and the R individuals behave here as non-competitive inhibitors. 
We then coupled such SIR models with the global group dynamics simulated by stochastic algorithms in a homogeneous space, or with the emergent group dynamics obtained in multi-agent systems. Since our models provide detailed information at different scales (i.e. microscopic resolution in time, space and population), we can propose a criticality analysis of epidemiological processes. We believe that diseases in a given social and spatial environment present characteristic signatures, and that such measurements could allow the identification of the factors that modify their dynamics. We thus aim to extract the essence of real epidemiological systems using various mathematical and numerical models. As our models can take individual behaviours and population dynamics into account, they are able to use Big-Data information collected through mobile-network and social-network technologies. A long-term objective of this work is to use such models as new tools to reduce epidemics by guiding human rhythms and organisation, for example by proposing new architectures and by changing behaviours to limit epidemic spread.
Theoretical studies in epidemiology mainly use differential equations, often under unrealistic assumptions (e.g. spatially homogeneous populations), to study the development and spreading of contagious diseases. Such models are not, however, well adapted to understanding epidemiological processes at different scales, nor are they efficient for correctly predicting epidemics. Yet such models should be closely related to the social and spatial structure of populations. In the present thesis, we propose a series of new models in which different levels of spatiality (e.g. local structure of the population, in particular group dynamics, spatial distribution of individuals in the environment, role of resistant people, etc.) are taken into account to explain and predict how communicable diseases develop and spread at different scales, even at the scale of large populations. Furthermore, the manner in which our models are parametrised allows them to be connected together so as to describe the epidemiological process at a large scale (population of a big town, country ...) and, at the same time, with accuracy in limited areas (office buildings, schools). We first succeeded in including the notion of groups in SIR (Susceptible, Infected, Recovered) differential equation systems by rewriting the SIR dynamics in the form of an enzymatic reaction in which group-complexes of different composition in S, I and R individuals form, and where R people behave as non-competitive inhibitors. Then, global group dynamics simulated by stochastic algorithms in a homogeneous space, as well as emerging ones obtained in multi-agent systems, are coupled to such SIR epidemic models. As our group-based models provide fine-grained information (i.e. microscopic resolution of time, space and population), we propose an analysis of the criticality of epidemiological processes.
We think that diseases in a given social and spatial environment present characteristic signatures, and that such measurements could allow the identification of the factors that modify their dynamics. We aim here to extract the essence of real epidemiological systems by using various methods based on different computer-oriented approaches. As our models can take into account individual behaviours and group dynamics, they are able to use big-data information yielded by smart-phone technologies and social networks. As a long-term objective derived from the present work, one can expect good predictions of the development of epidemics, but also a tool to reduce epidemics by guiding new environmental architectures and by changing human health-related behaviours.
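The classical SIR mechanism that these group-based models extend can be sketched in a few lines. The sketch below is a minimal forward-Euler integration of the standard equations with hypothetical parameter values; the thesis's enzymatic, group-complex rewriting is considerably richer:

```python
# Minimal forward-Euler integration of the classical SIR equations.
# Parameter values (beta, gamma) are hypothetical, not from the thesis.

def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I (fractions of the population, so S + I + R = 1)."""
    new_inf = beta * s * i
    rec = gamma * i
    return s - new_inf * dt, i + (new_inf - rec) * dt, r + rec * dt

def simulate_sir(s=0.99, i=0.01, r=0.0, beta=0.5, gamma=0.1,
                 dt=0.01, steps=10000):
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r
```

With these illustrative values (basic reproduction number beta/gamma = 5), the epidemic burns through most of the population before dying out.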
APA, Harvard, Vancouver, ISO, and other styles
12

Boczkowski, Lucas. "Search and broadcast in stochastic environments, a biological perspective." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC044.

Full text
Abstract:
This thesis is built around two series of works motivated by experiments on ants. Although inspired by biology, the models we develop use the terminology and approach of theoretical computer science. The first model is inspired by collaborative food transport in the species P. Longicornis. Some fundamental aspects of the process can be described as a search problem on a graph in the presence of a certain kind of noisy advice at each node. These indications represent short pheromone marks laid in front of the carried object to facilitate its navigation. In this thesis, we give a complete analysis of the problem when the underlying graph is a tree, a relevant assumption in a computer science setting. In particular, our model can be seen as a generalization of binary search to trees, in the presence of noise. Surprisingly, the behaviours of optimal algorithms in this setting differ depending on the type of guarantee studied: convergence in expectation or with high probability. The second model presented in this thesis was designed to describe information dissemination among desert ants. In our model, exchanges occur uniformly at random and are subject to noise. We prove a lower bound on the number of interactions required, as a function of group size. The bound, as well as the model's assumptions, appear consistent with experimental data. A theoretical consequence of this result is a separation, in this setting, between the PUSH and PULL variants of the noisy broadcast problem. We also study a version of the problem with stronger convergence guarantees. In this case, the problem can be solved efficiently, even when the information exchanged during each interaction is very limited.
This thesis is built around two series of works, each motivated by experiments on ants. We derive and analyse new models, that use computer science concepts and methodology, despite their biological roots and motivation. The first model studied in this thesis takes its inspiration in collaborative transport of food in the P. Longicornis species. We find that some key aspects of the process are well described by a graph search problem with noisy advice. The advice corresponds to characteristic short scent marks laid in front of the load in order to facilitate its navigation. In this thesis, we provide detailed analysis of the model on trees, which are relevant graph structures from a computer science standpoint. In particular our model may be viewed as a noisy extension of binary search to trees. Tight results in expectation and high probability are derived with matching upper and lower bounds. Interestingly, there is a sharp phase transition phenomenon for the expected runtime, but not when the algorithms are only required to succeed with high probability. The second model we work with was initially designed to capture information broadcast amongst desert ants. The model uses a stochastic meeting pattern and noise in the interactions, in a way that matches experimental data. Within this theoretical model, we present in this document a strong lower bound on the number of interactions required before information can be spread reliably. Experimentally, we see that the time required for the recruitment process of even few ants increases sharply with the group size, in accordance with our result. A theoretical consequence of the lower bound is a separation between the uniform noisy PUSH and PULL models of interaction. We also study a close variant of broadcast, without noise this time but under more strict convergence requirements, and show that in this case the problem can be solved efficiently, even with very limited exchange of information on each interaction.
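The noisy-advice search described above can be illustrated on the simplest structure, a sorted array (a path graph), where the classical remedy for a noisy comparator is to repeat each query and decide by majority vote. This is only an illustrative sketch with assumed parameters, not the thesis's algorithm for trees:

```python
import random

def noisy_less(x, target, p_err, rng):
    """Comparison oracle 'x < target?' that lies with probability p_err."""
    truth = x < target
    if rng.random() < p_err:
        return not truth
    return truth

def noisy_binary_search(arr, target, p_err=0.2, votes=15, rng=random):
    """Binary search for `target` in sorted `arr` with a noisy comparator:
    each comparison is repeated `votes` times and decided by majority."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        yes = sum(noisy_less(arr[mid], target, p_err, rng)
                  for _ in range(votes))
        if yes > votes // 2:       # majority says arr[mid] < target
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Repeating each comparison drives the per-step error probability down exponentially in `votes`, at a constant-factor cost per step.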
APA, Harvard, Vancouver, ISO, and other styles
13

Ittiwattana, Waraporn. "A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1030366020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Costa, Jardel da Silva. "Minimização do potencial de Lennard-Jones via otimização global." Universidade do Estado do Rio de Janeiro, 2010. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=1604.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Because of its importance, the so-called Lennard-Jones problem has attracted researchers from several fields of pure and applied science. The problem amounts to finding the coordinates of a system in three-dimensional Euclidean space that correspond to a minimum of a potential energy. This problem plays a fundamentally important role in determining the stability of molecules in highly branched arrangements, such as proteins. The main difficulty in solving the Lennard-Jones problem stems from the fact that the objective function is non-convex and highly nonlinear in many variables, and thus exhibits a large number of local minima. In this work, several stochastic global optimization methods were used, and their numerical results were compared in order to determine which are best suited to minimizing this potential. The present study only considered micro-clusters of 3 to 10 atoms. The results obtained were also compared with the best results currently known in the literature. The optimization algorithms used were all implemented in C++.
Because of its importance, the so-called Lennard-Jones problem has attracted researchers from various fields of pure and applied science. This problem boils down to finding the coordinates of a system in three-dimensional Euclidean space that correspond to a minimum of the potential energy. It plays a fundamental role in determining the stability of molecules in highly branched arrangements, such as proteins. The main difficulty in solving the Lennard-Jones problem stems from the fact that the objective function is non-convex and highly nonlinear with several variables, thus presenting a large number of local minima. Here, we used some methods of stochastic global optimization and compared their numerical results, in order to see which are better suited to the minimization of the potential. In this study, we addressed only micro-clusters of 3 to 10 atoms. The results were also compared with the currently best known results in the literature. The optimization algorithms were all implemented in C++.
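For concreteness, the Lennard-Jones objective and a toy stochastic descent can be written down directly. The `random_perturbation_search` below is a deliberately naive accept-if-better scheme with made-up step sizes, standing in for the stochastic global optimization methods the thesis actually compares:

```python
import random

def lj_energy(coords):
    """Total Lennard-Jones energy in reduced units (epsilon = sigma = 1):
    V(r) = 4 * (r**-12 - r**-6), summed over all pairs of atoms."""
    e, n = 0.0, len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def random_perturbation_search(n, steps=5000, step=0.05, seed=0):
    """Naive stochastic descent: move one random atom by a Gaussian step
    and keep the move only if the total energy decreases."""
    rng = random.Random(seed)
    side = n ** (1.0 / 3.0)
    coords = [[rng.uniform(0.0, side) for _ in range(3)] for _ in range(n)]
    best = lj_energy(coords)
    for _ in range(steps):
        i = rng.randrange(n)
        old = coords[i][:]
        coords[i] = [c + rng.gauss(0.0, step) for c in old]
        e = lj_energy(coords)
        if e < best:
            best = e
        else:
            coords[i] = old
    return best, coords
```

Two atoms at the pair-equilibrium distance 2^(1/6) contribute exactly -1 in these units, which gives a quick sanity check of the energy function.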
APA, Harvard, Vancouver, ISO, and other styles
15

Dalgedaitė, Dainė. "Stochastinio modeliavimo algoritmai ieškant talpiausio geometrinių figūrų pakavimo." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_172150-44944.

Full text
Abstract:
This work briefly reviews the origins of figure packing and describes several stochastic simulation algorithms used for packing figures. The perturbation method is analysed, and two programs packing equal circles into the unit square by this method were developed; their algorithms are described in detail. The capabilities of the programs were investigated experimentally: with each program, n circles were packed 30 times, for 3 ≤ n ≤ 15 and n = 25, 50, 75, 100. The packing results were recorded and summarised, then compared with each other and with the packing results obtained by the billiard method in Violeta Sabonienė's master's thesis "Billiard simulation algorithms for finding the densest packing of geometric figures". The program listings and calculation tables are given in the appendices.
This work examines the origins of figure packing and describes stochastic simulation algorithms for finding the densest packing of geometric figures. The method of perturbation is analysed, and two programs for packing equal circles in the unit square were developed, with their algorithms described in detail. The capabilities of the programs were tested experimentally, using each program 30 times. In each experiment n circles were packed, where 3 ≤ n ≤ 15 and n = 25, 50, 75, 100. The packing results were recorded and summarized. These results were compared with each other and also with the results of Violeta Sabonienė's master's thesis "Billiard simulation algorithms for finding the densest packing of geometric figures". The program listings and the calculation charts are given in the appendices.
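The perturbation idea can be sketched by treating the packing of n equal circles in the unit square as spreading n centre points so as to maximize their minimal pairwise distance: jiggle one point at random and keep the move only when the criterion improves. This is a minimal illustration, not the thesis's programs:

```python
import random

def min_pairwise_dist(pts):
    """Smallest distance between any two of the points."""
    n = len(pts)
    return min(((pts[i][0] - pts[j][0]) ** 2
                + (pts[i][1] - pts[j][1]) ** 2) ** 0.5
               for i in range(n) for j in range(i + 1, n))

def perturbation_pack(n, steps=5000, sigma=0.02, seed=0):
    """Perturbation search: jiggle one centre at a time inside the unit
    square and keep the move only when the minimal pairwise distance
    (the packing criterion) improves."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    best = min_pairwise_dist(pts)
    for _ in range(steps):
        i = rng.randrange(n)
        old = pts[i]
        pts[i] = (min(1.0, max(0.0, old[0] + rng.gauss(0.0, sigma))),
                  min(1.0, max(0.0, old[1] + rng.gauss(0.0, sigma))))
        d = min_pairwise_dist(pts)
        if d > best:
            best = d
        else:
            pts[i] = old
    return best, pts
```

The circle radius corresponding to a point configuration can then be derived from the achieved minimal distance; repeated runs from random starts (as in the 30-run experiments above) guard against poor local optima.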
APA, Harvard, Vancouver, ISO, and other styles
16

Xu, Zhouyi. "Stochastic Modeling and Simulation of Gene Networks." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/645.

Full text
Abstract:
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, sophisticated stochastic modeling techniques and efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks are still lacking. Therefore, the objective of this research is to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks, and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs. We also proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system. The K-leap method constrains the total number of reactions in one leap to a properly chosen number K, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K=1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks in which certain reactant molecular species have small numbers of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limit simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed. In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM).
We further developed a systematic method for selecting the values of importance sampling parameters. Applying our parameter selection method to the wSSA and the wNRM, we obtain an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which can provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy. We also develop a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employ our SSA to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise, in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in oscillation peaks. Moreover, although average time delays are essential to simulating the oscillation, random changes in time delays within a certain range around the fixed average cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that the oscillation can be entrained by light/dark cycles.
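As background for the leap methods discussed above, a bare-bones tau-leap for a single decay reaction A -> B illustrates the core idea: the number of firings in a leap of length tau is drawn as Poisson(a*tau), where a is the propensity. The cap at the current population is a crude safeguard against negative counts, and all parameter values are hypothetical:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's product method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def tau_leap_decay(n0, c, tau, t_end, seed=0):
    """Tau-leaping for A -> B with propensity a = c * nA: each leap fires
    Poisson(a * tau) reactions, capped at nA so the population cannot go
    negative (a crude safeguard; published leap methods are more careful)."""
    rng = random.Random(seed)
    nA, t = n0, 0.0
    while t < t_end and nA > 0:
        k = min(nA, poisson(c * nA * tau, rng))
        nA -= k
        t += tau
    return nA
```

The bias discussed in the abstract arises exactly from such truncations and approximations: the realized mean number of firings per channel can drift from the true mean of the underlying jump process.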
APA, Harvard, Vancouver, ISO, and other styles
17

Geltz, Brad. "Handling External Events Efficiently in Gillespie's Stochastic Simulation Algorithm." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/168.

Full text
Abstract:
Gillespie's Stochastic Simulation Algorithm (SSA) provides an elegant approach for simulating models composed of coupled chemical reactions. Although this approach can be used to describe a wide variety of biological, chemical, and ecological systems, systems often have external behaviors that are difficult or impossible to characterize using chemical reactions alone. This work extends the applicability of the SSA by adding mechanisms for the inclusion of external events and external triggers. We define events as changes that occur in the system at a specified time, while triggers are defined as changes that occur to the system when a particular condition is fulfilled. We further extend the SSA with an efficient implementation of these model features. This work allows numerous systems that would previously have been impossible or impractical to model using the SSA to take advantage of this powerful simulation technique.
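A minimal sketch of the idea, assuming a single decay reaction and one timed event: Gillespie's direct method draws exponential waiting times, and when the next reaction would jump past the event time, the event is applied first and the waiting time redrawn (valid because the exponential distribution is memoryless). This is an illustration, not the implementation described in the thesis:

```python
import math
import random

def ssa_with_event(n0, c, t_end, event_time, added, seed=0):
    """Gillespie direct method for the decay A -> 0 (propensity c * nA),
    extended with one external event: at `event_time`, `added` molecules
    of A are injected.  If the sampled reaction time would jump past the
    event, the event fires first and the waiting time is redrawn, which
    is valid because the exponential distribution is memoryless."""
    rng = random.Random(seed)
    nA, t, pending = n0, 0.0, event_time
    while t < t_end:
        a = c * nA
        if a == 0.0:                      # no reaction can fire
            if pending is None or pending > t_end:
                break
            t, nA, pending = pending, nA + added, None
            continue
        dt = -math.log(1.0 - rng.random()) / a
        if pending is not None and t + dt >= pending:
            t, nA, pending = pending, nA + added, None
            continue
        t += dt
        nA -= 1
    return nA
```

A trigger would be handled the same way, except that the condition (e.g. a population crossing a threshold) is checked after each state change rather than against a preset clock time.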
APA, Harvard, Vancouver, ISO, and other styles
18

Picot, Romain. "Amélioration de la fiabilité numérique de codes de calcul industriels." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS242.

Full text
Abstract:
Much work is devoted to the performance of numerical simulations, yet it is also important to take into account the impact of rounding errors on the results produced. These rounding errors can be estimated with Discrete Stochastic Arithmetic (DSA), implemented in the CADNA library. Compensated algorithms improve the accuracy of results without changing the numerical type used. They were designed to be generally executed with rounding to nearest. We have established error bounds for these algorithms with directed rounding and shown that they can be used successfully with the random rounding mode of DSA. We have also studied the impact of a target precision of the results on the numerical types of the different variables. We developed the PROMISE tool, which performs these type changes automatically while validating the results using DSA. The PROMISE tool has thus provided new type configurations mixing single and double precision in various numerical programs, and in particular in the MICADO code developed at EDF. We have shown how to estimate with DSA the rounding errors generated in quadruple precision. We have proposed a version of CADNA that supports quadruple precision, which in particular allowed us to validate the computation of multiple roots of polynomials. Finally, we used this new version of CADNA in the PROMISE tool so that it can provide configurations with three types (single, double and quadruple precision).
Many studies are devoted to performance of numerical simulations. However it is also important to take into account the impact of rounding errors on the results produced. These rounding errors can be estimated with Discrete Stochastic Arithmetic (DSA), implemented in the CADNA library. Compensated algorithms improve the accuracy of results, without changing the numerical types used. They have been designed to be generally executed with rounding to nearest. We have established error bounds for these algorithms with directed rounding and shown that they can be used successfully with the random rounding mode of DSA. We have also studied the impact of a target precision of the results on the numerical types of the different variables. We have developed the PROMISE tool which automatically performs these type changes while validating the results thanks to DSA. The PROMISE tool has thus provided new configurations of types combining single and double precision in various programs and in particular in the MICADO code developed at EDF. We have shown how to estimate with DSA rounding errors generated in quadruple precision. We have proposed a version of CADNA that integrates quadruple precision and that allowed us in particular to validate the computation of multiple roots of polynomials. Finally we have used this new version of CADNA in the PROMISE tool so that it can provide configurations with three types (single, double and quadruple precision).
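As an illustration of what a compensated algorithm does, here is Neumaier's variant of compensated summation, which recovers the rounding error of each addition and carries it in a separate accumulator. The thesis analyses such algorithms under directed and random rounding; this sketch only shows the usual round-to-nearest behaviour:

```python
def neumaier_sum(xs):
    """Compensated (Kahan-Neumaier) summation: the rounding error of each
    addition is recovered exactly and accumulated in `c`."""
    s, c = 0.0, 0.0
    for x in xs:
        t = s + x
        if abs(s) >= abs(x):
            c += (s - t) + x   # low-order bits of x were lost
        else:
            c += (x - t) + s   # low-order bits of s were lost
        s = t
    return s + c
```

On a sequence like `[1.0, 1e100, 1.0, -1e100]`, naive summation loses both small terms and returns 0.0, while the compensated version recovers the exact result 2.0 without changing the working precision.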
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Minghan. "Stochastic Modeling and Simulation of Multiscale Biochemical Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90898.

Full text
Abstract:
Numerous challenges arise in modeling and simulation as biochemical networks are discovered with increasing complexities and unknown mechanisms. With the improvement in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models for gene and protein networks at cellular levels that match well with the data and account for cellular noise. This dissertation studies a stochastic spatiotemporal model of the Caulobacter crescentus cell cycle. A two-dimensional model based on a Turing mechanism is investigated to illustrate the bipolar localization of the protein PopZ. However, stochastic simulations are often impeded by expensive computational cost for large and complex biochemical networks. The hybrid stochastic simulation algorithm is a combination of differential equations for traditional deterministic models and Gillespie's algorithm (SSA) for stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks with multiscale features, which contain both species populations and reaction rates with widely varying magnitude. The populations of some reactant species might be driven negative if they are involved in both deterministic and stochastic systems. This dissertation investigates the negativity problem of the hybrid method, proposes several remedies, and tests them with several models including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of empirical data must be large enough to obtain statistically valid parameter estimates. 
To optimize system parameters, a quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic budding yeast cell cycle model by matching multivariate probability distributions between simulated results and empirical data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental cooperative binding mechanism by a stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different objective functions are explored targeting different features of the empirical data.
Doctor of Philosophy
Modeling and simulation of biochemical networks faces numerous challenges as biochemical networks are discovered with increased complexity and unknown mechanisms. With improvements in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models, or numerical models based on probability distributions, for gene and protein networks at cellular levels that match well with the data and account for randomness. This dissertation studies a stochastic model in space and time of the life cycle of the bacterium Caulobacter. A two-dimensional model based on a natural pattern mechanism is investigated to illustrate the changes in space and time of a key protein population. However, stochastic simulations are often complicated by the expensive computational cost for large and sophisticated biochemical networks. The hybrid stochastic simulation algorithm is a combination of traditional deterministic models, or analytical models with a single output for a given input, and stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks that contain both species populations and reaction rates with widely varying magnitude. The populations of some species may become negative in the simulation under some circumstances. This dissertation investigates negative population estimates from the hybrid method, proposes several remedies, and tests them with several cases including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of observed data must be large enough to obtain valid results.
To optimize system parameters, the quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic (budding) yeast life cycle model by matching different distributions between simulated results and observed data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental molecular binding mechanism by the stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different optimization strategies are explored targeting different features of the observed data.
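The Hill-equation simplification mentioned above replaces a multi-step cooperative binding mechanism by a single saturating rate law. A minimal version of the function, with illustrative arguments only:

```python
def hill(x, vmax, k, n):
    """Hill rate law vmax * x**n / (k**n + x**n): a single sigmoidal,
    saturating term standing in for an n-step cooperative binding
    cascade.  At x = k the rate is exactly vmax / 2."""
    xn = x ** n
    return vmax * xn / (k ** n + xn)
```

In a stochastic setting such a term would be used directly as a reaction propensity, with the Hill coefficient n and half-saturation constant k among the parameters fitted by an optimizer such as QNSTOP.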
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Shuo. "Analysis and Application of Haseltine and Rawlings's Hybrid Stochastic Simulation Algorithm." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82717.

Full text
Abstract:
Stochastic effects in cellular systems are usually modeled and simulated with Gillespie's stochastic simulation algorithm (SSA), which follows the same theoretical derivation as the chemical master equation (CME), but the low efficiency of the SSA limits its application to large chemical networks. To improve the efficiency of stochastic simulations, Haseltine and Rawlings proposed a hybrid ODE/SSA algorithm, which combines ordinary differential equations (ODEs) for traditional deterministic models with the SSA for stochastic models. In this dissertation, accuracy analysis, efficient implementation strategies, and the application of Haseltine and Rawlings's hybrid method (HR) to a budding yeast cell cycle model are discussed. Accuracy of the hybrid method HR is studied based on a linear chain reaction system, motivated by the modeling practice used for the budding yeast cell cycle control mechanism. Mathematical analysis and numerical results both show that the hybrid method HR is accurate if either the numbers of molecules of reactants in fast reactions are above certain thresholds, or the rate constants of fast reactions are much larger than those of slow reactions. Our analysis also shows that the hybrid method HR allows for a much greater region in system parameter space than the slow-scale SSA (ssSSA) and the stochastic quasi-steady-state assumption (SQSSA) method do. Implementation of the hybrid method HR requires a stiff ODE solver for numerical integration and an efficient event-handling strategy for slow reaction firings. In this dissertation, an event-handling strategy is developed based on inverse interpolation. The performance of five widely used stiff ODE solvers is measured in three numerical experiments. Furthermore, inspired by the strategy of the hybrid method HR, a hybrid ODE/SSA stochastic model for the budding yeast cell cycle is developed, based on a deterministic model in the literature.
Simulation results of this hybrid model match very well with biological experimental data, and this model is the first to do so with these recently available experimental data. This study demonstrates that the hybrid method HR has great potential for stochastic modeling and simulation of large biochemical networks.
Ph. D.
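One ingredient of hybrid ODE/SSA schemes such as HR is the partitioning of reactions into a fast subset, handled deterministically, and a slow subset, handled stochastically. The sketch below uses hypothetical thresholds on propensities and reactant populations; the actual partitioning criteria analysed in the thesis are more involved:

```python
def partition_reactions(propensities, reactant_pops,
                        a_thresh=100.0, n_thresh=50):
    """Classify each reaction for a hybrid ODE/SSA scheme: treat it
    deterministically (fast) only when its propensity is large AND all
    of its reactant populations are large; otherwise keep it in the
    stochastic (slow) subset.  The thresholds here are hypothetical."""
    fast, slow = [], []
    for j, (a, pops) in enumerate(zip(propensities, reactant_pops)):
        if a >= a_thresh and all(p >= n_thresh for p in pops):
            fast.append(j)
        else:
            slow.append(j)
    return fast, slow
```

Requiring both conditions reflects the accuracy result quoted above: large reactant counts or clearly separated rate constants are what make the deterministic treatment of the fast subset safe.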
APA, Harvard, Vancouver, ISO, and other styles
21

Liu, Weigang. "A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/96455.

Full text
Abstract:
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity in comparison with the StochSim algorithm, another simulation algorithm originally proposed to simulate stochastic lattice models. I compare the performance of both algorithms on two different examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison between the simulation results from both algorithms validates our claim that the newly proposed algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better in certain specific cases.
Computer simulation has been developed for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit certain random behaviors according to specific rules. Such models are mainly studied using computer simulations. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods so that it is more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for the stochastic lattice model, using entirely different concepts and procedures. I developed a simulation package and applied it to two different examples using both methods, and then conducted a series of numerical experiments to compare their performance. I conclude that they are roughly equivalent and that our new method performs better than the old one in certain special cases.
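A StochSim-style update of the kind compared in this thesis can be sketched for a toy one-dimensional contact process: pick a site uniformly at random and apply a local random rule. The rule and rates below are invented for illustration and are not the May-Leonard or Ziff-Gulari-Barshad dynamics studied in the thesis:

```python
import random

def stochsim_step(lattice, p_spread, p_die, rng):
    """StochSim-style update for a toy 1-D contact process on a ring:
    pick one site uniformly at random; an occupied site dies with
    probability p_die, otherwise it copies itself onto a random
    neighbour with probability p_spread."""
    n = len(lattice)
    i = rng.randrange(n)
    if lattice[i] == 1:
        if rng.random() < p_die:
            lattice[i] = 0
        elif rng.random() < p_spread:
            lattice[(i + rng.choice((-1, 1))) % n] = 1
    return lattice
```

A Gillespie-type alternative, as proposed in the thesis, would instead track the propensity of each possible local transition and sample which one fires next, skipping the many picks that result in no change.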
APA, Harvard, Vancouver, ISO, and other styles
22

Vagne, Quentin. "Stochastic models of intra-cellular organization : from non-equilibrium clustering of membrane proteins to the dynamics of cellular organelles." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC205/document.

Full text
Abstract:
This thesis deals with cell biology, and more particularly with the internal organization of eukaryotic cells. Although the various actors governing this organization have largely been identified, it is still unknown how such a complex and dynamic architecture can emerge from simple interactions between molecules. One of the goals of the studies presented in this thesis is to build a theoretical framework for understanding this self-organization. To this end, we study specific problems at different scales, ranging from the nanometre (dynamics of heterogeneities in biological membranes) to the micrometre (organization of cellular organelles), using stochastic numerical simulations and analytical methods. The text is organized to present the results from the smallest to the largest scales. In the first chapter, we study the membrane organization of a single compartment by modelling the dynamics of membrane heterogeneities. In the second chapter, we study the dynamics of a single compartment exchanging vesicles with the external medium. We also study how two different compartments can be generated by the same vesicle-exchange mechanisms. Finally, in the third chapter, we develop a global model of the dynamics of cellular organelles, in the particular context of the biogenesis of the Golgi apparatus.
This thesis deals with cell biology, and particularly with the internal organization of eukaryotic cells. Although many of the molecular players contributing to the intra-cellular organization have been identified, we are still far from understanding how the complex and dynamical intra-cellular architecture emerges from the self-organization of individual molecules. One of the goals of the different studies presented in this thesis is to provide a theoretical framework to understand such self-organization. We cover specific problems at different scales, ranging from membrane organization at the nanometer scale to whole organelle structure at the micron scale, using analytical work and stochastic simulation algorithms. The text is organized to present the results from the smallest to the largest scales. In the first chapter, we study the membrane organization of a single compartment by modeling the dynamics of membrane heterogeneities. In the second chapter we study the dynamics of one membrane-bound compartment exchanging vesicles with the external medium. Still in the same chapter, we investigate the mechanisms by which two different compartments can be generated by vesicular sorting. Finally in the third chapter, we develop a global model of organelle biogenesis and dynamics in the specific context of the Golgi apparatus.
APA, Harvard, Vancouver, ISO, and other styles
23

Charlebois, Daniel A. "An algorithm for the stochastic simulation of gene expression and cell population dynamics." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28755.

Full text
Abstract:
Over the past few years, it has been increasingly recognized that stochastic mechanisms play a key role in the dynamics of biological systems. Genetic networks are one example where molecular-level fluctuations are of particular importance. Here stochasticity in the expression of gene products can result in genetically identical cells in the same environment displaying significant variation in biochemical or physical attributes. This variation can influence individual and population-level fitness. In this thesis we first explore the background required to obtain analytical solutions and perform simulations of stochastic models of gene expression. Then we develop an algorithm for the stochastic simulation of gene expression and heterogeneous cell population dynamics. The algorithm combines an exact method to simulate molecular-level fluctuations in single cells and a constant-number Monte Carlo approach to simulate the statistical characteristics of growing cell populations. This approach permits biologically realistic and computationally feasible simulations of environment- and time-dependent cell population dynamics. The algorithm is benchmarked against steady-state and time-dependent analytical solutions of gene expression models, including scenarios where cell growth, division, and DNA replication are incorporated into the modelling framework. Furthermore, using the algorithm we compare the steady-state cell size distribution of a large cell population, grown from a small initial cell population undergoing stochastic and asymmetric division, to the size distribution of a small representative sample of this population simulated to steady-state. These comparisons demonstrate that the algorithm provides an accurate and efficient approach to modelling the effects of complex biological features on gene expression dynamics. The algorithm is also employed to simulate expression dynamics within 'bet-hedging' cell populations during their adaptation to environmental stress. 
These simulations indicate that the cell population dynamics algorithm provides a framework suitable for simulating and analyzing realistic models of heterogeneous population dynamics combining molecular-level stochastic reaction kinetics, relevant physiological details, and phenotypic variability and fitness.
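The constant-number Monte Carlo idea described in this abstract can be illustrated with a toy sketch (an illustration of the general principle only, not the algorithm developed in the thesis; the growth and division rules below are invented): whenever growth and division push the simulated ensemble past a fixed size, a uniform random subset of cells is retained, so the sample stays computationally bounded while remaining statistically representative of the exponentially expanding population.

```python
import random

def constant_number_resample(cells, n_target, rng):
    """Keep a uniform random sample of n_target cells, so the ensemble
    stays a fixed-size statistical representative of the growing population."""
    if len(cells) <= n_target:
        return list(cells)
    return rng.sample(cells, n_target)

def simulate(n_target=100, steps=20, seed=1):
    rng = random.Random(seed)
    cells = [1.0] * n_target                      # each cell represented by its size
    for _ in range(steps):
        grown = []
        for size in cells:
            size *= 1.0 + 0.2 * rng.random()      # stochastic growth
            if size >= 2.0:                       # division threshold
                f = 0.3 + 0.4 * rng.random()      # asymmetric division fraction
                grown.extend([size * f, size * (1.0 - f)])
            else:
                grown.append(size)
        cells = constant_number_resample(grown, n_target, rng)
    return cells

final_sizes = simulate()
```

After every step the ensemble is resampled back to exactly `n_target` cells, which is what keeps long simulations of growing populations feasible.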
APA, Harvard, Vancouver, ISO, and other styles
24

NOBILE, MARCO SALVATORE. "Evolutionary Inference of Biological Systems Accelerated on Graphics Processing Units." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/75434.

Full text
Abstract:
In silico analysis of biological systems represents a valuable alternative and complementary approach to experimental research. Computational methodologies, indeed, make it possible to mimic some conditions of cellular processes that might be difficult to dissect with traditional laboratory techniques, therefore potentially achieving a thorough comprehension of the molecular mechanisms that rule the functioning of cells and organisms. In spite of the benefits that it can bring to biology, the computational approach still has two main limitations: first, there is often a lack of adequate knowledge of the biological system of interest, which prevents the creation of a proper mathematical model able to produce faithful and quantitative predictions; second, the analysis of the model can require a massive number of simulations and calculations, which are computationally burdensome. The goal of the present thesis is to develop novel computational methodologies to efficiently tackle these two issues, at multiple scales of biological complexity (from single molecular structures to networks of biochemical reactions). The inference of the missing data — related to the three-dimensional structures of proteins, the number and type of chemical species and their mutual interactions, and the kinetic parameters — is performed by means of novel methods based on Evolutionary Computation and Swarm Intelligence techniques. General-purpose GPU computing has been adopted to reduce the computational time, achieving a relevant speedup with respect to the sequential execution of the same algorithms. The results presented in this thesis show that these novel evolutionary-based and GPU-accelerated methodologies are indeed feasible and advantageous in terms of both inference quality and computational performance.
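As a concrete illustration of the swarm-intelligence techniques the abstract refers to, here is a minimal particle swarm optimization (PSO) sketch that fits a single kinetic parameter to synthetic data. The model, the data, and all settings are invented for illustration; the thesis develops far richer variants and GPU acceleration, which are omitted here.

```python
import math
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal PSO: particles track their personal best and are
    attracted both to it and to the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# hypothetical inference task: recover the decay rate k=0.5 from y = exp(-k t)
data = [(t, math.exp(-0.5 * t)) for t in range(10)]
obj = lambda p: sum((math.exp(-p[0] * t) - y) ** 2 for t, y in data)
best, err = pso(obj, [(0.0, 2.0)])
```

The same swarm loop applies unchanged to higher-dimensional parameter vectors; only the objective (typically a simulation-versus-data discrepancy) changes.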
APA, Harvard, Vancouver, ISO, and other styles
25

Monzon, Eduardo. "An Algorithm to Recognize Multi-Stable Behavior From an Ensemble of Stochastic Simulation Runs." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/2035.

Full text
Abstract:
Synthetic biological designers are demanding tools to help with the design and verification process of new biological models. Some of the most common tools available aggregate multiple simulation results into one “clean” trajectory that hopefully is representative of the system’s behavior. However, for systems exhibiting multiple stable states, these techniques fail to show all the possible trajectories of the system. This work introduces a method capable of detecting the presence of more than one “typical” trajectory in a system, which can also be integrated with other available simulation tools.
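One simple way to flag the presence of more than one "typical" trajectory (a toy heuristic for intuition, not the recognition algorithm developed in this thesis) is to inspect the end-states of an ensemble of runs: if sorting them reveals a gap far wider than the typical spacing, the runs have settled into more than one basin.

```python
def count_modes(samples, gap_factor=3.0):
    """Crude multi-stability check on scalar end-states: split the sorted
    values at the largest gap and report two modes if that gap dwarfs
    the median spacing; otherwise report one."""
    xs = sorted(samples)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    if not gaps:
        return 1
    largest = max(gaps)
    typical = sorted(gaps)[len(gaps) // 2]   # median spacing
    return 2 if typical > 0 and largest > gap_factor * typical else 1

# synthetic end-states: one evenly spread cloud vs. two well-separated clouds
unimodal = [0.1 * i for i in range(50)]
bimodal = [0.01 * i for i in range(25)] + [5.0 + 0.01 * i for i in range(25)]
```

A production tool would of course use proper clustering on full trajectories rather than a single gap statistic, but the aggregation failure the abstract describes (averaging two basins into one meaningless "clean" trajectory) is exactly what this check guards against.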
APA, Harvard, Vancouver, ISO, and other styles
26

Boulianne, Laurier. "An algorithm and VLSI architecture for a stochastic particle based biological simulator." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96690.

Full text
Abstract:
With the recent progress in both computer technology and systems biology, it is now possible to simulate and visualise biological systems virtually. It is expected that realistic in silico simulations will enhance our understanding of biological processes and will promote the development of effective therapeutic treatments. Realistic biochemical simulators aim to improve our understanding of biological processes that could not otherwise be properly understood through experimental studies. This situation calls for increasingly accurate simulators that take into account not only the stochastic nature of biological systems, but also their spatial heterogeneity and the effects of molecular crowding. This thesis presents a novel particle-based stochastic biological simulator named GridCell. It also presents a novel VLSI architecture accelerating GridCell by between one and two orders of magnitude. GridCell is a three-dimensional simulation environment for investigating the behaviour of biochemical networks under a variety of spatial influences including crowding, recruitment and localisation. GridCell enables the tracking and characterisation of individual particles, leading to insights on the behaviour of low-copy-number molecules participating in signalling networks. The simulation space is divided into a discrete 3D grid that provides ideal support for particle collisions without distance calculations and particle searches. SBML support enables existing networks to be simulated and visualised. The user interface provides intuitive navigation that facilitates insights into species behaviour across spatial and temporal dimensions. Crowding effects on a Michaelis-Menten system are simulated and the results show they can have a huge impact on the effective rate of product formation. 
Tracking millions of particles is extremely computationally expensive and in order to run whole cells at the molecular resolution in less than 24 hours, a commonly expressed goal in systems biology, accelerating GridCell with parallel hardware is required. An FPGA architecture combining pipelining, parallel processing units and streaming is presented. The architecture is scalable to multiple FPGAs and the streaming approach ensures that the architecture scales well to very large systems. An architecture containing 25 processing units on each stage of the pipeline is synthesised on a single Virtex-6 XC6VLX760 FPGA device and a speedup of 76x over the serial implementation is achieved. This speedup reduces the gap between the complexity of cell simulation and the processing power of advanced simulators. Future work on GridCell could include support for highly complex compartment and high definition particles.
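The discrete-grid collision handling described above can be sketched in a few lines (a toy excluded-volume random walk, not GridCell's actual engine; grid size, particle count, and periodic boundaries are arbitrary choices here): particles occupy voxels of a periodic 3D grid, and a move is simply rejected when the target voxel is occupied, so crowding emerges without any distance calculation or neighbour search.

```python
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def diffusion_step(occupied, size, rng):
    """Each particle tries one random neighbouring voxel on a periodic
    grid and moves only if that voxel is free (excluded volume)."""
    current = set(occupied)
    for p in list(occupied):
        step = rng.choice(MOVES)
        q = tuple((c + d) % size for c, d in zip(p, step))
        if q not in current:          # target voxel free -> accept the move
            current.discard(p)
            current.add(q)
    return current

rng = random.Random(0)
size, n_particles = 10, 50
particles = set()
while len(particles) < n_particles:
    particles.add(tuple(rng.randrange(size) for _ in range(3)))
for _ in range(100):
    particles = diffusion_step(particles, size, rng)
```

Because occupancy is a set-membership test per voxel, each step is linear in the number of particles, which is the property that makes a hardware pipeline of the kind described above attractive.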
APA, Harvard, Vancouver, ISO, and other styles
27

Eid, Abdelrahman. "Stochastic simulations for graphs and machine learning." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I018.

Full text
Abstract:
While it is impractical to study the whole population in many domains and applications, sampling is a necessary method that allows information to be inferred. This thesis is dedicated to developing probabilistic sampling algorithms to infer properties of a population when it is too large or impossible to obtain. Markov chain Monte Carlo (MCMC) techniques are one of the most important tools for sampling from probability distributions, especially when these distributions have intractable normalization constants. The work of this thesis is mainly interested in graph sampling techniques. Two methods are presented in chapter 2 to sample uniform subtrees from graphs using Metropolis-Hastings algorithms. The proposed methods aim to sample trees according to a distribution from a graph where the vertices are labelled. The efficiency of these methods is proved mathematically. Additionally, simulation studies were conducted and confirmed the theoretical convergence results to the equilibrium distribution. Continuing the work on graph sampling, a method is presented in chapter 3 to sample sets of similar vertices in an arbitrary undirected graph using the properties of permanental point processes (PPPs). Our algorithm to sample sets of k vertices is designed to overcome the computational complexity of computing the permanent, by sampling a joint distribution whose marginal distribution is a kPPP. Finally, in chapter 4, we use MCMC methods and convergence-speed results to estimate the kernel bandwidth used for classification in supervised machine learning. A simple and fast method called KBER is presented to estimate the bandwidth of the radial basis function (RBF) kernel using the average Ricci curvature of graphs.
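The Metropolis-Hastings idea used in chapter 2 can be sketched in its simplest random-walk form (a generic one-dimensional illustration, not the subtree sampler of the thesis). Note that only an unnormalized density is needed, since the normalizing constant cancels in the acceptance ratio.

```python
import math
import random

def metropolis_hastings(log_target, x0, steps, step_size=1.0, seed=0):
    """Random-walk Metropolis sampler. log_target may omit the
    normalizing constant: it cancels in the acceptance ratio."""
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    samples = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, step_size)        # symmetric proposal
        lq = log_target(y)
        if math.log(rng.random()) < lq - lp:     # accept with prob min(1, q/p)
            x, lp = y, lq
        samples.append(x)
    return samples

# unnormalized standard normal: exp(-x^2/2), the 1/sqrt(2*pi) is omitted
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

After discarding a burn-in prefix, the empirical mean and variance of the chain approach those of the target, which is the convergence-to-equilibrium behaviour the abstract's simulation studies verify for the tree samplers.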
APA, Harvard, Vancouver, ISO, and other styles
28

Trávníček, Jan. "Tvorba spolehlivostních modelů pro pokročilé číslicové systémy." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236226.

Full text
Abstract:
This thesis deals with system reliability. First, the concept of reliability itself is discussed, together with the indicators that can express reliability quantitatively. The second chapter describes the different kinds of reliability models for simple and complex systems, and further describes the basic methods for the construction of reliability models. The fourth chapter is devoted to the very important Markov models, which are powerful and general models for calculating the reliability of advanced systems; their suitability for recoverable systems, which may contain absorbing states, is explained. The next chapter describes standby redundancy, discussing the advantages and disadvantages of static, dynamic and hybrid standby, and describing the influence of different load levels on service life. The sixth chapter is devoted to the implementation, with a description of the application and of the input file in XML format, and a discussion of the results obtained in experimental calculations.
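As a minimal illustration of the Markov reliability models discussed above (a generic two-state example with invented rates, not tied to the application in the thesis), consider a single recoverable unit with failure rate `lam` and repair rate `mu`. Its transient availability obeys the Kolmogorov forward equations and can be checked against the known closed-form solution.

```python
import math

def availability(lam, mu, t, dt=1e-4):
    """Transient availability of a repairable unit via Euler integration
    of the Kolmogorov forward equation:
        dP_up/dt = -lam * P_up + mu * P_down,  P_up(0) = 1."""
    p_up = 1.0
    for _ in range(int(t / dt)):
        p_up += dt * (-lam * p_up + mu * (1.0 - p_up))
    return p_up

lam, mu, t = 0.2, 1.0, 5.0
# closed form: A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t)
closed = mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)
approx = availability(lam, mu, t)
```

As t grows, both converge to the steady-state availability mu/(lam+mu); larger chains (with absorbing failure states) replace the scalar equation with a generator matrix but follow the same pattern.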
APA, Harvard, Vancouver, ISO, and other styles
29

Phi, Tien Cuong. "Décomposition de Kalikow pour des processus de comptage à intensité stochastique." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4029.

Full text
Abstract:
The goal of this thesis is to construct algorithms which are able to simulate the activity of a neural network. The activity of the neural network can be modeled by the spike train of each neuron, represented as a multivariate point process. Most of the known approaches to simulating point processes encounter difficulties when the underlying network is large. In this thesis, we propose new algorithms using a new type of Kalikow decomposition. In particular, we present an algorithm to simulate the behavior of one neuron embedded in an infinite neural network without simulating the whole network. We focus on mathematically proving that our algorithm returns the right point processes and on studying its stopping condition. Then, a constructive proof shows that this new decomposition holds for various point processes. Finally, we propose algorithms, which can be parallelized, that enable us to simulate a hundred thousand neurons in a complete interaction graph on a laptop computer. Most notably, the complexity of this algorithm appears linear with respect to the number of neurons to be simulated.
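A standard baseline for simulating point processes with a time-varying intensity, against which specialised schemes such as the Kalikow-based algorithms of this thesis are usually positioned, is thinning: candidates drawn from a dominating homogeneous Poisson process are accepted with probability intensity/bound. The sinusoidal intensity below is an invented example (history-dependent intensities, as for spike trains, additionally need a history-dependent bound).

```python
import math
import random

def thinning(intensity, bound, t_end, seed=0):
    """Simulate a point process on [0, t_end] by thinning: propose points
    from a homogeneous Poisson process of rate `bound` and accept each
    proposal at time t with probability intensity(t) / bound."""
    rng = random.Random(seed)
    t, points = 0.0, []
    while True:
        t += rng.expovariate(bound)       # next candidate
        if t > t_end:
            return points
        if rng.random() < intensity(t) / bound:
            points.append(t)              # candidate accepted

# inhomogeneous Poisson intensity, bounded above by 4
lam = lambda t: 2.0 + 2.0 * math.sin(t)
t_end = 2.0 * math.pi
counts = [len(thinning(lam, 4.0, t_end, seed=s)) for s in range(200)]
avg = sum(counts) / len(counts)   # expected count = integral of lam = 4*pi
```

The acceptance step is where naive approaches struggle at scale: the bound must dominate the conditional intensity of the whole network, which motivates decompositions that let a single neuron be simulated without touching the rest of the graph.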
APA, Harvard, Vancouver, ISO, and other styles
30

Gao, Guangyue. "A Stochastic Model for The Transmission Dynamics of Toxoplasma Gondii." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/78106.

Full text
Abstract:
Toxoplasma gondii (T. gondii) is an intracellular protozoan parasite. The parasite can infect all warm-blooded vertebrates. Up to 30% of the world's human population carry a Toxoplasma infection. However, the transmission dynamics of T. gondii are not well understood, although many mathematical models have been built. In this thesis, we adopt a complex life cycle model developed by Turner et al. and extend their work to include diffusion of hosts. Most research focuses on deterministic models. However, some scientists have reported that deterministic models are sometimes inaccurate or even inapplicable for describing reaction-diffusion systems, such as gene expression. In such cases a stochastic model might have qualitatively different properties than its deterministic limit. Consequently, we investigate the transmission pathways of T. gondii and potential control mechanisms using both deterministic and stochastic models. A stochastic algorithm due to Gillespie, based on the chemical master equation, is introduced. A compartment-based model and a Smoluchowski equation model are described to simulate the diffusion of hosts. The parameter analyses are conducted based on the reproduction number. The analyses based on the deterministic model are verified by stochastic simulation near the thresholds of the parameters.
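The Gillespie algorithm mentioned above can be sketched for a minimal birth-death example (an illustration of the direct method on an invented toy system, not the T. gondii model of the thesis): the waiting time is exponential with the total propensity as its rate, and the next reaction is chosen with probability proportional to its propensity.

```python
import random

def gillespie(rates, update, state, t_end, seed=0):
    """Gillespie direct method: exponential waiting times with rate equal
    to the total propensity, then a propensity-weighted reaction choice."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        props = rates(state)
        total = sum(props)
        if total == 0.0:
            return state                    # no reaction can fire
        t += rng.expovariate(total)
        if t > t_end:
            return state
        r, acc = rng.random() * total, 0.0
        for i, a in enumerate(props):
            acc += a
            if r < acc:
                state = update(state, i)
                break

# birth-death model of a molecule count: 0 -> X at rate k, X -> 0 at rate g*n
k, g = 10.0, 1.0
rates = lambda n: [k, g * n]
update = lambda n, i: n + 1 if i == 0 else n - 1
finals = [gillespie(rates, update, 0, 20.0, seed=s) for s in range(300)]
```

The stationary distribution of this toy model is Poisson with mean k/g = 10, which gives a quick correctness check for the sampler; the same loop structure applies to the multi-host transmission reactions of the thesis.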
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
31

Togawa, Kanali [Verfasser], Antonello [Akademischer Betreuer] Monti, and Albert [Akademischer Betreuer] Moser. "Stochastics based methods enabling testing of grid related algorithms through simulation / Kanali Togawa ; Antonello Monti, Albert Moser." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/1130792269/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Togawa, Kanali [Verfasser], Antonello [Akademischer Betreuer] Monti, and Albert [Akademischer Betreuer] Moser. "Stochastics based methods enabling testing of grid related algorithms through simulation / Kanali Togawa ; Antonello Monti, Albert Moser." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://nbn-resolving.de/urn:nbn:de:hbz:82-rwth-2015-038861.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ananthanpillai, Balaji. "Stochastic Simulation of the Phage Lambda System and the Bioluminescence System Using the Next Reaction Method." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1259080814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bachouch, Achref. "Numerical Computations for Backward Doubly Stochastic Differential Equations and Nonlinear Stochastic PDEs." Thesis, Le Mans, 2014. http://www.theses.fr/2014LEMA1034/document.

Full text
Abstract:
The purpose of this thesis is to study a numerical method for backward doubly stochastic differential equations (BDSDEs for short). In the last two decades, several methods have been proposed to approximate the solutions of standard backward stochastic differential equations. In this thesis, we propose an extension of one of these methods to the doubly stochastic framework. Our numerical method allows us to tackle a large class of nonlinear stochastic partial differential equations (SPDEs for short), thanks to their probabilistic interpretation. In the last part, we study a new particle method in the context of shielding studies.
APA, Harvard, Vancouver, ISO, and other styles
35

Ahn, Tae-Hyuk. "Computational Techniques for the Analysis of Large Scale Biological Systems." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77162.

Full text
Abstract:
An accelerated pace of discovery in biological sciences is made possible by a new generation of computational biology and bioinformatics tools. In this dissertation we develop novel computational, analytical, and high performance simulation techniques for biological problems, with applications to the yeast cell division cycle, and to the RNA-Sequencing of the yellow fever mosquito. The cell cycle system exhibits stochastic effects when small numbers of molecules react with each other. Consequently, the stochastic effects of the cell cycle are important, and the evolution of cells is best described statistically. The stochastic simulation algorithm (SSA), the standard stochastic method for chemical kinetics, is often slow because it accounts for every individual reaction event. This work develops a stochastic version of a deterministic cell cycle model, in order to capture the stochastic aspects of the evolution of budding yeast wild-type and mutant strain cells. In order to efficiently run large ensembles to compute statistics of cell evolution, the dissertation investigates parallel simulation strategies, and presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms. This work also proposes new accelerated stochastic simulation algorithms based on a fully implicit approach and on stochastic Taylor expansions. Next Generation RNA-Sequencing, a high-throughput technology to sequence cDNA in order to get information about a sample's RNA content, is becoming an efficient genomic approach to uncover new genes and to study gene expression and alternative splicing. This dissertation develops efficient algorithms and strategies to find new genes in Aedes aegypti, which is the most important vector of dengue fever and yellow fever. We report the discovery of a large number of new gene transcripts, and the identification and characterization of genes that showed male-biased expression profiles. 
This basic information may open important avenues to control mosquito borne infectious diseases.
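Running large ensembles, as discussed in the abstract above, is naturally parallel. The sketch below distributes independent runs over a worker pool (a schematic illustration only: the placeholder random-walk dynamics and the thread pool stand in for the thesis's cell-cycle simulations and its dynamic load balancing; CPU-bound Python code would need processes rather than threads for a real speedup).

```python
import random
from concurrent.futures import ThreadPoolExecutor

def one_run(seed):
    """One stochastic realization (placeholder dynamics): the end
    position of a 1000-step symmetric random walk."""
    rng = random.Random(seed)
    x = 0
    for _ in range(1000):
        x += 1 if rng.random() < 0.5 else -1
    return x

def ensemble(n_runs, workers=4):
    # the executor hands runs to idle workers as they finish, a simple
    # form of dynamic load balancing across the ensemble
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(one_run, range(n_runs)))
    return sum(results) / n_runs, results

mean, results = ensemble(200)
```

Because each run takes its seed from the task index, results are reproducible regardless of which worker executes which run, a property worth preserving in any parallel SSA ensemble.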
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
36

Toft, Albin. "Particle-based Parameter Inference in Stochastic Volatility Models: Batch vs. Online." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252313.

Full text
Abstract:
This thesis focuses on comparing an online parameter estimator to an offline estimator, both based on the PaRIS algorithm, when estimating parameter values for a stochastic volatility model. By formulating the stochastic volatility model as a hidden Markov model, estimators based on particle filters can be implemented in order to estimate the unknown parameters of the model. The results of this thesis imply that the proposed online estimator can be considered superior to its offline counterpart. The results are however somewhat inconclusive, and further research on the subject is recommended.
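The particle-filter machinery underlying both estimators can be illustrated with a bootstrap filter for a standard stochastic volatility hidden Markov model (a generic sketch with invented parameters; PaRIS itself adds online smoothing on top of such a filter and is not reproduced here). The filter's average-weight products yield an unbiased likelihood estimate, which is what parameter estimation builds on.

```python
import math
import random

def bootstrap_loglik(ys, phi, sigma, beta, n_part=500, seed=0):
    """Bootstrap particle filter log-likelihood for the SV model
    X_t = phi*X_{t-1} + sigma*V_t (latent log-volatility),
    Y_t ~ N(0, beta^2 * exp(X_t))."""
    rng = random.Random(seed)
    sd0 = sigma / math.sqrt(1.0 - phi ** 2)         # stationary std of X
    xs = [rng.gauss(0.0, sd0) for _ in range(n_part)]
    loglik = 0.0
    for y in ys:
        xs = [phi * x + rng.gauss(0.0, sigma) for x in xs]     # propagate
        ws = [math.exp(-0.5 * y * y / (beta * beta * math.exp(x))
                       - 0.5 * (x + math.log(2.0 * math.pi * beta * beta)))
              for x in xs]                                      # weight
        loglik += math.log(sum(ws) / n_part)
        xs = rng.choices(xs, weights=ws, k=n_part)              # resample
    return loglik

# synthetic data from the model with (invented) true parameters
rng = random.Random(42)
phi, sigma, beta = 0.95, 0.3, 1.0
x, ys = 0.0, []
for _ in range(200):
    x = phi * x + rng.gauss(0.0, sigma)
    ys.append(beta * math.exp(x / 2.0) * rng.gauss(0.0, 1.0))

ll_true = bootstrap_loglik(ys, phi, sigma, beta)
ll_wrong = bootstrap_loglik(ys, phi, sigma, 2.0 * beta)   # mis-specified beta
```

A mis-specified parameter should produce a markedly lower estimated log-likelihood, which is the signal both the batch and the online estimators exploit.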
APA, Harvard, Vancouver, ISO, and other styles
37

MAJ, CARLO. "Sensitivity analysis for computational models of biochemical systems." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50494.

Full text
Abstract:
Systems biology is an integrative area of science which aims at the analysis of biochemical systems from a holistic perspective. In this context, sensitivity analysis, a technique that studies how the output variation of a computational model can be attributed to its inputs, plays a pivotal role. The thesis describes how to properly apply the different sensitivity analysis techniques according to the specific case study (i.e., continuous deterministic versus discrete stochastic output). Moreover, we explicitly consider aspects that have often been neglected in the analysis of computational biochemical models; among others, we propose an exploratory analysis of spatial effects in diffusion processes in crowded environments. Furthermore, we developed an innovative pipeline for partitioning the input factor space according to the different qualitative dynamics that a model may attain (focusing on steady-state and oscillatory behavior). Finally, we describe different implementation methods for reducing the computational time required to perform sensitivity analysis, by evaluating distributed and parallel approaches to model simulation.
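A minimal screening-type sensitivity analysis (a Morris-style sketch on an invented toy model, far simpler than the variance-based and case-specific techniques treated in the thesis) perturbs one input factor at a time and averages the absolute output change over random base points, ranking factors by influence.

```python
import random

def elementary_effects(model, bounds, delta=0.1, n_repeats=50, seed=0):
    """Morris-style screening: mean absolute output change when each
    factor is stepped by delta (in normalized units), with the other
    factors held at random base points."""
    rng = random.Random(seed)
    k = len(bounds)
    effects = [0.0] * k
    for _ in range(n_repeats):
        base = [rng.uniform(lo, hi) for lo, hi in bounds]
        y0 = model(base)
        for i, (lo, hi) in enumerate(bounds):
            pert = base[:]
            pert[i] = min(base[i] + delta * (hi - lo), hi)
            effects[i] += abs(model(pert) - y0) / n_repeats
    return effects

# toy model: strong dependence on x0, weak on x1, none on x2
model = lambda x: 10.0 * x[0] + 0.5 * x[1] ** 2
mu = elementary_effects(model, [(0.0, 1.0)] * 3)
```

In a biochemical setting `model` would wrap a (possibly stochastic) simulation, in which case each effect must additionally be averaged over simulation noise, one of the deterministic-versus-stochastic distinctions the thesis addresses.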
APA, Harvard, Vancouver, ISO, and other styles
38

Botha, Marthinus Ignatius. "Modelling and simulation framework incorporating redundancy and failure probabilities for evaluation of a modular automated main distribution frame." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33345.

Full text
Abstract:
Maintaining and operating manual main distribution frames is labour-intensive. As a result, Automated Main Distribution Frames (AMDFs) have been developed to alleviate the task of maintaining subscriber loops. Commercial AMDFs are currently employed in telephone exchanges in some parts of the world. However, the most significant factors limiting their widespread adoption are cost-effective scalability and reliability. This provides a compelling incentive to create a simulation framework in order to explore typical implementations and scenarios. Such a framework allows the evaluation and optimisation of a design in terms of both internal and external redundancies. One approach to improving system performance, such as system reliability, is to allocate optimal redundancy to all or some components in a system. Redundancy at the system or component level can be implemented in one of two schemes: parallel redundancy or standby redundancy. It is also possible to mix these schemes for various components. Moreover, the redundant elements may or may not be of the same type. If all the redundant elements are of different types, the redundancy optimisation model is implemented with component mixing; conversely, if all the redundant components are identical, the model is implemented without component mixing. The developed framework can be used both to develop new AMDF architectures and to evaluate existing AMDF architectures in terms of expected lifetimes, reliability and service availability. Two simulation models are presented. The first simulation model is concerned with optimising central office equipment within a telephone exchange and entails an environment of clients utilising services. Currently, such a model does not exist. The second model is a mathematical model incorporating stochastic simulation and a hybrid intelligent evolutionary algorithm to solve redundancy allocation problems. 
For the first model, the optimal partitioning of the model is determined to speed up the simulation run efficiently. For the second model, the hybrid intelligent algorithm is used to solve the redundancy allocation problem under various constraints. Finally, a candidate concept design of an AMDF is presented and evaluated with both simulation models.
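The difference between the parallel and standby schemes described above can be checked with a tiny Monte Carlo sketch (exponential unit lifetimes and an ideal switch are simplifying assumptions; this is not the thesis's hybrid algorithm): with two identical exponential units of rate 1, hot-parallel redundancy has mean system lifetime 1 + 1/2 = 1.5, while ideal cold standby has mean lifetime 2.

```python
import random

def lifetime_parallel(rates, rng):
    """Parallel (hot) redundancy: all units run from t=0 and the
    system fails when the last unit fails."""
    return max(rng.expovariate(r) for r in rates)

def lifetime_standby(rates, rng):
    """Cold standby with an ideal switch: units are activated one
    after another, so their lifetimes add."""
    return sum(rng.expovariate(r) for r in rates)

def mean_lifetime(sim, rates, n=20000, seed=0):
    rng = random.Random(seed)
    return sum(sim(rates, rng) for _ in range(n)) / n

rates = [1.0, 1.0]   # two identical units, unit failure rate 1
hot = mean_lifetime(lifetime_parallel, rates)
cold = mean_lifetime(lifetime_standby, rates)
```

Real standby units also fail while dormant and switches are imperfect, which shrinks the standby advantage; those effects are exactly what a redundancy allocation optimiser must weigh against cost.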
Dissertation (MEng)--University of Pretoria, 2013.
Electrical, Electronic and Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Reutenauer, Victor. "Algorithmes stochastiques pour la gestion du risque et l'indexation de bases de données de média." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4018/document.

Full text
Abstract:
Cette thèse s’intéresse à différents problèmes de contrôle et d’optimisation dont il n’existe à ce jour que des solutions approchées. D’une part nous nous intéressons à des techniques visant à réduire ou supprimer les approximations pour obtenir des solutions plus précises voire exactes. D’autre part nous développons de nouvelles méthodes d’approximation pour traiter plus rapidement des problèmes à plus grande échelle. Nous étudions des méthodes numériques de simulation d’équation différentielle stochastique et d’amélioration de calculs d’espérance. Nous mettons en œuvre des techniques de type quantification pour la construction de variables de contrôle ainsi que la méthode de gradient stochastique pour la résolution de problèmes de contrôle stochastique. Nous nous intéressons aussi aux méthodes de clustering liées à la quantification, ainsi qu’à la compression d’information par réseaux neuronaux. Les problèmes étudiés sont issus non seulement de motivations financières, comme le contrôle stochastique pour la couverture d’option en marché incomplet mais aussi du traitement des grandes bases de données de médias communément appelé Big data dans le chapitre 5. Théoriquement, nous proposons différentes majorations de la convergence des méthodes numériques d’une part pour la recherche d’une stratégie optimale de couverture en marché incomplet dans le chapitre 3, d’autre part pour l’extension la technique de Beskos-Roberts de simulation d’équation différentielle dans le chapitre 4. Nous présentons une utilisation originale de la décomposition de Karhunen-Loève pour une réduction de variance de l’estimateur d’espérance dans le chapitre 2
This thesis addresses various stochastic control and optimization problems that can currently only be solved approximately. On the one hand, we develop methods that reduce or remove approximations in order to obtain more accurate, or even exact, solutions. On the other hand, we develop new approximation methods to solve larger-scale problems more quickly. We study numerical methods for simulating stochastic differential equations and for improving the computation of expectations. We develop quantization-based methods for building control variates, and stochastic gradient methods for solving stochastic control problems. We are also interested in clustering methods related to quantization, as well as in principal component analysis and data compression with neural networks. The problems studied are motivated by mathematical finance, such as stochastic control for hedging derivatives in incomplete markets, but also by the management of large media databases, commonly known as Big Data, in chapter 5. Theoretically, we establish upper bounds on the convergence of the numerical methods used: for optimal hedging in incomplete markets in chapter 3, and for an extension of the Beskos-Roberts method for exact simulation of stochastic differential equations in chapter 4. We present an original application of the Karhunen-Loève decomposition to a control variate for the computation of expectations in chapter 2
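The control-variate technique mentioned in this abstract can be illustrated with a toy example (hypothetical code, unrelated to the quantization-based constructions of the thesis): estimating E[e^Z] for standard normal Z, using Z itself, whose mean is known exactly, as the control variate.

```python
import math
import random
import statistics

def estimate_with_control_variate(n=20000, beta=1.6, seed=0):
    # Plain Monte Carlo for E[exp(Z)], Z ~ N(0,1), compared with the
    # control-variate estimator f - beta * (g - E[g]) where g = Z, E[g] = 0.
    # The optimal beta is Cov(f, g) / Var(g) = e^{1/2} ~ 1.65; 1.6 is close.
    rng = random.Random(seed)
    plain, controlled = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        f = math.exp(z)
        plain.append(f)
        controlled.append(f - beta * z)
    return (statistics.mean(plain), statistics.variance(plain),
            statistics.mean(controlled), statistics.variance(controlled))
```

Both estimators target E[e^Z] = e^{1/2} ≈ 1.649, but the controlled one has a variance several times smaller for the same number of samples.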
40

Zhang, Jingwei. "Numerical Methods for the Chemical Master Equation." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/30018.

Full text
Abstract:
The chemical master equation, formulated on the Markov assumption of underlying chemical kinetics, offers an accurate stochastic description of general chemical reaction systems on the mesoscopic scale. The chemical master equation is especially useful when formulating mathematical models of gene regulatory networks and protein-protein interaction networks, where the numbers of molecules of most species are around tens or hundreds. However, solving the master equation directly suffers from the so-called "curse of dimensionality". This thesis first studies the numerical properties of the master equation using existing numerical methods and parallel machines. Next, approximation algorithms, namely the adaptive aggregation method and the radial basis function collocation method, are proposed as new paths to resolve the "curse of dimensionality". Several numerical results are presented to illustrate the promises and potential problems of these new algorithms. Comparisons with other numerical methods like Monte Carlo methods are also included. Development and analysis of the linear Shepard algorithm and its variants, all of which could be used for high dimensional scattered data interpolation problems, are also included here, as a candidate to help solve the master equation by building surrogate models in high dimensions.
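As a point of comparison for the Monte Carlo methods mentioned above, Gillespie's direct-method stochastic simulation algorithm can be sketched for a simple immigration-death system (an illustrative model, not one analysed in the thesis):

```python
import random

def gillespie_immigration_death(k, gamma, t_end, seed=0):
    # Direct-method SSA for the two-reaction system
    #   0 -> X  at rate k          (immigration)
    #   X -> 0  at rate gamma * x  (first-order decay)
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a_birth = k
        a_death = gamma * x
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)      # exponential time to next reaction
        if t >= t_end:
            return x
        if rng.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
```

The stationary distribution of this system is Poisson with mean k/gamma, which gives a quick sanity check on the sampler.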
Ph. D.
41

Tychonievich, Luther A. "Simulation and Visualization of Environments with Multidimensional Time." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2266.pdf.

Full text
42

Cerqueira, Andressa. "Statistical inference on random graphs and networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-04042018-094802/.

Full text
Abstract:
In this thesis we study two probabilistic models defined on graphs: the Stochastic Block Model and the Exponential Random Graph Model. The thesis is accordingly divided into two parts. In the first part, we introduce the Krichevsky-Trofimov estimator for the number of communities in the Stochastic Block Model and prove its eventual almost sure convergence to the underlying number of communities, without assuming a known upper bound on that quantity. In the second part of this thesis we address the perfect simulation problem for the Exponential Random Graph Model. We propose an algorithm based on the Coupling From The Past algorithm using a Glauber dynamics. This algorithm is efficient in the case of monotone models, and we prove that this is the case for a subset of the parametric space. We also propose an algorithm based on the Backward and Forward algorithm that can be applied to both monotone and non-monotone models. We prove the existence of an upper bound for the expected running time of both algorithms.
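The Coupling From The Past construction for a monotone dynamics can be sketched on a toy Ehrenfest-type walk (an illustrative chain, not the Exponential Random Graph dynamics of the thesis): the maximal and minimal states are run from ever further in the past with shared randomness until they coalesce, and the common value is then an exact draw from the stationary distribution.

```python
import random

def monotone_step(x, u, N):
    # One step of a lazy Ehrenfest-type walk on {0, ..., N}, driven by a
    # shared uniform u. For fixed u the map is monotone in x, which is the
    # property CFTP exploits to track only the two extreme states.
    if u < 0.5 * (N - x) / N:
        return x + 1
    if u > 1.0 - 0.5 * x / N:
        return x - 1
    return x

def cftp(N, seed=0):
    rng = random.Random(seed)
    us = []                              # us[j] drives the update at time -(j + 1)
    T = 1
    while True:
        while len(us) < T:               # extend the randomness into the past
            us.append(rng.random())
        lo, hi = 0, N                    # minimal and maximal states at time -T
        for j in range(T - 1, -1, -1):   # run from time -T up to time 0
            lo = monotone_step(lo, us[j], N)
            hi = monotone_step(hi, us[j], N)
        if lo == hi:
            return lo                    # exact sample from stationarity
        T *= 2                           # go further into the past and retry
```

The stationary law of this walk is Binomial(N, 1/2), so repeated calls with different seeds should average to N/2.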
Nessa tese estudamos dois modelos probabilísticos definidos em grafos: o modelo estocástico por blocos e o modelo de grafos exponenciais. Dessa forma, essa tese está dividida em duas partes. Na primeira parte nós propomos um estimador penalizado baseado na mistura de Krichevsky-Trofimov para o número de comunidades do modelo estocástico por blocos e provamos sua convergência quase certa sem considerar um limitante conhecido para o número de comunidades. Na segunda parte dessa tese nós abordamos o problema de simulação perfeita para o modelo de grafos aleatórios Exponenciais. Nós propomos um algoritmo de simulação perfeita baseado no algoritmo Coupling From the Past usando a dinâmica de Glauber. Esse algoritmo é eficiente apenas no caso em que o modelo é monotóno e nós provamos que esse é o caso para um subconjunto do espaço paramétrico. Nós também propomos um algoritmo de simulação perfeita baseado no algoritmo Backward and Forward que pode ser aplicado à modelos monótonos e não monótonos. Nós provamos a existência de um limitante superior para o número esperado de passos de ambos os algoritmos.
43

Sbaï, Mohamed. "Modélisation de la dépendance et simulation de processus en finance." Thesis, Paris Est, 2009. http://www.theses.fr/2009PEST1046/document.

Full text
Abstract:
La première partie de cette thèse est consacrée aux méthodes numériques pour la simulation de processus aléatoires définis par des équations différentielles stochastiques (EDS). Nous commençons par l’étude de l’algorithme de Beskos et al. [13] qui permet de simuler exactement les trajectoires d’un processus solution d’une EDS en dimension 1. Nous en proposons une extension à des fins de calcul exact d’espérances et nous étudions l’application de ces idées à l’évaluation du prix d’options asiatiques dans le modèle de Black & Scholes. Nous nous intéressons ensuite aux schémas numériques. Dans le deuxième chapitre, nous proposons deux schémas de discrétisation pour une famille de modèles à volatilité stochastique et nous en étudions les propriétés de convergence. Le premier schéma est adapté à l’évaluation du prix d’options path-dependent et le deuxième aux options vanilles. Nous étudions également le cas particulier où le processus qui dirige la volatilité est un processus d’Ornstein-Uhlenbeck et nous exhibons un schéma de discrétisation qui possède de meilleures propriétés de convergence. Enfin, dans le troisième chapitre, il est question de la convergence faible trajectorielle du schéma d’Euler. Nous apportons un début de réponse en contrôlant la distance de Wasserstein entre les marginales du processus solution et du schéma d’Euler, uniformément en temps. La deuxième partie de la thèse porte sur la modélisation de la dépendance en finance et ce à travers deux problématiques distinctes : la modélisation jointe entre un indice boursier et les actions qui le composent et la gestion du risque de défaut dans les portefeuilles de crédit. Dans le quatrième chapitre, nous proposons un cadre de modélisation original dans lequel les volatilités de l’indice et de ses composantes sont reliées. 
Nous obtenons un modèle simplifié quand la taille de l’indice est grande, dans lequel l’indice suit un modèle à volatilité locale et les actions individuelles suivent un modèle à volatilité stochastique composé d’une partie intrinsèque et d’une partie commune dirigée par l’indice. Nous étudions la calibration de ces modèles et montrons qu’il est possible de se caler sur les prix d’options observés sur le marché, à la fois pour l’indice et pour les actions, ce qui constitue un avantage considérable. Enfin, dans le dernier chapitre de la thèse, nous développons un modèle à intensités permettant de modéliser simultanément, et de manière consistante, toutes les transitions de ratings qui surviennent dans un grand portefeuille de crédit. Afin de générer des niveaux de dépendance plus élevés, nous introduisons le modèle dynamic frailty dans lequel une variable dynamique inobservable agit de manière multiplicative sur les intensités de transitions. Notre approche est purement historique et nous étudions l’estimation par maximum de vraisemblance des paramètres de nos modèles sur la base de données de transitions de ratings passées
The first part of this thesis deals with probabilistic numerical methods for simulating the solution of a stochastic differential equation (SDE). We start with the algorithm of Beskos et al. [13], which allows exact simulation of the solution of a one-dimensional SDE. We present an extension for the exact computation of expectations and we study the application of these techniques to the pricing of Asian options in the Black & Scholes model. Then, in the second chapter, we propose and study the convergence of two discretization schemes for a family of stochastic volatility models. The first one is well adapted to the pricing of vanilla options and the second one is efficient for the pricing of path-dependent options. We also study the particular case of an Ornstein-Uhlenbeck process driving the volatility and exhibit a third discretization scheme which has better convergence properties. Finally, in the third chapter, we tackle the trajectorial weak convergence of the Euler scheme by providing a simple proof for the estimation of the Wasserstein distance between the solution and its Euler scheme, uniformly in time. The second part of the thesis is dedicated to the modelling of dependence in finance through two examples: the joint modelling of an index together with its composing stocks, and intensity-based credit portfolio models. In the fourth chapter, we propose a new modelling framework in which the volatility of an index and the volatilities of its composing stocks are connected. When the number of stocks is large, we obtain a simplified model consisting of a local volatility model for the index and a stochastic volatility model for the stocks, composed of an intrinsic part and a systemic part driven by the index. We study the calibration of these models and show that it is possible to fit the market prices of both the index and the stocks. Finally, in the last chapter of the thesis, we define an intensity-based credit portfolio model.
In order to obtain stronger dependence between rating transitions, we extend it by introducing an unobservable random process (frailty) which acts multiplicatively on the intensities of the firms in the portfolio. Our approach is fully historical and we estimate the parameters of our model from past rating transitions using maximum likelihood techniques
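The Euler scheme discussed in this abstract can be sketched for a geometric Brownian motion (a standard test case, not code from the thesis):

```python
import math
import random

def euler_gbm(s0, mu, sigma, T, steps, rng):
    # Euler-Maruyama discretization of dS = mu * S dt + sigma * S dW.
    dt = T / steps
    s = s0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        s += mu * s * dt + sigma * s * dw
    return s

def mc_terminal_mean(n=20000, seed=42):
    # Monte Carlo average of the Euler-discretized terminal value.
    rng = random.Random(seed)
    total = sum(euler_gbm(1.0, 0.05, 0.2, 1.0, 50, rng) for _ in range(n))
    return total / n
```

For GBM the exact terminal mean is s0 * e^{mu*T} ≈ 1.0513 with these parameters, so the Monte Carlo average over Euler paths should sit close to it, with both the discretization bias and the statistical error shrinking as steps and n grow.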
44

Shabala, Alexander. "Mathematical modelling of oncolytic virotherapy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:cca2c9bc-cbd4-4651-9b59-8a4dea7245d1.

Full text
Abstract:
This thesis is concerned with mathematical modelling of oncolytic virotherapy: the use of genetically modified viruses to selectively spread, replicate and destroy cancerous cells in solid tumours. Traditional spatially-dependent modelling approaches have previously assumed that virus spread is due to viral diffusion in solid tumours, and also neglect the time delay introduced by the lytic cycle for viral replication within host cells. A deterministic, age-structured reaction-diffusion model is developed for the spatially-dependent interactions of uninfected cells, infected cells and virus particles, with the spread of virus particles facilitated by infected cell motility and delay. Evidence of travelling wave behaviour is shown, and an asymptotic approximation for the wave speed is derived as a function of key parameters. Next, the same physical assumptions as in the continuum model are used to develop an equivalent discrete, probabilistic model that is valid in the limit of low particle concentrations. This mesoscopic, compartment-based model is then validated against known test cases, and it is shown that the localised nature of infected cell bursts leads to inconsistencies between the discrete and continuum models. The qualitative behaviour of this stochastic model is then analysed for a range of key experimentally-controllable parameters. Two-dimensional simulations of in vivo and in vitro therapies are then analysed to determine the effects of virus burst size, length of lytic cycle, infected cell motility, and initial viral distribution on the wave speed, consistency of results and overall success of therapy. Finally, the experimental difficulty of measuring the effective motility of cells is addressed by considering effective medium approximations of diffusion through heterogeneous tumours. 
Considering an idealised tumour consisting of periodic obstacles in free space, a two-scale homogenisation technique is used to show the effects of obstacle shape on the effective diffusivity. A novel method for calculating the effective continuum behaviour of random walks on lattices is then developed for the limiting case where microscopic interactions are discrete.
45

Charlebois, Daniel. "Computational Investigations of Noise-mediated Cell Population Dynamics." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/30339.

Full text
Abstract:
Fluctuations, or "noise", can play a key role in determining the behaviour of living systems. The molecular-level fluctuations that occur in genetic networks are of particular importance. Here, noisy gene expression can result in genetically identical cells displaying significant variation in phenotype, even in identical environments. This variation can act as a basis for natural selection and provide a fitness benefit to cell populations under stress. This thesis focuses on the development of new conceptual knowledge about how gene expression noise and gene network topology influence drug resistance, as well as new simulation techniques to better understand cell population dynamics. Network topology may at first seem disconnected from expression noise, but genes in a network regulate each other through their expression products. The topology of a genetic network can thus amplify or attenuate noisy inputs from the environment and influence the expression characteristics of genes serving as outputs to the network. The main body of the thesis consists of five chapters: 1. A published review article on the physical basis of cellular individuality. 2. A published article presenting a novel method for simulating the dynamics of cell populations. 3. A chapter on modeling and simulating replicative aging and competition using an object-oriented framework. 4. A published research article establishing that noise in gene expression can facilitate adaptation and drug resistance independent of mutation. 5. An article submitted for publication demonstrating that gene network topology can affect the development of drug resistance. These chapters are preceded by a comprehensive introduction that covers essential concepts and theories relevant to the work presented.
46

Silva, Camillo de Lellis Falcão da. "Novos algoritmos de simulação estocástica com atraso para redes gênicas." Universidade Federal de Juiz de Fora (UFJF), 2014. https://repositorio.ufjf.br/jspui/handle/ufjf/4828.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Atualmente, a eficiência dos algoritmos de simulação estocástica para a simulação de redes de regulação gênica (RRG) tem motivado diversos trabalhos científicos. O interesse por tais algoritmos deve-se ao fato de as novas tecnologias em biologia celular — às vezes chamadas de tecnologias de alto rendimento (high throughput technology cell biology) — te-rem mostrado que a expressão gênica é um processo estocástico. Em RRG com atrasos, os algoritmos para simulação estocástica existentes possuem problemas — como crescimento linear da complexidade assintótica, descarte excessivo de números aleatórios durante a si-mulação e grande complexidade de codificação em linguagens de programação — que podem resultar em um baixo desempenho em relação ao tempo de processamento de simulação de uma RRG. Este trabalho apresenta um algoritmo para simulação estocástica que foi chamado de método da próxima reação simplificado (SNRM). Esse algoritmo mostrou-se mais eficiente que as outras abordagens existentes para simulações estocásticas realizadas com as RRGs com atrasos. Além do SNRM, um novo grafo de dependências para reações com atrasos também é apresentado. A utilização desse novo grafo, que foi nomeado de delayed dependency graph (DDG), aumentou consideravelmente a eficiência de todas as versões dos algoritmos de simulação estocástica com atrasos apresentados nesse trabalho. Finalmente, uma estrutura de dados que recebeu o nome de lista ordenada por hashing é utilizada para tratar a lista de produtos em espera em simulações de RRGs com atrasos. Essa estrutura de dados também se mostrou mais eficiente que uma heap em todas as simulações testadas. Com todas as melhorias mencionadas, este trabalho apresenta um conjunto de estratégias que contribui de forma efetiva para o desempenho dos algoritmos de simulação estocástica com atrasos de redes de regulação gênica.
Recently, the time efficiency of stochastic simulation algorithms for gene regulatory networks (GRN) has motivated several scientific works. Interest in such algorithms stems from the fact that new high-throughput technologies in cell biology have shown that gene expression is a stochastic process. For GRN with delays, the existing stochastic simulation algorithms have drawbacks, such as linear growth of asymptotic complexity, excessive discarding of random numbers during simulation, and implementations that are hard to code, which result in poor performance when simulating very large GRN. This work presents a stochastic simulation algorithm called the simplified next reaction method (SNRM), which proved more efficient than the other existing algorithms for stochastically simulating GRN with delays. Besides the SNRM, a new dependency graph for delayed reactions, named the delayed dependency graph (DDG), is also presented; its use greatly increased the efficiency of all versions of the delayed stochastic simulation algorithms presented in this work. Finally, a data structure named the hashing sorted list is used to handle the waiting list of products in simulations of GRN with delays; it too was more efficient than a heap in all tested simulations. With all these improvements, this work presents a set of strategies that effectively increases the performance of delayed stochastic simulation algorithms for gene regulatory networks.
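The role of a waiting list for delayed products can be made concrete with a minimal delayed-SSA sketch (illustrative code in the spirit of the direct method with delays; it is neither the SNRM nor the hashing sorted list of the dissertation), for a single reaction that delivers its product only after a fixed delay:

```python
import heapq
import random

def delayed_ssa(k_prod, delay, t_end, seed=0):
    # A reaction fires at constant rate k_prod; each firing schedules one
    # molecule of product B to appear `delay` time units later. Pending
    # completions sit in a waiting list (a min-heap keyed by release time).
    rng = random.Random(seed)
    t, b = 0.0, 0
    waiting = []
    while True:
        t_next = t + rng.expovariate(k_prod)
        # Release every delayed product scheduled before the next event
        # (or before the end of the simulation horizon).
        while waiting and waiting[0] <= min(t_next, t_end):
            heapq.heappop(waiting)
            b += 1
        if t_next >= t_end:
            return b
        t = t_next
        heapq.heappush(waiting, t + delay)
```

Initiations form a Poisson process of rate k_prod, so for t_end > delay the expected product count at time t_end is k_prod * (t_end - delay), which gives a simple check on the bookkeeping.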
47

Toledo, Augusto Andres Torres. "Desenho de polígonos e sequenciamento de blocos de minério para planejamento de curto prazo procurando estacionarização dos teores." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/180128.

Full text
Abstract:
O planejamento de curto prazo em minas a céu aberto exige a definição de poligonais, que representam os sucessivos avanços de lavra. As poligonais, tradicionalmente, são desenhadas em um processo laborioso na tentativa de delinear como minério em qualidade e quantidade de acordo com os limites determinados. O minério delimitado deve apresentar a menor variabilidade em qualidade possível, com o objetivo de maximizar a recuperação na usina de processamento. Essa dissertação visa desenvolver um fluxo do trabalho para definir poligonais de curto prazo de forma automática, além disso, sequenciar todos os blocos de minério de cada polígono de modo a definir uma sequência interconectada lavrável de poligonais. O fluxo do trabalho foi aplicada à incerteza de teores, obtida através de simulações estocásticas. Algoritmos genéticos foram desenvolvidos em linguagem de programação Python e implementados na forma de plug-in no software geoestatístico Ar2GeMS. Múltiplas iterações são criadas para cada avanço individual, gerando regiões (ou poligonais). Então, a região que apresenta menor variabilidade de teores é selecionada. A distribuição de probabilidade dos teores dos blocos em cada avanço é comparada com a distribuição global de teores, calculada a partir de todos os blocos do corpo de minério. Os resultados mostraram que os teores dos blocos abrangidos pelas poligonais criadas dessa forma apresentam teores similares à distribuição de referência, permitindo o sequenciamento de lavra com distribuição de teores mais próximo possível da distribuição global. Modelos equiprováveis permitem avaliar a incerteza associada à solução proposta.
Open-pit short-term planning requires the definition of polygons identifying the successive mining advances. These polygons are drawn in a labour-intensive task that attempts to delineate ore of the quantity and quality within established ranges. The ore delineated by the polygons should have the least possible quality variability, helping to maximize ore recovery at the processing plant. This thesis aims at developing a workflow for drawing short-term polygons automatically, sequencing all ore blocks within each polygon and leading to a mineable, connected sequence of polygons. This workflow is also tested under grade uncertainty obtained through multiple stochastically simulated models. For this, genetic algorithms were developed in the Python programming language and plugged into the Ar2GeMS geostatistical software. Multiple iterations were generated for each individual advance, generating regions (or polygons) and selecting the regions of lowest grade variability. The probability distribution of block grades within each advance was compared to the global distribution over all blocks within the ore body. Results show that the generated polygons comprise block grades similar to those of the reference distribution, leading to a mining sequence as close as possible to the global distribution and maintaining quasi-stationarity. Equally probable models provide the means to assess the uncertainty in the solution provided.
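The genetic-algorithm component can be illustrated with a much-simplified one-dimensional analogue (hypothetical code, far simpler than the thesis's polygon design): choosing the start of a fixed-width window of blocks so that the grade variance inside the window is minimal.

```python
import random
import statistics

def window_variance(grades, start, width):
    # Fitness: population variance of the grades inside the window.
    return statistics.pvariance(grades[start:start + width])

def ga_min_variance(grades, width, pop_size=30, gens=60, seed=0):
    # Tiny genetic algorithm over window start positions: elitist
    # truncation selection, midpoint crossover and a random-shift mutation.
    rng = random.Random(seed)
    last = len(grades) - width
    pop = [rng.randrange(last + 1) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: window_variance(grades, s, width))
        survivors = pop[:pop_size // 2]          # best half survives unchanged
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2                 # crossover: midpoint of parents
            if rng.random() < 0.3:               # mutation: small random shift
                child = max(0, min(last, child + rng.randrange(-3, 4)))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda s: window_variance(grades, s, width))
    return best, window_variance(grades, best, width)
```

Because the best individuals always survive, the fitness of the returned window improves monotonically over generations; in the thesis the same selection idea operates on candidate mining regions rather than window positions.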
48

Trimeloni, Thomas. "Accelerating Finite State Projection through General Purpose Graphics Processing." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/175.

Full text
Abstract:
The finite state projection algorithm provides modelers a new way of directly solving the chemical master equation. The algorithm utilizes the matrix exponential function, and so the algorithm’s performance suffers when it is applied to large problems. Other work has been done to reduce the size of the exponentiation through mathematical simplifications, but efficiently exponentiating a large matrix has not been explored. This work explores implementing the finite state projection algorithm on several different high-performance computing platforms as a means of efficiently calculating the matrix exponential function for large systems. This work finds that general purpose graphics processing can accelerate the finite state projection algorithm by several orders of magnitude. Specific biological models and modeling techniques are discussed as a demonstration of the algorithm implemented on a general purpose graphics processor. The results of this work show that general purpose graphics processing will be a key factor in modeling more complex biological systems.
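The core computation in finite state projection, the action of a matrix exponential on a probability vector over a truncated state space, can be sketched on the CPU via uniformization (an illustrative example; the GPU acceleration that is the thesis's contribution is not reproduced here):

```python
import numpy as np

def generator_immigration_death(k, gamma, nmax):
    # CME generator on the truncated state space {0, ..., nmax} for
    # 0 -> X (rate k) and X -> 0 (rate gamma per molecule).
    A = np.zeros((nmax + 1, nmax + 1))
    for n in range(nmax + 1):
        if n < nmax:
            A[n + 1, n] += k          # birth n -> n + 1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += gamma * n  # death n -> n - 1
            A[n, n] -= gamma * n
    return A

def expm_action(A, p0, t, tol=1e-12, jmax=10000):
    # Uniformization: with Lam >= max_i |A_ii|, P = I + A / Lam is a
    # stochastic matrix and exp(t A) p0 = sum_j Pois(Lam t; j) P^j p0.
    Lam = float(max(-np.diag(A).min(), 1e-12))
    P = np.eye(A.shape[0]) + A / Lam
    weight = np.exp(-Lam * t)         # Poisson(Lam t) mass at j = 0
    term = p0.copy()
    out = weight * term
    for j in range(1, jmax):
        term = P @ term
        weight *= Lam * t / j
        out += weight * term
        if j > Lam * t and weight < tol:
            break
    return out
```

For the immigration-death model the exact solution started from the empty state is Poisson with mean (k/gamma)(1 - e^{-gamma t}), so the propagated vector should have total mass close to one and mean close to k/gamma at large t.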
49

Brégère, Margaux. "Stochastic bandit algorithms for demand side management Simulating Tariff Impact in Electrical Energy Consumption Profiles with Conditional Variational Autoencoders Online Hierarchical Forecasting for Power Consumption Data Target Tracking for Contextual Bandits : Application to Demand Side Management." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM022.

Full text
Abstract:
L'électricité se stockant difficilement à grande échelle, l'équilibre entre la production et la consommation doit être rigoureusement maintenu. Une gestion par anticipation de la demande se complexifie avec l'intégration au mix de production des énergies renouvelables intermittentes. Parallèlement, le déploiement des compteurs communicants permet d'envisager un pilotage dynamique de la consommation électrique. Plus concrètement, l'envoi de signaux - tels que des changements du prix de l'électricité – permettrait d'inciter les usagers à moduler leur consommation afin qu'elle s'ajuste au mieux à la production d'électricité. Les algorithmes choisissant ces signaux devront apprendre la réaction des consommateurs face aux envois tout en les optimisant (compromis exploration-exploitation). Notre approche, fondée sur la théorie des bandits, a permis de formaliser ce problème d'apprentissage séquentiel et de proposer un premier algorithme pour piloter la demande électrique d'une population homogène de consommateurs. Une borne supérieure d'ordre T⅔ a été obtenue sur le regret de cet algorithme. Des expériences réalisées sur des données de consommation de foyers soumis à des changements dynamiques du prix de l'électricité illustrent ce résultat théorique. Un jeu de données en « information complète » étant nécessaire pour tester un algorithme de bandits, un simulateur de données de consommation fondé sur les auto-encodeurs variationnels a ensuite été construit. Afin de s'affranchir de l'hypothèse d'homogénéité de la population, une approche pour segmenter les foyers en fonction de leurs habitudes de consommation est aussi proposée. Ces différents travaux sont finalement combinés pour proposer et tester des algorithmes de bandits pour un pilotage personnalisé de la consommation électrique
As electricity is hard to store, the balance between production and consumption must be strictly maintained. With the integration of intermittent renewable energies into the production mix, managing this balance becomes complex. At the same time, the deployment of smart meters makes demand response possible. More precisely, sending signals (such as changes in the price of electricity) would encourage users to modulate their consumption according to the production of electricity. The algorithms used to choose these signals have to learn consumer reactions and, at the same time, optimize them (the exploration-exploitation trade-off). Our approach is based on bandit theory and formalizes this sequential learning problem. We propose a first algorithm to control the electricity demand of a homogeneous population of consumers and prove an upper bound of order T⅔ on its regret. Experiments on a real data set in which price incentives were offered illustrate these theoretical results. As a "full information" dataset is required to test bandit algorithms, a consumption data generator based on variational autoencoders is built. In order to drop the assumption of population homogeneity, we propose an approach to cluster households according to their consumption profiles. These different works are finally combined to propose and test bandit algorithms for personalized demand side management
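The exploration-exploitation trade-off at the heart of this sequential learning problem can be illustrated with the classical UCB1 index policy on Bernoulli arms (a textbook sketch, much simpler than the thesis's demand-side setting):

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    # UCB1: play each arm once, then always pull the arm maximizing
    # empirical mean + sqrt(2 ln t / n_i) (optimism under uncertainty).
    rng = random.Random(seed)
    k = len(arm_means)
    pulls = [0] * k
    rewards = [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            i = t - 1                      # initialization: one pull per arm
        else:
            i = max(range(k), key=lambda j: rewards[j] / pulls[j]
                    + math.sqrt(2.0 * math.log(t) / pulls[j]))
        r = 1.0 if rng.random() < arm_means[i] else 0.0
        pulls[i] += 1
        rewards[i] += r
        total += r
    return total, pulls
```

Suboptimal arms end up pulled only a logarithmic number of times, so the average reward approaches the best arm's mean; the T⅔ bound in the abstract concerns the harder, structured demand-response setting rather than this basic bandit.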
50

Фот, Андрій Вікторович, and Andriy Fot. "Канал передачі мультимедійної інформації на базі радіо та лазерної технологій." Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2020. http://elartu.tntu.edu.ua/handle/lib/33948.

Full text
Abstract:
У дипломній роботі магістра проведено дослідження та аналіз сучасного стану широкосмугових і надширокосмугових бездротових засобів передачі мультимедійної інформації, огляд науково-технічної літератури з проблем створення змішаних каналів зв’язку на базі радіо і лазерної технологій, аналіз напрямків розвитку лазерних каналів і широкосмугових радіо засобів, що функціонують в частотних діапазонах 2,4-6,4 ГГц, 71-76 ГГц і 81-85 ГГц. Зпроектовано математичну модель змішаного каналу з використанням методів теорії стохастичних систем і мереж для оцінки характеристик продуктивності і надійності. Реалізовано машинну (імітаційну) модель змішаного каналу. Розроблено комплекс програмних засобів аналітичного і імітаційного моделювання змішаного каналу зв’язку.
The master's thesis conducted research into and analysis of the current state of broadband and ultra-wideband wireless means of transmitting multimedia information, a review of the scientific and technical literature on creating mixed communication channels based on radio and laser technology, and an analysis of the development of laser channels and broadband radio operating in the 2.4-6.4 GHz, 71-76 GHz and 81-85 GHz frequency bands. A mathematical model of a mixed channel was designed using methods from the theory of stochastic systems and networks to evaluate performance and reliability characteristics. A machine (simulation) model of the mixed channel was implemented. A set of software tools for analytical and simulation modelling of a mixed communication channel was developed.
Contents (translated): Introduction. 1. Analytical part: review of the scientific and technical literature on mixed radio and laser communication channels; analysis of the development of laser channels and broadband radio operating in the 2.4-6.4 GHz, 71-76 GHz and 81-85 GHz bands; study of the state and prospects of hardware and software for mixed multimedia transmission channels; selection of optimal transmission-protocol parameters for maximum channel throughput. 2. Main part: design of a mathematical model of the mixed channel using the theory of stochastic systems and networks to evaluate performance and reliability; development of a machine (simulation) model; statistical processing of meteorological data and estimation of the distribution of availability and unavailability periods of the atmospheric optical channel. 3. Research part: development of a suite of software tools for analytical and simulation modelling of the mixed channel; analysis of numerical results for optimal parameter selection and comparison of channel design variants; comparison of simulation results with calculations. 4. Occupational health and safety. Conclusions. References. Appendices (including a copy of the conference abstracts "Information Models, Systems and Technologies").
