A ready-made bibliography on the topic "Gillespie simulations"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Gillespie simulations".

An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "Gillespie simulations"

1

Rajput, Nikhil Kumar. "Gillespie algorithm and diffusion approximation based on Monte Carlo simulation for innovation diffusion: A comparative study". Monte Carlo Methods and Applications 25, no. 3 (September 1, 2019): 209–15. http://dx.doi.org/10.1515/mcma-2019-2040.

Abstract:
Monte Carlo simulations have been utilized to make a comparative study between diffusion approximation (DA) and the Gillespie algorithm and its dependence on population in the information diffusion model. Diffusion approximation is one of the widely used approximation methods and has been applied in queuing systems, biological systems and other fields. The Gillespie algorithm, on the other hand, is used for simulating stochastic systems. In this article, the validity of diffusion approximation has been studied in relation to the Gillespie algorithm for varying population sizes. It is found that diffusion approximation results in large fluctuations which render forecasting unreliable, particularly for small populations. The relative fluctuations in relation to diffusion approximation, as well as to the Gillespie algorithm, have been analyzed. To carry out the study, a nonlinear stochastic model of innovation diffusion in a finite population has been considered. The nonlinearity of the problem necessitates the use of approximation methods to understand the dynamics of the system. A stochastic differential equation (SDE) has been used to model the innovation diffusion process, and corresponding sample paths have been generated using Monte Carlo simulation methods.
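As background for the comparison above: Gillespie's direct method draws the waiting time to the next reaction from an exponential distribution whose rate is the total propensity, then fires a reaction chosen with probability proportional to its own propensity. A minimal Python sketch for a toy adoption model with a single reaction channel, S + A → 2A at rate kSA/N (the model and rate constant here are illustrative, not those used in the paper):

```python
import random

def gillespie_adoption(S, A, k, t_end, rng):
    """Gillespie direct method for the toy reaction S + A -> 2A.

    S: non-adopters, A: adopters, k: per-pair adoption rate.
    Returns the trajectory as a list of (time, S, A) tuples.
    """
    N = S + A
    t = 0.0
    traj = [(t, S, A)]
    while t < t_end and S > 0:
        a_total = k * S * A / N        # single-channel propensity
        if a_total == 0:
            break                      # no further events possible
        t += rng.expovariate(a_total)  # exponential waiting time
        S, A = S - 1, A + 1            # fire the adoption reaction
        traj.append((t, S, A))
    return traj

rng = random.Random(42)
traj = gillespie_adoption(S=99, A=1, k=0.5, t_end=1e9, rng=rng)
```

With only one channel, the channel-selection step is trivial; with several channels, a second uniform draw selects the channel whose cumulative propensity first exceeds it.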
2

Kierzek, A. M. "STOCKS: STOChastic Kinetic Simulations of biochemical systems with Gillespie algorithm". Bioinformatics 18, no. 3 (March 1, 2002): 470–81. http://dx.doi.org/10.1093/bioinformatics/18.3.470.

3

Alfonso, L., G. B. Raga, and D. Baumgardner. "Monte Carlo simulations of two-component drop growth by stochastic coalescence". Atmospheric Chemistry and Physics Discussions 8, no. 2 (April 16, 2008): 7289–313. http://dx.doi.org/10.5194/acpd-8-7289-2008.

Abstract:
The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by comparing the numerical with the analytical solutions found by Lushnikov (1975). Very good agreement was observed between the Monte Carlo simulations and the analytical solution. Simulation results are presented for bi-variate constant and hydrodynamic kernels. The algorithm can be easily extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging and drop breakup.
4

Martinecz, Antal, Fabrizio Clarelli, Sören Abel, and Pia Abel zur Wiesch. "Reaction Kinetic Models of Antibiotic Heteroresistance". International Journal of Molecular Sciences 20, no. 16 (August 15, 2019): 3965. http://dx.doi.org/10.3390/ijms20163965.

Abstract:
Bacterial heteroresistance (i.e., the co-existence of several subpopulations with different antibiotic susceptibilities) can delay the clearance of bacteria even with long antibiotic exposure. Some proposed mechanisms have been successfully described with mathematical models of drug-target binding where the mechanisms downstream of drug-target binding are not explicitly modeled and are subsumed in an empirical function connecting target occupancy to antibiotic action. However, with current approaches it is difficult to model mechanisms that involve multi-step reactions that lead to bacterial killing. Here, we have a dual aim: first, to establish pharmacodynamic models that include multi-step reaction pathways, and second, to model heteroresistance and investigate which molecular heterogeneities can lead to delayed bacterial killing. We show that simulations based on Gillespie algorithms, which have been employed to model reaction kinetics for decades, can be useful tools to model antibiotic action via multi-step reactions. We highlight the strengths and weaknesses of current models and Gillespie simulations. Finally, we show that in our models, slight normally distributed variances in the rates of any event leading to bacterial death can (depending on parameter choices) lead to delayed bacterial killing (i.e., heteroresistance). This means that a slowly declining residual bacterial population due to heteroresistance is most likely the default scenario and should be taken into account when planning treatment length.
5

Chen-Charpentier, Benito. "Stochastic Modeling of Plant Virus Propagation with Biological Control". Mathematics 9, no. 5 (February 24, 2021): 456. http://dx.doi.org/10.3390/math9050456.

Abstract:
Plants are vital for humans and many other species. They are sources of food, medicine, fiber for clothes and materials for shelter. They are a fundamental part of a healthy environment. However, plants are subject to virus diseases. In plants, most virus propagation is carried out by a vector. The traditional way of controlling the insects is to use insecticides, which have a negative effect on the environment. A more environmentally friendly way to control the insects is to use predators that will prey on the vector, such as birds or bats. In this paper we modify a plant-virus propagation model with delays. The model is written using delay differential equations. However, it can also be expressed in terms of biochemical reactions, which is more realistic for small populations. Since there are always variations in the populations, errors in the measured values and uncertainties, we use two methods to introduce randomness: stochastic differential equations and the Gillespie algorithm. We present numerical simulations. The Gillespie method produces good results for plant-virus population models.
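The two sources of randomness mentioned above work quite differently: the Gillespie algorithm simulates discrete reaction events exactly, whereas an SDE adds continuous noise to a deterministic model and is typically integrated numerically. A minimal Euler–Maruyama sketch for a logistic SDE with multiplicative noise (the equation and parameter values are illustrative stand-ins, not the paper's plant–virus model):

```python
import math
import random

def euler_maruyama_logistic(x0, r, K, sigma, dt, steps, rng):
    """Euler-Maruyama integration of dX = r X (1 - X/K) dt + sigma X dW.

    Each step adds the deterministic drift plus a Gaussian increment
    with standard deviation sigma * X * sqrt(dt).
    """
    x = x0
    path = [x]
    for _ in range(steps):
        drift = r * x * (1.0 - x / K)
        diffusion = sigma * x
        x += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = max(x, 0.0)   # population sizes cannot go negative
        path.append(x)
    return path

rng = random.Random(0)
path = euler_maruyama_logistic(x0=10.0, r=1.0, K=100.0, sigma=0.05,
                               dt=0.01, steps=1000, rng=rng)
```

Comparing trajectories like this one against Gillespie runs of the corresponding reaction system is the usual way to check when the continuous approximation is adequate.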
6

Alfonso, L., G. B. Raga, and D. Baumgardner. "Monte Carlo simulations of two-component drop growth by stochastic coalescence". Atmospheric Chemistry and Physics 9, no. 4 (February 18, 2009): 1241–51. http://dx.doi.org/10.5194/acp-9-1241-2009.

Abstract:
The evolution of two-dimensional drop distributions is simulated in this study using a Monte Carlo method. The stochastic algorithm of Gillespie (1976) for chemical reactions, in the formulation proposed by Laurenzi et al. (2002), was used to simulate the kinetic behavior of the drop population. Within this framework, species are defined as droplets of specific size and aerosol composition. The performance of the algorithm was checked by a comparison with the analytical solutions found by Lushnikov (1975) and Golovin (1963) and with finite difference solutions of the two-component kinetic collection equation obtained for the Golovin (sum) and hydrodynamic kernels. Very good agreement was observed between the Monte Carlo simulations and the analytical and numerical solutions. A simulation for realistic initial conditions is presented for the hydrodynamic kernel. As expected, the aerosol mass is shifted from small to large particles due to the collection process. This algorithm could be extended to incorporate various properties of clouds, such as several crystal habits, different types of soluble CCN, particle charging and drop breakup.
7

Chang, Qiang, Yang Lu, and Donghui Quan. "Accelerated Gillespie Algorithm for Gas–Grain Reaction Network Simulations Using Quasi-steady-state Assumption". Astrophysical Journal 851, no. 1 (December 13, 2017): 68. http://dx.doi.org/10.3847/1538-4357/aa99d9.

8

Jo, Yeji, Kyusik Mun, Yeonjoo Jeong, Joon Young Kwak, Jongkil Park, Suyoun Lee, Inho Kim, Jong-Keuk Park, Gyu-Weon Hwang, and Jaewook Kim. "A Poisson Process Generator Based on Multiple Thermal Noise Amplifiers for Parallel Stochastic Simulation of Biochemical Reactions". Electronics 11, no. 7 (March 25, 2022): 1039. http://dx.doi.org/10.3390/electronics11071039.

Abstract:
In this paper, we propose a novel Poisson process generator that uses multiple thermal noise amplifiers (TNAs) as a source of randomness and controls its event rate via a frequency-locked loop (FLL). The increase in the number of TNAs extends the effective bandwidth of amplified thermal noise and hence enhances the maximum event rate the proposed architecture can generate. Verilog-A simulation of the proposed Poisson process generator shows that its maximum event rate can be increased by a factor of 26.5 when the number of TNAs increases from 1 to 10. In order to realize parallel stochastic simulations of the biochemical reaction network, we present a fundamental reaction building block with continuous-time multiplication and addition using an AND gate and a 1-bit current-steering digital-to-analog converter, respectively. Stochastic biochemical reactions consisting of the fundamental reaction building blocks are simulated in Verilog-A, demonstrating that the simulation results are consistent with those of the conventional Gillespie algorithm. An increase in the number of TNAs to accelerate the Poisson events and the use of digital AND gates for robust reaction rate calculations allow for faster and more accurate stochastic simulations of biochemical reactions than previous parallel stochastic simulators.
9

Iwuchukwu, Edward Uchechukwu, and Ardson dos Santos Junior Vianna. "Stochastic Modelling and Simulation of Free Radical Polymerization of Styrene in Microchannels using a Hybrid Gillespie Algorithm". Journal of Engineering and Exact Sciences 9, no. 1 (February 13, 2023): 15327–01. http://dx.doi.org/10.18540/jcecvl9iss1pp15327-01e.

Abstract:
Most recently, the production of polystyrene by Free Radical Polymerization (FRP) via microchannels has been a subject of core interest due to the efficiency that a micro- or milli-reactor brings. In addition, especially in pilot experimentation, a micro- or milli-reactor is widely known to be efficient for monitoring the microstructural end-use features or properties of the polymer as the chain propagates and ultimately terminates. However, the limitations posed by using micro- or milli-reactors in process intensification, such as clogging of pores, can be a bottleneck when tracking the phenomena commonly associated with FRP such as cage, gel, and glass effects. In this work, the simulation of the synthesis of polystyrene in FRP via microchannels is computed using a robust and time-efficient hybrid Gillespie Algorithm (GA), or Hybrid Stochastic Simulation Algorithm (HSSA). The obtained results for the end-use properties of polystyrene, such as monomer conversion, polydispersity index, number-average molar mass and weight-average molar mass, were compared to experimental data. The simulation results agree well with the experimental results reported in this work. Hence, stochastic simulations prove to be an effective tool for making decisions in the context of process intensification of chain growth polymerization reactions even at a large scale.
10

Matzko, Richard Oliver, Laurentiu Mierla, and Savas Konur. "Novel Ground-Up 3D Multicellular Simulators for Synthetic Biology CAD Integrating Stochastic Gillespie Simulations Benchmarked with Topologically Variable SBML Models". Genes 14, no. 1 (January 6, 2023): 154. http://dx.doi.org/10.3390/genes14010154.

Abstract:
The elevation of Synthetic Biology from single cells to multicellular simulations would be a significant scale-up. The spatiotemporal behavior of cellular populations has the potential to be prototyped in silico for computer assisted design through ergonomic interfaces. Such a platform would have great practical potential across medicine, industry, research, education and accessible archiving in bioinformatics. Existing Synthetic Biology CAD systems are considered limited regarding population level behavior, and this work explored the in silico challenges posed from biological and computational perspectives. Retaining the connection to Synthetic Biology CAD, an extension of the Infobiotics Workbench Suite was considered, with potential for the integration of genetic regulatory models and/or chemical reaction networks through Next Generation Stochastic Simulator (NGSS) Gillespie algorithms. These were executed using SBML models generated by in-house SBML-Constructor over numerous topologies and benchmarked in association with multicellular simulation layers. Regarding multicellularity, two ground-up multicellular solutions were developed, including the use of Unreal Engine 4 contrasted with CPU multithreading and Blender visualization, resulting in a comparison of real-time versus batch-processed simulations. In conclusion, high-performance computing and client–server architectures could be considered for future works, along with the inclusion of numerous biologically and physically informed features, whilst still pursuing ergonomic solutions.

Doctoral dissertations on the topic "Gillespie simulations"

1

Vagne, Quentin. "Stochastic models of intra-cellular organization : from non-equilibrium clustering of membrane proteins to the dynamics of cellular organelles". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC205/document.

Abstract:
This thesis deals with cell biology, and particularly with the internal organization of eukaryotic cells. Although many of the molecular players contributing to the intra-cellular organization have been identified, we are still far from understanding how the complex and dynamical intra-cellular architecture emerges from the self-organization of individual molecules. One of the goals of the different studies presented in this thesis is to provide a theoretical framework to understand such self-organization. We cover specific problems at different scales, ranging from membrane organization at the nanometer scale to whole organelle structure at the micron scale, using analytical work and stochastic simulation algorithms. The text is organized to present the results from the smallest to the largest scales. In the first chapter, we study the membrane organization of a single compartment by modeling the dynamics of membrane heterogeneities. In the second chapter we study the dynamics of one membrane-bound compartment exchanging vesicles with the external medium. Still in the same chapter, we investigate the mechanisms by which two different compartments can be generated by vesicular sorting. Finally, in the third chapter, we develop a global model of organelle biogenesis and dynamics in the specific context of the Golgi apparatus.
2

Liu, Weigang. "A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/96455.

Abstract:
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity compared with the StochSim algorithm, another simulation algorithm originally proposed to simulate stochastic lattice models. I compare the performance of both algorithms with application to two different examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison between the simulation results from both algorithms has validated our claim that the new proposed algorithm is comparable to StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better for certain specific cases.
Computer simulation has been in development for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a kind of system in which individual entities live on grids and demonstrate certain random behaviors according to specific rules. It is mainly studied using computer simulations. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior based on a set of specific random rules. Our goal is to develop new simulation methods so that it is more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for the stochastic lattice model using entirely different concepts and procedures. I developed a simulation package and applied it to two different examples using both methods, and then conducted a series of numerical experiments to compare their performance. I conclude that they are roughly equivalent and that our new method performs better than the old one in certain special cases.
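The adaptation described above can be pictured as follows: on a lattice, every site (rather than every well-mixed reaction channel) carries a propensity, the waiting time is exponential in the summed rate, and the event site is chosen proportionally to its contribution. A toy sketch of that selection logic for a 1-D occupation/removal process (the rules and rates are illustrative, not the May-Leonard or Ziff-Gulari-Barshad dynamics studied in the thesis):

```python
import random

def lattice_gillespie(L, steps, beta, mu, rng):
    """Gillespie-type event selection on a 1-D periodic lattice.

    state[i] is 1 (occupied) or 0 (empty). Occupied sites vanish at
    rate mu and spread into each empty neighbour at rate beta.
    """
    state = [1] * L                       # start fully occupied
    t = 0.0
    for _ in range(steps):
        # per-site propensity: removal plus spreading into empty neighbours
        props = []
        for i, s in enumerate(state):
            if s == 1:
                empty = (state[(i - 1) % L] == 0) + (state[(i + 1) % L] == 0)
                props.append(mu + beta * empty)
            else:
                props.append(0.0)
        total = sum(props)
        if total == 0.0:                  # absorbing empty lattice
            break
        t += rng.expovariate(total)       # exponential waiting time
        # choose the event site proportionally to its propensity
        r = rng.random() * total
        site, acc = L - 1, 0.0
        for i, p in enumerate(props):
            acc += p
            if r < acc:
                site = i
                break
        # within the chosen site, decide removal vs. spreading
        if rng.random() * props[site] < mu:
            state[site] = 0
        else:
            nbrs = [j for j in ((site - 1) % L, (site + 1) % L) if state[j] == 0]
            state[rng.choice(nbrs)] = 1
    return t, state

rng = random.Random(1)
t_final, state = lattice_gillespie(L=20, steps=200, beta=1.0, mu=0.5, rng=rng)
```

Rebuilding every propensity each step keeps the sketch short but costs O(L) per event; practical implementations update only the neighbourhood of the changed site.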
3

Geltz, Brad. "Handling External Events Efficiently in Gillespie's Stochastic Simulation Algorithm". VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/168.

Abstract:
Gillespie's Stochastic Simulation Algorithm (SSA) provides an elegant approach for simulating models composed of coupled chemical reactions. Although this approach can be used to describe a wide variety of biological, chemical, and ecological systems, systems often have external behaviors that are difficult or impossible to characterize using chemical reactions alone. This work extends the applicability of the SSA by adding mechanisms for the inclusion of external events and external triggers. We define events as changes that occur in the system at a specified time, while triggers are defined as changes that occur to the system when a particular condition is fulfilled. We further extend the SSA with an efficient implementation of these model parameters. This work allows numerous systems that would have previously been impossible or impractical to model using the SSA to take advantage of this powerful simulation technique.
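The extension described in the abstract can be pictured as a wrapper around the inner SSA loop: if a scheduled event falls inside the sampled waiting time it pre-empts the reaction, and trigger conditions are re-evaluated after every state change. A toy Python sketch of that control flow (the interface below is hypothetical, not the thesis's actual implementation):

```python
import random

def ssa_with_events(x, propensities, stoich, events, triggers, t_end, rng):
    """Gillespie direct method extended with timed events and state triggers.

    x            : mutable list of species counts
    propensities : list of functions x -> reaction rate
    stoich       : list of state-change vectors, one per reaction
    events       : time-sorted list of (time, action) pairs, applied once
    triggers     : list of (condition, action) pairs checked after each change
    """
    t = 0.0
    events = list(events)
    while t < t_end:
        rates = [a(x) for a in propensities]
        total = sum(rates)
        tau = rng.expovariate(total) if total > 0 else float("inf")
        if events and events[0][0] <= t + tau:
            # a scheduled event pre-empts the sampled reaction; because the
            # exponential distribution is memoryless, discarding tau and
            # resampling on the next pass is statistically sound
            t, action = events.pop(0)
            action(x)
        elif total == 0 or t + tau > t_end:
            break
        else:
            t += tau
            # pick the reaction channel proportionally to its rate
            r, acc, j = rng.random() * total, 0.0, len(rates) - 1
            for i, rate in enumerate(rates):
                acc += rate
                if r < acc:
                    j = i
                    break
            for k, dx in enumerate(stoich[j]):
                x[k] += dx
        for cond, action in triggers:   # state-dependent triggers
            if cond(x):
                action(x)
    return t, x
```

For example, a decay system A → ∅ with an event that injects more A at a fixed time and a trigger that records extinction can be run by passing one propensity function, one stoichiometry vector, and the corresponding (time, action) and (condition, action) pairs.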
4

Arnold, Christian. "The Eukaryotic Chromatin Computer". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-137584.

Abstract:
Eukaryotic genomes are typically organized as chromatin, the complex of DNA and proteins that forms chromosomes within the cell's nucleus. Chromatin has pivotal roles for a multitude of functions, most of which are carried out by a complex system of covalent chemical modifications of histone proteins. The propagation of patterns of these histone post-translational modifications across cell divisions is particularly important for maintenance of the cell state in general and the transcriptional program in particular. The discovery of epigenetic inheritance phenomena - mitotically and/or meiotically heritable changes in gene function resulting from changes in a chromosome without alterations in the DNA sequence - was remarkable because it disproved the assumption that information is passed to daughter cells exclusively through DNA. However, DNA replication constitutes a dramatic disruption of the chromatin state that effectively amounts to partial erasure of stored information. To preserve its epigenetic state the cell reconstructs (at least part of) the histone post-translational modifications by means of processes that are still very poorly understood. A plausible hypothesis is that the different combinations of reader and writer domains in histone-modifying enzymes implement local rewriting rules that are capable of "recomputing" the desired parental patterns of histone post-translational modifications on the basis of the partial information contained in that half of the nucleosomes that predate replication. It is becoming increasingly clear that both information processing and computation are omnipresent and of fundamental importance in many fields of the natural sciences and the cell in particular. The latter is exemplified by the increasingly popular research areas that focus on computing with DNA and membranes.
Recent work suggests that during evolution, chromatin has been converted into a powerful cellular memory device capable of storing and processing large amounts of information. Eukaryotic chromatin may therefore also act as a cellular computational device capable of performing actual computations in a biological context. A recent theoretical study indeed demonstrated that even relatively simple models of chromatin computation are computationally universal and hence conceptually more powerful than gene regulatory networks. In the first part of this thesis, I establish a deeper understanding of the computational capacities and limits of chromatin, which have remained largely unexplored. I analyze selected biological building blocks of the chromatin computer and compare it to system components of general purpose computers, particularly focusing on memory and the logical and arithmetical operations. I argue that it has a massively parallel architecture, a set of read-write rules that operate non-deterministically on chromatin, the capability of self-modification, and more generally striking analogies to amorphous computing. I therefore propose a cellular automata-like 1-D string as its computational paradigm on which sets of local rewriting rules are applied asynchronously with time-dependent probabilities. Its mode of operation is therefore conceptually similar to well-known concepts from complex systems theory. Furthermore, the chromatin computer provides volatile memory with a massive information content that can be exploited by the cell. I estimate that its memory size lies in the realm of several hundred megabytes of writable information per cell, a value that I compare with DNA itself and cis-regulatory modules. I furthermore show that it has the potential to not only perform computations in a biological context but also in a strict informatics sense.
At least theoretically it may therefore be used to calculate any computable function or algorithm more generally. Chromatin is therefore another representative of the growing number of non-standard computing examples. As an example for a biological challenge that may be solved by the "chromatin computer", I formulate epigenetic inheritance as a computational problem and develop a flexible stochastic simulation system for the study of recomputation-based epigenetic inheritance of individual histone post-translational modifications. The implementation uses Gillespie's stochastic simulation algorithm for exactly simulating the time evolution of the chemical master equation of the underlying stochastic process. Furthermore, it is efficient enough to use an evolutionary algorithm to find a system of enzymes that can stably maintain a particular chromatin state across multiple cell divisions. I find that it is easy to evolve such a system of enzymes even without explicit boundary elements separating differentially modified chromatin domains. However, the success of this task depends on several previously unanticipated factors such as the length of the initial state, the specific pattern that should be maintained, the time between replications, and various chemical parameters. All these factors also influence the accumulation of errors in the wake of cell divisions. Chromatin-regulatory processes and epigenetic (inheritance) mechanisms constitute an intricate and sensitive system, and any misregulation may contribute significantly to various diseases such as Alzheimer's disease. Intriguingly, the role of epigenetics and chromatin-based processes as well as non-coding RNAs in the etiology of Alzheimer's disease is increasingly being recognized.
In the second part of this thesis, I explicitly and systematically address the two hypotheses that (i) a dysregulated chromatin computer plays important roles in Alzheimer's disease and (ii) Alzheimer's disease may be considered as an evolutionarily young disease. In summary, I found support for both hypotheses, although for hypothesis 1 it is very difficult to establish causalities due to the complexity of the disease. However, I identify numerous chromatin-associated, differentially expressed loci for histone proteins, chromatin-modifying enzymes or integral parts thereof, non-coding RNAs with guiding functions for chromatin-modifying complexes, and proteins that directly or indirectly influence epigenetic stability (e.g., by altering cell cycle regulation and therefore potentially also the stability of epigenetic states). For the identification of differentially expressed loci in Alzheimer's disease, I use a custom expression microarray that was constructed with a novel bioinformatics pipeline. Despite the emergence of more advanced high-throughput methods such as RNA-seq, microarrays still offer some advantages and will remain a useful and accurate tool for transcriptome profiling and expression studies. However, it is non-trivial to establish an appropriate probe design strategy for custom expression microarrays because alternative splicing and transcription from non-coding regions are much more pervasive than previously appreciated. To obtain an accurate and complete expression atlas of genomic loci of interest in the post-ENCODE era, this additional transcriptional complexity must be considered during microarray design and requires well-considered probe design strategies that are often neglected.
This encompasses, for example, adequate preparation of a set of target sequences and accurate estimation of probe specificity. With the help of this pipeline, two custom-tailored microarrays have been constructed that include a comprehensive collection of non-coding RNAs. Additionally, a user-friendly web server has been set up that makes the developed pipeline publicly available for other researchers.
Um ein akkurates und vollständiges Bild der Expression von genomischen Bereichen in der Zeit nach dem ENCODE-Projekt zu bekommen, muss diese zusätzliche transkriptionelle Komplexität schon während des Designs eines Microarrays berücksichtigt werden und erfordert daher wohlüberlegte und oft ignorierte Strategien für das Sondendesign. Dies umfasst zum Beispiel eine adäquate Vorbereitung der Zielsequenzen und eine genaue Abschätzung der Sondenspezifität. Mit Hilfe der Pipeline wurden zwei maßgeschneiderte Expressions-Microarrays produziert, die beide eine umfangreiche Sammlung von nicht-kodierenden RNAs beinhalten. Zusätzlich wurde ein nutzerfreundlicher Webserver programmiert, der die entwickelte Pipeline für jeden öffentlich zur Verfügung stellt
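For readers unfamiliar with the method the dissertation builds on: Gillespie's direct method draws an exponential waiting time from the total propensity and then picks a reaction in proportion to its propensity. A minimal, self-contained sketch for a hypothetical reversible modification reaction U ↔ M (species names and rates are illustrative, not taken from the thesis):

```python
import random

def gillespie_two_state(n_u, n_m, k_add, k_remove, t_end, seed=1):
    """Direct-method SSA for a reversible reaction U <-> M
    (e.g. unmodified vs. modified histones; all rates illustrative)."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, n_u, n_m)]
    while True:
        a1, a2 = k_add * n_u, k_remove * n_m   # mass-action propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)               # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a1:             # pick a reaction proportionally
            n_u, n_m = n_u - 1, n_m + 1
        else:
            n_u, n_m = n_u + 1, n_m - 1
        traj.append((t, n_u, n_m))
    return traj
```

Because each event is sampled from the exact jump process, such trajectories are statistically exact realizations of the chemical master equation, unlike fixed-time-step approximations.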
5

Ananthanpillai, Balaji. "Stochastic Simulation of the Phage Lambda System and the Bioluminescence System Using the Next Reaction Method". University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1259080814.

6

Gao, Guangyue. "A Stochastic Model for The Transmission Dynamics of Toxoplasma Gondii". Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/78106.

Abstract:
Toxoplasma gondii (T. gondii) is an intracellular protozoan parasite. The parasite can infect all warm-blooded vertebrates. Up to 30% of the world's human population carry a Toxoplasma infection. However, the transmission dynamics of T. gondii are not well understood, although many mathematical models have been built. In this thesis, we adopt a complex life cycle model developed by Turner et al. and extend their work to include diffusion of hosts. Most research focuses on deterministic models. However, some scientists have reported that deterministic models are sometimes inaccurate or even inapplicable for describing reaction-diffusion systems, such as gene expression; in such cases stochastic models can have qualitatively different properties from their deterministic limits. Consequently, we investigate the transmission pathways of T. gondii and potential control mechanisms with both deterministic and stochastic models. A stochastic algorithm due to Gillespie, based on the chemical master equation, is introduced. A compartment-based model and a Smoluchowski equation model are described to simulate the diffusion of hosts. The parameter analyses are conducted based on the reproduction number. The analyses based on the deterministic model are verified by stochastic simulation near the thresholds of the parameters.
Master of Science
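The compartment-based treatment of diffusion mentioned in the abstract can be combined with Gillespie's algorithm by modeling hops between neighboring compartments as first-order "reactions". A minimal sketch of this idea (the hop rate and compartment count are illustrative assumptions, not parameters from the thesis):

```python
import random

def compartment_diffusion(counts, hop_rate, t_end, seed=2):
    """SSA where diffusion is a set of hop 'reactions': each molecule in
    compartment i jumps to a neighboring compartment at rate hop_rate."""
    rng = random.Random(seed)
    n = list(counts)
    t, K = 0.0, len(counts)
    while True:
        # enumerate (source, destination, propensity) for every possible hop
        hops = []
        for i in range(K):
            if n[i] > 0:
                if i > 0:
                    hops.append((i, i - 1, hop_rate * n[i]))
                if i < K - 1:
                    hops.append((i, i + 1, hop_rate * n[i]))
        a0 = sum(p for _, _, p in hops)
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)
        if t > t_end:
            break
        r = rng.random() * a0
        for i, j, p in hops:    # pick one hop proportionally to its propensity
            if r < p:
                n[i] -= 1
                n[j] += 1
                break
            r -= p
    return n
```

The same event loop accommodates reaction channels within each compartment alongside the hop channels, which is how reaction-diffusion SSA simulations are typically assembled.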
7

Roland, Jérémy. "Dynamique et mécanique de la fragmentation de filaments d'actine par l'ADF/cofiline : comparaison entre expériences et modèles". PhD thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00566088.

Abstract:
Actin is an abundant protein of the eukaryotic cytoskeleton that polymerizes to form filaments. These filaments play a fundamental role in many biological processes (muscle contraction, cell division and motility, etc.). The assembly and disassembly dynamics of the filaments are controlled by several actin-binding proteins, in particular ADF/cofilin. This protein binds to filaments and severs them, thereby accelerating the disassembly of the actin cytoskeleton. In this thesis, we developed mathematical models to study the effect of ADF/cofilin on the actin cytoskeleton. First, we examined the assembly kinetics of filaments in the presence of ADF/cofilin. Simulations based on the Gillespie algorithm revealed a dynamic equilibrium in which filament polymerization is counterbalanced by fragmentation. We characterized this equilibrium and compared our predictions with in vitro data obtained in collaboration with the team of Laurent Blanchoin (CEA/iRTSV/LPCV, Grenoble). Second, we turned to the mechanics of actin polymers to explain the physical basis of fragmentation. A mesoscopic model of the filament demonstrated the existence of a coupling between bending and twisting deformations in the filament. This coupling converts thermal fluctuations into a force that shears the filament cross-section, leading to fragmentation. The results of these models have thus improved our understanding of the action of ADF/cofilin on the actin cytoskeleton.
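The dynamic equilibrium between polymerization and fragmentation described above can be illustrated with a toy single-filament Gillespie simulation; the keep-one-fragment severing rule and all rates below are simplifying assumptions, not the model used in the thesis:

```python
import random

def filament_ssa(k_on, k_sev, length0, t_end, seed=3):
    """Toy single filament: elongation at constant rate k_on,
    severing at rate k_sev per subunit-subunit bond; after a cut,
    keep one fragment (crude stand-in for losing the severed piece)."""
    rng = random.Random(seed)
    L, t = length0, 0.0
    while True:
        a_grow = k_on
        a_sev = k_sev * max(L - 1, 0)   # one severing site per bond
        t += rng.expovariate(a_grow + a_sev)
        if t > t_end:
            break
        if rng.random() * (a_grow + a_sev) < a_grow:
            L += 1                       # add one subunit
        else:
            L = rng.randint(1, L - 1)    # sever at a random bond, keep one piece
    return L
```

Growth is length-independent while severing scales with length, so the filament length fluctuates around a steady-state value instead of growing without bound, which is the qualitative signature of the polymerization-fragmentation equilibrium.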
8

Trimeloni, Thomas. "Accelerating Finite State Projection through General Purpose Graphics Processing". VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/175.

Abstract:
The finite state projection algorithm provides modelers a new way of directly solving the chemical master equation. The algorithm utilizes the matrix exponential function, and so the algorithm’s performance suffers when it is applied to large problems. Other work has been done to reduce the size of the exponentiation through mathematical simplifications, but efficiently exponentiating a large matrix has not been explored. This work explores implementing the finite state projection algorithm on several different high-performance computing platforms as a means of efficiently calculating the matrix exponential function for large systems. This work finds that general purpose graphics processing can accelerate the finite state projection algorithm by several orders of magnitude. Specific biological models and modeling techniques are discussed as a demonstration of the algorithm implemented on a general purpose graphics processor. The results of this work show that general purpose graphics processing will be a key factor in modeling more complex biological systems.
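As context for the matrix-exponential bottleneck discussed above, a minimal finite state projection for a toy birth-death (immigration-death) process might look as follows; the truncation bound `n_max` and the use of SciPy's dense `expm` are illustrative choices (the thesis targets GPU acceleration precisely because dense exponentiation scales poorly):

```python
import numpy as np
from scipy.linalg import expm

def fsp_birth_death(k_birth, k_death, n_max, t, n0=0):
    """Finite state projection: truncate the CME of a birth-death process
    to states 0..n_max, build the generator A (columns index source states),
    and compute p(t) = expm(A t) p0. Probability leaking past n_max is
    simply lost, so 1 - p.sum() bounds the truncation error."""
    N = n_max + 1
    A = np.zeros((N, N))
    for n in range(N):
        A[n, n] -= k_birth                # birth leaves state n (leaks at boundary)
        if n < n_max:
            A[n + 1, n] += k_birth
        if n > 0:
            A[n - 1, n] += k_death * n    # death: n -> n-1, rate proportional to n
            A[n, n] -= k_death * n
    p0 = np.zeros(N)
    p0[n0] = 1.0
    return expm(A * t) @ p0
```

For this process the exact solution is Poisson with mean (k_birth/k_death)(1 − e^(−k_death·t)), which gives a convenient correctness check on the projected distribution.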
9

Berro, Julien. "Du monomère à la cellule : modèle de la dynamique de l'actine". Université Joseph Fourier (Grenoble), 2006. http://www.theses.fr/2006GRE10226.

Abstract:
Actin filaments are biological polymers that are very abundant in the eukaryotic cytoskeleton. Their self-assembly and self-organization are highly dynamic and play a major role in cell motility and membrane deformation. In this thesis we propose three modeling approaches, on different scales, to elucidate the mechanisms regulating the assembly, organization, and force production of biological filaments such as actin filaments. First, we developed a stochastic multi-agent simulation tool for studying the dynamics of biological filaments that takes into account interactions at the nanometer scale. This new tool allowed us to demonstrate the acceleration of actin monomer turnover due to filament fragmentation by ADF/cofilin, as well as the symmetry breaking induced by this protein, results that agree well with experimental data from L. Blanchoin's team (CEA Grenoble). Second, we studied a continuous model of filament buckling, which provided an estimate of the forces exerted in vitro and in vivo as a function of the attachment conditions of the filament ends, along with limiting conditions on certain parameters that permit buckling. Third, we developed a framework for organizing biochemical kinetic data from regulatory networks, which we applied to the regulation of actin polymerization. These three modeling approaches have improved our knowledge of actin dynamics and complement experimental approaches in biology.
10

Saha, Ananya. "A framework for optimizing immunotherapy for long-term control of HIV". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4832.

Abstract:
HIV infects around 1 million people every year. Antiretroviral therapy (ART) is used to suppress viral replication and control disease progression, but it cannot eradicate the virus. ART is therefore lifelong. Today, effort is focused on using alternative strategies, particularly immune modulatory strategies, that would allow disease control after stopping ART. One such strategy is to administer HIV antibodies at the time of stopping ART to prevent viral resurgence and sustain the control established by ART over an extended duration. Antibody treatment, however, can fail due to viral mutation-driven development of resistance. Clinical trials document the failure of antibody therapy within weeks of its initiation despite the presence of high concentrations of the antibodies in circulation. A quantitative understanding of these observations is necessary to design therapies that would prevent such rapid failure. Here, we present a framework that provides such an understanding and facilitates treatment optimization. We recognize that the loss of control post-ART is associated with the stochastic reactivation of infected cells harboring latent virus, because ART blocks all active virus replication. We first adapt models of the development of resistance to antiretroviral drugs and show that the waiting time for the growth of antibody-resistant strains due to the reactivation of latent cells would not capture the rapid failure observed clinically unless the latent cells already contained antibody-resistant viral strains. Using ideas of population genetics, we then estimate the prevalence of such mutants before the start of antibody treatment. Using Gillespie simulations, we then estimate the distribution of the waiting time for the reactivation of the latent cells containing antibody-resistant virus. Our simulations quantitatively capture clinical data of the rapid failure of antibody therapy.
Our simulations suggest that combination therapy should be used to maximally prolong disease control. Finally, we considered present strategies for identifying optimal drug combinations. Synergistic drugs are preferred in combination therapies for many diseases, including viral infections and cancers. Maximizing synergy, however, may come at the cost of efficacy. This synergy-efficacy trade-off appears widely prevalent and independent of the specific drug interactions yielding synergy. We present examples of the trade-off in drug combinations used in HIV, hepatitis C, and cancer therapies. We therefore believe that screens for optimal drug combinations that presently seek to maximize synergy may be improved by considering the trade-off.
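The waiting-time computation described above can be illustrated with a small Gillespie-style sketch: latent cells reactivate as independent exponential clocks, and we record the first reactivation of a cell carrying resistant virus (all counts and rates below are illustrative, not the study's parameters):

```python
import random

def time_to_resistant_reactivation(n_latent, n_resistant, rate, rng):
    """Latent cells reactivate independently at per-cell rate `rate`
    (Gillespie-style: exponential waiting time between events).
    Return the time of the first reactivation of a cell harboring
    antibody-resistant virus."""
    t, remaining, resistant = 0.0, n_latent, n_resistant
    while remaining > 0:
        t += rng.expovariate(rate * remaining)      # next reactivation event
        if rng.random() < resistant / remaining:    # was it a resistant cell?
            return t
        remaining -= 1                              # a sensitive cell reactivated
    return float("inf")
```

Because the minimum of n independent Exp(rate) clocks is Exp(n·rate), the sampled mean should approach 1/(rate × n_resistant), which provides a quick consistency check on the simulation.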

Book chapters on the topic "Gillespie simulations"

1

Wang, Ruiqi. "Gillespie Stochastic Simulation". In Encyclopedia of Systems Biology, 839–40. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_360.

2

Gillespie, Daniel T. "Gillespie Algorithm for Biochemical Reaction Simulation". In Encyclopedia of Computational Neuroscience, 1293–97. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_189.

3

Gillespie, Daniel T. "Gillespie Algorithm for Biochemical Reaction Simulation". In Encyclopedia of Computational Neuroscience, 1–5. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7320-6_189-1.

4

Gillespie, Daniel T. "Gillespie Algorithm for Biochemical Reaction Simulation". In Encyclopedia of Computational Neuroscience, 1–5. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-7320-6_189-2.

5

Montagna, Sara, Andrea Omicini, and Danilo Pianini. "Extending the Gillespie's Stochastic Simulation Algorithm for Integrating Discrete-Event and Multi-Agent Based Simulation". In Multi-Agent Based Simulation XVI, 3–18. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-31447-1_1.


Conference abstracts on the topic "Gillespie simulations"

1

Ayanian, Nora, Paul J. White, Ádám Halász, Mark Yim, and Vijay Kumar. "Stochastic Control for Self-Assembly of XBots". In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49535.

Abstract:
We propose a stochastic, decentralized algorithm for the self-assembly of a group of modular robots into a geometric shape. The method is inspired by chemical kinetics simulations, particularly the Gillespie algorithm [1, 2], which is widely used in biochemistry, and is specifically designed for modules with dynamic constraints, such as the XBot [3]. The most important feature of our algorithm is that all modules are identical and all decision making is local. Individual modules decide how to move based only on information available to them and their neighbors and on the geometric, kinematic, and dynamic constraints. Each module knows the details of the goal configuration, keeps track of its own location, and communicates position information locally with adjacent modules only when modules in their vicinity have reconfigured. We show that this stochastic method leads to trajectories with convergence comparable to those obtained from a brute-force exploration of the state space. However, the computational power (speed and memory) requirements are independent of the number of modules, while the brute-force approach scales quadratically with the number of modules. We present the schematic of the modules, preliminary experimental results to illustrate the basic moves, and simulation results to demonstrate the efficacy of the algorithm.
2

Purutcuoglu, Vilda. "Stochastic simulation of large biochemical systems by approximate Gillespie algorithm". In 2010 5th International Symposium on Health Informatics and Bioinformatics (HIBIT). IEEE, 2010. http://dx.doi.org/10.1109/hibit.2010.5478883.

3

Dittamo, Cristian, and Davide Cangelosi. "Optimized Parallel Implementation of Gillespie's First Reaction Method on Graphics Processing Units". In 2009 International Conference on Computer Modeling and Simulation (ICCMS). IEEE, 2009. http://dx.doi.org/10.1109/iccms.2009.42.

4

Albert, J. "Stochastic simulation of reaction subnetworks: Exploiting synergy between the chemical master equation and the Gillespie algorithm". In INTERNATIONAL CONFERENCE OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING 2016 (ICCMSE 2016). Author(s), 2016. http://dx.doi.org/10.1063/1.4968765.

5

Muniyandi, Ravie Chandren, and Abdullah Mohd Zin. "Comparing membrane computing simulation strategies of Metabolic and Gillespie algorithms with Lotka-Voltera Population as a case study". In 2011 International Conference on Electrical Engineering and Informatics (ICEEI). IEEE, 2011. http://dx.doi.org/10.1109/iceei.2011.6021840.

6

Xavier, M. P., C. R. B. Bonin, R. W. dos Santos, and M. Lobosco. "On the use of Gillespie stochastic simulation algorithm in a model of the human immune system response to the Yellow Fever vaccine". In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2017. http://dx.doi.org/10.1109/bibm.2017.8217880.

