Follow this link to see other types of publications on the topic: Rare events simulation.

Dissertations / Theses on the topic "Rare events simulation"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 dissertations / theses for your research on the topic "Rare events simulation".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a pdf and read online the abstract (summary) of the work if it is available in the metadata.

Browse dissertations / theses from many different disciplines and organise your bibliography correctly.

1

Liu, Hong. "Rare events, heavy tails, and simulation". Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3239435.

2

Booth, Jonathan James. "New applications of boxed molecular dynamics : efficient simulation of rare events". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/13101/.

Abstract:
This work presents Boxed Molecular Dynamics (BXD), an efficient simulation tool for studying long-time-scale processes that are inaccessible to conventional simulation methods. BXD is introduced in the context of modelling the dynamics of proteins and peptides, and two major applications are reported. 1) The mechanical unfolding of proteins induced by atomic force microscopy methods is investigated; for the first time, experimental data are reproduced and unfolding pathways examined without the artificially high pulling forces that make such simulations less realistic. 2) A cheap and accurate in silico screening tool is developed to aid the discovery and production of medicinal cyclic peptides; enzymatic peptide cyclization is investigated by BXD, and the ability of amino acid sequences to cyclize is predicted with an accuracy of 76%.
3

Liu, Gang. "Rare events simulation by shaking transformations : Non-intrusive resampler for dynamic programming". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX043/document.

Abstract:
This thesis contains two parts: rare event simulation, and a non-intrusive stratified resampler for dynamic programming. The first part is about quantifying statistics related to events which are unlikely to happen but which have serious consequences. We propose Markovian transformations on path spaces and combine them with the theories of interacting particle systems and of Markov chain ergodicity to obtain methods that apply very generally and perform well. The second part is about numerically solving dynamic programming problems in a context where only a small number of historical observations are available and the values of the model parameters are unknown. We propose and analyze a new scheme based on stratification and resampling techniques.
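The shaking transformations at the core of the first part can be illustrated on a Gaussian toy problem. Below is a minimal sketch (not the thesis's implementation): an interacting-particle splitting scheme in which, after each selection step, particles are decorrelated with the reversible Gaussian shaking move X' = rho*X + sqrt(1-rho^2)*xi, which leaves N(0, I) invariant; the score function, levels and all parameter values are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d, N, rho, n_shake = 10, 2000, 0.7, 5
levels = [1.0, 2.0, 3.0, 4.0]            # nested events {score(X) >= L}

def score(X):
    # toy functional of the Gaussian input: score(X) ~ N(0, 1) exactly,
    # so the target probability P(score >= 4) is known in closed form
    return X.sum(axis=1) / math.sqrt(d)

X = rng.standard_normal((N, d))
p_hat = 1.0
for L in levels:
    alive = score(X) >= L
    p_hat *= alive.mean()                # one conditional-probability factor
    if not alive.any():
        p_hat = 0.0
        break
    # selection: resample every particle from the survivors of this level
    X = X[rng.choice(np.where(alive)[0], size=N)]
    # mutation: Metropolis-accepted shaking; the proposal leaves N(0, I_d)
    # invariant, and rejection keeps particles inside {score >= L}
    for _ in range(n_shake):
        prop = rho * X + math.sqrt(1 - rho**2) * rng.standard_normal((N, d))
        acc = score(prop) >= L
        X[acc] = prop[acc]

print(f"splitting estimate: {p_hat:.2e}")
print(f"exact value       : {0.5 * math.erfc(4 / math.sqrt(2)):.2e}")
```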
4

Dhamodaran, Ramya. "Efficient Analysis of Rare Events Associated with Individual Buffers in a Tandem Jackson Network". University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099073321.

5

Freitas, Rodrigo Moura 1989. "Molecular simulation = methods and applications = Simulações moleculares : métodos e aplicações". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/278440.

Abstract:
Advisor: Maurice de Koning
Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Due to the conceptual and technical advances being made in computational physics and computational materials science, we have been able to tackle problems that were inaccessible a few years ago. In this dissertation we study the evolution of some of these techniques, presenting the theory and simulation methods to study first-order phase transitions, with emphasis on state-of-the-art free-energy calculation (Reversible Scaling) and rare-event (Forward Flux Sampling) methods using the atomistic simulation technique of Molecular Dynamics. The evolution and efficiency improvement of these techniques is presented together with applications to simple systems that allow exact solution, as well as the more complex case of martensitic phase transitions. We also present the application of numerical methods to study Pauling's model of ice. We have developed and implemented a new algorithm for the efficient generation of disordered ice structures. This ice generator algorithm allows us to create ice Ih cells of sizes not reported before. Using this algorithm we address finite-size effects not studied before.
Master's degree in Physics.
6

Hewett, Angela Dawn. "Expecting the unexpected : to what extent does simulation help healthcare professionals prepare for rare, critical events during childbearing?" Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/15431/.

Abstract:
Pregnancy and childbirth present both rare and critical events for which healthcare professionals are required to acquire and maintain competent clinical skills. In theory, a skill demonstrated using simulation will transfer into practice competently and confidently; the strength of simulation appears to lie in its validity within the clinical context. Evidence shows that some professionals have difficulty responding appropriately to unexpected critical events, and there were therefore two main aims: 1) to learn more about how healthcare practitioners develop skills in order to prepare for and respond to rare, critical and emergency events (RCEE) during childbearing, and 2) to uncover healthcare practitioners' experiences of simulated practice. An explanatory sequential mixed-methods approach consisted of a quantitative systematic review combined with a framework analysis of curricula documentation. Subsequently, a conceptual framework of simulation was explored through qualitative inquiry with twenty-five healthcare professionals who care for childbearing women. Attribution theory proved useful in analysing these experiences. Findings illustrated the multifaceted and complex nature of preparation for RCEE. Simulation is useful when clinical exposure is reduced, offers the potential for practice in a safe environment, and can result in increased confidence, at least initially. In addition, teamwork, the development of expertise with experience, debriefing and governance procedures were motivational factors in preparedness. Realism of scenarios affected engagement if they were not associated with 'real life'; with an obstetric focus, simulation fidelity was less important, and when simulation was associated with play, this negatively influenced the value placed on it. The value of simulation is positioned in the ability to 'practise' within 'safe' parameters, and there is contradiction between this assumption and observed reality. Paradoxically, confidence in responding to RCEE was linked to clinical exposure rather than simulation, and was felt to decay over time, although the timeframe for diminution was unclear. Overwhelmingly, simulation was perceived as anxiety-provoking, and this affected engagement and learning. The data highlight ambiguity between the theoretical principles of simulation and their practical application.
7

Rai, Ajit. "Estimation de la disponibilité par simulation, pour des systèmes incluant des contraintes logistiques". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S105/document.

Abstract:
RAM (Reliability, Availability and Maintainability) analysis forms an integral part of the estimation of Life Cycle Costs (LCC) of passenger rail systems. These systems are highly reliable and involve complex logistics. Standard Monte Carlo simulations are useless for the efficient estimation of RAM metrics because system failures of these complex rail systems are rare events, so more efficient simulation techniques are needed. Importance Sampling (IS) is an advanced class of variance-reduction techniques that can overcome the limitations of standard simulation: IS can accelerate simulations, i.e., achieve lower variance in the estimation of RAM metrics for the same computational budget. However, IS involves changing the probability laws (change of measure) that drive the mathematical models of the systems during simulation, and the optimal IS change of measure is usually unknown, even though a perfect one (the zero-variance IS change of measure) theoretically exists. In this thesis, we focus on IS techniques applied to the estimation of two RAM metrics: reliability (for static networks) and steady-state availability (for dynamic systems), by finding and/or approximating the optimal IS change of measure in a rare-event context. The contribution of the thesis is broadly divided into two main axes: first, we propose an adaptation of the approximate zero-variance IS method to estimate the reliability of static networks, and show the application to real passenger rail systems; second, we propose a multilevel Cross-Entropy optimization scheme that can be used during pre-simulation to obtain CE-optimized IS rates for the transitions of Markovian Stochastic Petri Nets (SPNs), and use them in the main simulations to estimate the steady-state unavailability of highly reliable Markovian systems with complex logistics. Results from both methods show huge variance reduction and gain compared to MC simulations.
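For contrast with the approximate zero-variance scheme developed in the thesis, the sketch below shows the plainest form of importance sampling on a static-reliability toy: the classic five-link bridge network with highly reliable links, where failures are drawn at an inflated rate and corrected by the likelihood ratio. The network, the inflated rate q_sim = 0.3 and all names are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 1e-3                                  # true per-link failure probability
paths = [(0, 3), (1, 4), (0, 2, 4), (1, 2, 3)]   # s-t paths of the bridge network

def disconnected(up):
    # s and t are disconnected iff no s-t path has all of its links up
    return not any(all(up[e] for e in p) for p in paths)

def estimate(q_sim, n=200_000):
    up = rng.random((n, 5)) >= q_sim      # sample links with failure prob q_sim
    fail = ~up
    # per-sample likelihood ratio of the true model vs the sampling model
    w = (q / q_sim) ** fail.sum(axis=1) * ((1 - q) / (1 - q_sim)) ** up.sum(axis=1)
    hit = np.fromiter((disconnected(row) for row in up), dtype=bool, count=n)
    est = (w * hit).mean()
    rel_err = (w * hit).std() / (est * np.sqrt(n)) if est > 0 else float("inf")
    return est, rel_err

est_mc, _ = estimate(q)                   # w == 1; almost never sees a failure
est_is, re_is = estimate(0.3)             # inflate failures, reweight
print(f"crude MC : {est_mc:.2e}")
print(f"simple IS: {est_is:.2e} (rel. err. ~ {re_is:.1%})")
```

The exact unreliability here is dominated by the two size-2 minimal cuts, so it is about 2*q^2 = 2e-6: crude Monte Carlo with this budget observes almost no failures, while even this naive change of measure gives a usable estimate.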
8

Picciani, Massimiliano. "Rare events in many-body systems : reactive paths and reaction constants for structural transitions". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00706510.

Abstract:
This thesis addresses fundamental physical phenomena, with applications to materials of interest for nuclear energy. We developed methods for studying rare events corresponding to thermally activated structural transitions in N-body systems. The first method consists in numerically simulating the probability current associated with reactive paths. After deriving the evolution equations for the probability current, this current is sampled with a Diffusion Monte Carlo algorithm. This technique, called Transition Current Sampling, was applied to study structural transitions in a cluster of 38 atoms bound by a Lennard-Jones potential (LJ-38). A second algorithm, called Transition Path Sampling with local Lyapunov bias (LyTPS), was then developed. LyTPS computes reaction rates at finite temperature following transition state theory. A statistical bias derived from the maximum of the local Lyapunov exponents is introduced to accelerate the sampling of reactive trajectories. To recover equilibrium reaction constants from those obtained by LyTPS, the Multistate Bennett Acceptance Ratio is used. We again validated this method on the LJ-38 cluster. LyTPS was then used to compute the migration constants of vacancies and divacancies in α-iron, together with the associated migration entropy. These reaction constants serve as input parameters in kinetic modelling codes (First Passage Kinetic Monte Carlo) to numerically reproduce resistivity annealing in α-iron after irradiation.
9

Silva Lopes, Laura. "Méthodes numériques pour la simulation d'évènements rares en dynamique moléculaire". Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1045.

Abstract:
In stochastic dynamical systems, such as those encountered in molecular dynamics, rare events naturally appear as events due to low-probability stochastic fluctuations. Examples of rare events in everyday life include earthquakes and major floods. In chemistry, protein folding, ligand unbinding from a protein cavity, and the opening or closing of channels in cell membranes are examples of rare events. The simulation of rare events has been an important field of research in biophysics over the past thirty years. The events of interest in molecular dynamics generally involve transitions between metastable states, which are regions of the phase space where the system tends to stay trapped. These transitions are rare, making a naive, direct Monte Carlo method computationally impracticable. To deal with this difficulty, sampling methods have been developed to simulate rare events efficiently. Among them are splitting methods, which consist in dividing the rare event of interest into a sequence of nested, more likely events. Adaptive Multilevel Splitting (AMS) is a splitting method in which the positions of the intermediate interfaces, used to split reactive trajectories, are adapted on the fly. The surfaces are defined such that the probability of transition between them is constant, which minimizes the variance of the rare-event probability estimator. AMS is a robust method that requires only a small number of user-defined parameters, and is therefore easy to use. This thesis focuses on the application of the adaptive multilevel splitting method to molecular dynamics. Two kinds of systems are studied. The first consists of simple models that allowed us to improve the way AMS is used. The second consists of more realistic and challenging systems, where AMS is used to gain a better understanding of the molecular mechanisms. Hence, the contributions of this thesis include both methodological and numerical results. We first validate the AMS method by applying it to the paradigmatic alanine dipeptide conformational change. We then propose a new technique combining AMS and importance sampling to efficiently sample the ensemble of initial conditions when using AMS to obtain the transition time. This is validated on a simple one-dimensional problem, and our results show its potential for applications to complex multidimensional systems. A new way to identify reaction mechanisms is also proposed in this thesis: it consists in applying clustering techniques to the ensemble of reactive trajectories generated by the AMS method. The implementation of the AMS method for NAMD was improved during this thesis work; in particular, this manuscript includes a tutorial on how to use AMS in NAMD. The AMS method allowed us to study two complex molecular systems. The first is an analysis of the influence of the water model (TIP3P and TIP4P/2005) on the β-cyclodextrin and ligand unbinding process. In the second, we apply the AMS method to sample unbinding trajectories of a ligand from the N-terminal domain of the Hsp90 protein.
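The AMS mechanics can be condensed into a few lines for a one-dimensional overdamped Langevin particle in a double well, with the position itself as the reaction coordinate and one killed replica per iteration (the "last particle" variant). Everything below (potential, parameters, helper names) is an illustrative sketch, not the thesis's NAMD implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, dt = 4.0, 1e-3
x0, xA, xB = -0.8, -1.0, 1.0           # start, "back to A" level, target level B

def gradV(x):                           # double well V(x) = (x^2 - 1)^2
    return 4 * x * (x**2 - 1)

def simulate(x):
    """Euler-Maruyama path from x until absorption at xA or xB."""
    xs = [x]
    while xA < x < xB:
        x += -gradV(x) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal()
        xs.append(x)
    return np.array(xs)

N = 100
paths = [simulate(x0) for _ in range(N)]
K = 0                                   # number of kill-and-branch iterations
while True:
    maxima = np.array([p.max() for p in paths])
    worst = int(maxima.argmin())
    z = maxima[worst]
    if z >= xB:                         # every replica has reached B: done
        break
    K += 1
    # branch the worst replica from a random survivor, above level z
    # (exact ties between maxima have probability zero for this dynamics)
    donor = paths[rng.choice([i for i in range(N) if i != worst])]
    cut = int(np.argmax(donor > z))     # first index where the donor exceeds z
    paths[worst] = np.concatenate([donor[:cut + 1], simulate(donor[cut])[1:]])

p_hat = (1 - 1 / N) ** K
print(f"AMS estimate of P(reach B before A | start {x0}): {p_hat:.2e} (K={K})")
```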
10

Saggadi, Samira. "Simulation d'évènements rares par Monte Carlo dans les réseaux hautement fiables". Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S055.

Abstract:
Network reliability determination is an NP-hard problem. For instance, in telecommunications one may want to evaluate the probability that a selected group of nodes can communicate. In this case, a set of disconnected nodes can have critical consequences, whether financial or security-related. A precise estimation of the reliability is therefore needed. In this work, we are interested in the study and computation of the reliability of highly reliable networks. In this case the unreliability is very small, which makes the standard Monte Carlo approach useless, because it requires a very large number of iterations. To estimate network reliability well at minimum cost, we have developed new simulation techniques based on variance reduction using importance sampling.
11

Lestang, Thibault. "Numerical simulation and rare events algorithms for the study of extreme fluctuations of the drag force acting on an obstacle immersed in a turbulent flow". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN049/document.

Abstract:
This thesis discusses the numerical simulation of extreme fluctuations of the drag force acting on an object immersed in a turbulent medium. Because such fluctuations are rare events, they are particularly difficult to investigate by means of direct sampling: such an approach requires simulating the dynamics over extremely long durations. In this work an alternative route is introduced, based on rare-event algorithms. The underlying idea of such algorithms, which originate in statistical physics, is to modify the sampling statistics so as to favour rare trajectories of the dynamical system of interest. These techniques have recently led to impressive results for relatively simple dynamics, but it is not yet clear whether such algorithms are useful for complex deterministic dynamics, such as turbulent flows. This thesis focuses on the study of both the dynamics and the statistics of extreme fluctuations of the drag experienced by a square cylinder mounted in a two-dimensional channel flow. This simple framework allows for very long simulations of the dynamics, and thus the sampling of a large number of fluctuations with an amplitude large enough for them to be considered extreme. Subsequently, the application of two different rare-event algorithms is presented and discussed. In the first case, a drastic reduction of the computational cost required to sample configurations resulting in extreme fluctuations is achieved. Furthermore, several difficulties related to the flow dynamics are highlighted, paving the way to novel approaches specifically designed for turbulent flows.
12

Poma, Bernaola Adolfo Maximo. "Simulações atomísticas de eventos raros através de Transition Path Sampling". [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/278438.

Abstract:
Advisor: Maurice de Koning
Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
In this dissertation we address one of the limitations of atomistic simulation, the rare-event problem, which is responsible for its limited accessible time scales. Examples of problems involving rare events are protein folding, conformational changes in molecules, chemical reactions (in solution), diffusion in solids, and nucleation processes in first-order phase transitions, among others. Conventional methods such as Molecular Dynamics (MD) or Monte Carlo (MC) are useful for exploring the potential energy landscape of very complex systems, but in the presence of rare events they become very inefficient, due to the lack of statistics in sampling the event. These methods spend much computational time sampling irrelevant configurations rather than the transition of interest. In this sense, the Transition Path Sampling (TPS) method, developed by D. Chandler and his collaborators, can explore the potential energy landscape and obtain a set of true dynamical trajectories that connect the metastable states in the presence of rare events. From this ensemble of trajectories the rate constant and the mechanism of reaction can be extracted with great success. In this work we implemented the TPS method and carried out a quantitative comparison with the configurational MC method on a standard problem, the isomerization of a diatomic molecule immersed in a Weeks-Chandler-Andersen (WCA) repulsive fluid. The application of these methods showed how the environment, in the form of a solvent, can affect the kinetics of a rare event.
Master's degree in Physics (Statistical Physics and Thermodynamics).
13

Sallin, Mathieu. "Approche probabiliste du diagnostic de l'état de santé des véhicules militaires terrestres en environnement incertain". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC099.

Abstract:
This thesis is a contribution to the structural health analysis of the body of wheeled ground military vehicles. Belonging to the 20-30 tonne range, such vehicles are deployed in a variety of operational contexts where driving conditions are severe and difficult to characterize. In addition, due to growing industrial competition, the mobility function of the vehicles is acquired from suppliers and is no longer developed by Nexter Systems; as a result, the complete definition of this function is unknown. In this context, the main objective of the thesis is to analyze the health of the vehicle body using a probabilistic approach, in order to master the calculation techniques that account for the intrinsic randomness of the loads arising from the diverse uses of ground military vehicles. In particular, the most relevant strategies for propagating terrain-induced uncertainties through a vehicle dynamics model are defined. This work describes how a quantity of interest measured on the vehicle can be exploited to assess reliability with respect to a given damage criterion. An application on a demonstrator entirely designed by Nexter Systems illustrates the proposed approach.
14

Jegourel, Cyrille. "Rare event simulation for statistical model checking". Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.

Abstract:
In this thesis, we consider two problems that statistical model checking must cope with. The first concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second concerns rare properties, which are difficult to observe and hence to quantify. On the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows the stochastic abstraction of components and eliminates non-determinism. This double effect reduces the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present an original importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the same goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different: the first tackles the problem through the model, the second through the properties.
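The importance-splitting idea that this thesis builds on fits in a few lines on a gambler's-ruin toy: a random walk with negative drift rarely reaches a high level before ruin, and fixed-level splitting rewrites that rare probability as a product of more tractable conditional probabilities. This fixed-effort variant with hand-picked levels is a generic sketch, not the thesis's optimal algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
p_up, ruin, levels, N = 0.3, -1, [3, 6, 9, 12, 15], 5000

def run_to(target, x):
    """Walk from x until it reaches target (success) or ruin (failure)."""
    while ruin < x < target:
        x += 1 if rng.random() < p_up else -1
    return x

starts = np.zeros(N, dtype=int)             # all replicas start at 0
p_hat = 1.0
for L in levels:
    ends = np.array([run_to(L, x) for x in starts])
    hits = ends == L
    p_hat *= hits.mean()                    # conditional probability of this stage
    if not hits.any():
        p_hat = 0.0
        break
    starts = rng.choice(ends[hits], size=N) # fixed effort: restart from survivors

# gambler's-ruin closed form for reaching 15 before -1, starting from 0
r = (1 - p_up) / p_up
exact = (r - 1) / (r**16 - 1)
print(f"splitting: {p_hat:.2e}   exact: {exact:.2e}")
```

Because the walk is integer-valued, every stage restarts exactly at the level just reached, so the product of stage estimates is an unbiased estimate of a probability of order 1e-6 that crude Monte Carlo with the same budget would almost always estimate as zero.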
15

Walter, Clément. "Using Poisson processes for rare event simulation". Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC304/document.

Abstract:
This thesis addresses the issue of extreme-event simulation. Starting from an original analysis of splitting methods, a new theoretical framework is proposed, independent of any particular algorithm. This framework is based on a point process associated with any real-valued random variable, and allows probability, quantile and moment estimators to be defined without any hypothesis on this random variable. The artificial selection of thresholds in splitting vanishes, and the estimator of the probability of exceeding a threshold is in fact an estimator of the whole cumulative distribution function up to the given threshold. These estimators are based on the simulation of independent and identically distributed replicas of the point process, so they allow for the use of massively parallel computer clusters; suitable practical algorithms are also proposed. Finally, it can happen that these advanced statistics still require too many samples. In this context, the computer code is considered as a random process with known distribution. The point-process framework makes it possible to handle this additional source of uncertainty and to easily estimate the conditional expectation and variance of the resulting random variable. It also defines new SUR enrichment criteria designed for estimating extreme-event probabilities.
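The point-process view admits a compact "last particle" implementation: every kill-and-resample event is one point of the process, and stopping once all particles exceed the threshold q gives the tail-probability estimator (1 - 1/N)^M, where M is the number of events. The sketch below, for a standard normal variable with a shaking-type Metropolis kernel for the conditional resampling, is illustrative; the finite number of MCMC moves introduces a small bias that the idealized estimator does not have.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(5)
N, q, rho, n_mcmc = 100, 4.0, 0.9, 20    # particles, threshold, shaker, moves

Y = rng.standard_normal(N)
M = 0                                    # number of kill-and-resample events
while Y.min() < q:
    i = int(Y.argmin())
    level = Y[i]
    # restart the killed particle from a random survivor, then apply a few
    # Metropolis moves that leave N(0,1) restricted to (level, inf) invariant
    y = Y[rng.choice([j for j in range(N) if j != i])]
    for _ in range(n_mcmc):
        prop = rho * y + sqrt(1 - rho**2) * rng.standard_normal()
        if prop > level:
            y = prop
    Y[i] = y
    M += 1

p_hat = (1 - 1 / N) ** M
print(f"last-particle estimate: {p_hat:.2e}   exact: {0.5 * erfc(q / sqrt(2)):.2e}")
```

Note that the same run also estimates P(Y > y) for every intermediate level y crossed along the way, which is exactly the "whole cumulative distribution function up to the threshold" property described above.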
16

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo". Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic tail behaviour of the sum and of the maximum of a heavy-tailed random sum is identical; we therefore compare the random sum and the maximum for various distributions, to investigate from which point on the asymptotic approximation can be used. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
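The "from which point" question can be probed directly: for subexponential increments, the tail of the random sum, the tail of its maximum, and E[N]*P(Y1 > x) should all agree for large x (the single-big-jump heuristic). The following sketch, with a Poisson number of Pareto increments, is a generic illustration rather than the thesis's experiment.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, lam, n_sim, x = 1.5, 5.0, 200_000, 50.0

counts = rng.poisson(lam, n_sim)
def pareto(k):
    # Pareto(alpha) on [1, inf): P(Y > y) = y**(-alpha), by inverse CDF
    return rng.random(k) ** (-1 / alpha)

sums = np.zeros(n_sim)
maxs = np.zeros(n_sim)
for i in range(n_sim):                  # explicit loop for clarity, not speed
    if counts[i] > 0:
        y = pareto(counts[i])
        sums[i], maxs[i] = y.sum(), y.max()

print("P(S_N > x) empirical :", (sums > x).mean())
print("P(max > x) empirical :", (maxs > x).mean())
print("E[N] * P(Y1 > x)     :", lam * x ** (-alpha))
```

Rerunning with smaller x shows where the three quantities start to diverge, which is the practical question of when the asymptotic approximation is safe to use.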
17

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo". Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution given the rare event. The first problem considers a random walk Y_1 + ... + Y_n exceeding a high threshold, where the increments Y are independent, identically distributed and heavy-tailed. The second problem extends the first to a heavy-tailed random sum Y_1 + ... + Y_N exceeding a high threshold, where the number of increments N is random and independent of Y_1, Y_2, .... The third problem considers the solution X_m to a stochastic recurrence equation, X_m = A_m X_{m-1} + B_m, exceeding a high threshold, where the innovations B are independent, identically distributed and heavy-tailed, and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of the thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution given the rare event. The first problem considers a random walk Y_1 + ... + Y_n exceeding a high threshold, where the increments Y are independent, identically distributed and light-tailed. The second problem considers a discrete-time Markov chain and the computation of general expectations of its sample paths related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first-passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
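A sketch in the spirit of the first paper's estimator, for p = P(Y_1 + ... + Y_n > c) with Pareto increments: a Gibbs sampler targets the conditional law given the rare event, and p is recovered from the chain's visit frequency to a subset B whose unconditional probability is known in closed form, here B = {max_i Y_i > c} (a subset of the event, since the increments are positive). The choice of B, the parameters and the absence of burn-in are illustrative simplifications of the papers' constructions.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, n, c, T = 2.0, 10, 200.0, 100_000   # tail index, summands, threshold, sweeps

def pareto_above(t):
    """Pareto(alpha) on [1, inf) conditioned on exceeding t (any t)."""
    return max(t, 1.0) * rng.random() ** (-1 / alpha)

Y = np.ones(n)
Y[0] = c + 1.0                             # start inside the event {sum(Y) > c}
in_B = np.empty(T, dtype=bool)
for t in range(T):
    i = rng.integers(n)
    rest = Y.sum() - Y[i]
    Y[i] = pareto_above(c - rest)          # full conditional keeps sum(Y) > c
    in_B[t] = Y.max() > c

pB = 1 - (1 - c**-alpha) ** n              # exact P(B) under the original law
p_hat = pB / in_B.mean()                   # P(A) = P(B) / P(B | A)
print(f"MCMC estimate of p      : {p_hat:.2e}")
print(f"single-big-jump n*P(Y>c): {n * c**-alpha:.2e}")
```

Because heavy-tailed exceedances are typically caused by one big jump, the chain spends most of its time in B, so the visit-frequency estimate in the denominator is well behaved.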

18

Phinikettos, Ioannis. "Monitoring in survival analysis and rare event simulation". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9518.

Abstract:
Monte Carlo methods are a fundamental tool in many areas of statistics. In this thesis, we will examine these methods, especially for rare event simulation. We are mainly interested in the computation of multivariate normal probabilities and in constructing hitting thresholds in survival analysis models. Firstly, we develop an algorithm for computing high dimensional normal probabilities. These kinds of probabilities are a fundamental tool in many statistical applications. The new algorithm exploits the diagonalisation of the covariance matrix and uses various variance reduction techniques. Its performance is evaluated via a simulation study. The new method is designed for computing small exceedance probabilities. Secondly, we introduce a new omnibus cumulative sum chart for monitoring in survival analysis models. By omnibus we mean that it is able to detect any change. This chart exploits the absolute differences between the Kaplan-Meier estimator and the in-control distribution over specific time intervals. A simulation study is presented that evaluates the performance of our proposed chart and compares it to existing methods. Thirdly, we apply the method of adaptive multilevel splitting for the estimation of hitting probabilities and hitting thresholds for the survival analysis cumulative sum charts. Simulation results are presented evaluating the benefits of adaptive multilevel splitting. Finally, we extend the idea of adaptive multilevel splitting by estimating not just a hitting probability, but the whole distribution function up to a certain point. A theoretical result is proved that is used to construct confidence bands for the distribution function conditioned on lying in a closed interval.
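For small Gaussian exceedance probabilities of the kind the first algorithm targets, the simplest variance-reduction baseline is mean-shift importance sampling: draw from the Gaussian re-centred at the dominating point of the rare corner and reweight by the exact likelihood ratio. The sketch below (equicorrelated covariance, shift to the corner u·1, which is the dominating point in this symmetric case) is a generic illustration, not the diagonalisation-based algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(8)
d, u, n = 5, 3.0, 100_000
Sigma = 0.5 * np.eye(d) + 0.5            # equicorrelated covariance, rho = 0.5
L = np.linalg.cholesky(Sigma)
Sinv = np.linalg.inv(Sigma)

mu = np.full(d, u)                       # shift the sampling mean into the corner
X = mu + rng.standard_normal((n, d)) @ L.T
hit = (X > u).all(axis=1)
# log of the likelihood ratio N(0, Sigma) / N(mu, Sigma) at each sample
logw = -X @ Sinv @ mu + 0.5 * mu @ Sinv @ mu
print(f"IS estimate of P(min_i X_i > {u}): {np.mean(np.exp(logw) * hit):.2e}")

hit0 = ((rng.standard_normal((n, d)) @ L.T) > u).all(axis=1)
print(f"crude MC with the same budget    : {hit0.mean():.2e}")
```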
19

Bastide, Dorinel-Marian. "Handling derivatives risks with XVAs in a one-period network model". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM027.

Abstract:
Finance regulators require banking institutions to be able to conduct regular scenario analyses (stress tests) of their exposures, in particular towards clearing houses (CCPs) to which they are largely exposed, by applying market shocks to capture market risk and economic shocks driving some financial players into bankruptcy, known as the default state, to reflect both credit and counterparty risks. By interposing themselves between financial actors, CCPs serve mainly to limit the counterparty risk due to contractual payment failures following one or several defaults among the engaged parties. They also facilitate the various financial flows of trading activities even in the event of default of one or more of their members, by re-arranging certain positions and allocating any loss that could materialize following these defaults to the surviving members. To develop a relevant view of risks and ensure effective capital-steering tools, it is essential for banks to understand comprehensively the losses and liquidity needs caused by these various shocks within these financial networks, as well as the underlying mechanisms. This thesis project tackles the modelling issues that answer those needs, which are at the heart of risk management practices for banks in centrally cleared trading environments. We begin by defining a one-period static model reflecting the heterogeneous market positions and the possible joint defaults of multiple financial players, whether CCP members or other financial participants, in order to identify the different costs, known as XVAs, generated by both clearing and bilateral activities, with explicit formulas for these costs. Various use cases of this modelling framework are illustrated with examples of stress-test exercises on financial networks from a member's point of view, and of the novation of the portfolios of defaulted CCP members to the other surviving members. Fat-tailed distributions are favoured to generate portfolio losses and defaults, with the application of very high-dimensional Monte Carlo methods along with numerical uncertainty quantification. We also expand on the novation of the portfolios of defaulted members and the associated transfers of XVA costs. These novations can be carried out either on exchanges or by the CCPs themselves, which identify the optimal buyers or auction off the defaulted positions, with dedicated economic equilibrium problems. Defaults of members on several common CCPs also lead to the formulation and resolution of multidimensional risk-transfer optimization problems, which are introduced in this thesis.
20

Zhang, Benjamin Jiahong. "A coupling approach to rare event simulation via dynamic importance sampling". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112384.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
Rare event simulation involves using Monte Carlo methods to estimate probabilities of unlikely events and to understand the dynamics of a system conditioned on a rare event. An established class of algorithms based on large deviations theory and control theory constructs provably asymptotically efficient importance sampling estimators. Dynamic importance sampling is one of these algorithms, in which the choice of biasing distribution adapts in the course of a simulation according to the solution of an Isaacs partial differential equation, or by solving a sequence of variational problems. However, obtaining the solution of either problem may be expensive, sometimes more expensive than performing simple Monte Carlo exhaustively. Deterministic couplings induced by transport maps allow one to relate a complex probability distribution of interest to a simple reference distribution (e.g. a standard Gaussian) through a monotone, invertible function, diverting the complexity of the distribution of interest into the transport map. We extend the notion of transport maps between probability distributions on Euclidean space to probability distributions on path space, following a procedure similar to Itô's coupling. The contraction principle is a key concept from large deviations theory that allows one to relate the large deviations principles of different systems through deterministic couplings. We show that, with the ability to computationally construct transport maps, we can leverage the contraction principle to reformulate the sequence of variational problems required to implement dynamic importance sampling and make the computation more tractable. We apply this approach to simple rotorcraft models. We conclude by outlining future directions of research, such as using the coupling interpretation to accelerate rare event simulation via particle splitting, using transport maps to learn large deviations principles, and accelerating the inference of rare events.
21

Lamers, Eugen. "Contributions to Simulation Speed-up : Rare Event Simulation and Short-term Dynamic Simulation for Mobile Network Planning /". Wiesbaden : Vieweg + Teubner in GWV Fachverlage, 2008. http://d-nb.info/988382016/04.

22

Lamers, Eugen. "Contributions to simulation speed-up rare event simulation and short-term dynamic simulation for mobile network planning". Wiesbaden Vieweg + Teubner, 2007. http://d-nb.info/988382016/04.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Gudmundsson, Thorbjörn. "Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings". Licentiate thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134624.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Dean, Thomas Anthony. "A subsolutions approach to the analysis and implementation of splitting algorithms in rare event simulation". View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318304.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Turati, Pietro. "Méthodes de simulation adaptative pour l’évaluation des risques de système complexes". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC032/document.

Testo completo
Abstract (sommario):
Risk assessment is conditioned on the knowledge and information available at the moment of the analysis. Modeling and simulation are ways to explore and understand system behavior, identify critical scenarios and avoid surprises. A number of simulations of the model are run with different initial and operational conditions to identify scenarios leading to critical consequences and to estimate their probabilities of occurrence. For complex systems, the simulation models can be: i) high-dimensional; ii) black-box; iii) dynamic; and iv) computationally expensive to run, preventing the analyst from running the simulations for the multiple conditions that need to be considered. The present thesis introduces advanced frameworks for simulation-based risk assessment. The methods developed within these frameworks take care to limit the computational cost required by the analysis, in order to keep them scalable to complex systems. In particular, all the proposed methods share the powerful idea of automatically focusing and adaptively driving the simulations towards those conditions that are of interest for the analysis, i.e., those that yield risk-oriented information. The advantages of the proposed methods have been shown on different applications including, among others, a gas transmission subnetwork, a power network and the Advanced Lead Fast Reactor European Demonstrator (ALFRED)
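The adaptive-focusing idea can be caricatured in a few lines: sample batches of conditions, rank them with a (here hypothetical) consequence model, and recentre the search distribution on the most critical scenarios.

```python
# Minimal adaptive-focusing loop: recentre a search distribution on the
# scenarios with the most critical (hypothetical) consequences.
import numpy as np

rng = np.random.default_rng(2)

def consequence(x):
    # Stand-in for an expensive simulator; peak at a "critical" condition.
    return np.exp(-np.sum((x - np.array([2.5, -1.0]))**2, axis=1))

mean, std = np.zeros(2), 2.0 * np.ones(2)
for _ in range(10):
    batch = rng.normal(mean, std, size=(200, 2))   # candidate conditions
    scores = consequence(batch)
    elite = batch[np.argsort(scores)[-20:]]        # most critical 10%
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("estimated critical region centre:", mean)
```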
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Yu, Xiaomin. "Simulation Study of Sequential Probability Ratio Test (SPRT) in Monitoring an Event Rate". University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1244562576.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Razaaly, Nassim. "Rare Event Estimation and Robust Optimization Methods with Application to ORC Turbine Cascade". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX027.

Testo completo
Abstract (sommario):
This thesis aims to formulate innovative Uncertainty Quantification (UQ) methods for both Robust Optimization (RO) and Reliability-Based Design Optimization (RBDO) problems. The targeted application is the optimization of supersonic turbines used in Organic Rankine Cycle (ORC) power systems. Typical energy sources for ORC power systems feature variable heat load and turbine inlet/outlet thermodynamic conditions. The use of organic compounds with a heavy molecular weight typically leads to turbine configurations featuring supersonic flows and shocks, which grow in relevance in the aforementioned off-design conditions; these features also depend strongly on the local blade shape, which can be influenced by the geometric tolerances of blade manufacturing. A consensus exists about the necessity to include these uncertainties in the design process, requiring fast UQ methods and a comprehensive tool for performing shape optimization efficiently. This work is decomposed into two main parts. The first addresses the problem of rare event estimation, proposing two original methods for failure probability (metaAL-OIS and eAK-MCS) and one for quantile computation (QeAK-MCS). The three methods rely on surrogate-based (Kriging) adaptive strategies, aiming at refining the so-called Limit-State Surface (LSS) directly, unlike Subset Simulation (SS) derived methods; the latter consider intermediate thresholds associated with intermediate LSSs to be refined. This direct refinement property is of crucial importance since it enables the coupling of the developed methods with existing RBDO algorithms. Note that the proposed algorithms are not subject to restrictive assumptions on the LSS (unlike the well-known FORM/SORM), such as the number of failure modes, but need to be formulated in the standard space. The eAK-MCS and QeAK-MCS methods are derived from the AK-MCS method and inherit a parallel adaptive sampling based on weighted K-Means. MetaAL-OIS features a more elaborate sequential refinement strategy based on MCMC samples drawn from a quasi-optimal Importance Sampling Density (ISD). It additionally proposes the construction of a Gaussian mixture ISD, permitting the accurate estimation of small failure probabilities when a large number of evaluations (several millions) is tractable, as an alternative to SS. The three methods are shown to perform very well on 2D to 8D analytical examples popular in the structural reliability literature, some featuring several failure modes, all subject to very small failure probability/quantile levels. Accurate estimations are obtained in the considered cases using a reasonable number of calls to the performance function. The second part of this work tackles original Robust Optimization (RO) methods applied to the shape design of a supersonic ORC turbine cascade. A comprehensive Uncertainty Quantification (UQ) analysis accounting for operational, fluid-parameter and geometric (aleatoric) uncertainties is illustrated, providing a general overview of the impact of multiple effects and constituting a preliminary study necessary for RO. Then, several mono-objective RO formulations under a probabilistic constraint are considered in this work, including the minimization of the mean or of a high quantile of the objective function. A critical assessment of the (robust) optimal designs is finally investigated
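For readers unfamiliar with AK-MCS-style strategies, here is a condensed sketch of the generic adaptive Kriging loop (assuming scikit-learn and a toy 2D performance function; this is the standard AK-MCS recipe, not the thesis' eAK-MCS): fit a Gaussian process on a small design, add the Monte Carlo point whose limit-state sign is most uncertain according to the learning function U = |mu|/sigma, and stop once min U exceeds 2.

```python
# Condensed AK-MCS-style loop (assumes scikit-learn; toy 2D example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def g(x):                                   # failure when g(x) < 0
    return 3.0 - (x[:, 0]**2 + x[:, 1]) / np.sqrt(2.0)

X_mc = rng.standard_normal((20_000, 2))     # Monte Carlo population
X = rng.standard_normal((12, 2))            # initial design of experiments
y = g(X)

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)   # learning function
    if U.min() > 2.0:                           # usual stopping criterion
        break
    x_new = X_mc[np.argmin(U)]                  # refine closest to the LSS
    X = np.vstack([X, x_new])
    y = np.append(y, g(x_new[None, :]))

print("pf estimate:", (mu < 0).mean(), "using", len(X), "calls to g")
```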
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Uribe-Castillo, Felipe [Verfasser], Daniel [Akademischer Betreuer] Straub, Youssef M. [Gutachter] Marzouk, Daniel [Gutachter] Straub e Elisabeth [Gutachter] Ullmann. "Bayesian analysis and rare event simulation of random fields / Felipe Uribe-Castillo ; Gutachter: Youssef M. Marzouk, Daniel Straub, Elisabeth Ullmann ; Betreuer: Daniel Straub". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1232406139/34.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Reinhardt, Aleks. "Computer simulation of the homogeneous nucleation of ice". Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:9ec0828b-df99-42e1-8694-14786d7578b9.

Testo completo
Abstract (sommario):
In this work, we wish to determine the free energy landscape and the nucleation rate associated with the process of homogeneous ice nucleation. To do this, we simulate the homogeneous nucleation of ice with the mW monatomic model of water and with all-atom models of water using primarily the umbrella sampling rare event method. We find that the use of the mW model of water, which has simpler dynamics compared to all-atom models of water, but is nevertheless surprisingly good at reproducing experimental data, results in very reasonable agreement with classical nucleation theory, in contrast to some previous simulations of homogeneous ice nucleation. We suggest that previous simulations did not observe the lowest free energy pathway in order parameter space because of their use of global order parameters, leading to a deviation from classical nucleation theory predictions. Whilst monatomic water can nucleate reasonably quickly, all-atom models of water are considerably more difficult to simulate, primarily because of their slow dynamics of ice growth and the fact that standard order parameters do not work well in driving nucleation when such models are being used. In this thesis, we describe a local, rotationally invariant order parameter that is capable of growing ice homogeneously in a biased simulation without the unnatural effects introduced by global order parameters, and without leading to non-physical chain-like growth of 'ice' clusters that results from a naïve implementation of the standard Steinhardt-Ten Wolde order parameter. We have successfully used this order parameter to force the growth of ice clusters in simulations of all-atom models of water. However, although ice growth can be achieved, equilibrating simulations with all-atom models of water is extremely difficult. We describe several approaches to speeding up the equilibration in all-atom models of water to enable the computation of free energy profiles for homogeneous ice nucleation.
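A one-dimensional caricature of umbrella sampling conveys the mechanics: bias a Metropolis walk with a harmonic window on the order parameter, then unbias the sampled histogram to recover the free energy up to a constant. The double-well potential below stands in for a nucleation order parameter and is purely illustrative.

```python
# 1D umbrella-sampling caricature: harmonic window on the barrier of a
# double well, then unbias F(x) = -kT ln p_b(x) - w(x) (up to a constant).
import numpy as np

rng = np.random.default_rng(4)
kT, k_umb, x0 = 1.0, 10.0, 0.0

def U(x): return (x**2 - 1.0)**2              # double well, barrier at 0
def w(x): return 0.5 * k_umb * (x - x0)**2    # umbrella bias

x, samples = -1.0, []
for _ in range(200_000):                      # Metropolis on exp(-(U+w)/kT)
    xp = x + rng.normal(0.0, 0.1)
    if rng.random() < np.exp(-(U(xp) + w(xp) - U(x) - w(x)) / kT):
        x = xp
    samples.append(x)

hist, edges = np.histogram(samples[20_000:], bins=60, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
keep = hist > 0
F = -kT * np.log(hist[keep]) - w(centres[keep])   # unbiased free energy
print("barrier located near x =", centres[keep][np.argmax(F)])
```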
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Estecahandy, Maïder. "Méthodes accélérées de Monte-Carlo pour la simulation d'événements rares. Applications aux Réseaux de Petri". Thesis, Pau, 2016. http://www.theses.fr/2016PAUU3008/document.

Testo completo
Abstract (sommario):
The dependability analysis of safety instrumented systems is an important industrial concern. To be able to carry out such safety studies, TOTAL has been developing the dependability software GRIF since the eighties. To take into account the increasing complexity of the operating context of its safety equipment, TOTAL is more frequently led to use the engine MOCA-RP of the GRIF Simulation package. Indeed, MOCA-RP allows one to estimate quantities associated with complex aging systems modeled in Petri nets thanks to standard Monte Carlo (MC) simulation. Nevertheless, deriving accurate estimators, such as the system unavailability, on very reliable systems involves rare event simulation, which requires very long computing times with MC. The common fast Monte Carlo methods do not seem appropriate to address this issue: many of them were originally defined to improve only the estimate of the unreliability and/or are well-suited only for Markovian processes. Therefore, the work accomplished in this thesis pertains to the development of acceleration methods adapted to the problem of performing safety studies modeled in Petri nets and estimating, in particular, the unavailability. More specifically, we propose the Extension of the "Méthode de Conditionnement Temporel" to accelerate the individual failure of the components, and we introduce the Dissociation Method as well as the Truncated Fixed Effort Method to increase the occurrence of their simultaneous failures. Then, we combine the first technique with the two other ones, and we also associate them with the Randomized Quasi-Monte Carlo method. Through different sensitivity studies and benchmark experiments, we assess the performance of the acceleration methods and observe a significant improvement of the results compared with MC. Furthermore, we discuss the choice of the confidence interval method to be used when considering rare event simulation, which is an unfamiliar topic in the field of dependability. Last, an application to an industrial case illustrates the potential of our solution methodology
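On the confidence-interval point raised above, the sketch below contrasts the normal-approximation interval with the exact Clopper-Pearson interval for a rare event; the counts are made up.

```python
# Rare-event confidence intervals: normal approximation vs Clopper-Pearson.
import numpy as np
from scipy.stats import beta

n, k = 1_000_000, 12          # 12 failures seen in 1e6 histories (made up)
p_hat = k / n
half = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / n)
print("normal approximation:", (p_hat - half, p_hat + half))

lower = beta.ppf(0.025, k, n - k + 1)      # exact 95% bounds
upper = beta.ppf(0.975, k + 1, n - k)
print("Clopper-Pearson     :", (lower, upper))
```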
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Aboutaleb, Adam. "Empirical study of the effect of stochastic variability on the performance of human-dependent flexible flow lines". Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/12103.

Testo completo
Abstract (sommario):
Manufacturing systems have developed both physically and technologically, allowing production of innovative new products with shorter lead times to meet 21st-century market demand. Flexible flow lines, for instance, use flexible entities to generate multiple product variants using the same routing. However, the variability within the flow line is asynchronous and stochastic, causing disruptions to the throughput rate. Current autonomous variability control approaches decentralise the autonomous decision, allowing quick response in a dynamic environment. However, they have limitations, e.g., no certainty that the decision is globally optimal, and applicability to only a limited set of decisions. This research presents a novel formula-based autonomous control method centred on an empirical study of the effect of stochastic variability on the performance of flexible human-dependent serial flow lines. At the process level, a normal distribution was used, and generic nonlinear terms were then derived to represent the asynchronous variability at the flow-line level. These terms were shortlisted based on their impact on the throughput rate and used to develop the formula using data-mining techniques. The developed standalone formulas for the throughput rate of synchronous and asynchronous human-dependent flow lines gave steady and accurate results, better than the closest rivals, across a wide range of test data sets. Validation with continuous data from a real-world case study gave a mean absolute percentage error of 5%. The formula-based autonomous control method quantifies the impact of changes in decision variables, e.g., routing, arrival rate, etc., on the global delivery performance target, i.e., throughput, and recommends the optimal decisions independently of the performance measures of the current state. This approach gives robust decisions using pre-identified relationships and targets a wider range of decision variables. The performance of the developed autonomous control method was successfully validated for process, routing and product decisions using a standard 3x3 flexible flow line model and the real-world case study. The method was able to consistently reach the optimal decisions that improve local and global performance targets, i.e., throughput, queues and utilisation efficiency, in both static and dynamic situations. For the case of parallel processing, which the formula cannot handle, a hybrid autonomous control method, integrating the formula-based method and an existing autonomous control method, i.e., QLE, was developed and validated.
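A minimal sketch of the kind of stochastic flow-line experiment underlying such studies: an asynchronous serial line with normally distributed process times and (for simplicity, unlike the human-dependent lines studied here) infinite buffers, with throughput obtained from the classical tandem departure-time recursion.

```python
# Throughput of an asynchronous serial line with normal process times,
# via the tandem departure-time recursion (infinite buffers assumed).
import numpy as np

rng = np.random.default_rng(5)
n_jobs, n_stations = 50_000, 3
S = np.maximum(rng.normal(1.0, 0.3, size=(n_stations, n_jobs)), 0.01)

D = np.zeros((n_stations, n_jobs))           # departure times
for i in range(n_stations):
    for k in range(n_jobs):
        ready = D[i - 1, k] if i > 0 else 0.0    # job arrives from upstream
        free = D[i, k - 1] if k > 0 else 0.0     # station finishes prior job
        D[i, k] = max(ready, free) + S[i, k]

print("throughput ~", n_jobs / D[-1, -1], "jobs per time unit")
```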
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Lagnoux, Agnès. "Analyse des modeles de branchement avec duplication des trajectoires pour l'étude des événements rares". Toulouse 3, 2006. http://www.theses.fr/2006TOU30231.

Testo completo
Abstract (sommario):
This thesis deals with the splitting method, first introduced in rare event analysis in order to speed up simulation. In this technique, the sample paths are split into R multiple copies at various stages during the simulation. At fixed cost, optimization of the algorithm suggests taking the transition probabilities between stages equal to some constant and resampling a number of subtrials equal to the inverse of that constant, a number which may be non-integer and even unknown, only estimated. First, we study the sensitivity of the relative error between the probability of interest P(A) and its estimator depending on the strategy that makes the resampling numbers integers. Then, since in practice the transition probabilities are generally unknown (and so are the optimal resampling numbers), we propose a two-step algorithm to face that problem. Several numerical applications and comparisons with other models are proposed
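A compact fixed-effort splitting sketch illustrates the mechanism: at each intermediate level the successful entrance states are resampled into a fixed number of copies, and the rare-event probability is estimated as the product of stage-wise hit fractions. The random-walk model and levels are illustrative.

```python
# Fixed-effort splitting: probability that a negative-drift random walk
# climbs above the last level before falling below zero (illustrative).
import numpy as np

rng = np.random.default_rng(6)
levels, N = [1.0, 2.0, 3.0, 4.0, 5.0], 2_000    # thresholds, fixed effort

def run(x, target):
    """Evolve until crossing `target` (success) or falling below 0."""
    while 0.0 <= x < target:
        x += rng.normal(-0.2, 1.0)
    return x if x >= target else None

states, p_hat = [0.0] * N, 1.0
for lvl in levels:
    hits = [s for s in (run(x, lvl) for x in states) if s is not None]
    if not hits:
        p_hat = 0.0
        break
    p_hat *= len(hits) / N                       # stage-wise hit fraction
    idx = rng.integers(0, len(hits), size=N)     # resample into N copies
    states = [hits[i] for i in idx]

print("splitting estimate:", p_hat)
```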
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Gedion, Michael. "Contamination des composants électroniques par des éléments radioactifs". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20267/document.

Testo completo
Abstract (sommario):
This work studies radioactive elements that can affect the proper functioning of electronic components at ground level. These radioactive elements are called alpha emitters. Intrinsic to electronic components, they decay and emit alpha particles which ionize the material of the electronic device and trigger SEUs (Single Event Upsets). This thesis aims to assess the reliability of digital circuits subject to this radiative constraint internal to electronic components. To that end, all alpha-emitting natural or artificial isotopes that can contaminate digital circuits have been identified and classified into two categories: natural impurities and introduced radionuclides. Natural impurities result from natural or accidental contamination of the materials used in nanotechnology. To assess their effects on reliability, the SER (Soft Error Rate) was determined by Monte Carlo simulations for different technology nodes in the case of secular equilibrium. In addition, a new analytical approach was developed to determine the consequences of secular disequilibrium on the reliability of digital circuits. Moreover, with the miniaturization of digital circuits, new chemical elements have been suggested or used in nanoelectronics. The introduced radionuclides include this type of element, naturally containing alpha emitters. Studies based on Monte Carlo simulations and analytical approaches have been conducted to evaluate the reliability of electronic devices. Subsequently, recommendations were proposed on the use of new chemical elements in nanotechnology
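A deliberately crude Monte Carlo SER sketch of the kind of computation involved: alpha particles are emitted at random depths and angles, a constant stopping power attenuates their energy, and an upset is counted when the deposited charge exceeds a critical charge. Every parameter below is hypothetical; only the 3.6 eV-per-pair charge conversion for silicon is standard.

```python
# Crude alpha-particle SER sketch; all device parameters are assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
E = rng.uniform(2.0, 9.0, n)            # alpha energy [MeV]
depth = rng.uniform(0.0, 20.0, n)       # emission depth [um]
cos_t = rng.uniform(-1.0, 1.0, n)       # isotropic direction

path = depth / np.clip(np.abs(cos_t), 1e-3, None)   # slant path [um]
E_dep = np.maximum(E - 0.15 * path, 0.0)   # crude constant stopping power
Q = E_dep * 44.5                           # fC per MeV in Si (3.6 eV/pair)
upset = (cos_t > 0) & (Q > 30.0)           # critical charge Qcrit = 30 fC

alphas_per_hour = 1e-3                     # assumed emission rate
print("SER ~", upset.mean() * alphas_per_hour * 1e9, "FIT")
```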
Gli stili APA, Harvard, Vancouver, ISO e altri
34

DANTAS, Renata Araújo. "Análise técnica e econômica da produção de biodiesel utilizando óleo de fritura residual em unidade piloto". Universidade Federal Rural do Rio de Janeiro, 2016. https://tede.ufrrj.br/jspui/handle/jspui/1934.

Testo completo
Abstract (sommario):
The increasing economic and technological development around the world has caused a huge demand for energy. In Brazil, the transportation sector is the main energy consumer in the country, with diesel accounting for 45.2% of that use. In this scenario, renewable fuels become an attractive alternative due to the numerous benefits attributed to their use, such as degradability, absence of toxicity, reduction of pollutant emissions and renewable origin. As in other countries, Brazil has mandated the use of a minimum quantity of biodiesel in diesel. Currently, according to federal law no. 11.097/2005, the minimum mixing percentage is 7%, rising to 8% during 2016. However, the major challenge in producing biodiesel on a large scale is the high operating cost. According to the literature, raw material costs are the highest, corresponding to 70-85% of the total production cost. The use of alternative, low-cost sources of triglycerides for biodiesel production has been extensively studied. Waste oil from domestic and commercial consumption is a potential feedstock because of the high quantity available and its low aggregate value. In this context, the production of biodiesel in a pilot unit at UFRRJ was studied using the homogeneous alkaline transesterification reaction. The operating condition used in the pilot plant was previously determined from an experimental design, choosing the one that gave the highest conversion and acceptable viscosity values. The simulations were performed in SuperPro Designer®, and the prices of materials, utilities, consumables and operator costs were determined based on the domestic and international markets. A preliminary analysis of the production of 250 kg/batch of biodiesel proved economically unviable considering a biodiesel unit cost of $0.62/kg. The sensitivity analysis showed that the plant becomes economically viable from a biodiesel price equal to or greater than $0.754/kg, almost 34 cents above the initially estimated value. In order to make biodiesel production more profitable, simulations were run with vegetable oils of different cost levels. The results were unsatisfactory from the economic point of view, confirming the importance of using low-cost raw materials. A scale-up was made from the pilot unit, and the technical and economic data showed that the project becomes more profitable starting from the processing of 2,500 kg of residual oil, with an IRR and payback time of 116.9% and 1 year, respectively. The production of biodiesel in the pilot unit was carried out and the product was compared against ANP specifications. The values of viscosity, specific mass, flash point, iodine value and corrosivity were within the limits established by ANP.
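For the techno-economic side, a small self-contained sketch of the NPV/IRR computation used in such analyses (the cash flows are illustrative, not the thesis data):

```python
# Self-contained NPV/IRR helpers (illustrative cash flows).
import numpy as np

def npv(rate, cashflows):
    t = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows) / (1.0 + rate) ** t))

def irr(cashflows, lo=1e-6, hi=10.0, tol=1e-8):
    """Bisection on NPV(rate) = 0; assumes a single sign change."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

flows = [-100_000] + [120_000] * 5     # investment, then yearly margins
print("NPV at 10%:", round(npv(0.10, flows), 2))
print("IRR       :", round(100 * irr(flows), 1), "%")
```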
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Malherbe, Victor. "Multi-scale modeling of radiation effects for emerging space electronics : from transistors to chips in orbit". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0753/document.

Testo completo
Abstract (sommario):
The effects of cosmic radiation on electronics have been studied since the early days of space exploration, given the severe reliability constraints arising from harsh space environments. However, recent evolutions in the space industry landscape are changing radiation-effects practices and methodologies, with mainstream technologies becoming increasingly attractive for radiation-hardened integrated circuits. Due to their high operating frequencies, new transistor architectures, and short rad-hard development times, chips manufactured in the latest CMOS processes pose a variety of challenges, both from an experimental standpoint and from a modeling perspective. This work thus focuses on simulating single-event upsets and transients in advanced FD-SOI and bulk silicon processes. The soft-error response of 28 nm FD-SOI transistors is first investigated through TCAD simulations, allowing the development of two innovative models for radiation-induced currents in FD-SOI. One of them is mainly behavioral, while the other captures complex phenomena, such as parasitic bipolar amplification and circuit feedback effects, from first semiconductor principles and in agreement with detailed TCAD simulations. These compact models are then interfaced to a complete Monte Carlo Soft-Error Rate (SER) simulation platform, leading to extensive validation against experimental data collected on several test vehicles under accelerated particle beams. Finally, predictive simulation studies are presented on bit-cells, sequential and combinational logic gates in 28 nm FD-SOI and 65 nm bulk Si, providing insights into the mechanisms that contribute to the SER of modern integrated circuits in orbit
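As background to such compact models, the sketch below evaluates the classical double-exponential current pulse often used to inject single-event transients in circuit simulation; the charge and time constants are assumed values, and this is not the thesis' FD-SOI model.

```python
# Double-exponential single-event current pulse (assumed Q, tau values).
import numpy as np

Q = 15e-15                   # collected charge [C]
tr, tf = 5e-12, 150e-12      # rise and fall time constants [s]

t = np.linspace(0.0, 1e-9, 2_000)
I = Q / (tf - tr) * (np.exp(-t / tf) - np.exp(-t / tr))

dt = t[1] - t[0]
q_back = ((I[:-1] + I[1:]) / 2.0).sum() * dt     # trapezoidal integral
print("peak current [uA]:", I.max() * 1e6)
print("recovered charge [fC]:", q_back * 1e15)   # ~ Q, tail truncated
```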
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Malherbe, Victor. "Multi-scale modeling of radiation effects for emerging space electronics : from transistors to chips in orbit". Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0753.

Testo completo
Abstract (sommario):
The effects of cosmic radiation on electronics have been studied since the early days of space exploration, given the severe reliability constraints arising from harsh space environments. However, recent evolutions in the space industry landscape are changing radiation-effects practices and methodologies, with mainstream technologies becoming increasingly attractive for radiation-hardened integrated circuits. Due to their high operating frequencies, new transistor architectures, and short rad-hard development times, chips manufactured in the latest CMOS processes pose a variety of challenges, both from an experimental standpoint and from a modeling perspective. This work thus focuses on simulating single-event upsets and transients in advanced FD-SOI and bulk silicon processes. The soft-error response of 28 nm FD-SOI transistors is first investigated through TCAD simulations, allowing the development of two innovative models for radiation-induced currents in FD-SOI. One of them is mainly behavioral, while the other captures complex phenomena, such as parasitic bipolar amplification and circuit feedback effects, from first semiconductor principles and in agreement with detailed TCAD simulations. These compact models are then interfaced to a complete Monte Carlo Soft-Error Rate (SER) simulation platform, leading to extensive validation against experimental data collected on several test vehicles under accelerated particle beams. Finally, predictive simulation studies are presented on bit-cells, sequential and combinational logic gates in 28 nm FD-SOI and 65 nm bulk Si, providing insights into the mechanisms that contribute to the SER of modern integrated circuits in orbit
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Orire, Endurance. "The techno-economics of bitumen recovery from oil and tar sands as a complement to oil exploration in Nigeria / E. Orire". Thesis, North-West University, 2009. http://hdl.handle.net/10394/5704.

Testo completo
Abstract (sommario):
The Nigerian economy is wholly dependent on revenue from oil. However, bitumen was discovered in the country as early as 1903 and has remained untapped over the years. The need for the country to complement oil exploration with its huge bitumen deposit cannot be overemphasized. This would help to improve the country's gross domestic product (GDP) and the revenue available to government. Bitumen is classified as heavy crude, with an API (American Petroleum Institute) gravity ranging between 5.0 and 11.0, and occurs in Nigeria, Canada, Saudi Arabia, Venezuela, etc., from which petroleum products could be derived. This dissertation looked at the Canadian experience by comparing the oil and tar sand deposits found in Canada, with particular reference to Athabasca (Grosmont, Wabiskaw-McMurray and Nisku), with those in Nigeria, with a view to transferring process technology from Canada to Nigeria. The Nigerian and Athabasca tar sands occur in the same type of environment: deltaic, fluvial-marine deposits in an incised valley with similar reservoir, chemical and physical properties. However, the Nigerian tar sand is more asphaltenic, and also contains more resin, and as such will yield more product volume during hydrocracking, albeit more acidic. The differences in the components (viscosity, resin and asphaltene contents, sulphur and heavy metal contents) of the tar sands are within the limits of technology adaptation. Any of the technologies used in Athabasca, Canada is adaptable to Nigeria according to the findings of this research. The techno-economics of some of the process technologies are x-rayed using the PTAC (Petroleum Technology Alliance Canada) technology recovery model in order to obtain their unit cost for Nigerian bitumen. The unit cost of processed bitumen adopting steam-assisted gravity drainage (SAGD), in situ combustion (ISC) and cyclic steam stimulation (CSS) process technologies is 40.59, 25.00 and 44.14 Canadian dollars respectively. The unit cost in Canada using the same process technologies is 57.27, 25.00 and 61.33 Canadian dollars respectively; the unit cost in Nigeria is thus substantially lower than in Canada. A trade-off is thereafter done using life-cycle costing so as to select the best process technology for the Nigerian oil/tar sands. The net present value/internal rate of return is found to be B$3,062/36.35% for steam-assisted gravity drainage, B$1,570/24.51% for cyclic steam stimulation and B$3,503/39.64% for in situ combustion. Though in situ combustion returned the highest net present value and internal rate of return, it proved not to be the best option for Nigeria due to environmental concerns and response time to production. The best viable option for the Nigerian tar sand was then deemed to be steam-assisted gravity drainage. An integrated oil strategy coupled with cogeneration using MSAR was also seen to considerably amplify the benefits accruable from bitumen exploration; therefore, an investment in bitumen exploration in Nigeria is a wise economic decision.
Thesis (M.Ing. (Development and Management))--North-West University, Potchefstroom Campus, 2010.
Gli stili APA, Harvard, Vancouver, ISO e altri
38

NOTARANGELO, NICLA MARIA. "A Deep Learning approach for monitoring severe rainfall in urban catchments using consumer cameras. Models development and deployment on a case study in Matera (Italy) Un approccio basato sul Deep Learning per monitorare le piogge intense nei bacini urbani utilizzando fotocamere generiche. Sviluppo e implementazione di modelli su un caso di studio a Matera (Italia)". Doctoral thesis, Università degli studi della Basilicata, 2021. http://hdl.handle.net/11563/147016.

Testo completo
Abstract (sommario):
In the last 50 years, flooding has been the most frequent and widespread natural disaster globally. Extreme precipitation events stemming from climate change could alter the hydro-geological regime, resulting in increased flood risk. Near real-time precipitation monitoring at local scale is essential for flood risk mitigation in urban and suburban areas, due to their high vulnerability. Presently, most rainfall data is obtained from ground-based measurements or remote sensing, which provide limited information in terms of temporal or spatial resolution; further problems may be due to high costs. Furthermore, rain gauges are unevenly spread and usually placed away from urban centers. In this context, the use of innovative techniques to develop low-cost monitoring systems holds great potential. Despite the diversity of purposes, methods and epistemological fields, the literature on the visual effects of rain supports the idea of camera-based rain sensors but tends to be device-specific. The present thesis investigates the use of easily available photographing devices as rain detectors and gauges, to develop a dense network of low-cost rainfall sensors supporting traditional methods with an expeditious solution embeddable into smart devices. As opposed to existing works, the study focuses on maximizing the number of image sources (smartphones, general-purpose surveillance cameras, dashboard cameras, webcams, digital cameras, etc.), encompassing cases where it is not possible to adjust the camera parameters or obtain shots in timelines or videos. Using a Deep Learning approach, rainfall characterization can be achieved through the analysis of the perceptual aspects that determine whether and how a photograph represents a rainy condition. The first scenario of interest for supervised learning was a binary classification; the binary output (presence or absence of rain) allows the detection of precipitation: the cameras act as rain detectors. Similarly, the second scenario of interest was a multi-class classification; the multi-class output describes a range of quasi-instantaneous rainfall intensity: the cameras act as rain estimators. Using Transfer Learning with Convolutional Neural Networks, the developed models were compiled, trained, validated, and tested. The preparation of the classifiers included assembling a suitable dataset encompassing unconstrained, verisimilar settings: open data, several datasets owned by the National Research Institute for Earth Science and Disaster Prevention - NIED (dashboard cameras in Japan coupled with high-precision multi-parameter radar data), and experimental activities conducted in the NIED Large Scale Rainfall Simulator. The outcomes were applied to a real-world scenario, with experimentation through a pre-existing surveillance camera using 5G connectivity provided by Telecom Italia S.p.A. in the city of Matera (Italy). The analysis unfolded on several levels, providing an overview of generic issues relating to the urban flood risk paradigm and of specific territorial questions inherent in the case study. These include contextual aspects, the important role of rainfall, from driving the millennial urban evolution to determining present criticality, and components of a Web prototype for flood risk communication at the local scale.
The results and the model deployment raise the possibility that low‐cost technologies and local capacities can help to retrieve rainfall information for flood early warning systems based on the identification of a significant meteorological state. The binary model reached accuracy and F1 score values of 85.28% and 0.86 for the test, and 83.35% and 0.82 for the deployment. The multi-class model reached test average accuracy and macro-averaged F1 score values of 77.71% and 0.73 for the 6-way classifier, and 78.05% and 0.81 for the 5-class. The best performances were obtained in heavy rainfall and no-rain conditions, whereas the mispredictions are related to less severe precipitation. The proposed method has limited operational requirements, can be easily and quickly implemented in real use cases, exploiting pre-existent devices with a parsimonious use of economic and computational resources. The classification can be performed on single photographs taken in disparate conditions by commonly used acquisition devices, i.e. by static or moving cameras without adjusted parameters. This approach is especially useful in urban areas where measurement methods such as rain gauges encounter installation difficulties or operational limitations or in contexts where there is no availability of remote sensing data. The system does not suit scenes that are also misleading for human visual perception. The approximations inherent in the output are acknowledged. Additional data may be gathered to address gaps that are apparent and improve the accuracy of the precipitation intensity prediction. Future research might explore the integration with further experiments and crowdsourced data, to promote communication, participation, and dialogue among stakeholders and to increase public awareness, emergency response, and civic engagement through the smart community idea.
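A minimal transfer-learning sketch of the kind of classifier described, assuming TensorFlow/Keras with an ImageNet-pretrained backbone; the data pipeline is left as a placeholder since paths and sizes are not given here.

```python
# Transfer-learning rain classifier sketch (assumes TensorFlow/Keras;
# the data pipeline below is a placeholder, not the thesis dataset).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                            # freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # rain / no rain
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "data/rain_vs_norain", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=10)
```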
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Shi, Yixi. "Rare Events in Stochastic Systems: Modeling, Simulation Design and Algorithm Analysis". Thesis, 2013. https://doi.org/10.7916/D86H4QNS.

Testo completo
Abstract (sommario):
This dissertation explores a few topics in the study of rare events in stochastic systems, with a particular emphasis on the simulation aspect. This line of research has been receiving a substantial amount of interest in recent years, mainly motivated by scientific and industrial applications in which system performance is frequently measured in terms of events with very small probabilities. The topics mainly break down into the following themes: algorithm analysis (Chapters 2, 3, 4 and 5), simulation design (Chapters 3, 4 and 5), and modeling (Chapter 5). The main chapters are titled as follows. Chapter 2: Analysis of a Splitting Estimator for Rare Event Probabilities in Jackson Networks; Chapter 3: Splitting for Heavy-tailed Systems: An Exploration with Two Algorithms; Chapter 4: State Dependent Importance Sampling with Cross Entropy for Heavy-tailed Systems; Chapter 5: Stochastic Insurance-Reinsurance Networks: Modeling, Analysis and Efficient Monte Carlo.
Gli stili APA, Harvard, Vancouver, ISO e altri
40

McKenzie, Todd G., e 麥陶德. "Rare Events in Monte Carlo Methods with Applications to Circuit Simulation". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3hu2xz.

Testo completo
Abstract (sommario):
PhD
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 107 (ROC calendar, i.e. 2018/19)
The simulation of rare events in Monte Carlo is of practical importance across many disciplines, including circuit simulation, finance, and meteorology, to name a few. To make inferences about the behavior of distributions in systems where events have likelihoods on the order of one in billions, traditional Monte Carlo simulation becomes impractical. Intuitively, Monte Carlo samples which are likely (i.e. samples drawn from regions of higher probability density) have little influence on the tail distribution, which is associated with rare events. This work provides a novel methodology to directly sample rare events in the input multivariate random vector space as a means to efficiently learn about the distribution tail in the output space. In addition, the true form of the Monte Carlo simulation is modeled first by linear and then by quadratic forms. A systematic procedure is developed which traces the flow from the linear or quadratic modeling to the computation of distribution statistics, such as moments and quantiles, directly from the modeling form itself. Next, a general moment calculation method is derived based on the distribution quantiles, where no underlying linear or quadratic model is assumed. Finally, each of the proposed methods is grounded in practical circuit simulation examples. Overall, the thesis provides several new methods and approaches to tackle some challenging problems in Monte Carlo simulation, probability distribution modeling, and statistical analysis.
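To make the linear-modeling step concrete, here is a hedged sketch (the coefficients and thresholds are invented, with NumPy and SciPy assumed): once the simulation output is approximated by a linear form in Gaussian inputs, moments and extreme quantiles follow from the form itself, while plain Monte Carlo can only verify the milder levels.

```python
# Illustrative sketch: for a linear surrogate Y = a.X + b with X ~ N(0, I),
# tail statistics follow in closed form, so a "1 in a billion" quantile
# needs no brute-force sampling. Coefficients here are hypothetical.
import numpy as np
from scipy import stats

a, b = np.array([0.8, -0.5, 1.2]), 0.1     # assumed fitted linear form
sigma = float(np.linalg.norm(a))           # then Y ~ N(b, sigma^2)

p_rare = 1e-9
q = b + sigma * stats.norm.ppf(1.0 - p_rare)
print(f"threshold exceeded with probability 1e-9: q = {q:.3f}")

# Sanity check by plain Monte Carlo at a milder level (1e-3)
rng = np.random.default_rng(0)
y = rng.standard_normal((1_000_000, 3)) @ a + b
q3 = b + sigma * stats.norm.ppf(1.0 - 1e-3)
print("empirical P(Y > q3):", float(np.mean(y > q3)), "(target 1e-3)")
```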
Gli stili APA, Harvard, Vancouver, ISO e altri
41

MARSILI, SIMONE. "Simulations of rare events in chemistry". Doctoral thesis, 2009. http://hdl.handle.net/2158/555496.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Kuruganti, Indira. "Optimal importance sampling for simulating rare events in Markov chains /". 1997. http://wwwlib.umi.com/dissertations/fullcit/9708640.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Hawk, Alexander Timothy. "Calculating rare biophysical events. A study of the milestoning method and simple polymer models". 2012. http://hdl.handle.net/2152/19528.

Testo completo
Abstract (sommario):
Performing simulations of large-scale bio-molecular systems has long been one of the great challenges of molecular biophysics. Phenomena such as the folding and conformational rearrangement of proteins often take place over the course of milliseconds to seconds. The methods of traditional molecular dynamics used to simulate such systems are, on the other hand, typically limited to giving trajectories of nanosecond-to-microsecond duration. The failure of traditional methods has thus motivated the development of many special-purpose techniques that propose to capture the essential characteristics of systems over conventionally inaccessible timescales. This dissertation first focuses on presenting a set of advances made on one such technique, Milestoning. Milestoning gives a statistical procedure for recovering long trajectories of the system based on observations of many short trajectories that start and end on hypersurfaces in the system's phase space. Justification of the method's validity typically relies on the assumption that trajectories of the system lose all memory between crossing successive milestones. We start by giving a modified milestoning procedure in which the memory-loss assumption is relaxed and reaction mechanisms are more easily extracted. We follow with numerical examples illustrating the success of the new procedure. Then we show how milestoning may be used to compute an experimentally relevant timescale known as the transit time (also known as the reaction path time). Finally, we discuss how time-reversal symmetry may be exploited to improve sampling of the trajectory fragments that connect milestones. After discussing milestoning, the dissertation shifts focus to a different way of approaching the problem of simulating long timescales. We consider two polymer models that are sufficiently simple to permit numerical integration of the desired long trajectories of the system. In some limiting cases, we see their simplicity even permits some questions about the dynamics to be answered analytically. Using these models, we make a series of experimentally verifiable predictions about the dynamics of unfolded polymers.
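The milestoning bookkeeping admits a toy illustration (the transition matrix and lifetimes below are invented, not taken from the dissertation): given fragment statistics between milestones and the first-order-kinetics assumption, a long-timescale observable such as the mean first-passage time reduces to linear algebra.

```python
# Toy milestoning estimate under first-order kinetics (illustrative only).
# K[i, j] = probability that a fragment launched at milestone i hits j next,
# t[i]    = mean fragment lifetime at milestone i, both from short fragments.
import numpy as np

# Hypothetical statistics for 4 milestones; milestone 3 (product) is absorbing
K = np.array([[0.0, 1.0, 0.0],     # transitions among transient milestones 0..2
              [0.6, 0.0, 0.4],
              [0.0, 0.7, 0.0]])    # from 2: remaining 0.3 reaches the product
t = np.array([1.0, 2.0, 1.5])      # mean lifetimes (arbitrary time units)

# Expected visits to each transient milestone starting from 0: n = e0 (I-K)^-1
n = np.linalg.solve((np.eye(3) - K).T, np.array([1.0, 0.0, 0.0]))
mfpt = float(n @ t)                # mean first-passage time to the product
print(f"MFPT ~ {mfpt:.2f} time units")
```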
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Ben, Rached Nadhir. "Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems". Diss., 2018. http://hdl.handle.net/10754/629483.

Testo completo
Abstract (sommario):
The probability that a sum of random variables (RVs) exceeds (respectively falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and a naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is represented by the use of variance reduction techniques, known for their efficiency in requiring fewer computations to achieve the same accuracy requirement. For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that presents the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed, wherein the twisting is applied only to the components having more impact on the probability of interest than others. Another challenging problem is when the components are correlated and distributed according to the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean shifting and covariance matrix scaling techniques, and we prove that the logarithmic efficiency holds again for two particular instances. We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys the logarithmic efficiency criterion, whereas the second satisfies the bounded relative error criterion under a mild assumption but is only applicable to the case of independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered. In fact, we construct an estimator combining an existing mean shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves the bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property for the Generalized Gamma case and the logarithmic efficiency for the Log-normal case.
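To convey the flavor of twisting-based IS, the following is a plain exponential-twisting sketch, a simpler relative of the hazard rate twisting developed in the thesis; the exponential components, threshold, and sample size are illustrative assumptions.

```python
# Exponential-twisting importance sampling for a right-tail rare event:
# p = P(X1 + ... + Xn > gamma) for iid Exp(1) components.
import numpy as np

rng = np.random.default_rng(1)
n, gamma, N = 10, 60.0, 100_000           # E[S] = 10, so S > 60 is rare

theta = 1.0 - n / gamma                    # tilt so that E_theta[S] = gamma
# Under the tilt, each component is Exp with rate (1 - theta)
s = rng.exponential(scale=1.0 / (1.0 - theta), size=(N, n)).sum(axis=1)

# Likelihood ratio dP/dP_theta = exp(-theta*S) * M(theta)^n, M(theta)=1/(1-theta)
weights = np.exp(-theta * s) / (1.0 - theta) ** n
vals = (s > gamma) * weights
print(f"IS estimate: {vals.mean():.3e} +/- {vals.std() / np.sqrt(N):.1e}")
```

The tilt parameter is chosen so that the twisted mean of the sum sits exactly at the threshold, the standard heuristic for right-tail estimation.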
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Lipsmeier, Florian [Verfasser]. "Rare event simulation for probabilistic models of T-cell activation / Florian Lipsmeier". 2010. http://d-nb.info/1007843659/34.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Wang, Sing-Po, e 王星博. "Optimal Hit-Based Splitting Technique for Rare-Event Simulation and Its Application to Power Grid Blackout Simulation". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/20805668368023114338.

Testo completo
Abstract (sommario):
Master's
National Taiwan University
Graduate Institute of Industrial Engineering
Academic year 99 (ROC calendar, i.e. 2010/11)
Rare-event probability estimation is a crucial issue in areas such as reliability, telecommunications, and aircraft management. When an event rarely occurs, naive Monte Carlo simulation demands unreasonable computing power and often results in an unreliable probability estimate, i.e., an estimate with a large variance. Level splitting simulation has emerged as a promising technique to reduce the variance of a probability estimate by creating separate copies (splits) of the simulation whenever it gets close to a rare event. This technique allocates simulation runs to levels of the event progressively approaching the final rare event. However, determining a good number of simulation runs at each stage can be challenging. An optimal splitting technique called the Optimal Splitting Technique for Rare Events (OSTRE) provides an asymptotically optimal allocation of simulation runs. A splitting simulation characterized by allocating simulation runs may fail to obtain a probability estimate because the probability of an event occurring at a given level is too low and the number of simulation runs allocated to that level is not enough to observe it. In this research, we propose a hit-based splitting method that allocates a number of hits, instead of a number of simulation runs, to each stage. The number of hits is the number of event occurrences at each stage. Regardless of the number of simulation runs required, the allocated number of hits has to be reached before advancing to the next level of the splitting simulation. We derive an asymptotically optimal allocation of hits to each stage of the splitting simulation, referred to as Optimal Hit-based Splitting Simulation. Experiments indicate that the proposed method performs as well as OSTRE under most conditions. In addition to the allocation problem, the choice of levels is also a critical issue in level splitting simulation. Based on the proposed hit-based splitting simulation method, we have developed an algorithm capable of obtaining optimal levels by effectively estimating the initial probability for some events to occur at each stage of progression. Results indicate that the choice of optimal levels is effective. Furthermore, we apply our technique to an IEEE-bus electric network and demonstrate that our approach is just as effective as conventional techniques in detecting the most vulnerable link in the electric grid, i.e., the link with the highest probability of leading to a blackout event.
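For the baseline that OSTRE and the hit-based variant refine, a minimal fixed-splitting sketch follows (the drifted walk, levels, and sample counts are invented for illustration and are not taken from the thesis): the rare-event probability is estimated as a product of stage-wise conditional probabilities, with entrance states resampled at each level.

```python
# Minimal fixed-splitting sketch for a rare level-crossing probability:
# P(drifted random walk from 0 reaches 12 before dropping below 0).
import numpy as np

rng = np.random.default_rng(2)
DRIFT, LEVELS, N = -0.1, [4.0, 8.0, 12.0], 2000

def run_until(x, target):
    """Simulate the walk from x until it crosses target (return the state)
    or falls below 0 (return None)."""
    while 0.0 <= x < target:
        x += DRIFT + rng.standard_normal()
    return x if x >= target else None

p_hat, starts = 1.0, [0.0] * N
for level in LEVELS:
    hits = [s for x in starts if (s := run_until(x, level)) is not None]
    if not hits:
        p_hat = 0.0
        break
    p_hat *= len(hits) / len(starts)     # stage-wise conditional probability
    # resample entrance states to keep a constant effort at every stage
    starts = [hits[i] for i in rng.integers(0, len(hits), size=N)]

print(f"splitting estimate of the rare-event probability: {p_hat:.3e}")
```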
Gli stili APA, Harvard, Vancouver, ISO e altri
47

AMABILI, MATTEO. "Stability of the submerged superhydrophobic state via rare event molecular dynamics simulations". Doctoral thesis, 2017. http://hdl.handle.net/11573/936817.

Testo completo
Abstract (sommario):
In the present thesis, the stability of the superhydrophobic state on submerged nanotextured surfaces with complex chemistry and geometry is investigated via rare event molecular dynamics simulations. Superhydrophobicity stems from the presence of vapor or gas pockets trapped within the surface texture, sustaining the liquid atop the textures. This superhydrophobic (or Cassie) state gives rise to remarkable macroscopic properties, such as enhanced liquid slip at the surface or anti-fouling capabilities. These and many other properties make superhydrophobic surfaces promising for a wide range of technological applications. Superhydrophobicity can be lost via two mechanisms: i) liquid intrusion, i.e. the filling of the liquid inside the textures, leading to the fully wet Wenzel state, or ii) vapor cavitation, i.e. the formation, growth, and detachment of a (supercritical) vapor or gas bubble, and the ensuing replacement of the vapor pocket by the liquid. Understanding the mechanism of the Cassie-Wenzel and cavitation transitions is crucial in order to define quantitatively the stability of the superhydrophobic state. These transitions are typically characterized by large free-energy barriers. The presence of these barriers makes the study of the transitions particularly challenging due to the diverse timescales present in the problem. Indeed, these transitions are rare events, i.e., they happen on timescales which cannot be simulated by standard molecular dynamics methods. Thus, in order to tackle this issue, rare event methods are used which allow one to compute the free-energy barriers and the mechanism of the transition. This information is essential for developing new design criteria which can pave the way to a new generation of surfaces with more stable superhydrophobic properties in submerged conditions. In the first part of the thesis, re-entrant surface textures are investigated. The focus on this geometry is due to the increasing interest in textures which show omniphobic properties, i.e., which allow the formation of the Cassie state also for low-surface-tension liquids. Our atomistic results are compared with a macroscopic sharp-interface continuum model. While the two models are generally in fair agreement, quantitative differences were found in the cavitation free-energy barriers, with the macroscopic model overestimating them. The major qualitative difference concerns the behaviour of the system in the Wenzel state, where only the atomistic model can capture the presence of the confined liquid spinodal. These results also allowed us to develop design criteria for textured submerged superhydrophobic surfaces. The role of the chemistry of the surface was also studied. Purely hydrophobic, purely hydrophilic, and mixed, i.e. internally hydrophobic and externally hydrophilic, surfaces were considered. It was found that the free energy of the mixed-chemistry surface closely resembles the superposition of those of the purely hydrophilic and hydrophobic chemistries. The mixed chemistry shows improved stability of gas pockets against both liquid intrusion and vapor cavitation. Finally, the combined effect of complex chemistry and geometry, such as re-entrant pore morphologies, was also investigated. This latter part of the work was primarily inspired by the natural case of Salvinia molesta, a floating fern capable of remaining dry even after long underwater immersion.
Salvinia leaves show features similar to the proposed textures; they are characterized by hairs with a peculiar re-entrant structure and a heterogeneous chemistry: a hydrophobic interior and a hydrophilic patch on the hair tops. In the second part of the thesis, the Cassie-Wenzel transition was investigated on a submerged 3D nanopillared surface. Here, a state-of-the-art technique, the string method, was employed in order to compute the most probable transition mechanism and the corresponding free-energy barrier. The coarse-grained fluid density field was used as a collective variable to characterize the transition. The results are of both applicative and methodological interest. Concerning the former, the string method revealed the actual transition mechanism, which proceeds by breaking the 2D translational symmetry of the surface textures. These results are interpreted in terms of a sharp-interface continuum model, suggesting that nanoscale effects, e.g., line tension, play a minor role in the considered conditions. Concerning the latter, the effect of the choice of the collective variables, i.e. different levels of coarse-graining of the fluid density field, was studied. The results show the level of coarse-graining suited to correctly capturing the transition mechanism and the free-energy barrier.
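As a back-of-the-envelope companion to the barrier discussion above (a toy one-dimensional free-energy profile, invented for illustration rather than taken from the thesis): once a barrier height dF is known in units of kT, its Boltzmann factor conveys how rare the spontaneous transition is.

```python
# Toy 1D free-energy profile along a wetting coordinate (illustrative only):
# the Cassie basin sits near x ~ 0, the Wenzel state near x ~ 1.
import numpy as np

x = np.linspace(0.0, 1.0, 501)                  # filling fraction of the texture
F = 12.0 * np.sin(np.pi * x) ** 2 - 3.0 * x     # model profile in units of kT

i_min = np.argmin(F[:250])                      # bottom of the Cassie basin
i_max = i_min + np.argmax(F[i_min:])            # barrier top along the path
dF = F[i_max] - F[i_min]
print(f"barrier ~ {dF:.1f} kT -> Boltzmann factor exp(-dF) ~ {np.exp(-dF):.1e}")
```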
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Ben, Issaid Chaouki. "Efficient Monte Carlo Simulations for the Estimation of Rare Events Probabilities in Wireless Communication Systems". Diss., 2019. http://hdl.handle.net/10754/660001.

Testo completo
Abstract (sommario):
Simulation methods are used when closed-form solutions do not exist. An interesting simulation method that has been widely used in many scientific fields is the Monte Carlo method. Not only is it a simple technique that enables estimation of the quantity of interest, but it can also provide relevant information about the value to be estimated through its confidence interval. However, the classical Monte Carlo method is not a reasonable choice when dealing with rare event probabilities. In fact, very small probabilities require a huge number of simulation runs, and thus the computational time of the simulation increases significantly. This observation lies behind the main motivation of the present work. In this thesis, we propose efficient importance sampling estimators to evaluate rare event probabilities. In the first part of the thesis, we consider a variety of turbulence regimes, and we study the outage probability of free-space optics communication systems under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results. We also show that our approach extends to the multihop scenario. In the second part of the thesis, we are interested in assessing the outage probability achieved by some diversity techniques over generalized fading channels. In many circumstances, this is related to the difficult question of analyzing the statistics of sums of random variables. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma, α − µ, κ − µ, and η − µ fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via selected numerical simulations in both case studies. In the last part of this thesis, we propose efficient importance sampling estimators for the left tail of positive Gaussian quadratic forms in both real and complex settings. We show that these estimators possess the bounded relative error property. These estimators are then used to estimate the outage probability of maximum ratio combining diversity receivers over correlated Nakagami-m or correlated Rician fading channels.
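The remark that very small probabilities require a huge number of runs can be quantified with a short computation (the 10% target relative error is an arbitrary choice for illustration): the naive estimator of p from N samples has relative error about sqrt((1-p)/(p*N)), so the required N scales as 1/p.

```python
# Why naive Monte Carlo struggles with rare events: sample-size arithmetic.
# Relative error of the naive estimator ~ sqrt((1-p)/(p*N)).
import numpy as np

target_rel_err = 0.1                 # 10% relative error, chosen for illustration
for p in [1e-3, 1e-6, 1e-9]:
    N_needed = (1.0 - p) / (p * target_rel_err**2)
    print(f"p = {p:g}: need roughly N = {N_needed:.1e} samples")
```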
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Peres, Ricardo José Mota. "Contributions to the Xenon dark matter experiment: simulations of detector response to calibration sources and electric field optimization". Master's thesis, 2018. http://hdl.handle.net/10316/86267.

Testo completo
Abstract (sommario):
Master's dissertation in Physics presented to the Faculty of Sciences and Technology
Dark Matter (DM) still stands as one of the great mysteries of current-day Physics, fueling massive experimental and theoretical endeavors to understand its behavior and properties. The XENON detectors aim to directly detect these elusive particles, mostly focusing on Weakly Interacting Massive Particles, or WIMPs, in a large volume of liquid Xenon, using double-phase time projection chamber (TPC) detectors. Since its latest published results, the XENON1T detector stands as the most sensitive Dark Matter experiment to date. In this work, a journey through the evidence for the existence of Dark Matter in the Universe and some of its most notorious models leads to a presentation of the current-generation XENON1T detector and, later on, the next-generation XENONnT detector. The core of this dissertation focuses, at first, on the simulation and analysis of neutron generator (NG) nuclear recoil (NR) calibration data from the XENON1T detector. Here, a full simulation of a NG calibration run is computed and its results compared with data taken with the XENON1T detector. A separate analysis of other NR calibration data, looking into the NR band model and its empirical fit, is also carried out. Later, in the last chapter, the focus changes to the construction of the geometry model and electric field finite-element simulation of the XENONnT TPC. The main objective is to optimize the resistive chain that ensures the uniformity of the field inside the TPC, changing the geometry and voltage of the field shaping rings, preliminarily based on the design used for XENON1T.
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Lu, Chun-Yaung. "Dynamical simulation of molecular scale systems : methods and applications". Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-12-2151.

Testo completo
Abstract (sommario):
Rare-event phenomena are ubiquitous in nature. We propose a new strategy, kappa-dynamics, to model rare-event dynamics. In this methodology we only assume that the important rare-event dynamics obey first-order kinetics. Exact rates are not required in the calculation, and the reaction path is determined on the fly. kappa-dynamics is highly parallelizable and can be implemented on computer clusters and distributed machines. Theoretical derivations and several examples of atomic-scale dynamics are presented. With single-molecule (SM) techniques, individual molecular processes can be resolved without being averaged over the ensemble. However, factors such as apparatus stability, background level, and data quality limit the amount of information collected. We found that the correlation function calculated from a finite-length SM rotational diffusion trajectory will deviate from its true value. Therefore, care must be taken not to interpret the difference as evidence of new dynamics occurring in the system. We also propose an algorithm for single-fluorophore orientation reconstruction which converts three measured intensities {I₀, I₄₅, I₉₀} into the dipole orientation. Fluctuations in the detected signals caused by shot noise result in a prediction that differs from the true orientation. This difference should not be interpreted as evidence of nonisotropic rotational motion. Catalytic reactions are also governed by rare events. Studying the dynamics of catalytic processes is an important subject, since the more we learn, the more we can improve current catalysts. Fuel cells have become a promising energy source in the past decade. The key to making a high-performance cell while keeping the price low is the choice of a suitable catalyst at the electrodes. Density functional theory calculations are carried out to study the role of geometric relaxation in the oxygen-reduction reaction for nanoparticles of various transition metals. Our calculations of Pt nanoparticles show that the structural deformation induced by atomic oxygen binding can energetically stabilize the oxidized states and thus reduces the catalytic activity. The catalytic performance can be improved by making alloys with less deformable metals.
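The finite-trajectory bias noted above is easy to reproduce in a toy setting; the Ornstein-Uhlenbeck surrogate below is invented for illustration and is not the dissertation's model. Autocorrelation functions estimated from short, individually mean-subtracted trajectories come out systematically low.

```python
# Demonstration of finite-trajectory bias in autocorrelation estimates,
# using an Ornstein-Uhlenbeck process with correlation time tau.
import numpy as np

rng = np.random.default_rng(3)
tau, dt, n_long = 10.0, 1.0, 200_000

# Exact OU update: x_{k+1} = a x_k + sqrt(1 - a^2) * noise, a = exp(-dt/tau)
a = np.exp(-dt / tau)
x = np.empty(n_long)
x[0] = rng.standard_normal()
for k in range(n_long - 1):
    x[k + 1] = a * x[k] + np.sqrt(1.0 - a * a) * rng.standard_normal()

def acf(traj, lag):
    d = traj - traj.mean()          # per-trajectory mean subtraction: the bias source
    return np.dot(d[:-lag], d[lag:]) / np.dot(d, d)

lag = 10                            # true ACF here: exp(-lag*dt/tau) ~ 0.368
short = np.mean([acf(x[i:i + 100], lag) for i in range(0, n_long - 100, 100)])
print("true:", np.exp(-lag * dt / tau),
      "long trajectory:", acf(x, lag),
      "average over short trajectories:", short)
```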
Gli stili APA, Harvard, Vancouver, ISO e altri