Theses on the topic "Monte Carlo Simulation Technique"

Consult the top 50 theses for your research on the topic "Monte Carlo Simulation Technique".

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Rangaraj, Dharanipathy. "Multicomponent aerosol dynamics: exploration of direct simulation Monte Carlo technique". Free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144452.

Full text
2

Jackson, Andrew N. "Structural phase behaviour via Monte Carlo techniques". Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/4850.

Full text
Abstract
There are few reliable computational techniques applicable to the problem of structural phase behaviour. This is starkly emphasised by the fact that there are still a number of unanswered questions concerning the solid state of some of the simplest models of matter. To determine the phase behaviour of a given system we invoke the machinery of statistical physics, which identifies the equilibrium phase as that which minimises the free-energy. This type of problem can only be dealt with fully via numerical simulation, as any less direct approach will involve making some uncontrolled approximation. In particular, a numerical simulation can be used to evaluate the free-energy difference between two phases if the simulation is free to visit them both. However, it has proven very difficult to find an algorithm which is capable of efficiently exploring two different phases, particularly when one or both of them is a crystalline solid. This thesis builds on previous work (Physical Review Letters 79 p.3002), exploring a new Monte Carlo approach to this class of problem. This new simulation technique uses a global coordinate transformation to switch between two different crystalline structures. Generally, this `lattice switch' is found to be extremely unlikely to succeed in a normal Monte Carlo simulation. To overcome this, extended-sampling techniques are used to encourage the simulation to visit `gateway' microstates where the switch will be successful. After compensating for this bias in the sampling, the free-energy difference between the two structures can be evaluated directly from their relative probabilities. As concrete examples on which to base the research, the lattice-switch Monte Carlo method is used to determine the free-energy difference between the face-centred cubic (fcc) and hexagonal close-packed (hcp) phases of two generic model systems --- the hard-sphere and Lennard-Jones potentials. 
The structural phase behaviour of the hard-sphere solid is determined at densities near melting and in the close-packed limit. The factors controlling the efficiency of the lattice-switch approach are explored, as is the character of the `gateway' microstates. The face-centred cubic structure is identified as the thermodynamically stable phase, and the free-energy difference between the two structures is determined with high precision. These results are shown to be in complete agreement with the results of other authors in the field (published during the course of this work), some of whom adopted the lattice-switch method for their calculations. Also, the results are favourably compared against the experimentally observed structural phase behaviour of sterically-stabilised colloidal dispersions, which are believed to behave like systems of hard spheres. The logical extension of the hard sphere work is to generalise the lattice-switch technique to deal with `softer' systems, such as the Lennard-Jones solid. The results in the literature for the structural phase behaviour of this relatively simple system are found to be completely inconsistent. A number of different approaches to this problem are explored, leading to the conclusion that these inconsistencies arise from the way in which the potential is truncated. Using results for the ground-state energies and from the harmonic approximation, we develop a new truncation scheme which allows this system to be simulated accurately and efficiently. Lattice-switch Monte Carlo is then used to determine the fcc-hcp phase boundary of the Lennard-Jones solid in its entirety. These results are compared against the experimental results for the Lennard-Jones potential's closest physical analogue, the rare-gas solids. 
While some of the published rare-gas observations are in approximate agreement with the lattice-switch results, these findings contradict the widely held belief that fcc is the equilibrium structure of the heavier rare-gas solids for all pressures and temperatures. The possible reasons for this disagreement are discussed. Finally, we examine the pros and cons of the lattice-switch technique, and explore ways in which it can be extended to cover an even wider range of structures and interactions.
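The abstract's key move, reading a free-energy difference off the relative probability with which a simulation visits two phases, can be illustrated with a toy Metropolis simulation on a tilted double well. This is only a sketch of that bookkeeping, not the lattice-switch method itself; the potential, temperature and step size below are arbitrary choices.

```python
import math, random

random.seed(1)
kT = 0.5

def energy(x):
    # Tilted double well: two "phases" (x < 0 and x > 0), right one higher.
    return (x * x - 1.0) ** 2 + 0.3 * x

x = -1.0
visits = {"left": 0, "right": 0}
for _ in range(200_000):
    trial = x + random.uniform(-0.5, 0.5)
    # Metropolis acceptance rule.
    if random.random() < math.exp(-(energy(trial) - energy(x)) / kT):
        x = trial
    visits["left" if x < 0 else "right"] += 1

# Free-energy difference from the relative visit probabilities.
dF = -kT * math.log(visits["right"] / visits["left"])
print(f"Delta F (right - left) = {dF:.3f}")
```

With the chosen tilt the right-hand well is less favourable, so it is visited less often and the estimated difference comes out positive.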
3

Can, Mutan Oya. "Comparison Of Regression Techniques Via Monte Carlo Simulation". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605175/index.pdf.

Full text
Abstract
The ordinary least squares (OLS) method is one of the most widely used for modelling the functional relationship between variables. However, this estimation procedure relies on several assumptions, and violating them may lead to non-robust estimates. In this study, the simple linear regression model is investigated for conditions in which the distribution of the error terms is Generalised Logistic. Some robust and nonparametric methods, such as modified maximum likelihood (MML), least absolute deviations (LAD), Winsorized least squares, least trimmed squares (LTS), Theil and weighted Theil, are compared via computer simulation. To evaluate estimator performance, the mean, variance, bias, mean square error (MSE) and relative mean square error (RMSE) are computed.
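The kind of comparison the abstract describes can be sketched in miniature. The snippet below contrasts OLS with the Theil estimator (one of the methods the thesis compares) under contaminated normal noise rather than the Generalised Logistic errors the thesis actually uses; sample size, contamination level and replication count are arbitrary.

```python
import random, statistics

random.seed(0)

def theil_slope(xs, ys):
    # Theil estimator: median of all pairwise slopes.
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))]
    return statistics.median(slopes)

def ols_slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

true_slope, n, reps = 2.0, 30, 500
errs = {"OLS": [], "Theil": []}
for _ in range(reps):
    xs = [i / 10 for i in range(n)]
    # Heavy-tailed noise: 10% of observations are large outliers.
    ys = [true_slope * x + (random.gauss(0, 1) if random.random() < 0.9
          else random.gauss(0, 10)) for x in xs]
    errs["OLS"].append(ols_slope(xs, ys) - true_slope)
    errs["Theil"].append(theil_slope(xs, ys) - true_slope)

for name, e in errs.items():
    print(f"{name}: bias={statistics.fmean(e):+.3f} "
          f"MSE={statistics.fmean(v * v for v in e):.4f}")
```

Under this contamination the robust Theil slope should show a markedly smaller MSE than OLS, which is the pattern of result the thesis quantifies systematically.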
4

Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.

Full text
Abstract
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for Monte Carlo simulation of Markov chains. This Ph.D. work implements this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated among other things to radiation shielding and nuclear instrumentation studies. Those studies are characterised by strong radiation attenuation in matter, so they fall within the scope of rare-event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for AMS. The first is an on-the-fly scoring procedure, designed to optimise the estimation of multiple scores in a single AMS simulation. The second is an extension of AMS to branching processes, which are common in radiation shielding simulations: in coupled neutron-photon simulations, for example, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than ten orders of magnitude), highlighting the promising advantages of the AMS algorithm over existing variance reduction techniques.
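The splitting idea behind AMS, repeatedly discarding the least-advanced replica and branching a survivor past its level, can be sketched for a toy rare event: a ten-step Gaussian random walk exceeding a high level. This is a minimal illustration under stated assumptions (walk length, level and replica count are arbitrary), not the TRIPOLI-4 implementation.

```python
import random

random.seed(2)
T, L, N = 10, 10.0, 200  # walk length, rare level, number of replicas

def extend(path, keep):
    # Keep the first `keep` positions, then continue the walk to length T.
    path = path[:keep]
    x = path[-1] if path else 0.0
    while len(path) < T:
        x += random.gauss(0.0, 1.0)
        path.append(x)
    return path

paths = [extend([], 0) for _ in range(N)]
scores = [max(p) for p in paths]  # score = highest level reached
weight = 1.0
while min(scores) < L:
    m = min(scores)
    killed = scores.index(m)
    donors = [i for i in range(N) if scores[i] > m]
    if not donors:
        break
    donor = paths[random.choice(donors)]
    # Branch from the donor just after it first exceeds the killed level m.
    cut = next(i for i, x in enumerate(donor) if x > m) + 1
    paths[killed] = extend(list(donor), cut)
    scores[killed] = max(paths[killed])
    weight *= 1.0 - 1.0 / N  # each kill multiplies the estimate by (1 - 1/N)

p_hat = weight * sum(s >= L for s in scores) / N
print(f"AMS estimate of P(max of walk >= {L}): {p_hat:.2e}")
```

A crude Monte Carlo estimate of the same probability (roughly 1.6e-3 here) would need millions of samples for comparable accuracy, which is the rare-event problem AMS addresses.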
5

Nilsson, Emma. "Monte Carlo simulation techniques : The development of a general framework". Thesis, Linköping University, Department of Management and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18327.

Full text
Abstract

Algorithmica Research AB develops software applications for the financial markets. One of its products is Quantlab, a tool for quantitative analysis. An effective method for valuing many financial instruments is Monte Carlo simulation, and since it is such a common method, Algorithmica is interested in investigating whether a general Monte Carlo framework can be created.

A requirement from Algorithmica is that the framework be general, and this is the main problem to solve: it is difficult to build a generalised framework because financial derivatives differ widely in structure. To simplify the framework, the thesis is limited to European-style derivatives whose underlying asset follows a Geometric Brownian Motion.

The problem definition and delimitations were refined gradually, in parallel with the literature review, in order to decide which purpose and delimitations were reasonable to treat. Standard Monte Carlo requires a large number of trials and is therefore slow. To speed up the process there are various variance reduction techniques, as well as quasi-Monte Carlo simulation, in which deterministic numbers (low-discrepancy sequences) are used instead of random ones. The thesis investigates the control variate and antithetic variate variance reduction techniques, and the Sobol, Faure and Halton low-discrepancy sequences.

Three test instruments were chosen to exercise the framework: an Asian option and a Barrier option, used to determine which Monte Carlo method performs best, and a more complex structured product, Smart Start, used to test that the framework can handle it.

To deepen the understanding of the theory, the Halton, Faure and Sobol sequences were implemented in Quantlab in parallel with the literature review. The Halton and Faure sequences appeared to perform worse than Sobol, so they were not analysed further.

Developing the framework was an iterative process. The chosen solution is a general framework built around five function pointers: the path generator, the payoff function, the stop-criterion function, and the volatility and interest-rate functions. The user supplies these functions, subject to some obligatory input and output values. Function pointers are not a problem-free solution, however, and several conflicts and issues are identified; it is therefore not recommended to implement the framework as it is designed today.

In parallel with the development of the framework, several experiments on the Asian and Barrier options were performed, with varying results, and it is not possible to conclude which method is best. Sobol often seems to converge better and fluctuate less than standard Monte Carlo. The literature indicates that the user should understand the instrument being valued, the stochastic process it follows, and the advantages and disadvantages of the different Monte Carlo methods. It is recommended to evaluate the different methods experimentally before deciding which to use when valuing a new derivative.
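One of the variance reduction techniques the thesis investigates, the antithetic variate technique, can be sketched for a European call under Geometric Brownian Motion, matching the thesis's delimitation. The parameter values below are arbitrary illustrative choices.

```python
import math, random, statistics

random.seed(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # illustrative parameters

def discounted_payoff(z):
    # European call payoff under GBM, discounted to today.
    s_t = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * max(s_t - K, 0.0)

n = 20_000
plain = [discounted_payoff(random.gauss(0.0, 1.0)) for _ in range(n)]
anti = []
for _ in range(n // 2):
    z = random.gauss(0.0, 1.0)
    # Antithetic pair: average the payoffs at z and -z.
    anti.append(0.5 * (discounted_payoff(z) + discounted_payoff(-z)))

print(f"plain MC      mean={statistics.fmean(plain):.3f} "
      f"var={statistics.variance(plain):.2f}")
print(f"antithetic MC mean={statistics.fmean(anti):.3f} "
      f"var={statistics.variance(anti):.2f}")
```

Because the call payoff is monotone in the Gaussian draw, the antithetic pairs are negatively correlated and the per-sample variance is guaranteed to drop, at no extra cost in random numbers.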

6

Ahmad, Abdul Ossman. "Advances in an open-source direct simulation Monte Carlo technique for hypersonic rarefied gas flows". Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=26579.

Full text
Abstract
Hypersonic vehicles that travel through rarefied gas environments are very expensive to design through experimental methods. In the last few decades, major work has been carried out on developing numerical methods to capture these types of flows to a certain degree of accuracy. This accuracy is increased by using particle-based numerical techniques as opposed to continuum computational fluid dynamics. However, one of the modern problems of particle-based techniques is the high computational cost associated with them. This thesis presents an enhanced open-source particle-based technique, called dsmcFoam and based on the direct simulation Monte Carlo technique, for capturing high-speed rarefied gas flows. As a result of the author's work, dsmcFoam has become more efficient and accurate. Benchmark studies of the standard dsmcFoam solver are presented before the main advances are introduced. The results of the benchmark investigations are compared with analytical solutions, other DSMC codes and experimental data available in the literature, and excellent agreement is found when good DSMC practice has been followed. The main advances to dsmcFoam discussed are a routine for selecting collision pairs called the transient adaptive sub-cell (TASC) method and a dynamic wall temperature model (DWTM), which relates the wall temperature to the heat flux; verification and validation studies of the DWTM are undertaken. Furthermore, the widely used conventional 8 sub-cell method for selecting possible collision pairs is cumbersome to employ properly, because many mesh-refinement stages are required to obtain accurate data. Instead of mesh refinement, the TASC technique automatically employs more sub-cells, based on the number of particles in a cell.
Finally, parallel efficiency tests of dsmcFoam are presented in this thesis along with a new domain decomposition technique for parallel processing. This technique splits up the computational domain based on the number of particles, such that each processor has the same number of particles to work with.
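The particle-based domain decomposition described in the final paragraph, splitting the domain so each processor receives roughly the same number of particles, can be sketched with order statistics on particle positions. This toy ignores everything else dsmcFoam does and is not its implementation; the particle distribution and processor count are invented for illustration.

```python
import random

random.seed(6)

def particle_balanced_slabs(xs, n_proc):
    # Slab boundaries chosen as order statistics of the particle positions,
    # so each slab holds (almost exactly) the same number of particles.
    xs = sorted(xs)
    return [xs[(len(xs) * k) // n_proc] for k in range(1, n_proc)]

# A clustered particle population on [0, 1], e.g. near a leading edge.
xs = [random.betavariate(2, 5) for _ in range(10_000)]
bounds = particle_balanced_slabs(xs, 4)
counts = [sum(b0 <= x < b1 for x in xs)
          for b0, b1 in zip([0.0] + bounds, bounds + [1.0])]
print(bounds, counts)
```

A uniform geometric split of the same domain would leave some processors nearly idle, which is exactly the load imbalance this scheme avoids.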
7

Lester, Christopher. "Efficient simulation techniques for biochemical reaction networks". Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:bb804e01-b1de-409f-b843-4806c2c990c2.

Full text
Abstract
Discrete-state, continuous-time Markov models are becoming commonplace in the modelling of biochemical processes. The mathematical formulations that such models lead to are opaque, and, due to their complexity, are often considered analytically intractable. As such, a variety of Monte Carlo simulation algorithms have been developed to explore model dynamics empirically. Whilst well-known methods, such as the Gillespie Algorithm, can be implemented to investigate a given model, the computational demands of traditional simulation techniques remain a significant barrier to modern research. In order to further develop and explore biologically relevant stochastic models, new and efficient computational methods are required. In this thesis, high-performance simulation algorithms are developed to estimate summary statistics that characterise a chosen reaction network. The algorithms make use of variance reduction techniques, which exploit statistical properties of the model dynamics, so that the statistics can be computed efficiently. The multi-level method is an example of a variance reduction technique. The method estimates summary statistics of well-mixed, spatially homogeneous models by using estimates from multiple ensembles of sample paths of different accuracies. In this thesis, the multi-level method is developed in three directions: firstly, a nuanced implementation framework is described; secondly, a reformulated method is applied to stiff reaction systems; and, finally, different approaches to variance reduction are implemented and compared. The variance reduction methods that underpin the multi-level method are then re-purposed to understand how the dynamics of a spatially-extended Markov model are affected by changes in its input parameters. By exploiting the inherent dynamics of spatially-extended models, an efficient finite difference scheme is used to estimate parametric sensitivities robustly. 
The new simulation methods are tested for functionality and efficiency with a range of illustrative examples. The thesis concludes with a discussion of our findings, and a number of future research directions are proposed.
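The Gillespie Algorithm the abstract refers to fits in a few lines. The sketch below applies it to a pure-decay network X -> 0 (a minimal example chosen here, not one of the thesis's test systems) and compares the mean copy number at a fixed time with the analytic answer.

```python
import math, random

random.seed(4)

def gillespie_decay(x0, k, t_end):
    # Gillespie SSA for X -> 0 with per-molecule rate k:
    # sample an exponential waiting time from the total propensity,
    # then fire the (only) reaction.
    t, x = 0.0, x0
    while x > 0:
        t += random.expovariate(k * x)
        if t > t_end:
            break
        x -= 1
    return x

k, x0, t_end, runs = 0.5, 100, 2.0, 2000
mean_x = sum(gillespie_decay(x0, k, t_end) for _ in range(runs)) / runs
print(f"mean X(t={t_end}) = {mean_x:.1f}, analytic {x0 * math.exp(-k * t_end):.1f}")
```

The cost of such ensembles, thousands of exact sample paths per statistic, is the computational barrier that motivates the variance reduction and multi-level methods developed in the thesis.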
8

Baker, Adam Richard Ernest. "The use of the Monte Carlo technique in the simulation of small-scale dosimeters and microdosimeters". Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/2897/.

Full text
Abstract
In order to understand the effects of low-keV radiation at small scales, a number of detector designs have been developed to investigate the ways energy is deposited. This research investigated a number of different detector designs, looking in particular at their properties as small-scale dosimeters exposed to photon radiation with an energy of 5-50 keV. In addition, Monte Carlo models were constructed of the different detector designs in order to ascertain the trends in energy absorption within the detectors. An important part of the research was investigating the dose enhancement produced when the low-Z elements present in human tissues are in proximity to higher-Z metallic elements within this energy range. This included looking at dose enhancement due to the photoelectric effect, with photon energies of 5-50 keV, and through the absorption of thermal neutrons. The reason for studying the dose enhancement was twofold: to look at the increase in energy absorption both for elements that are currently being investigated for medical applications and for elements that are present in dosimeters alongside the tissue-equivalent elements. Using the Monte Carlo codes MCNP4C and EGSnrc, simulations were produced for a variety of different detector designs, both solid-state and gas-filled. These models were then compared with experimental results and were found to be able to predict trends in the behaviour of some of the detector designs.
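The photon transport underlying such dosimeter simulations rests on sampling exponential free paths. The sketch below is a minimal narrow-beam Monte Carlo (not MCNP4C or EGSnrc, and with no scattering or secondary particles) that checks a sampled transmission fraction against the Beer-Lambert value; the attenuation coefficient and thickness are arbitrary.

```python
import math, random

random.seed(7)

def transmitted_fraction(mu, thickness, n=100_000):
    # Sample exponential free paths; count photons that cross the slab
    # without interacting (narrow-beam geometry, absorption only).
    hits = sum(random.expovariate(mu) > thickness for _ in range(n))
    return hits / n

mu, t = 2.0, 1.0  # attenuation coefficient (1/cm), slab thickness (cm)
est = transmitted_fraction(mu, t)
print(f"MC {est:.4f} vs analytic {math.exp(-mu * t):.4f}")
```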
9

Mastail, Cédric. "Modélisation et simulation du dépôt des oxydes à forte permittivité par la technique du Monte-Carlo cinétique". Toulouse 3, 2009. http://thesesups.ups-tlse.fr/989/.

Full text
Abstract
Miniaturizing components requires radical changes in the development of future microelectronic devices. In this perspective, the gate dielectric of MOS devices is becoming so thin as to be permeable to leakage currents. One solution is to replace SiO2 with a higher-permittivity material, which would allow the use of thicker layers with similar performance. My work presents a multi-scale model of the atomic layer deposition (ALD) growth of HfO2 on Si, which links the nano-structuring of an interface to the fabrication process. I demonstrate that knowledge of the elementary chemical processes, obtained through DFT calculations, makes it possible to build a process simulation based on a kinetic Monte Carlo code named "HIKAD". Going beyond the more obvious mechanisms (adsorption, desorption, decomposition and hydrolysis of the precursors on the surface), I introduce the notion of densification mechanisms for the deposited oxide layers. These mechanisms are the key to understanding how layer growth proceeds in terms of coverage and, beyond that, how the system evolves from molecular-type reactions towards a bulk material. I discuss all these points in the light of recent experimental characterisation results on the deposition of hafnium oxides.
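A kinetic Monte Carlo loop of the kind described here reduces to two ingredients: rate-proportional event selection and exponential waiting times. The sketch below shows only that core loop; the event names and rate values are invented for illustration and are not HIKAD's actual mechanism set.

```python
import random

random.seed(8)

def kmc_step(rates):
    # Kinetic Monte Carlo step: pick an event with probability proportional
    # to its rate, and advance time by an exponential waiting time.
    total = sum(rates.values())
    r, acc = random.random() * total, 0.0
    chosen = None
    for event, k in rates.items():
        acc += k
        if r < acc:
            chosen = event
            break
    if chosen is None:  # guard against floating-point round-off
        chosen = event
    return chosen, random.expovariate(total)

# Hypothetical surface-event rates (arbitrary units), for illustration only.
rates = {"adsorb": 5.0, "desorb": 1.0, "densify": 0.5}
counts = dict.fromkeys(rates, 0)
t = 0.0
for _ in range(10_000):
    event, dt = kmc_step(rates)
    counts[event] += 1
    t += dt
print(counts, f"t = {t:.1f}")
```

Over many steps each event fires in proportion to its rate, and the accumulated clock follows the exact stochastic dynamics rather than fixed time increments.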
10

Mastail, Cedric. "Modélisation et simulation du dépôt des oxydes à forte permittivité par la technique du Monte-Carlo cinétique". PhD thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00541993.

Full text
Abstract
Miniaturizing components requires radical changes in the development of future microelectronic devices. In this context, MOS gate oxides are reaching thickness limits that make them permeable to leakage currents. One solution is to replace SiO2 with a higher-permittivity material, allowing thicker layers to be used for comparable performance. In this work we present a multi-scale model of the atomic layer deposition (ALD) growth of HfO2 on Si that links the nano-structuring of an interface to the fabrication process. We show that knowledge of the elementary chemical processes, obtained via DFT calculations, makes it possible to build a process simulation based on a kinetic Monte Carlo code named "HIKAD". Beyond the most obvious mechanisms (adsorption, desorption, decomposition and hydrolysis of the precursors on the surface), we introduce the notion of densification mechanisms for the deposited oxide layers. These mechanisms are the key to understanding how layer growth proceeds in terms of coverage and, beyond that, how the system evolves from molecular-type reactions towards a bulk material. We discuss these points in the light of recent experimental characterisation results on hafnium oxide deposition.
11

Medin, Joakim. "Studies of clinical proton dosimetry using Monte Carlo simulation and experimental techniques /". Online version, 1997. http://bibpurl.oclc.org/web/26808.

Full text
12

Charalambous, James. "Application of Monte Carlo Simulation Technique with URBS Runoff-Routing Model for design flood estimation in large catchments". Thesis, University of Western Sydney, College of Science, Technology and Environment, School of Engineering and Industrial Design, 2004. http://handle.uws.edu.au:8081/1959.7/769.

Full text
Abstract
In recent years there has been significant research on holistic approaches to design flood estimation in Australia. The Monte Carlo simulation technique, an approximate form of the Joint Probability Approach, has been developed and tested on small gauged catchments. This thesis presents the extension of the Monte Carlo simulation technique to large catchments using the runoff-routing model URBS. The URBS-Monte Carlo Technique (UMCT) has been applied to the Johnstone River and Upper Mary River catchments in Queensland. The thesis shows that the UMCT can be applied to large catchments and be readily used by hydrologists and floodplain managers. Further, the proposed technique provides deeper insight into the hydrologic behaviour of large catchments and allows assessment of the effects of errors in input variables on design flood estimates. The research also highlights the problems and potential of the UMCT in practical applications.
Masters of Engineering (Hons.)
13

Degrelle, Deborah. "Caractérisation numérique de la technique de spectrométrie gamma par simulation Monte-Carlo. Application à la datation d'échantillons envrionnementaux". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD004/document.

Full text
Abstract
In order to improve the determination of the activity of environmental samples, the efficiency calibration of the detector must be reliable. This work addresses the two main issues in gamma-ray spectrometry: self-absorption and true coincidence summing (TCS) effects. These phenomena are studied by metrology and by Monte Carlo simulation (MCNP6), which requires that our detector models faithfully reproduce the experimental devices. Self-absorption is the dominant problem when the standard used for efficiency calibration does not have the same physical and chemical characteristics as the samples. A numerical calibration is then more suitable, and we apply it to a sedimentary archive from Lake Longemer (France). A new method is proposed in which an experimental measurement is adjusted through numerical simulation to determine the mass attenuation coefficient of the samples. This makes it possible to define a virtual chemical composition that can be used in the Monte Carlo simulation for the calibration; at 59.54 keV, the resulting self-absorption correction reaches 24%. TCS effects can also be corrected by simulation. The ETNA software can compute this correction but cannot model a well-type detector, whose geometry is particularly conducive to TCS effects. To correct the efficiency of our well detector we therefore use the efficiency-transfer method, which can be adapted to any geometry. The results of this method are validated with MCNP6 and the Genie 2000 software on the main lines of 214Bi.
14

Charalambous, James. "Application of Monte Carlo Simulation Technique with URBS Runoff-Routing Model for design flood estimation in large catchments". Thesis, View thesis, 2004. http://handle.uws.edu.au:8081/1959.7/769.

Full text
Abstract
In recent years there has been significant research on holistic approaches to design flood estimation in Australia. The Monte Carlo simulation technique, an approximate form of the Joint Probability Approach, has been developed and tested on small gauged catchments. This thesis presents the extension of the Monte Carlo simulation technique to large catchments using the runoff-routing model URBS. The URBS-Monte Carlo Technique (UMCT) has been applied to the Johnstone River and Upper Mary River catchments in Queensland. The thesis shows that the UMCT can be applied to large catchments and be readily used by hydrologists and floodplain managers. Further, the proposed technique provides deeper insight into the hydrologic behaviour of large catchments and allows assessment of the effects of errors in input variables on design flood estimates. The research also highlights the problems and potential of the UMCT in practical applications.
15

Charalambous, James. "Application of Monte Carlo Simulation Technique with URBS Runoff-Routing Model for design flood estimation in large catchments". View thesis, 2004. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20050520.153001/index.html.

Full text
Abstract
Thesis (M.Eng. (Hons.)) -- University of Western Sydney, 2004.
"Masters of Engineering (Hons) thesis, University of Western Sydney, December 2004. Supervisors: Ataur Rahman and Don Carroll" Includes bibliography.
16

El Maalouf, Joseph. "Méthodes de Monte Carlo stratifiées pour la simulation des chaines de Markov". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM089.

Full text
Abstract
Les méthodes de Monte Carlo sont des méthodes probabilistes qui utilisent des ordinateurs pour résoudre de nombreux problèmes de la science à l’aide de nombres aléatoires. Leur principal inconvénient est leur convergence lente. La mise au point de techniques permettant d’accélérer la convergence est un domaine de recherche très actif. C’est l’objectif principal des méthodes déterministes quasi-Monte Carlo qui remplacent les points pseudo-aléatoires de simulation par des points quasi-aléatoires ayant une excellente répartition uniforme. Ces méthodes ne fournissent pas d’intervalles de confiance permettant d’estimer l’erreur. Nous étudions dans ce travail des méthodes stochastiques qui permettent de réduire la variance des estimateurs Monte Carlo : ces techniques de stratification le font en divisant le domaine d’échantillonnageen sous-domaines. Nous examinons l’intérêt de ces méthodes pour l’approximation des chaînes de Markov, la simulation de la diffusion physique et la résolution numérique de la fragmentation.Dans un premier chapitre, nous présentons les méthodes de Monte Carlo pour l’intégration numérique. Nous donnons le cadre général des méthodes de stratification. Nous insistons sur deux techniques : la stratification simple (MCS) et la stratification Sudoku (SS), qui place les points sur des grilles analogues à celle du jeu. Nous pressentons également les méthodesquasi-Monte Carlo qui partagent avec les méthodes de stratification certaines propriétés d'équipartition des points d’échantillonnage.Le second chapitre décrit l’utilisation des méthodes de Monte Carlo stratifiées pour la simulation des chaînes de Markov. Nous considérons des chaînes homogènes uni-dimensionnelles à espace d’états discret ou continu. Dans le premier cas, nous démontrons une réduction de variance par rapport `a la méthode de Monte Carlo classique ; la variance des schémas MCSou SS est d’ordre 3/2, alors que celle du schéma MC est de 1. 
Les résultats d’expériences numériques, pour des espaces d’états discrets ou continus, uni- ou multi-dimensionnels, montrent une réduction de variance liée à la stratification, dont nous estimons l’ordre. Dans le troisième chapitre, nous examinons l’intérêt de la méthode de stratification Sudoku pour la simulation de la diffusion physique. Nous employons une technique de marche aléatoire et nous examinons successivement la résolution d’une équation de la chaleur, d’une équation de convection-diffusion, de problèmes de réaction-diffusion (équations de Kolmogorov et équation de Nagumo) ; enfin nous résolvons numériquement l’équation de Burgers. Dans chacun de ces cas, des tests numériques mettent en évidence une réduction de la variance due à l’emploi de la méthode de stratification Sudoku. Le quatrième chapitre décrit un schéma de Monte Carlo stratifié permettant de simuler un phénomène de fragmentation. La comparaison des performances dans plusieurs cas permet de constater que la technique de stratification Sudoku réduit la variance d’une estimation Monte Carlo. Nous testons enfin un algorithme de résolution d’un problème inverse, permettant d’approcher le noyau de fragmentation à partir de résultats de l’évolution d’une distribution ; nous utilisons dans ce cas des points quasi-Monte Carlo pour résoudre le problème direct.
Monte Carlo methods are probabilistic schemes that use computers to solve various scientific problems with random numbers. Their main disadvantage is slow convergence, and the search for techniques that may accelerate Monte Carlo simulations is a very active field. This is the aim of the deterministic methods called quasi-Monte Carlo, where random points are replaced with special sets of points with enhanced uniform distribution. These methods, however, do not provide confidence intervals from which the error can be estimated. In the present work, we are interested in random methods that reduce the variance of a Monte Carlo estimator: stratification techniques split the sampling area into strata in which random samples are chosen. We focus here on applications of stratified methods to approximating Markov chains, simulating diffusion in materials, and solving fragmentation equations. In the first chapter, we present Monte Carlo methods in the framework of numerical quadrature, and we introduce the stratification strategies. We focus on two techniques: simple stratification (MCS) and Sudoku stratification (SS), where the point distributions are similar to Sudoku grids. We also present quasi-Monte Carlo methods, whose quasi-random points share common features with stratified points. The second chapter describes the use of stratified algorithms for the simulation of Markov chains. We consider time-homogeneous Markov chains with one-dimensional discrete or continuous state space. We establish theoretical bounds for the variance of an estimator, in the case of a discrete state space, that indicate a variance reduction with respect to usual Monte Carlo: the variance of the MCS and SS schemes decreases with order 3/2 in the number of samples, instead of order 1 for usual MC.
The results of numerical experiments, for one-dimensional or multi-dimensional, discrete or continuous state spaces, show improved variances; the order is estimated using linear regression. In the third chapter, we investigate the interest of stratified Monte Carlo methods for simulating diffusion in various non-stationary physical processes. This is done by discretising time and performing a random walk at every time step. We propose algorithms for pure diffusion, for convection-diffusion, and for reaction-diffusion (the Kolmogorov and Nagumo equations); we finally solve the Burgers equation. In each case, the results of numerical tests show an improvement of the variance due to the use of stratified Sudoku sampling. The fourth chapter describes a stratified Monte Carlo scheme for simulating fragmentation phenomena. Through several numerical comparisons, we can see that stratified Sudoku sampling reduces the variance of Monte Carlo estimates. We finally test a method for solving an inverse problem: knowing the evolution of the mass distribution, it aims to recover the fragmentation kernel. In this case quasi-random points are used for solving the direct problem.
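The variance reduction this abstract describes can be illustrated with a minimal sketch: simple stratification (one sample per equal-width sub-interval) against plain Monte Carlo for a smooth one-dimensional integrand. This is an illustrative toy under arbitrary choices of integrand and sample sizes, not code from the thesis.

```python
import random

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, rng):
    """Simple stratification (MCS): one sample in each of n equal strata."""
    return sum(f((k + rng.random()) / n) for k in range(n)) / n

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

if __name__ == "__main__":
    rng = random.Random(0)
    f = lambda x: x * x          # exact integral over [0, 1) is 1/3
    v_plain = sample_variance([plain_mc(f, 64, rng) for _ in range(2000)])
    v_strat = sample_variance([stratified_mc(f, 64, rng) for _ in range(2000)])
    print(v_strat < v_plain)     # stratification wins by a wide margin
```

For a smooth integrand the empirical variance of the stratified estimator falls far below the plain Monte Carlo one at equal cost, which is the kind of gain the thesis quantifies for Markov chain estimators.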
Los estilos APA, Harvard, Vancouver, ISO, etc.
17

Deshpande, Isha Sanjay. "HETEROGENEOUS COMPUTING AND LOAD BALANCING TECHNIQUES FOR MONTE CARLO SIMULATION IN A DISTRIBUTED ENVIRONMENT". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308244580.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Peter, Felix. "A quantitative comparison of numerical option pricing techniques". St. Gallen, 2008. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01592823001/$FILE/01592823001.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Ferreiro, Rangel Carlos Augusto. "Molecular simulation studies in periodic mesoporous silicas SBA-2 and STAC-1 : model development and adsorption applications". Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5272.

Texto completo
Resumen
Adsorption is a low-energy separation process especially advantageous when the components to be separated are similar in nature or have a low molar concentration. The choice of the adsorbent is the key factor for a successful separation, and among adsorbents periodic mesoporous silicas (PMS) are of importance because of their pore sizes, shapes and connectivity. Furthermore, they can be modified by post-synthesis functionalisation, which provides a tool for tailoring them to specific applications. SBA-2 and STAC-1 are two types of PMS characterised by a three-dimensional pore system of spherical cages interconnected by a network of channels whose formation process was until now obscure. In this work the kinetic Monte Carlo (kMC) technique has been extended to simulate the synthesis of these complex materials, presenting evidence that the interconnecting network originates from spherical micelles touching during their close-packing aggregation in the synthesis. Moreover, for the first time atomistic models for these materials were obtained with realistic pore-surface roughness and details of the possible location of their interaction sites. Grand Canonical Monte Carlo (GCMC) simulations of nitrogen, methane and ethane adsorption in the materials' pore models show excellent agreement with experimental results. In addition, their potential as design tools is explored by introducing surface groups to enhance CO2 capture; finally, application examples are presented for carbon dioxide capture from flue gases and for natural gas purification, as well as for the separation of n-butane / iso-butane isomers.
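As a rough illustration of the Grand Canonical Monte Carlo idea used in this work, the sketch below samples a non-interacting lattice-gas adsorption model with insertion/deletion moves and a Metropolis acceptance rule. The model, parameter names and values are illustrative assumptions, far simpler than the atomistic silica models of the thesis.

```python
import math
import random

def gcmc_lattice(n_sites, beta, mu, eps, steps, rng):
    """Grand Canonical MC for a non-interacting lattice gas: each step tries to
    toggle the occupancy of one random site (an insertion or a deletion) and
    accepts with the Metropolis probability min(1, exp(-beta*(eps - mu)*dn))."""
    occ = [0] * n_sites
    filled = 0
    coverage_sum = 0.0
    for step in range(steps):
        i = rng.randrange(n_sites)
        dn = 1 - 2 * occ[i]                          # +1 insertion, -1 deletion
        if rng.random() < min(1.0, math.exp(-beta * (eps - mu) * dn)):
            occ[i] ^= 1
            filled += dn
        if step >= steps // 2:                       # accumulate after burn-in
            coverage_sum += filled / n_sites
    return coverage_sum / (steps - steps // 2)       # mean fractional coverage

def langmuir_coverage(beta, mu, eps):
    """Exact coverage of the same model (Langmuir isotherm), for comparison."""
    return 1.0 / (1.0 + math.exp(beta * (eps - mu)))
```

For an adsorption energy eps = -2 and chemical potential mu = -1 at beta = 1, both routines give a coverage near 0.73; atomistic GCMC codes apply the same acceptance rule with the interaction-energy change of a trial insertion or deletion.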
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Creffield, Charles Edward. "The application of numerical techniques to models of strongly correlated electrons". Thesis, King's College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266066.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Thierry, Olivier. "Rétrodiffusion de la lumière par un milieu particulaire dense : étude expérimentale et simulation numérique par la technique de Monte Carlo". Rouen, 1992. http://www.theses.fr/1992ROUE5042.

Texto completo
Resumen
The optical phenomena that occur when a particulate medium is illuminated are analysed: single, multiple and dependent scattering, coherent backscattering and strong localisation. Characterisation diagrams are constructed to predict the existence of dependent scattering or coherent backscattering within a diffusive medium, as a function of its parameters (particle diameter and concentration). A high-performance experimental set-up was developed to measure the angular distribution of the backscattered intensity, coherent or not. A computation code, using the Monte Carlo method, was developed to simulate the experiment. It models the collimated Gaussian incident beam, the influence of the walls of the tank containing the diffusive medium, the multiple scattering of light by the particles of the medium, and the angular detection of the backscattered intensity. The comparison of simulated and experimental angular profiles shows good agreement for latexes of diameter 0.5 µm with volume fractions Vf between 0.1% and 1.92%, latexes of diameter 2 µm with Vf between 1% and 9.57%, gels of diameter 59 µm with Vf between 0.2% and 1%, and gels of diameter 92 µm with Vf between 0.2% and 1%. Measuring the angular width of the coherent backscattering peak at two wavelengths allows the optical diagnosis of dense media, both for the particle diameter (between 0.1 and 5 µm) and for the particle concentration.
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Badal, Soler Andreu. "Development of advanced geometric models and acceleration techniques for Monte Carlo simulation in Medical Physics". Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6615.

Texto completo
Resumen
Els programes de simulació Monte Carlo de caràcter general s'utilitzen actualment en una gran varietat d'aplicacions.
Tot i això, els models geomètrics implementats en la majoria de programes imposen certes limitacions a la forma dels objectes que es poden definir. Aquests models no són adequats per descriure les superfícies arbitràries que es troben en estructures anatòmiques o en certs aparells mèdics i, conseqüentment, algunes aplicacions que requereixen l’ús de models geomètrics molt detallats no poden ser acuradament estudiades amb aquests programes.
L’objectiu d’aquesta tesi doctoral és el desenvolupament de models geomètrics i computacionals que facilitin la descripció dels objectes complexos que es troben en aplicacions de física mèdica. Concretament, dos nous programes de simulació Monte Carlo basats en PENELOPE han sigut desenvolupats. El primer programa, penEasy, utilitza un algoritme de caràcter general estructurat i inclou diversos models de fonts de radiació i detectors que permeten simular fàcilment un gran nombre d’aplicacions. Les noves rutines geomètriques utilitzades per aquest programa, penVox, estenen el model geomètric estàndard de PENELOPE, basat en superfícies quàdriques, per permetre la utilització d’objectes voxelitzats. Aquests objectes poden ser creats utilitzant la informació anatòmica obtinguda amb una tomografia computeritzada i, per tant, aquest model geomètric és útil per simular aplicacions que requereixen l’ús de l’anatomia de pacients reals (per exemple, la planificació radioterapèutica). El segon programa, penMesh, utilitza malles de triangles per definir la forma dels objectes simulats. Aquesta tècnica, que s’utilitza freqüentment en el camp del disseny per ordinador, permet representar superfícies arbitràries i és útil per a simulacions que requereixen un gran detall en la descripció de la geometria, com per exemple l’obtenció d’imatges de raigs X del cos humà.
Per reduir els inconvenients causats pels llargs temps d'execució, els algoritmes implementats en els nous programes s'han accelerat utilitzant tècniques sofisticades, com per exemple una estructura octree. També s'ha desenvolupat un paquet de programari per a la paral·lelització de simulacions Monte Carlo, anomenat clonEasy, que redueix el temps real de càlcul de forma proporcional al nombre de processadors que s'utilitzen.
Els programes de simulació que es presenten en aquesta tesi són gratuïts i de codi lliure. Aquests programes s'han provat en aplicacions realistes de física mèdica i s'han comparat amb altres programes i amb mesures experimentals.
Per tant, actualment ja estan llestos per la seva distribució pública i per la seva aplicació a problemes reals.
Monte Carlo simulation of radiation transport is currently applied in a large variety of areas. However, the geometric models implemented in most general-purpose codes impose limitations on the shape of the objects that can be defined. These models are not well suited to represent the free-form (i.e., arbitrary) shapes found in anatomic structures or complex medical devices. As a result, some clinical applications that require the use of highly detailed phantoms cannot be properly addressed.
This thesis is devoted to the development of advanced geometric models and acceleration techniques that facilitate the use of state-of-the-art Monte Carlo simulation in medical physics applications involving detailed anatomical phantoms. To this end, two new codes, based on the PENELOPE package, have been developed. The first code, penEasy, implements a modular, general-purpose main program and provides various source models and tallies that can be readily used to simulate a wide spectrum of problems. Its associated geometry routines, penVox, extend the standard PENELOPE geometry, based on quadric surfaces, to allow the definition of voxelised phantoms. This kind of phantom can be generated using the information provided by a computed tomography scan and, therefore, penVox is convenient for simulating problems that require the use of the anatomy of real patients (e.g., radiotherapy treatment planning). The second code, penMesh, utilises closed triangle meshes to define the boundary of each simulated object. This approach, which is frequently used in computer graphics and computer-aided design, makes it possible to represent arbitrary surfaces, and it is suitable for simulations requiring high anatomical detail (e.g., medical imaging).
A set of software tools for the parallelisation of Monte Carlo simulations, clonEasy, has also been developed. These tools can reduce the simulation time by a factor that is roughly proportional to the number of processors available and, therefore, facilitate the study of complex settings that may require unaffordable execution times in a sequential simulation.
The computer codes presented in this thesis have been tested in realistic medical physics applications and compared with other Monte Carlo codes and experimental data. Therefore, these codes are ready to be publicly distributed as free and open software and applied to real-life problems.
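A triangle-mesh geometry like the one penMesh uses must repeatedly answer one question: where does a particle track first cross a triangle? A standard answer is the Möller-Trumbore ray/triangle intersection test, sketched below in plain Python. This is a generic illustration of the geometric primitive, not the actual penMesh implementation.

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray direction d, or None if no hit."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)        # triangle edge vectors
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:                       # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv                  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv                      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                     # distance along the ray
    return t if t > eps else None
```

For example, a ray starting at (0.2, 0.2, -1) along +z hits the unit right triangle in the z = 0 plane at distance t = 1; a full tracker would take the minimum t over the candidate triangles, typically pruned with an octree or similar spatial structure.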
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Blasi, Philippe. "Simulation de la diffusion de la lumière et des gaz par techniques de Monte Carlo". Phd thesis, Université Sciences et Technologies - Bordeaux I, 1996. http://tel.archives-ouvertes.fr/tel-00006980.

Texto completo
Resumen
Realistic image synthesis requires precise modelling of the interactions of light with matter (reflection, refraction, scattering) and of the exchanges of light energy between the objects in the scene. This modelling, very complex unless restrictive assumptions are made, can be carried out efficiently by Monte Carlo simulation. In the present work, we first define a complete scene-illumination method, based on a Monte Carlo simulation of a "particle" model of light. We first develop this simulation for participating media, reducing the variance of the simulation through an exact computation of absorption. We then extend this work to surface objects and propose a photon-grouping technique to obtain constant efficiency at each computation step. In the second part of this work, we study the application of this method to the visualisation of three-dimensional scalar fields, and then the application of certain techniques from image synthesis (tessellation of volume data, spatial partitioning, distance images, ...) to the simulation of gas diffusion, which presents many similarities with the simulation of light scattering.
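The exact treatment of absorption mentioned in this abstract is commonly implemented as implicit capture: instead of absorbing a photon outright at a collision, its statistical weight is multiplied by the scattering albedo, which removes the binary absorb-or-survive noise. A minimal sketch for a homogeneous 1-D slab, with illustrative parameter names, might look like this:

```python
import math
import random

def slab_transmission(sigma_t, albedo, thickness, n_photons, rng):
    """Monte Carlo photon transport through a homogeneous 1-D slab.
    Absorption is implicit: at each collision the photon survives but its
    weight is multiplied by the single-scattering albedo (a common
    variance-reduction device akin to the exact absorption treatment)."""
    transmitted = 0.0
    for _ in range(n_photons):
        x, mu, w = 0.0, 1.0, 1.0              # position, direction cosine, weight
        while w > 1e-4:                       # crude cutoff (no Russian roulette)
            s = -math.log(rng.random()) / sigma_t   # sampled free path
            x += mu * s
            if x >= thickness:                # escaped through the far face
                transmitted += w
                break
            if x <= 0.0:                      # escaped back through the near face
                break
            w *= albedo                       # implicit absorption at collision
            mu = 2.0 * rng.random() - 1.0     # isotropic re-emission
    return transmitted / n_photons
```

With the albedo set to 0 the scheme reduces to pure attenuation, so the transmitted fraction approaches exp(-sigma_t * thickness), which gives a quick sanity check.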
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Badel, Jean-Noël. "Contrôle dosimétrique des traitements de radiothérapie par simulation Monte Carlo de l'image de dose portale transmise". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0110/these.pdf.

Texto completo
Resumen
Le sujet est le contrôle dosimétrique des traitements de radiothérapie par l’utilisation de l’imagerie portale numérique (EPID). A l’origine, les EPID ont été conçus pour la vérification du positionnement du patient par rapport aux champs d’irradiation de son traitement. Depuis, plusieurs travaux ont mis en avant les propriétés dosimétriques de ces imageurs et leurs applications pour le contrôle dosimétrique des traitements. L’objectif de cette thèse est ainsi de concevoir et d’évaluer un modèle de prédiction de l’image de dose portale transmise. Nos travaux se déclinent en deux axes. Le premier analyse les capacités dosimétriques d’un imageur à matrice de silicium (a-Si) de type iViewGT d’Elekta. Nous mettons en évidence la faisabilité d’utiliser ce système pour la dosimétrie à condition de procéder à un étalonnage précis, tenant compte des paramètres d’irradiation les plus influents tels l’énergie nominale, la taille du champ et l’épaisseur du patient. Le deuxième développe un modèle de prédiction de l’image de dose portale transmise par méthode Monte Carlo. Notre modèle consiste à calculer l’image de transmission à travers le patient par simulation Monte Carlo et à mesurer l’image portale du champ d’irradiation sans le patient. Cette approche implique la prise en compte du faisceau de photons, du patient et de l’imageur dans la simulation. Les premières validations ont consisté à comparer les transmissions mesurées et simulées. Les résultats sur fantôme donnent des écarts inférieurs à 2% entre mesures et simulations.
The thesis subject is the dosimetric control of radiation therapy treatments using an electronic portal imaging device (EPID). Originally, EPIDs were designed for verifying patient positioning relative to the radiation fields of the treatment. Since then, several studies have highlighted the dosimetric properties of these imagers and their applications for the dosimetric control of treatments. The objective of this thesis is thus to design and evaluate a model for predicting the transmitted portal dose image. Our work is split into two axes. The first axis analyses the dosimetric capabilities of an amorphous silicon (a-Si) EPID, the iViewGT (Elekta). We demonstrate the feasibility of using this system for dosimetry, provided that an accurate calibration is made, taking into account the most influential irradiation parameters such as the nominal beam energy, the field size and the patient thickness. In the second axis we develop a model for predicting the transmitted portal dose image using Monte Carlo simulations. Our model calculates the transmission image through the patient by Monte Carlo simulation and measures the portal image of the radiation field without the patient. This approach implies taking the photon beam, the patient and the EPID into account in the simulation. The first validations consisted of comparing measured and simulated transmissions. Results on a phantom show deviations below 2% between measurements and simulations.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Hosseini, Seyed Ali. "Modeling protein dynamics and protein-drug interactions with Monte Carlo based techniques". Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/294730.

Texto completo
Resumen
A complete understanding of complex formation between proteins and ligands, a crucial matter for pharmacology and, more generally, for biomedicine, requires a detailed knowledge of their static and dynamic atomic interactions. The main objective of this thesis is to test recent developments in conformational sampling techniques in providing such a dynamical view. We aim at developing new protocols and methods for such a study, and we want to show how their application can help address existing problems in the biophysics of protein-ligand interactions. Moreover, we apply and refine novel computational approaches aiming at a comprehensive description of the protein and protein-ligand energy landscape, progressing towards the rational design of new inhibitors for particular targets. We provide here a summary of the main results. PELE was used for induced-fit docking in protein kinases, the mammalian target of rapamycin (mTOR) and BCL-2 family proteins, particularly the MCL-1 protein. The results produced a detailed atomic description of the binding modes of the ligands/drugs to the selected targets. Overall, these results provide new data to understand the mechanism of action of these molecules, together with new structural data that will allow the development of more specific inhibitors for cancer treatments. Importantly, we demonstrate the critical role of sampling the protein-ligand dynamics in order to improve the docking score. Moreover, the findings reported here clearly show the capabilities of PG (and its derivatives) for use against particular apoptotic targets. Following the previous goal, we aim at turning this detailed atomic knowledge into the rational design of new inhibitors with enhanced specificity and binding strength. Motivated by our success with validation studies (applied to several systems for protein-ligand interaction and the induced-fit procedure), we attempted to design a new inhibitor for a specific target.
To do so, we used the system from our second study: molecular interactions of prodiginines with the BH3 domain of BCL-2 family members. We have shown how PELE can be used to effectively design improved compounds with significantly better docking results. PELE was also applied to steroid nuclear receptors in unbiased simulations, where substrates/ligands were either placed in the active site, free to move through the protein and find the exit channels, or placed outside the receptor, free to explore the protein surface. In this study, we demonstrated the applicability of the PELE method to solving relevant biophysical problems; in particular, using PELE we introduced a new structural and dynamic paradigm for ligand binding in steroid nuclear receptors. Using PELE, we created a protocol involving sequence comparison and all-atom protein-ligand induced-fit simulations to predict PR resistance at the molecular level. We introduced a significant advance in predicting the affinity of different drugs against HIV-1 protease with several mutations; this study shows how computational techniques are capable of quantitatively discriminating resistance variants of HIV-1 protease. This application is fully automated and installed on the PELE web server. Besides these main objectives based on method application, we aimed to add methodological improvements derived from the application and validation studies. We performed method development and studied PELE protocols to model long-timescale protein dynamics by means of normal-mode perturbation and constrained minimisation. The new backbone perturbation combined with normal modes increased the capability of the PELE method to explore local dynamics and large conformational changes.
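At the heart of PELE-style sampling is a Metropolis acceptance test applied after each perturbation and relaxation cycle. The sketch below shows the bare criterion on a one-dimensional energy surface; it is a schematic stand-in (a scalar "conformation", hypothetical parameters), not the PELE code.

```python
import math
import random

def metropolis(energy, x0, kT, step, n_steps, rng):
    """Generic Metropolis Monte Carlo walk over a 1-D energy surface:
    propose a random move, accept with probability min(1, exp(-dE/kT)).
    PELE-style protocols wrap this criterion around structural
    perturbation and relaxation steps; here the state is just a scalar."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)     # trial perturbation
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            x, e = x_new, e_new                  # accept the move
        samples.append(x)                        # rejected moves repeat x
    return samples
```

On the harmonic surface E(x) = x^2/2 with kT = 1, the visited states reproduce the Boltzmann distribution (zero mean, unit variance), which is the statistical guarantee the docking and dynamics protocols above rely on.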
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Elazhar, Halima. "Dosimétrie neutron en radiothérapie : étude expérimentale et développement d'un outil personnalisé de calcul de dose Monte Carlo". Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAE013/document.

Texto completo
Resumen
L’optimisation des traitements en radiothérapie vise à améliorer la précision de l’irradiation des cellules cancéreuses pour épargner le plus possible les organes environnants. Or la dose périphérique déposée dans les tissus les plus éloignés de la tumeur n’est actuellement pas calculée par les logiciels de planification de traitement, alors qu’elle peut être responsable de l’induction de cancers secondaires radio-induits. Parmi les différentes composantes, les neutrons produits par processus photo-nucléaires sont les particules secondaires pour lesquelles il y a un manque important de données dosimétriques. Une étude expérimentale et par simulation Monte Carlo de la production des neutrons secondaires en radiothérapie nous a conduit à développer un algorithme qui utilise la précision du calcul Monte Carlo pour l’estimation de la distribution 3D de la dose neutron délivrée au patient. Un tel outil permettra la création de bases de données dosimétriques pouvant être utilisées pour l’amélioration des modèles mathématiques « dose-risque » spécifiques à l’irradiation des organes périphériques à de faibles doses en radiothérapie
Treatment optimisation in radiotherapy aims at increasing the accuracy of cancer cell irradiation while sparing the surrounding healthy organs. However, the peripheral dose deposited in healthy tissues far away from the tumour is currently not calculated by treatment planning systems, even though it can be responsible for radiation-induced secondary cancers. Among the different components, neutrons produced through photo-nuclear processes suffer from an important lack of dosimetric data. An experimental and Monte Carlo simulation study of secondary neutron production in radiotherapy led us to develop an algorithm that uses the precision of Monte Carlo calculation to estimate the 3D neutron dose delivered to the patient. Such a tool will allow the generation of dosimetric databases ready to be used to improve the "dose-risk" mathematical models specific to the low-dose irradiation of peripheral organs occurring in radiotherapy.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Martínez, Rovira Immaculada. "Monte Carlo and experimental small-field dosimetry applied to spatially fractionated synchrotron radiotherapy techniques". Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/81470.

Texto completo
Resumen
Two innovative radiotherapy (RT) approaches are under development at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF): microbeam radiation therapy (MRT) and minibeam radiation therapy (MBRT). The two main distinct characteristics with respect to conventional RT are the use of submillimetric field sizes and spatial fractionation of the dose. This PhD work deals with different features related to small-field dosimetry involved in these techniques. Monte Carlo (MC) calculations and several experimental methods are used with this aim in mind. The core of this PhD Thesis consisted of the development and benchmarking of an MC-based computation engine for a treatment planning system devoted to MRT within the framework of the preparation of forthcoming MRT clinical trials. Additional achievements were the definition of safe MRT irradiation protocols, the assessment of scatter factors in MRT, the further improvement of the MRT therapeutic index by injecting a contrast agent into the tumour and the definition of a dosimetry protocol for preclinical trials in MBRT.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Jiang, Chang Zhong. "Microscopie électronique à balayage analytique : simulation par techniques de Monte Carlo de la détection coaxiale des électrons rétrodiffusés". Lyon 1, 1999. http://www.theses.fr/1999LYO10109.

Texto completo
Resumen
In scanning electron microscopy (SEM), the material contrast and the topographic contrast of the backscattered electrons depend markedly on the incidence and collection angles. In a first part we propose a particular geometry corresponding to coaxial detection of the backscattered electrons. We show that the main advantage of this geometry is to increase the sensitivity to the atomic number while minimising the topographic contrast. The characteristics of the microscope to be used are studied to define the conditions for obtaining a small beam diameter. The experimental system presents a set of adjustable parameters (primary energy, energy loss, incidence angle and collection angle) whose combined influences are difficult to predict. To specify the experimental conditions, we chose to simulate the electron trajectories in the materials under study. Our simulations use the Monte Carlo method and assume an amorphous sample bombarded by a point electron beam. The electron-matter interaction and the Monte Carlo technique are described in the second part. We used elastic cross sections computed by the partial-wave expansion method, and described the inelastic interaction using the Bethe continuous-slowing-down approximation. In the last part, we present the simulation results and discuss in detail the influences of the various parameters on the backscattering yields of a bulk sample and of a thin film. The most important characteristics (for example Z contrast, topographic contrast, signal-to-noise ratio, spatial resolution, etc.) are also discussed. The simulation shows how to adjust the available microscope parameters, with respect to the values of Z involved, to reach the objectives judged to have priority.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Riyanto. "Simulation, optimisation, et analyse de performances pour les systèmes industriels d'acquisition d'images". Toulouse, INPT, 1997. http://www.theses.fr/1997INPT107H.

Texto completo
Resumen
The image acquisition system is a crucial element of a computer vision system: the better the acquired images, the better the results of the image-processing algorithms. This dissertation presents an in-depth study of image acquisition systems used in an industrial context, with the aim of evaluating their performance and proposing ways to improve them. The study is divided into three parts. The first part carries out a theoretical study of these systems; a complete model of the image-formation process is proposed. The second part proposes the use of this model in the form of a simulator. A performance measure for an acquisition system, in terms of the quality of the lighting and of the images obtained, is introduced. Using the Monte Carlo method to simulate random perturbations of the system parameters, the critical parameters can be determined. The third part proposes a tool for optimising the lighting parameters through a semi-automatic approach, based on solving constrained nonlinear optimisation problems with the SQP (sequential quadratic programming) method.
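The Monte Carlo perturbation study described above can be sketched as follows: perturb each parameter of a model around its nominal value and rank the parameters by the spread they induce in the output. The model, parameter names and spreads below are purely illustrative stand-ins for an acquisition-system quality metric.

```python
import random

def perturbation_study(model, nominal, spread, n_trials, rng):
    """One-at-a-time Monte Carlo sensitivity study: perturb each parameter
    with Gaussian noise of the given spread, keep the others nominal, and
    record the standard deviation induced in the model output. Parameters
    with a large induced spread are the critical ones."""
    result = {}
    for name in nominal:
        outputs = []
        for _ in range(n_trials):
            params = dict(nominal)
            params[name] += rng.gauss(0.0, spread[name])
            outputs.append(model(params))
        mean = sum(outputs) / n_trials
        result[name] = (sum((o - mean) ** 2 for o in outputs) / n_trials) ** 0.5
    return result          # per-parameter output standard deviation
```

For a toy metric such as 10*gain + offset with equal input spreads, the study correctly flags the gain as roughly ten times more critical than the offset.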
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Coulibaly, Ibrahim. "Contributions à l'analyse numérique des méthodes quasi-Monte Carlo". Phd thesis, Université Joseph Fourier (Grenoble), 1997. http://tel.archives-ouvertes.fr/tel-00004933.

Texto completo
Resumen
Quasi-Monte Carlo methods are deterministic versions of Monte Carlo methods. The random numbers are replaced by deterministic numbers forming low-discrepancy point sets or sequences, which have a better uniform distribution. The error of a quasi-Monte Carlo method depends on the discrepancy of the sequence used, the discrepancy being a measure of the deviation from the uniform distribution. We first study the solution by quasi-Monte Carlo methods of differential equations with little regularity in time. These methods consist of formulating the problem with an integral term and then performing a quasi-Monte Carlo quadrature. Next, quasi-Monte Carlo particle methods are proposed to solve the following kinetic equations: the linear Boltzmann equation and the Kac model. Finally, we study the solution of the diffusion equation using particle methods based on quasi-random walks. These methods involve three steps: an Euler scheme in time, a particle approximation, and a quasi-Monte Carlo quadrature using (0,m,s)-nets. At each time step the particles are grouped into batches in the multi-dimensional case, or sorted in the one-dimensional case; this makes it possible to prove convergence. Numerical tests show better results for quasi-Monte Carlo methods than for Monte Carlo methods.
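The low-discrepancy sequences at the core of quasi-Monte Carlo methods are easy to generate. The sketch below builds the van der Corput sequence and 2-D Halton points (bases 2 and 3), which can replace pseudo-random points in a quadrature; it is a generic illustration, not code from the thesis.

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence:
    the digits of n in the given base are mirrored about the radix point."""
    q, inv_b = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * inv_b
        inv_b /= base
    return q

def halton_2d(n_points):
    """2-D Halton points: van der Corput in the coprime bases 2 and 3."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n_points + 1)]
```

The first binary van der Corput values are 1/2, 1/4, 3/4, 1/8, ...; averaging x*y over the first thousand Halton points already reproduces the exact integral 1/4 to within about one part in a hundred, better than typical pseudo-random sampling at the same size.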
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Haddad, Yara. "Investigation of the formation mechanisms of the High Burnup Structure in the spent nuclear fuel - Experimental simulation with ions beams". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS519/document.

Texto completo
Resumen
The aim of this thesis is to investigate and reproduce the specific features of the microstructure of the high burnup structure (HBS) of irradiated nuclear fuel, and to explore the various parameters involved in the formation of this structure, evaluating their importance and clarifying the synergies between them. This has been done using a highly simplified model system, namely uranium dioxide single crystals irradiated with low-energy (a few hundred keV) La and Xe ions at 773 K, the temperature at the periphery of genuine fuel pellets in reactor. The energies and masses of the bombarding ions were chosen to investigate the destabilisation of the solid due to (i) elastic nuclear collisions and (ii) the chemical contribution of impurities implanted at high concentration. The two species were deliberately chosen for their very different solubilities in uranium dioxide: La is soluble in UO₂ up to high concentrations, whereas Xe is insoluble. In situ transmission electron microscopy (TEM) and in situ Rutherford backscattering spectrometry in channeling mode (RBS/C), both coupled to the ion irradiation, were used to visualise, quantify and provide information on the fraction of radiation-induced defects and on the formation of bubbles, cavities or precipitates. The channeling data were then analysed by Monte Carlo simulations assuming two classes of defects: (i) randomly displaced atoms (RDA) and (ii) bent channels (BC). The RDA fraction rises sharply between 0.4 and 4.0 dpa (corresponding to a very low concentration of implanted ions) regardless of the ion species, and then saturates for both ions over a wide irradiation range; a second sharp increase is observed specifically for Xe-implanted crystals at concentrations exceeding 1.5 % (doses above 125 dpa). The BC fraction increases strongly up to 4.0 dpa and then saturates for both La and Xe. In situ TEM shows that similar radiation-induced defects appear for both ions, with the same evolution as a function of fluence: black-dot defects form first, then dislocation loops and lines appear and evolve until they become less distinguishable, and the restructuring continues by forming a tangled dislocation network. A high density of nanometre-sized gas bubbles, with a mean diameter of 2 nm, was observed at room temperature in the Xe-implanted crystal at a threshold dose of 4 dpa. Coupling the two in situ techniques demonstrates that the difference between the saturation plateaus of the two ions, and the dramatic increase of RDA at high implanted-Xe concentration, can be ascribed to (i) the solubility of La compared with Xe, the latter leading to the formation of nanometre-sized gas bubbles, and (ii) the size of the implanted species in the UO₂ matrix: insoluble Xe atoms have an atomic radius much larger than the cationic radius of U⁴⁺ (whereas La³⁺ has a radius similar to that of U⁴⁺), which is responsible for additional stress in the crystal structure.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Tabatabaian, Zinat. "Fast neutron transmission and tomography simulation using Monte Carlo techniques for the examination of large industrial and biological objects". Thesis, University of Surrey, 1997. http://epubs.surrey.ac.uk/844474/.

Texto completo
Resumen
Elemental analysis of substances made of heavy elements, and the detection of light elements in heavy matrices, are difficult with photon transmission techniques. Neutrons have been used in this work, taking advantage of their distinctive absorption and scattering properties, to probe the structure of industrial and biological objects made of strongly neutron-scattering or neutron-absorbing materials, or combining materials of high and low neutron cross section. The matrices and impurities most amenable to neutron inspection were identified by deriving expressions for the minimum detectable mass and length fraction of elements in an object. Formulae for the minimum number of neutrons required to detect an impurity in a matrix have also been developed, and the optimum sample thickness that can be investigated with a minimum number of neutrons is likewise derived. Calculations of the minimum detectable mass fraction of hydrogen in a number of sample matrices of industrial interest, and of elements in a water matrix, highlight the differences with photon attenuation measurements. Results are presented for three neutron energies, cold (0.001 eV), thermal (0.025 eV) and fast (14 MeV); detectable concentrations in the parts-per-million range are demonstrated. Fast neutrons were used for their high penetration, in order to study bulk industrial and biological samples, and for their suitability for detecting light elements such as H, C, N and O in large objects. Fast-neutron transmission tomographs of biological samples were simulated using the MORSE-CGA Monte Carlo code, which was used to compute the transmission of a multienergetic U-235 fast-fission neutron source through complex geometries for industrial and biological applications.
A fast-neutron collimator for radiography, a collimator for brain tomography and a tomography chamber were simulated in order to design a technique for estimating the effect of scattered neutrons in practical tomography. The macroscopic cross sections and neutron mean free paths of the heterogeneous matrix media were obtained from the microscopic elemental cross sections of the DLC-100G package. Using a multienergetic source made it possible to determine the optimum neutron energy for examining objects; the analysis required a technique for calculating the fraction of neutrons in each energy group of the 100-group structure of the DLC-100G package. Finally, simulated neutron tomographic images were produced by collecting neutron transmission data at different angles of the object and reconstructing them with the filtered back-projection technique. In the non-destructive evaluation of medical organs by simulated fast-neutron tomography, the simulated tomographs of prototype biological objects were able to distinguish brain in skull, bone marrow in bone, and bone in soft tissue with good contrast, up to 0.42. These results are valuable for identifying developing cystic lesions and daughter cysts within the marrow vascular spaces, solid bony tumours, aberrant masses in the facial bone, tumours in the spine and other bone-marrow abnormalities. In the non-destructive characterisation of industrial objects by fast-neutron tomography, a 3 mm diameter duct containing engine oil was detected at 40 cm depth inside an aluminium combustion engine with a remarkable contrast of 0.35; the minimum detectable mass of oil in aluminium at the optimum neutron energy was 0.1 mg/g, with a similar result for iron.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Lu, Yongjie. "Application de la methode de simulation directe monte carlo aux ecoulements de transition application a plusieurs techniques de mesure". Paris 6, 1993. http://www.theses.fr/1993PA066158.

Texto completo
Resumen
The direct simulation Monte Carlo method, using the Borgnakke-Larsen model for internal-translational energy exchange and the Maxwell model for molecule-wall interaction, is applied to the study of transitional flows encountered in several measurement techniques: the falling sphere, hot-wire anemometry and laser Doppler anemometry. The studies focus on the drag force experienced by the obstacle (sphere or spherical particle) and on the heat exchange between the flow and the obstacle (cylindrical wire).
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Phan, Thanh-Hien. "Simulation and Experimental Characterization of the Scintillation Detector for IGOSat". Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC048.

Texto completo
Resumen
This dissertation describes the work done to develop the scintillation detector of the IGOSat nanosatellite. Based on the requirements of the project, a detector concept has been proposed which required validation by both simulation and experiment. IGOSat is a university satellite project aiming to develop a nanosatellite carrying a scintillator payload able to measure the radiative background in the auroral zones and the South Atlantic Anomaly (SAA) in low Earth orbit. In addition, the satellite carries a dual-frequency GPS payload for measuring the total electron content (TEC) of the ionosphere. These two payloads are hosted on a 3U CubeSat platform that will be launched into a polar orbit at an altitude of about 650 km. The scintillator payload includes a detector composed of an inorganic scintillation crystal surrounded by five organic scintillators. The chosen inorganic scintillator is cerium bromide (CeBr3), sensitive to both gamma-ray photons and electrons; the five surrounding EJ-200 plastic scintillators, mainly sensitive to electrons, are used to discriminate the two particle types. In other words, a particle is counted as a gamma-ray photon when it interacts only in the CeBr3 crystal, and as an electron if at least one interaction occurred in a plastic scintillator. Monte Carlo simulations have been used to investigate the detection capability of this detector. A response matrix has been built from the gamma-ray simulations, which can be used to estimate the original energy spectrum of low-Earth-orbit gamma-ray photons. An experimental test bench was set up to measure the detected spectra of radioactive sources. These measurements serve not only to validate the simulation results, but also to determine the energy resolution of the detector and a corresponding calibration method. A comparison between the Monte Carlo simulations and the experimental measurements is also provided in this dissertation. Based on the topics described above, the dissertation has six chapters: Chapter 1 introduces the project and reviews studies of the low-Earth-orbit radiative background, as well as CubeSat development activities. Chapter 2 describes the IGOSat platform configuration, developed to support the requirements of the payloads. Chapter 3 describes the concept of the scintillation detector, its electronic readout system and the operational concept of the payload. Chapter 4 explains the physical interaction processes of a particle in the scintillation materials, the Monte Carlo simulations and the response matrix of the IGOSat detector. Chapter 5 presents the experimental results obtained with each test bench, and closes with the comparisons between simulation and experiment. Chapter 6 concludes the work.
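The response-matrix idea described above (measured spectrum = response matrix x incident spectrum, inverted to recover the incident spectrum) can be sketched on a toy three-bin detector. All numbers here are illustrative, not the IGOSat response: the example assumes a detector that deposits either the full energy or a partial energy in lower bins, which makes the matrix triangular and solvable by back-substitution.

```python
# Toy 3-bin response matrix: column j is the detector response to a photon
# in energy bin j -- a full-energy fraction in bin j plus partial deposits
# smeared into lower bins (values are illustrative only).
R = [
    [0.70, 0.20, 0.15],   # lowest energy bin
    [0.00, 0.60, 0.25],
    [0.00, 0.00, 0.55],   # highest bin: full-energy events only
]

true_spectrum = [100.0, 50.0, 20.0]   # hypothetical incident counts per bin

# Fold the incident spectrum through the response: measured_i = sum_j R[i][j] * true_j
measured = [sum(R[i][j] * true_spectrum[j] for j in range(3)) for i in range(3)]

# Unfold by back-substitution, starting from the highest bin, which can
# only be populated by full-energy events.
unfolded = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    tail = sum(R[i][j] * unfolded[j] for j in range(i + 1, 3))
    unfolded[i] = (measured[i] - tail) / R[i][i]
```

Real unfolding must also cope with statistical noise in `measured`, which usually calls for regularised or iterative methods rather than a direct solve.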
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Yang, Qing. "A computational fluid dynamic approach and Monte Carlo simulation of phantom mixing techniques for quality control testing of gamma cameras". Thesis, University of Canterbury. Physics and Astronomy, 2013. http://hdl.handle.net/10092/8742.

Texto completo
Resumen
In order to reduce unnecessary radiation exposure of clinical personnel, the optimisation of procedures in gamma camera quality control testing was investigated. A significant component of the radiation dose incurred in quality control testing comes from handling phantoms of radioactivity, especially when mixing them to obtain a uniform activity concentration, so improving phantom mixing techniques appeared to be a means of reducing the dose to personnel. This is difficult to study, however, without a continuous dynamic tomographic acquisition system for observing the mixing. In the first part of this study, a computational fluid dynamics model was developed to simulate the mixing procedure: shaking and spinning techniques were simulated with the computational fluid dynamics tool FLUENT. In the second part, a Siemens E.Cam gamma camera was simulated with the Monte Carlo software SIMIND, and a series of validation experiments demonstrated the reliability of the Monte Carlo simulation. In the third part, the simulated mixing data from FLUENT were used as the source distribution in SIMIND to simulate a tomographic acquisition of the phantom; the planar projection data were reconstructed by filtered back projection to produce a tomographic data set of the activity distribution in the phantom. This completed the simulation chain for phantom mixing and provided a proof of concept that the phantom mixing problem can be studied by combining computational fluid dynamics with nuclear medicine radiation transport simulations.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Toporkov, Jakov V. "Study of Electromagnetic Scattering from Randomly Rough Ocean-Like Surfaces Using Integral-Equation-Based Numerical Technique". Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30545.

Texto completo
Resumen
A numerical study of electromagnetic scattering by one-dimensional perfectly conducting randomly rough surfaces with an ocean-like Pierson-Moskowitz spectrum is presented. Simulations are based on solving the Magnetic Field Integral Equation (MFIE) using the numerical technique called the Method of Ordered Multiple Interactions (MOMI). The study focuses on the application and validation of this integral equation-based technique to scattering at low grazing angles and considers other aspects of numerical simulations crucial to obtaining correct results in the demanding low grazing angle regime. It was found that when the MFIE propagator matrix is used with zeros on its diagonal (as has often been the practice) the results appear to show an unexpected sensitivity to the sampling interval. This sensitivity is especially pronounced in the case of horizontal polarization and at low grazing angles. We show - both numerically and analytically - that the problem lies not with the particular numerical technique used (MOMI) but rather with how the MFIE is discretized. It is demonstrated that the inclusion of so-called "curvature terms" (terms that arise from a correct discretization procedure and are proportional to the second surface derivative) in the diagonal of the propagator matrix eliminates the problem completely. A criterion for the choice of the sampling interval used in discretizing the MFIE based on both electromagnetic wavelength and the surface spectral cutoff is established. The influence of the surface spectral cutoff value on the results of scattering simulations is investigated and a recommendation for the choice of this spectral cutoff for numerical simulation purposes is developed. Also studied is the applicability of the tapered incident field at low grazing incidence angles. 
It is found that when a Gaussian-like taper with fixed beam waist is used, the calculated average backscattered cross section exhibits a characteristic anomalous jump at incidence angles close to grazing, indicating a failure of this approximate (non-Maxwellian) taper. The effect is very pronounced for horizontal polarization but is not observed for vertical polarization, and the difference is explained. Some distinctive features associated with the taper failure are also visible in the surface current (the solution of the MFIE). Based on these findings we refine one of the previously proposed criteria relating the taper waist to the angle of incidence and demonstrate its robustness.
Ph. D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Sak, Halis. "Efficient Simulations in Finance". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/1068/1/document.pdf.

Texto completo
Resumen
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations introduced by the Basel Committee. In recent years many models and state-of-the-art methods utilizing Monte Carlo simulation have been proposed to solve this problem; in most of these models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computing value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio, and (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). A new method is therefore developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio: an integration of IS with inner replications using a geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method outperforms naive simulation for computing tail loss probabilities and conditional expectations at a single loss threshold and VaR value. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
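The importance-sampling idea the abstract relies on, shifting the sampling distribution toward the rare event and reweighting by the likelihood ratio, can be shown on the simplest possible case: a normal tail probability rather than the thesis's copula portfolio model. The mean shift used below (exponential tilting to the threshold) is a standard textbook choice, not the thesis's specific scheme.

```python
import math
import random

def tail_prob_is(threshold, n=100_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from the
    shifted proposal N(threshold, 1) and reweighting each hit by the
    likelihood ratio phi(x) / phi(x - mu)."""
    rng = random.Random(seed)
    mu = threshold
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        if x > threshold:
            total += math.exp(mu * mu / 2.0 - mu * x)  # likelihood ratio
    return total / n

est = tail_prob_is(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(X > 4) for a standard normal
```

A naive estimator with the same `n` would see only a handful of exceedances of 4, so its relative error would be enormous; under the shifted proposal roughly half the draws land in the tail and every one contributes.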
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Jakobi, Christoph. "Entwicklung und Evaluation eines Gewichtsfenstergenerators für das Strahlungstransportprogramm AMOS". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234133.

Texto completo
Resumen
The purpose of efficiency-increasing methods is to reduce the computing time required to solve radiation transport problems with Monte Carlo techniques. Besides additional geometry simplification and source biasing, this includes in particular the weight-window technique, the most important variance reduction method, developed in the 1950s. To this day, the difficulty of the technique lies in calculating appropriate weight windows. In this work a generator for spatially and energy-dependent weight windows, based on the forward-adjoint generator by T. E. BOOTH and J. S. HENDRICKS, is developed and implemented in the radiation transport program AMOS. With this generator the weight windows are calculated iteratively and set automatically; the generator is also able to adapt the energy segmentation autonomously. Its operation is demonstrated on the deep-penetration problem of photon radiation, where the efficiency can be increased by several orders of magnitude; in the best case, energy-dependent weight windows decrease the computing time by roughly one further order of magnitude. For a practice-oriented problem, the irradiation of a dosimeter for individual monitoring, the efficiency is only improved by a factor of four at best; source biasing and geometry simplification give an equivalent improvement, and energy-dependent weight windows show no efficiency gain of practical relevance.
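The weight-window mechanic at the heart of the generator, splitting particles whose statistical weight is above the window and playing Russian roulette on those below it, keeps the expected total weight unchanged while concentrating work on important particles. A minimal sketch of that mechanic (not of AMOS or its generator; the window bounds and weights are invented):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Return the list of particle weights after applying a weight window:
    split above w_high, Russian roulette below w_low, pass through inside."""
    if weight > w_high:
        # Split into k copies of equal weight that fall inside the window.
        k = int(weight / w_high) + 1
        return [weight / k] * k
    if weight < w_low:
        # Roulette: survive with probability weight/w_surv at weight w_surv,
        # so the expected weight is preserved.
        w_surv = (w_low + w_high) / 2.0
        return [w_surv] if rng.random() < weight / w_surv else []
    return [weight]

rng = random.Random(7)
population = [0.05] * 20_000 + [3.0] * 100   # many light, a few heavy particles
total_before = sum(population)

survivors = [w for p in population for w in apply_weight_window(p, 0.5, 2.0, rng)]
total_after = sum(survivors)
```

After the pass, every surviving weight lies inside the window and the total weight is preserved up to statistical noise, which is exactly what makes the technique variance-reducing yet unbiased.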
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Stockbridge, Rebecca. "Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs". Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301665.

Texto completo
Resumen
Stochastic programming combines ideas from deterministic optimization with probability and statistics to produce more accurate models of optimization problems involving uncertainty. However, due to their size, stochastic programming problems can be extremely difficult to solve and instead approximate solutions are used. Therefore, there is a need for methods that can accurately identify optimal or near optimal solutions. In this dissertation, we focus on improving Monte-Carlo sampling-based methods that assess the quality of potential solutions to stochastic programs by estimating optimality gaps. In particular, we aim to reduce the bias and/or variance of these estimators. We first propose a technique to reduce the bias of optimality gap estimators which is based on probability metrics and stability results in stochastic programming. This method, which requires the solution of a minimum-weight perfect matching problem, can be run in polynomial time in sample size. We establish asymptotic properties and present computational results. We then investigate the use of sampling schemes to reduce the variance of optimality gap estimators, and in particular focus on antithetic variates and Latin hypercube sampling. We also combine these methods with the bias reduction technique discussed above. Asymptotic properties of the resultant estimators are presented, and computational results on a range of test problems are discussed. Finally, we apply methods of assessing solution quality using antithetic variates and Latin hypercube sampling to a sequential sampling procedure to solve stochastic programs. In this setting, we use Latin hypercube sampling when generating a sequence of candidate solutions that is input to the procedure. We prove that these procedures produce a high-quality solution with high probability, asymptotically, and terminate in a finite number of iterations. Computational results are presented.
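One of the sampling schemes the abstract studies, antithetic variates, pairs each uniform draw `u` with its mirror `1 - u`; for a monotone integrand the two function values are negatively correlated, so the pair average has lower variance. A minimal sketch on a toy estimand (not the dissertation's optimality-gap estimators):

```python
import math
import random
import statistics

rng = random.Random(0)
n_pairs = 10_000

# Toy estimand: E[exp(U)] for U ~ Uniform(0,1); the exact value is e - 1.
plain = [math.exp(rng.random()) for _ in range(2 * n_pairs)]

antithetic = []
for _ in range(n_pairs):
    u = rng.random()
    # Average each draw with its antithetic partner 1 - u; exp is
    # monotone, so the two values are negatively correlated.
    antithetic.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))

est_plain = statistics.fmean(plain)
est_anti = statistics.fmean(antithetic)
var_plain = statistics.variance(plain) / len(plain)          # variance of the mean
var_anti = statistics.variance(antithetic) / len(antithetic)
```

Both estimators use the same total number of function evaluations, so the variance comparison is fair at equal cost.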
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Bykov, A. (Alexander). "Experimental investigation and numerical simulation of laser light propagation in strongly scattering media with structural and dynamic inhomogeneities". Doctoral thesis, University of Oulu, 2010. http://urn.fi/urn:isbn:9789514261558.

Texto completo
Resumen
Light-scattering diagnostics of turbid media containing both structural and dynamic inhomogeneities is currently of significant importance. One important direction in modern light-scattering diagnostics is the development of methods for probing biological media with visible and near-infrared radiation, allowing visualization of biotissue structure. Optical methods for studying biotissue structure and characterizing its optical properties are very promising and have developed rapidly during the past decade. The present work aims at improving, and discovering new potential of, existing methods of laser diagnostics of biological tissues containing both structural and dynamic inhomogeneities. In particular, the feasibility of spatially resolved reflectometry and time-of-flight techniques for the noninvasive determination of the glucose level in human blood and tissues was examined both numerically and experimentally, and the relative sensitivities of these methods to changes in glucose level were estimated; the time-of-flight technique was found to be more sensitive. The possibilities of Doppler optical coherence tomography for high-resolution imaging of dynamic inhomogeneities were also considered: this technique was applied for the first time to imaging the complex autowave cellular motility and cytoplasm shuttle flow in the slime mold Physarum polycephalum. The effect of multiple scattering on the accuracy of the measured flow velocity profiles was studied for a single flow and for a flow embedded in a strongly scattering static medium. It was shown that this effect significantly distorts the measured flow velocity profiles and must be taken into account in quantitative measurements of flow velocities.
APA, Harvard, Vancouver, ISO, etc. citation styles
41

Pospíšilová, Barbora. "Modelování a simulace rizik investičních záměrů". Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2015. http://www.nusl.cz/ntk/nusl-234563.

Full text
Abstract
This doctoral thesis deals with the modelling and simulation of investment projects and with linking risk management to new trends in the construction industry. Balancing an acceptable risk level against investment costs is a complex process influenced by several uncertainties. Simulation methods can model future scenarios of project development and quantify the impact of risk factors. The aim of the doctoral thesis is to find an optimal methodology for risk analysis in the decision-making process using simulation methods. The methodology links Monte Carlo simulation with cost-benefit analysis (CBA) and thus with risk analysis, aiming at a more effective planning process for investment projects. An accurate project plan in the pre-investment phase significantly influences the effectiveness of life-cycle costs. This is also demonstrated by the BIM methodology, which is based on the transfer and storage of up-to-date and complete information about an investment project throughout its whole life cycle. The expected output of the thesis is the effective application of simulation methods in risk prediction for modelling the outputs of an investment project. Data from the model are useful for decision making, risk management, controlling, and post-audit of investments, allowing projects to be evaluated by their overall benefits and quality with respect to sustainability.
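A Monte Carlo risk analysis of an investment project of the kind described above can be sketched as follows: uncertain cost and benefit drivers are sampled from assumed distributions and an NPV distribution is built up. The triangular distributions, cash-flow figures, and discount rate below are hypothetical placeholders, not data from the thesis:

```python
import random
import statistics

def simulate_npv(n=10_000, rate=0.05, years=10, seed=42):
    """Monte Carlo sketch of investment-project risk analysis.

    Samples an uncertain initial outlay and annual net benefit from
    triangular distributions (low, high, mode; illustrative values in
    kEUR) and returns the mean NPV and the estimated loss probability.
    """
    rng = random.Random(seed)
    npvs = []
    for _ in range(n):
        cost = rng.triangular(90, 150, 110)      # initial investment cost
        benefit = rng.triangular(10, 25, 18)     # annual net benefit
        npv = -cost + sum(benefit / (1 + rate) ** t for t in range(1, years + 1))
        npvs.append(npv)
    p_loss = sum(v < 0 for v in npvs) / n        # probability that NPV < 0
    return statistics.mean(npvs), p_loss

mean_npv, p_loss = simulate_npv()
print(f"mean NPV = {mean_npv:.1f} kEUR, P(loss) = {p_loss:.2f}")
```

The full NPV sample, rather than just its mean, is what supports risk-based decision making: percentiles and the loss probability quantify the risk exposure directly.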
APA, Harvard, Vancouver, ISO, etc. citation styles
42

Ocnasu, Andreea Bianca. "Evaluation de la sûreté de fonctionnement des réseaux de distribution par la simulation Monte Carlo : application à des stratégies de maintenance optimales". Phd thesis, Grenoble INPG, 2008. http://tel.archives-ouvertes.fr/tel-00339260.

Full text
Abstract
The electricity sector today faces new challenges imposed by the deregulation of the electricity market, the drive to reduce greenhouse-gas emissions, and the development of new technologies. There is a growing need for dependability analysis of distribution networks, reflected in a migration of methods previously used for transmission networks down to the distribution level. In a previous thesis, a computation method based on sequential Monte Carlo simulation was developed. The first part of the present thesis concerns methods for accelerating these computations; the best results were obtained by hybridizing the Antithetic Variates and Stratification methods. We then carried out a feasibility study of an optimization method based on dependability criteria. The chosen application was the optimization of preventive-maintenance strategies for network equipment. For all the equipment in the system, we sought the optimal number of preventive maintenance actions and the maximal failure-rate value at which maintenance is performed, minimizing the total cost (preventive maintenance, corrective maintenance, and customer interruption costs). Finally, a set of considerations on the future development of a dependability-analysis tool is presented: a modular structure is proposed to ease its use, together with possibilities for parallelizing the computations for better efficiency.
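The hybrid of antithetic variates and stratification that gave the best acceleration above can be illustrated on a toy integrand; this is a generic variance-reduction sketch, not the thesis's network-reliability model:

```python
import random
import statistics

def plain_mc(f, n, rng):
    """Crude Monte Carlo: n independent draws of f(U), U ~ Uniform(0, 1)."""
    return [f(rng.random()) for _ in range(n)]

def antithetic_stratified(f, n, rng):
    """Hybrid variance reduction: split (0, 1) into n//2 equal strata,
    draw one stratified u per stratum, and pair it with its antithetic
    counterpart 1 - u. Uses the same number of f evaluations as plain MC."""
    strata = n // 2
    out = []
    for k in range(strata):
        u = (k + rng.random()) / strata          # stratified uniform draw
        out.append(0.5 * (f(u) + f(1.0 - u)))    # antithetic pairing
    return out

f = lambda u: u ** 2                              # toy integrand, E[f(U)] = 1/3
rng = random.Random(0)
plain = plain_mc(f, 10_000, rng)
hybrid = antithetic_stratified(f, 10_000, rng)
print(statistics.mean(plain), statistics.mean(hybrid))
print(statistics.variance(plain), statistics.variance(hybrid))
```

Both estimators are unbiased for the integral, but the antithetic pairing cancels the monotone part of the integrand and the stratification removes between-stratum sampling noise, so the hybrid's sample variance is markedly smaller.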
APA, Harvard, Vancouver, ISO, etc. citation styles
43

Li, Xing. "Novel brachytherapy techniques for cervical cancer and prostate cancer". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1682.

Full text
Abstract
Intensity-modulated brachytherapy techniques, compensator-based intensity-modulated brachytherapy (CBT) and interstitial rotating shield brachytherapy (I-RSBT), are two novel conceptual radiation therapies for treating cervical and prostate cancer, respectively. Compared to conventional brachytherapy techniques for treating cervical cancer, CBT can potentially improve the dose conformity to the high-risk clinical target volume (CTV) of the cervix with a less invasive approach. I-RSBT can reduce the dose delivered to the prostate organs at risk (OARs) for the same radiation dose delivered to the prostate CTV. In this work, concepts and prototypes for CBT and I-RSBT were introduced and developed, and preliminary dosimetric measurements were performed for each. A CBT prototype system was constructed and experimentally validated. A prototype cylindrical compensator with eight octants, each with a different thickness, was designed. Direct metal laser sintering (DMLS) was used to construct CoCr and Ti compensator prototypes, and a 4-D milling technique was used to construct a Ti compensator prototype. Gafchromic EBT2 films, held by an acrylic quality assurance (QA) phantom, were irradiated to approximately 125 cGy with an electronic brachytherapy (eBT) source for both shielded and unshielded cases. The dose at each point on the films was calculated using a TG-43 calculation model modified by ray-tracing to account for the presence of a compensator prototype. For I-RSBT, a multi-pass dose delivery mechanism with prototypes was developed. Dosimetric measurements for a Gd-153 radioisotope were performed to demonstrate that using multiple partially shielded Gd-153 sources for I-RSBT is feasible. A treatment planning model was developed for applying I-RSBT clinically.
A custom-built, stainless-steel-encapsulated 150 mCi Gd-153 capsule with an outer length of 12.8 mm, outer diameter of 2.10 mm, active length of 9.98 mm, and active diameter of 1.53 mm was used. A partially shielded catheter was constructed with a 500 micron platinum shield and a 500 micron aluminum emission window, both with 180° azimuthal coverage. An acrylic phantom was constructed to measure the dose distributions from the shielded catheter in the transverse plane using Gafchromic EBT3 films. Film calibration curves were generated from 50, 70, and 100 kVp x-ray beams with NIST-traceable air kerma values to account for energy variation. In conclusion, CBT, a non-invasive alternative to supplementary interstitial brachytherapy, is expected to improve dose conformity to bulky cervical tumors relative to conventional intracavitary brachytherapy. However, at the current stage, constructing a patient-specific compensator using DMLS would be time-consuming, and quality assurance of the compensator would be difficult. I-RSBT is a promising approach to reducing the radiation dose delivered to prostate OARs. The next step in making Gd-153-based I-RSBT clinically feasible is developing a Gd-153 source small enough that the source, shield, and catheter all fit within a 16-gauge needle, which has a 1.65 mm diameter.
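A ray-traced shield or compensator correction of the kind described above reduces, in its simplest form, to an inverse-square point-source term multiplied by a Beer-Lambert attenuation factor along each source-to-point ray. The following is a hypothetical sketch with made-up coefficients, not the thesis's actual TG-43-based dose model:

```python
import math

def dose_rate(r_cm, t_shield_cm=0.0, mu_shield=8.0,
              sk=1.0, dose_rate_constant=1.0):
    """Point-source dose rate with an optional shield attenuation factor.

    r_cm: source-to-point distance (cm); t_shield_cm: shield thickness
    crossed by the ray (cm); mu_shield: linear attenuation coefficient
    (1/cm). All parameter values are illustrative placeholders.
    """
    geometry = 1.0 / r_cm ** 2                        # inverse-square falloff
    attenuation = math.exp(-mu_shield * t_shield_cm)  # Beer-Lambert along ray
    return sk * dose_rate_constant * geometry * attenuation

open_dose = dose_rate(2.0)                   # unshielded ray
shielded = dose_rate(2.0, t_shield_cm=0.05)  # ray through 0.5 mm of shield
print(shielded / open_dose)                  # transmission factor exp(-0.4)
```

In a full calculation each dose point gets its own ray-traced shield thickness, so the transmission factor varies with direction and produces the intensity modulation.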
APA, Harvard, Vancouver, ISO, etc. citation styles
44

Rathsman, Karin. "Modeling of Electron Cooling : Theory, Data and Applications". Doctoral thesis, Uppsala universitet, Kärnfysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-129686.

Full text
Abstract
The Vlasov technique is used to model the electron cooling force. Limitations on the applicability of the method are obtained by considering perturbations of the electron plasma. Analytical expressions for the electron cooling force, valid beyond the Coulomb-logarithm approximation, are derived and compared to numerical calculations using adaptive Monte Carlo integration. The calculated longitudinal cooling force is verified with measurements in CELSIUS. Transverse damping rates of betatron oscillations for a nonlinear cooling force are explored. Experimental data on the transverse monochromatic instability are used to determine the rms angular spread due to solenoid field imperfections in CELSIUS. The result, θrms = 0.16 ± 0.02 mrad, is in agreement with the longitudinal cooling-force measurements. This verifies the internal consistency of the model and shows that the transverse and longitudinal cooling-force components have different velocity dependences. Simulations of electron cooling with applications to HESR show that a momentum resolution Δp/p smaller than 10⁻⁵ is feasible, as needed for the charmonium spectroscopy in the experimental program of PANDA. By deflecting the electron-beam angle to exploit the monochromatic instability, a reasonable overlap between the circulating antiproton beam and the internal target can be maintained. The simulations also indicate that the cooling time is considerably shorter than expected.
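The role of Monte Carlo integration in checking analytical expressions, as above, can be illustrated with a plain (non-adaptive) estimate of a velocity average over a Maxwellian distribution; this is a generic sketch, not the thesis's actual cooling-force integrand:

```python
import math
import random
import statistics

def mc_mean_speed(n=50_000, seed=7):
    """Monte Carlo estimate of the mean speed <|v|> of a Maxwellian
    velocity distribution in thermal units (sigma = 1 per component).
    The exact value is sqrt(8/pi) ~= 1.5958."""
    rng = random.Random(seed)
    speeds = [
        math.sqrt(rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2)
        for _ in range(n)
    ]
    mean = statistics.mean(speeds)
    stderr = statistics.stdev(speeds) / math.sqrt(n)  # 1-sigma statistical error
    return mean, stderr

mean, stderr = mc_mean_speed()
print(f"{mean:.4f} +/- {stderr:.4f} (exact {math.sqrt(8 / math.pi):.4f})")
```

Adaptive schemes refine the sampling where the integrand varies most, but the principle is the same: the estimate comes with a statistical error bar that shrinks as 1/sqrt(n), against which analytical formulas can be validated.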
APA, Harvard, Vancouver, ISO, etc. citation styles
45

Krupa, Štěpán. "Stanovení hodnoty společnosti DEKTRADE, a.s". Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-162574.

Full text
Abstract
The aim of this Master's thesis is the valuation of the company DEKTRADE, a.s., as it could be determined by an independent analyst using publicly accessible information, or data obtainable through contact with the company's management, as at January 1st, 2013. The results of the thesis analyze the corporation from a strategic and financial point of view, estimate the value that could be offered to a potential buyer and, last but not least, help the board of managers see how main competitors might view DEKTRADE, a.s. The thesis is split into a theoretical and a practical part, the former constituting a solid methodological base for the final valuation of the company. The keystones of the latter are strategic and financial analyses, followed by a financial plan and specific valuation techniques. The final part of the thesis is devoted to a Monte Carlo simulation of the results with respect to the risk exposure of the computed values.
APA, Harvard, Vancouver, ISO, etc. citation styles
46

Masters, Nathan Daniel. "Efficient Numerical Techniques for Multiscale Modeling of Thermally Driven Gas Flows with Application to Thermal Sensing Atomic Force Microscopy". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11574.

Full text
Abstract
The modeling of Micro- and NanoElectroMechanical Systems (MEMS and NEMS) requires new computational techniques that can deal efficiently with geometric complexity and with scale-dependent effects that may arise. Reduced feature sizes increase the coupling of physical phenomena and noncontinuum behavior, often requiring models based on molecular descriptions and/or first principles. Furthermore, noncontinuum effects are often localized to small regions of (relatively) large systems, precluding the global application of microscale models due to computational expense. Multiscale modeling couples efficient continuum solvers with detailed microscale models to provide accurate and efficient models of complete systems. This thesis presents the development of multiscale modeling techniques for nonequilibrium microscale gas-phase phenomena, especially thermally driven microflows. Much of this work focuses on improving the ability of the Information Preserving DSMC (IP-DSMC) to model thermally driven flows. The IP-DSMC is a recent technique that seeks to accelerate the solution of direct simulation Monte Carlo (DSMC) simulations by preserving and transporting certain macroscopic quantities within each simulation molecule. The primary contribution of this work is the development of the Octant Splitting IP-DSMC (OSIP-DSMC), which recovers previously unavailable information from the preserved quantities and the microscopic velocities. The OSIP-DSMC can efficiently simulate flow fields induced by nonequilibrium systems, including phenomena such as thermal transpiration. It provides an efficient method to explore rarefied gas transport phenomena, which may lead to a greater understanding of these phenomena and to new concepts for how they may be utilized in practical engineering systems. Multiscale modeling is demonstrated using the OSIP-DSMC and a 2D BEM solver for the continuum (heat transfer) model, coupled with a modified alternating Schwarz scheme.
An interesting application for this modeling technique is Thermal Sensing Atomic Force Microscopy (TSAFM). TSAFM relies on gas phase heat transfer between heated cantilever probes and the scanned surface to determine the scan height, and thus the surface topography. Accurate models of the heat transfer phenomena are required to correctly interpret scan data. This thesis presents results demonstrating the effect of subcontinuum heat transfer on TSAFM operation and explores the mechanical effects of the Knudsen Force on the heated cantilevers.
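The DSMC method underlying the work above rests on a stochastic collision step between randomly selected particle pairs. Below is a toy version of such an elastic, isotropically scattering collision kernel; it is not Bird's full algorithm, which also involves spatial cells, time stepping, and collision-rate selection:

```python
import math
import random

def dsmc_collision_step(vels, n_pairs, rng):
    """One DSMC-style collision step: randomly selected pairs collide
    elastically (hard-sphere model, isotropic post-collision direction),
    conserving momentum and energy while relaxing toward a Maxwellian."""
    n = len(vels)
    for _ in range(n_pairs):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        v1, v2 = vels[i], vels[j]
        vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]  # centre-of-mass velocity
        g = math.dist(v1, v2)                          # relative speed (conserved)
        cos_t = 2.0 * rng.random() - 1.0               # isotropic scattering angle
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = 2.0 * math.pi * rng.random()
        gr = [g * sin_t * math.cos(phi), g * sin_t * math.sin(phi), g * cos_t]
        vels[i] = [c + 0.5 * x for c, x in zip(vcm, gr)]
        vels[j] = [c - 0.5 * x for c, x in zip(vcm, gr)]

rng = random.Random(3)
# Non-equilibrium start: two cold counter-streaming beams along x.
vels = [[1.0, 0.0, 0.0] for _ in range(500)] + [[-1.0, 0.0, 0.0] for _ in range(500)]
e0 = sum(c * c for v in vels for c in v)
dsmc_collision_step(vels, 2000, rng)
e1 = sum(c * c for v in vels for c in v)
print(f"kinetic energy before/after: {e0:.3f} / {e1:.3f}")
```

Because each collision only redirects the relative velocity, total momentum and kinetic energy are conserved while the beams relax toward an isotropic thermal distribution; the "information preserving" variants carry macroscopic quantities alongside each such simulation molecule to suppress statistical noise.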
APA, Harvard, Vancouver, ISO, etc. citation styles
47

Snyder, Brett W. "Tools and Techniques for Evaluating the Reliability of Cloud Computing Systems". University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1371685877.

Full text
APA, Harvard, Vancouver, ISO, etc. citation styles
48

Hu, Xiaohong. "Probability modeling of industrial situations using transform techniques". Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179433956.

Full text
APA, Harvard, Vancouver, ISO, etc. citation styles
49

Turgut, Ozhan Hulusi. "Effects Of Extrapolation Boundary Conditions On Subsonic Mems Flows Over A Flat Plate". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12606962/index.pdf.

Full text
Abstract
In this research, subsonic rarefied flows over a flat plate at constant pressure are investigated using the direct simulation Monte Carlo (DSMC) technique. An infinitely thin plate (either finite or semi-infinite) at zero angle of attack is considered. Flows with Mach numbers of 0.102 and 0.4 and Reynolds numbers varying between 0.063 and 246 are considered, covering most of the transitional regime between the free-molecule and continuum limits. The two-dimensional DSMC code of G. A. Bird is used to simulate these flows, and the code is modified to examine the effects of various inflow and outflow boundary conditions. It is observed that simulations of subsonic rarefied flows are sensitive to the applied boundary conditions. Several extrapolation techniques are considered for evaluating the flow properties at the inflow and outflow boundaries; among the alternatives, four techniques to which the solutions are relatively less sensitive are retained. In addition to the commonly used extrapolation techniques, in which the flow properties are taken from the neighboring boundary cells of the domain, a newly developed extrapolation scheme based on tracking streamlines is applied to the outflow boundaries, and the best results are obtained using the new extrapolation technique together with Neumann boundary conditions. With the new technique, the flow is not distorted even when the computational domain is small. Simulations are performed for various freestream conditions and computational-domain configurations, and excellent agreement is obtained with the available experimental data.
APA, Harvard, Vancouver, ISO, etc. citation styles
50

Kalavrezos, Michail y Michael Wennermo. "Stochastic Volatility Models in Option Pricing". Thesis, Mälardalen University, Department of Mathematics and Physics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-538.

Full text
Abstract

In this thesis we have created a computer program in the Java language which calculates European call and put options with four different models based on the article "The Pricing of Options on Assets with Stochastic Volatilities" by John Hull and Alan White. Two of the models use stochastic volatility as an input. The thesis describes the foundations of stochastic-volatility option pricing and compares the output of the models. Which model better estimates the real option price depends on further research into the model parameters involved.
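A Monte Carlo pricer in the spirit of the Hull-White setting described above, with the variance following its own geometric Brownian motion independently of the asset, can be sketched as follows. All parameter values are illustrative, and Python is used here rather than the thesis's Java:

```python
import math
import random
import statistics

def hull_white_call_mc(s0=100.0, k=100.0, r=0.05, t=1.0,
                       v0=0.04, mu_v=0.0, xi=0.3,
                       n_paths=10_000, n_steps=25, seed=11):
    """Monte Carlo price of a European call when the variance follows its
    own geometric Brownian motion, independent of the asset (the setting
    of Hull & White). Parameter values are illustrative: v0 = 0.04
    corresponds to a 20% initial volatility, xi is the vol-of-vol."""
    rng = random.Random(seed)
    dt = t / n_steps
    payoffs = []
    for _ in range(n_paths):
        s, v = s0, v0
        for _ in range(n_steps):
            z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # independent shocks
            s *= math.exp((r - 0.5 * v) * dt + math.sqrt(v * dt) * z1)       # asset
            v *= math.exp((mu_v - 0.5 * xi * xi) * dt + xi * math.sqrt(dt) * z2)  # variance
        payoffs.append(max(s - k, 0.0))
    return math.exp(-r * t) * statistics.mean(payoffs)

price = hull_white_call_mc()
print(f"European call (ATM, 1y): {price:.2f}")
```

With xi = 0 the variance stays fixed at v0 and the estimate converges to the Black-Scholes price for a 20% volatility (about 10.45 for these parameters), which is a useful sanity check on such a pricer.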

APA, Harvard, Vancouver, ISO, etc. citation styles