
Dissertations / Theses on the topic 'Monte Carlo experiments'



Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo experiments.'




1

Grinberg, Farida. "Ultraslow molecular dynamics of organized fluids: NMR experiments and Monte-Carlo simulations." Diffusion Fundamentals 2 (2005) 119, pp. 1-2. https://ul.qucosa.de/id/qucosa%3A14460.

2

Ames, Allison Jennifer. "Monte Carlo Experiments on Maximum Entropy Constructive Ensembles for Time Series Analysis and Inference." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32571.

Abstract:
In econometric analysis, the traditional bootstrap and related methods often require the assumption of stationarity: the distribution function of the process remains unchanged when shifted in time by an arbitrary value, imposing perfect time-homogeneity. In terms of the joint distribution, stationarity implies that the date of the first time index is not relevant. For time series data, however, this assumption is problematic: with time series, the order in which random realizations occur is crucial. This is why theorists work with stochastic processes with two implicit arguments, w and t, where w represents the sample space and t represents the order. The question becomes: is there a bootstrap procedure that can preserve the ordering without assuming stationarity? The new method of maximum entropy ensembles proposed by Dr. H. D. Vinod may satisfy the ergodic and Kolmogorov theorems without assuming stationarity.
Master of Science
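The maximum entropy ensemble idea mentioned in the abstract can be sketched in a few lines. The sketch below is a deliberately simplified illustration of the core notion (resample from a smoothed empirical distribution, then restore the original time ordering through the ranks); the function name and the interval construction are illustrative assumptions, not Vinod's published meboot algorithm.

```python
import random

def me_bootstrap(x, rng):
    """One bootstrap replicate in the spirit of a maximum entropy ensemble:
    draw values from intervals around the order statistics (a smoothed
    empirical distribution), then put them back in the original time order
    via the ranks, so the series' ordering is preserved.
    Simplified sketch only, not the published meboot algorithm."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # time index of each order statistic
    xs = sorted(x)
    # interval endpoints: midpoints between successive order statistics
    z = [xs[0]] + [(xs[i] + xs[i + 1]) / 2 for i in range(n - 1)] + [xs[-1]]
    # one uniform draw inside each interval (rank-preserving by construction)
    resampled = [z[k] + (z[k + 1] - z[k]) * rng.random() for k in range(n)]
    # restore the original time ordering
    out = [0.0] * n
    for rank, t in enumerate(order):
        out[t] = resampled[rank]
    return out

series = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0]
replicate = me_bootstrap(series, random.Random(42))
```

Because each draw stays inside its rank's interval, every replicate reproduces the ordering pattern of the original series, which is exactly the property the abstract says the ordinary bootstrap destroys.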
3

Pettersson, Joachim. "Analysis of Monte Carlo data at low energies in electron-positron collider experiments using Initial State Radiation." Thesis, Uppsala universitet, Kärnfysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-217038.

Abstract:
This report explores a novel application of the initial state radiation (ISR, or radiative return) method at an electron-positron collider to measure the cross section of electron-positron annihilation into a neutral pion and a photon at low energies. The challenge of using ISR events for analysis lies in the combinatorics introduced by the extra photon(s) in the final state. Measuring the cross section gives access to the time-like electromagnetic transition form factor (TFF) of the neutral pion, which can be used to constrain theoretical models of the hadronic light-by-light scattering contribution to the anomalous magnetic moment of the muon (AMM). The aim of this project was to determine whether existing, or expected, data samples at the KLOE-2 experiment in Frascati and the BES-III experiment in Beijing could provide competitive results for the time-like TFF slope parameter. The analysis was performed by constructing an event generator in which events were generated for three different reaction models. Considering the amount of data available at low energies, this study indicates that the ISR approach could be a viable option for enhancing the data sample in the low-energy region; the most promising experiment for further analysis is KLOE-2. Compared to tabulated values for the form factor slope parameter, the uncertainty obtained here is roughly of the same order of magnitude or smaller.
This report presents a new method for analysing ISR data from experiments at electron-positron colliders such as KLOE-2 and BES-III. Radiation in the form of one or more photons emitted by the electron or positron before the collision is called ISR. When a photon is radiated from the initial state, the nominal energy of the reaction is lowered, which makes it possible to analyse reactions over a continuous energy spectrum. The challenge of ISR analysis lies in the combinatorics that arise when additional photons appear in the final state of the reaction. The report describes the process of electron-positron annihilation into a neutral pion and a photon. This reaction is of interest because knowledge of its cross section gives access to the electromagnetic form factor of the neutral pion. The form factor describes how the reaction in question deviates from a point-like electromagnetic interaction. The electromagnetic form factor of the neutral pion is, in turn, an important ingredient in calculations of the hadronic contribution to the anomalous magnetic moment of the muon (AMM). Since the AMM has been measured experimentally to very high accuracy, comparisons with theoretical models can be made with high precision. At low reaction energies the form factor can be described by a single parameter, the slope parameter. From Monte Carlo generated ISR data, the slope parameter has been determined in this report with an accuracy that is comparable to or better than tabulated values, depending on the amount of data analysed and the choice of analysis method.
4

Lin, Heng. "Crossover from Unentangled to Entangled Dynamics: Monte Carlo Simulation of Polyethylene, Supported by NMR Experiments." Akron, OH: University of Akron, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1142028839.

Abstract:
Dissertation (Ph. D.)--University of Akron, Dept. of Polymer Science, 2006.
"May, 2006." Title from electronic dissertation title page (viewed 10/11/2006) Advisor, Wayne L. Mattice; Committee members, Ernst D. von Meerwall, Ali Dhinojwala, Gustavo A. Carri, Richard J. Elliott; Department Chair, Mark D. Foster; Dean of the College, Frank N. Kelley; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
5

Schälicke, Andreas. "Event generation at hadron colliders." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1122466458074-11492.

Abstract:
This work deals with the accurate simulation of high-energy hadron-collision experiments, such as those currently performed at the Fermilab Tevatron and those expected in the near future at the Large Hadron Collider (LHC) at CERN. For a precise description of these experiments, an algorithm is investigated that enables the inclusion of exact tree-level multi-jet matrix elements in the simulation, significantly improving the quality of the prediction. The implementation of this algorithm in the event generator "SHERPA" and the extension of the parton shower in this program are the main topics of this work. The results are compared with experimental data and with other simulations.
6

Moffat, Hayden. "Cost effective functional response experiments via sequential design." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/209917/1/Hayden_Moffat_Thesis.pdf.

Abstract:
Functional response experiments are commonly used to explore predator-prey systems, where we are interested in learning about the number of prey consumed per predator as a function of prey density. Currently, functional response experiments are designed in an ad-hoc manner and may require significant experimentation to learn about the underlying system. In this thesis, we developed statistically principled functional response designs to learn about the true mathematical model driving the predator-prey dynamics as quickly as possible. This can lead to functional response experiments with reduced monetary costs and less sacrificing of animals.
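The sequential design idea in this abstract, choosing each new trial to discriminate between candidate functional-response models, can be illustrated with a toy sketch. The two response curves, the candidate densities, and the myopic "largest disagreement" criterion below are hypothetical stand-ins, not the designs developed in the thesis (which uses proper expected-utility criteria).

```python
import math
import random

# Two hypothetical models for the probability that a single prey at density N is eaten:
def p_type1(N):
    return 0.5                       # density-independent capture probability

def p_type2(N):
    return 0.5 / (1 + 0.05 * N)     # saturating, type-II-like response

def binom_loglik(k, n, p):
    # log of the Binomial(n, p) probability of observing k prey eaten
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def sequential_experiment(true_p, n_trials, rng):
    """Myopic sequential design sketch: at each step, run the trial at the
    prey density where the two candidate models disagree most, then update
    each model's log-posterior with the binomial likelihood."""
    log_post = [0.0, 0.0]            # uniform prior over the two models
    candidate_densities = [2, 5, 10, 20, 40, 80]
    for _ in range(n_trials):
        N = max(candidate_densities, key=lambda d: abs(p_type1(d) - p_type2(d)))
        eaten = sum(rng.random() < true_p(N) for _ in range(N))   # simulated trial
        log_post[0] += binom_loglik(eaten, N, p_type1(N))
        log_post[1] += binom_loglik(eaten, N, p_type2(N))
    return log_post

posterior = sequential_experiment(p_type2, 12, random.Random(1))
```

After a dozen simulated trials the log-posterior strongly favours the data-generating model, which is the sense in which a principled design "learns the true model as quickly as possible" with fewer animals.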
7

Chetvertkova, Vera [Verfasser], Edil [Akademischer Betreuer] Mustafin, Ulrich [Akademischer Betreuer] Ratzinger, and Oliver [Akademischer Betreuer] Kester. "Verification of Monte Carlo transport codes by activation experiments / Vera Chetvertkova. Gutachter: Ulrich Ratzinger ; Oliver Kester. Betreuer: Edil Mustafin." Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2013. http://d-nb.info/104409401X/34.

8

Hatzinger, Reinhold, and Walter Katzenbeisser. "A Combination of Nonparametric Tests for Trend in Location." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/1298/1/document.pdf.

Abstract:
A combination of some well known nonparametric tests to detect trend in location is considered. Simulation results show that the power of this combination is remarkably increased. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
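Combining nonparametric trend tests can be sketched as follows. The particular pair of tests (Mann-Kendall and Cox-Stuart) and the Fisher combination are illustrative choices, not necessarily the ones studied in this report; note also that Fisher's method formally assumes independent p-values, whereas tests computed on the same sample are correlated.

```python
import math

def mann_kendall_p(x):
    """Two-sided p-value of the Mann-Kendall trend test (normal approximation)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i]) for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var)    # continuity correction
    return math.erfc(abs(z) / math.sqrt(2))

def cox_stuart_p(x):
    """Two-sided p-value of the Cox-Stuart sign test for trend (normal approximation)."""
    m = len(x) // 2
    diffs = [b - a for a, b in zip(x[:m], x[-m:]) if b != a]
    if not diffs:
        return 1.0
    pos = sum(d > 0 for d in diffs)
    z = (pos - len(diffs) / 2) / math.sqrt(len(diffs) / 4)
    return math.erfc(abs(z) / math.sqrt(2))

def fisher_combined_p(p1, p2):
    """Fisher's method for two p-values: chi-square with 4 degrees of freedom."""
    stat = -2 * (math.log(p1) + math.log(p2))
    return math.exp(-stat / 2) * (1 + stat / 2)     # closed-form survival for df = 4
```

On a clearly trending series both component p-values are small and the combined p-value is smaller still, which is the power gain the abstract reports.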
9

Lyubchyk, Andriy. "Gas adsorption in the MIL-53(Al) metal organic framework. Experiments and molecular simulation." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10932.

Abstract:
Dissertation for the degree of Doctor in Chemical Engineering
FCT PhD fellowship at Universidade Nova de Lisboa, Department of Chemistry (grant no. SFRH/BD/45477/2008); FCT project PTDC/AAC-AMB/108849/2008; NANO_GUARD, Project No. 269138; Programme "PEOPLE", Call ID "FP7-PEOPLE-2010-IRSES"
10

Oliveira, José Benedito da Silva. "Combinação de técnicas de delineamento de experimentos e elementos finitos com a otimização via simulação Monte Carlo /." Guaratinguetá, 2019. http://hdl.handle.net/11449/183380.

Abstract:
Advisor: Aneirson Francisco da Silva
Cold stamping is a plastic forming process for sheet metal that makes it possible, by means of specific tooling, to obtain components with good mechanical properties, varied geometries and thicknesses, and different material specifications, at good economic advantage. The multiplicity of these variables creates the need for statistical and numerical-simulation techniques to support their analysis and sound decision-making in the design of the forming tools. This work was developed in the tool design engineering department of a large Brazilian multinational company in the auto-parts sector, with the purpose of reducing stretching and the occurrence of cracks in a 6.8 mm cross member of LNE 380 steel. The proposed methodology obtains the values of the input factors and their influence on the response variable using Design of Experiments (DOE) techniques and Finite Element (FE) simulation. An empirical function is then fitted to these data by regression, giving the response variable y (thickness in the critical region) as a function of the influential process factors xi. Optimization via Monte Carlo Simulation (OvSMC) introduces uncertainty into the coefficients of this empirical function, which is the main contribution of this work, since this is what typically happens in practice with experimental problems. Simulating by FE the tool... (full abstract available via the electronic access below)
Master's
11

Pinto, Letícia Negrão. "Experimentos de efeitos de reatividade no reator nuclear- IPEN/MB-01." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/85/85133/tde-23102012-145549/.

Abstract:
Research aimed at improving the performance of neutron transport codes and the quality of nuclear cross-section databases is very important for increasing the accuracy of simulations and the quality of the analysis and prediction of phenomena in the nuclear field. In this context, relevant experimental data such as reactivity worth measurements are needed. The objective of this work was to perform a series of reactivity worth measurement experiments using a digital reactivity meter developed at IPEN. The experiments employed metallic samples inserted in the central region of the core of the experimental IPEN/MB-01 reactor. The theoretical analysis was performed with the MCNP-5 reactor physics code, developed and maintained by Los Alamos National Laboratory, and the ENDF/B-VII.0 nuclear data library.
12

Compton, Kigen. "Using Design of Experiments and Monte Carlo to assess life cycle costs of a GSHP heating system, in a detached home in Sweden." Thesis, KTH, Tillämpad maskinteknik (KTH Södertälje), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104615.

13

Alhassan, Erwin. "Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor : Using integral experiments for improved accuracy." Doctoral thesis, Uppsala universitet, Tillämpad kärnfysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265502.

Abstract:
For the successful deployment of advanced nuclear systems and optimization of current reactor designs, high quality nuclear data are required. Before nuclear data can be used in applications they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. The evaluation process in the past was usually done by using differential experimental data which was then complemented with nuclear model calculations. This trend is fast changing due to the increase in computational power and tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact. Therefore, the calculated quantities of model codes such as cross sections and angular distributions contain uncertainties. Since nuclear data are used in reactor transport codes as input for simulations, the output of transport codes contain uncertainties due to these data as well. Quantifying these uncertainties is important for setting safety margins; for providing confidence in the interpretation of results; and for deciding where additional efforts are needed to reduce these uncertainties. Also, regulatory bodies are now moving away from conservative evaluations to best estimate calculations that are accompanied by uncertainty evaluations. In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties from basic physics to macroscopic reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of actinides in the fuel, lead isotopes within the coolant, and some structural materials have been investigated. 
In the case of the lead coolant it was observed that the uncertainty in the keff and the coolant void worth (except in the case of 204Pb), were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. Also, a correlation based sensitivity method was used in this work, to determine parameter - cross section correlations for different isotopes and energy groups. Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. It was observed from the study that a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmark for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustments is proposed and applied for the adjustment of neutron induced 208Pb nuclear data in the fast energy region.
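The Total Monte Carlo workflow described above (propagate random nuclear data libraries through a reactor calculation, then reduce the spread by weighting each library by its likelihood against an integral benchmark) can be caricatured in a few lines. Everything numeric below is a hypothetical toy: the "reactor model", the cross-section values, their uncertainties, and the benchmark value 0.985 +/- 0.005.

```python
import math
import random

rng = random.Random(7)

def toy_keff(sigma_f, sigma_c):
    # toy multiplication factor from a fission/capture competition
    # (an illustrative stand-in for a full reactor transport calculation)
    return 2.4 * sigma_f / (sigma_f + sigma_c)

# "random nuclear data libraries": cross sections perturbed around nominal
# values with hypothetical ~5% relative uncertainties
libraries = [(rng.gauss(1.8, 0.09), rng.gauss(2.6, 0.13)) for _ in range(2000)]
keff = [toy_keff(sf, sc) for sf, sc in libraries]

mean = sum(keff) / len(keff)
prior_sd = math.sqrt(sum((k - mean) ** 2 for k in keff) / (len(keff) - 1))

# uncertainty reduction with an integral benchmark: weight each random library
# by the likelihood of reproducing a measured k_eff of 0.985 +/- 0.005
weights = [math.exp(-0.5 * ((k - 0.985) / 0.005) ** 2) for k in keff]
wmean = sum(w * k for w, k in zip(weights, keff)) / sum(weights)
post_sd = math.sqrt(sum(w * (k - wmean) ** 2 for w, k in zip(weights, keff)) / sum(weights))
```

The weighted spread is much smaller than the unweighted one, which is the "significant reduction in nuclear data uncertainty after incorporating integral benchmark information" that the abstract reports, in miniature.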
14

Madeira, Marcelo Gomes. "Comparação de tecnicas de analise de risco aplicadas ao desenvolvimento de campos de petroleo." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263732.

Abstract:
Advisors: Denis Jose Schiozer, Eliana L. Ligero
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica, Instituto de Geociencias
Petroleum field decision-making is associated with high risks, arising from geological, economic and technological uncertainties, and with high investments, mainly in the appraisal and development phases, where it is necessary to model the recovery process with higher precision, which increases the computational time. One way to speed up the process is through simplifications, some of which are discussed in this work: the technique used to quantify risk (Monte Carlo and derivative tree), reduction of the number of attributes, simplified treatment of attributes, and simplification of the reservoir model. Special emphasis is given to (1) the comparison between the Monte Carlo and derivative-tree techniques and (2) the development of fast models through experimental design and the response surface method. Recent works have presented these techniques, but they normally show applications rather than comparisons among alternatives. The objective of this work is to compare these techniques taking into account reliability, precision of the results, and speedup of the process. The techniques are applied to an offshore field, and the results show that (1) it is possible to significantly reduce the number of flow simulations while maintaining the precision of the results and (2) some simplifications can yield different results, affecting the decision process.
Master's
Reservoirs and Management
Master of Petroleum Science and Engineering
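The dissertation's central comparison, Monte Carlo sampling versus exhaustive derivative-tree enumeration of discretised uncertain attributes, can be illustrated with a toy model. All numbers below (prices, probabilities, the NPV formula) are hypothetical, chosen only to show that both techniques target the same expectation.

```python
import itertools
import math
import random

# three uncertain attributes, each discretised to three levels with
# probabilities 0.25 / 0.50 / 0.25 (all values hypothetical)
levels = {"price": [40.0, 60.0, 80.0], "reserves": [0.8, 1.0, 1.3], "cost": [0.9, 1.0, 1.2]}
probs = [0.25, 0.50, 0.25]

def npv(price, reserves, cost):
    return price * reserves - 45.0 * cost    # toy economic model

# derivative tree: exhaustive enumeration of all 27 scenarios -> exact expectation
tree_ev = 0.0
for combo in itertools.product(range(3), repeat=3):
    weight = math.prod(probs[i] for i in combo)
    tree_ev += weight * npv(*(levels[k][i] for k, i in zip(levels, combo)))

# Monte Carlo: sample scenarios instead of enumerating them
rng = random.Random(3)
mc_ev = sum(npv(*(rng.choices(levels[k], probs)[0] for k in levels))
            for _ in range(20000)) / 20000
```

With three attributes the tree needs only 27 model evaluations and is exact; Monte Carlo needs many evaluations but scales to cases where the tree explodes combinatorially, which is the trade-off the dissertation quantifies with real flow simulations.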
15

Nalbant, Serkan. "An Evaluation Of The Reinspection Decision Policies For Software Code Inspections." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605827/index.pdf.

Abstract:
This study evaluates a number of reinspection decision policies for software code inspections, with the aim of revealing their effects on the cost, schedule and quality objectives of a software project. Software inspection is an effective defect-removal technique for software projects. After the initial inspection, a reinspection may be performed to further decrease the number of remaining defects. Although various reinspection decision methods have been proposed in the literature, no study provides information on the results of employing different methods. To obtain insight into this unaddressed issue, this study compares the reinspection decision policies by analyzing their performance with respect to designated measures and preference profiles for cost, schedule and quality in the context of a typical Software Capability Maturity Model Level 3 organization. For this purpose, a Monte Carlo simulation model representing the process of initial code inspection, reinspection, testing and field use is employed, together with an experiment designed to consider the different circumstances under which this process operates. The study recommends concluding the reinspection decision by comparing the inspection effectiveness measure for major defects against a moderately high threshold value (i.e. 75%). The study also reveals that applying the default decisions of 'Never Reinspect' and 'Always Reinspect' does not produce the most appropriate outcomes regarding cost, schedule and quality. Additionally, the study presents suggestions for further improving cost, schedule and quality based on the analysis of the experiment factors.
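The recommended decision rule, reinspect when inspection effectiveness for major defects falls below a 75% threshold, is easy to sketch. The 75% figure comes from the abstract; the capture-recapture estimator for the total number of major defects is a common choice in inspection practice and is an assumption here, not necessarily the thesis's estimator.

```python
def estimated_total_majors(found_by_a, found_by_b, found_by_both):
    """Capture-recapture (Lincoln-Petersen) estimate of the total number of
    major defects, from two inspectors' overlapping findings.
    A common estimator in inspection practice; illustrative assumption here."""
    return found_by_a * found_by_b / found_by_both

def inspection_effectiveness(found_majors, estimated_majors):
    # fraction of the estimated major defects that the inspection caught
    return found_majors / estimated_majors

def should_reinspect(found_majors, estimated_majors, threshold=0.75):
    # reinspect when effectiveness falls below the moderately high
    # threshold recommended in the abstract (75%)
    return inspection_effectiveness(found_majors, estimated_majors) < threshold
```

For example, an inspection that found 12 of an estimated 20 major defects (60% effectiveness) would trigger a reinspection, while one that found 18 of 20 (90%) would not.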
16

Meglicki, Zdzislaw. "Analysis and Applications of Smoothed Particle Magnetohydrodynamics." The Australian National University, Research School of Physical Sciences, 1995. http://thesis.anu.edu.au./public/adt-ANU20080901.114053.

Abstract:
Smoothed Particle Hydrodynamics (SPH) is analysed as a weighted residual method. In particular, the analysis focuses on the collocation aspect of the method. Using Monte Carlo experiments we demonstrate that SPH is highly sensitive to node disorder, especially in its symmetrised energy- and momentum-conserving form. This aspect of the method is related to low-β MHD instabilities observed by other authors. A remedy in the form of the Weighted Differences Method is suggested, which addresses this problem to some extent, but at the cost of losing automatic conservation of energy and momentum.

The Weighted Differences Method is used to simulate the propagation of Alfvén and magnetosonic wave fronts in β = 0 plasma, and the results are compared with data obtained with the NCSA Zeus3D code with the Method of Characteristics (MOC) module.

SPH is then applied to two interesting astrophysical situations: accretion onto a white dwarf in a compact binary system, which results in the formation of an accretion disk, and the gravitational collapse of a magnetised vortex. Both models are 3-dimensional.

The accretion disk which forms in the binary star model is characterised by turbulent flow: a Karman vortex street is observed behind the stream-disk interaction region. The shock that forms at the point of stream-disk interaction is controlled by means of particle merges, whereas Monaghan-Lattanzio artificial viscosity is used to simulate Smagorinsky closure.

The evolution of the collapsing magnetised vortex ends in the formation of an expanding ring in the symmetry plane of the system. We observe spiralling inward motion towards the centre of attraction. The final state compares favourably with the observed qualitative and quantitative characteristics of the circumnuclear disk in the Galactic Centre. This simulation has also been verified with an NCSA Zeus3D run.

In conclusion, we contrast the results of our Monte Carlo experiments with the results delivered by our production runs. We also compare SPH and Weighted Differences against the new generation of conservative finite-difference methods, such as the Godunov method and the Piecewise Parabolic Method. We conclude that although SPH cannot match the accuracy and performance of those methods, it appears to have some advantage in the simulation of rotating flows, which are of special interest to astrophysics.
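The kernel-summation step at the core of SPH can be sketched in one dimension. The cubic spline (M4) kernel below is the standard choice in the SPH literature; the uniform-particle check is an illustrative assumption, not a calculation from the thesis.

```python
def w_cubic(q, h):
    """1-D cubic spline (M4) smoothing kernel with support 2h."""
    sigma = 2.0 / (3.0 * h)                 # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j W(|x_i - x_j| / h, h)."""
    return [sum(m * w_cubic(abs(xi - xj) / h, h)
                for xj, m in zip(positions, masses))
            for xi in positions]

# uniformly spaced particles of equal mass should recover density ~ 1
# away from the edges of the particle distribution
xs = [0.1 * i for i in range(100)]
rho = sph_density(xs, [0.1] * 100, 0.2)
```

On a perfectly regular particle arrangement the interior density is recovered almost exactly; the thesis's point is that this accuracy degrades sharply once the nodes become disordered.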
17

Rohmer, Tom. "Deux tests de détection de rupture dans la copule d'observations multivariées." Thèse, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5933.

Abstract:
It is well known that the marginal distributions of a random vector do not characterize its distribution. When the marginal distributions are continuous, Sklar's theorem ensures the existence and uniqueness of a function called the copula, which captures the dependence between the components of the random vector; the distribution of the vector is then fully determined by the marginal distributions and the copula. In this work, we propose two non-parametric tests for change-point detection in the distribution of multivariate observations, particularly sensitive to changes in the copula of the time series. Both improve on recent propositions and are more powerful than their predecessors for relevant classes of alternatives involving a change in the copula. The finite-sample behavior of these tests is investigated through Monte Carlo experiments on samples of moderate size. The first test is based on a Cramér-von Mises statistic constructed from the sequential empirical copula process. A multiplier resampling scheme is suggested for the test statistic, and its asymptotic validity under the null hypothesis is demonstrated under strong mixing conditions on the data. The second test focuses on detecting a change in the multivariate Spearman's rho of the observations. Although less general, it is more powerful than the first test for alternatives characterized by a change in Spearman's rho. Two approaches to computing approximate p-values are compared theoretically and empirically: one uses resampling of the statistic, the other is based on an estimate of the asymptotic null distribution of the test statistic.
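The rank-based building blocks of such copula tests are easy to sketch: pseudo-observations (component-wise ranks scaled by 1/(n+1)) and the empirical copula evaluated on them. This is only the first ingredient of the sequential process used in the thesis, shown here under the assumption of distinct observations per margin.

```python
def pseudo_observations(sample):
    """Component-wise ranks scaled by 1/(n+1): the standard pseudo-observations
    on which rank-based copula statistics are built."""
    n, d = len(sample), len(sample[0])
    pobs = [[0.0] * d for _ in range(n)]
    for j in range(d):
        by_rank = sorted(range(n), key=lambda i: sample[i][j])
        for rank, i in enumerate(by_rank, start=1):
            pobs[i][j] = rank / (n + 1)
    return pobs

def empirical_copula(pobs, u):
    """C_n(u): fraction of pseudo-observations component-wise <= u."""
    return sum(all(p[j] <= u[j] for j in range(len(u))) for p in pobs) / len(pobs)
```

A change-point statistic of the Cramér-von Mises kind then compares empirical copulas computed before and after each candidate split of the series, maximized over splits.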
18

Politi, Jose Roberto dos Santos. "Inovações teoricas e experimentos computacionais em Monte Carlo Quantico." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/249222.

Abstract:
Advisor: Rogerio Custodio
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Quimica
Doctorate
Physical Chemistry
Doctor of Sciences
19

Harrington, Nicholas Lee. "Monte Carlo simulation of the OLYMPUS experiment." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/51610.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2009.
Includes bibliographical references (p. 53-55).
The OLYMPUS experiment seeks to measure the ratio of the cross sections for e⁻p and e⁺p scattering in order to determine the magnitude of two-photon interactions in lepton-nucleon scattering. Measuring this observable to the required accuracy depends on a good understanding of the systematic uncertainties associated with the scattering experiment. To accomplish this, a simulation using the GEANT4 library and reconstruction code was written and studies were performed. This paper documents the software written and its use in understanding the experiment and some systematic uncertainties.
by Nicholas Lee Harrington.
S.B.
20

Jesko, Karol. "Studying divertor relevant plasmas in linear devices : experiments and transport code modelling." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0010.

Abstract:
Predictions for the operation of tokamak divertors typically rely on edge transport codes, consisting of a fluid plasma code in combination with a Monte Carlo code for neutral species. The linear devices Magnum-PSI and Pilot-PSI at DIFFER operate with a cascaded arc plasma source that produces plasmas comparable to those expected in the ITER divertor ($T_e \sim 1$ eV, $n_e \sim 10^{20}$ m$^{-3}$). In this thesis, plasma discharges have been studied both experimentally and by modelling with the Soledge2D-Eirene code in order to (a) investigate which phenomena need to be included in the modelling to reproduce experimental trends and (b) provide new insights for the interpretation of experiments. Experimentally, the effect of neutral pressure $P_n$ was investigated using Thomson scattering, a Langmuir probe, visible spectroscopy and calorimetry. We have shown that a plasma beam can be effectively terminated by a blanket of neutral gas. Next, from comparisons of experiments and simulations, we have found that it is critical to include elastic collisions between the plasma and molecules if experiments are to be reproduced. Furthermore, the near-target $T_e$ is systematically overestimated by the code, thereby underestimating the recombination rate. Lastly, we have experimentally shown the importance of including surface recombination in the surface energy flux in low-temperature plasmas, an effect that is generally known but difficult to measure in fusion devices. The work presented in this thesis contributes to the understanding of plasma-neutral interactions, especially in new-generation, more closed divertor concepts (e.g. MAST-Upgrade, DIII-D).
APA, Harvard, Vancouver, ISO, and other styles
21

Coura, André da Silva 1984. "Experimentos com probabilidade e estatística : Jankenpon, Monte Carlo, variáveis antropométricas." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307584.

Full text
Abstract:
Advisor: Laura Leticia Ramos Rifo
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: This dissertation presents a practical approach to teaching mathematics at the elementary and secondary levels. More specifically, it presents concepts of basic statistics, such as data handling and the study of probability. These concepts are of great importance both scientifically (in experimental work, for example) and socially (in understanding population characteristics), besides being part of students' everyday lives. It is therefore essential to develop the skills and abilities needed to organise and understand information. Experiments were carried out to apply the concepts presented in the classroom, along with a survey posing questions about eating habits and physical exercise. Beyond applying the concepts learnt in class, these activities aim to develop students' logical reasoning and critical eye for topics related to mathematics, using everyday situations; the information collected was organised and interpreted by means of tables and charts. The main goal of the survey was to show how statistical theory is used for decision making and, in this case, to improve one's own quality of life. We thus intend the methodology presented in this work to contribute to disseminating these mathematical tools at the elementary and secondary school levels.
Master's degree
Matemática em Rede Nacional
Master in Matemática em Rede Nacional
APA, Harvard, Vancouver, ISO, and other styles
22

Pekoz, Rengin. "Components Of Detector Response Function: Experiment And Monte Carlo Simulations." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605228/index.pdf.

Full text
Abstract:
Components of the response function of a high-purity germanium (HPGe) detector due to full or partial energy deposition by gamma and X rays were studied. Experimental response functions for 241Am, Ba and Tb sources were compared with those obtained from Monte Carlo simulations. The role of the physical mechanisms behind each component was investigated by considering the escape or absorption of photons, photoelectrons, Auger electrons, recoil electrons and X-rays of the detector material. A detailed comparison of the experimental Compton, photoelectron and detector X-ray escape components and full-energy peaks with those obtained from the Monte Carlo program is presented.
APA, Harvard, Vancouver, ISO, and other styles
23

Panagiotopoulos, Athanassios Z. "High pressure phase equilibria : experimental and Monte Carlo simulation studies." Thesis, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/14883.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND SCIENCE
Bibliography: v.2, leaves 200-208.
by Athanassios Z. Panagiotopoulos.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
24

Arantes, Fabiana Rodrigues. "Sistemas de nanopartículas magnéticas: estudos experimentais e simulações Monte Carlo." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26012015-111206/.

Full text
Abstract:
In this thesis we present a study of the magnetic behaviour of nanoparticle systems by means of experimental measurements and Monte Carlo simulations. We experimentally studied the role of interparticle interactions at low temperatures in commercial ferrofluid samples through ZFC-FC curves, delta m curves and FORC diagrams. In the ZFC-FC curves of ferrofluids we observed the phenomenon of supercooling and phase transitions from the solid to the liquid state. For liquid-crystal samples doped with magnetic nanoparticles, we observed the transition between the isotropic and nematic phases. In ferrofluid samples and in micellar solutions doped with nanoparticles we detected an increase of the viscosity in the presence of an applied magnetic field, the so-called magnetoviscous effect, which arises from interparticle interactions. In the Monte Carlo simulations, we found that the critical temperature (Tc) decreases with decreasing particle size, a behaviour that is well described by a scaling law. The simulations also showed that a dead layer on the surface of the nanoparticles causes a slight decrease in the critical temperature, which does not occur when a hard layer is added instead; a hard layer can increase Tc significantly. For simulations of a system of interacting nanoparticles, we paid special attention to interpreting how magnetizing and demagnetizing interactions manifest themselves in FORC diagrams for a set of nanoparticles with a size distribution. We observed that a demagnetizing interaction is associated with a displacement of the peak of the FORC diagram towards positive values of the local interaction field Hb, and that the presence of a magnetizing interaction can shift this peak towards larger values of the field Hc, which is related to the distribution of coercivities of the system.
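The decrease of Tc with particle size reported above is commonly described by a finite-size scaling form Tc(d) = Tc_bulk · (1 − (d0/d)^(1/ν)). The sketch below evaluates this form for a few sizes; all parameter values (a bulk Tc of the order of magnetite's Curie temperature, the characteristic length d0 and the shift exponent ν) are illustrative assumptions, not numbers from the thesis:

```python
def tc_finite(d, tc_bulk, d0, nu):
    """Finite-size shift of the critical temperature:
    Tc(d) = Tc_bulk * (1 - (d0/d)**(1/nu)).
    All parameter values used below are illustrative."""
    return tc_bulk * (1.0 - (d0 / d) ** (1.0 / nu))

# Hypothetical particle diameters (nm) and parameters:
sizes_nm = [4.0, 6.0, 10.0, 20.0, 50.0]
tcs = [tc_finite(d, tc_bulk=860.0, d0=1.0, nu=0.7) for d in sizes_nm]
```

Plotting log(1 − Tc(d)/Tc_bulk) against log(d) for simulated data gives a straight line of slope −1/ν, which is how such a scaling law is usually verified.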
APA, Harvard, Vancouver, ISO, and other styles
25

Corasaniti, Maria. "Monte Carlo simulation of a neutron veto for the XENONnT experiment." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13974/.

Full text
Abstract:
XENON1T, located at the Laboratori Nazionali del Gran Sasso, is currently the largest experiment for direct dark matter search. It consists of a dual-phase TPC filled with 2 tonnes of xenon, and completed its first science run in January 2017, obtaining the most stringent exclusion limits on the spin-independent WIMP-nucleon interaction cross section for WIMP masses above 10 GeV/c2, with a minimum of 7.7·10−47 cm2 for 35-GeV/c2 WIMPs at 90% confidence level. The experiment is still taking data and aims at a sensitivity of 1.6·10−47 cm2 for WIMP masses of 50 GeV/c2 in a 2 t·y exposure. A next-generation detector, called XENONnT, is already foreseen by the collaboration. It will have a larger TPC with an increased xenon target (∼6 t), which will improve the WIMP sensitivity by another order of magnitude; for this purpose, it also requires a very low background level. The expected neutron background for the newly designed time projection chamber is ∼5 events in the 4 t fiducial volume over the nominal 20 t·y exposure. In this work we present a Monte Carlo simulation study of a Gd-loaded liquid scintillator neutron veto for the XENONnT experiment, with the goal of tagging background events from radiogenic neutrons. The results indicate that, for a scintillating mixture with 0.1% of gadolinium by weight and a light collection efficiency of ∼7%, a neutron rejection factor higher than 80% is obtained. This makes it possible to reduce the neutron background by a factor of ∼5, in full agreement with the background goal of the XENONnT experiment: <1 background event in the total exposure.
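The background arithmetic quoted in the abstract can be checked directly: an >80% rejection factor applied to the ~5 expected radiogenic-neutron events leaves ~1 untagged event, i.e. a reduction by a factor ~5. A trivial sketch using the numbers from the abstract:

```python
def surviving_background(expected_events, tagging_efficiency):
    """Neutron events left untagged by the veto."""
    return expected_events * (1.0 - tagging_efficiency)

# ~5 expected events, 80% tagging (from the abstract):
left = surviving_background(5.0, 0.80)  # ~1 event in the nominal exposure
reduction = 5.0 / left                  # background reduced by a factor ~5
```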
APA, Harvard, Vancouver, ISO, and other styles
26

Massoli, Fabio Valerio <1987>. "The XENON1T experiment: Monte Carlo background estimation and sensitivity curves study." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6776/.

Full text
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory for estimating the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found entirely acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two; this is acceptable considering the intrinsic differences between the two statistical methods. Thus, this thesis shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
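The Maximum Gap method mentioned above is usually implemented via Yellin's C0 statistic: the probability that the largest gap between observed events is smaller than x, for a total signal expectation mu. The sketch below codes that statistic and solves for a 90% CL upper limit by bisection; the max-gap fraction of 0.2 in the example is an illustrative assumption, and this is a generic sketch of the method, not the thesis code:

```python
import math

def c0(x, mu):
    """Yellin's maximum-gap statistic: probability that the largest gap
    between events is smaller than x, for total signal expectation mu."""
    total = 0.0
    for k in range(int(mu // x) + 1):
        denom = mu - k * x
        if denom <= 0.0:  # skip the degenerate mu == k*x edge term
            break
        total += ((k * x - mu) ** k * math.exp(-k * x) / math.factorial(k)
                  * (1.0 + k / denom))
    return total

def upper_limit(gap_fraction, cl=0.9, lo=1.0, hi=200.0):
    """Signal expectation mu excluded at the given CL, found by bisection,
    assuming the observed maximum gap contains gap_fraction of the signal."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if c0(gap_fraction * mid, mid) < cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu90 = upper_limit(0.2)  # 90% CL limit for an illustrative max-gap fraction
```

The limit on the cross section then follows by dividing the excluded signal expectation mu90 by the exposure and detection efficiency.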
APA, Harvard, Vancouver, ISO, and other styles
27

Qian, Wenbin (钱文斌). "J/ψ production study at the LHCb experiment." Paris 11, 2010. http://www.theses.fr/2010PA112109.

Full text
Abstract:
In this thesis, a study of J/ψ production at the LHCb experiment is presented, based on a sample of fully simulated Monte Carlo events. The procedure developed in this thesis will be used to analyse real data once enough statistics have been accumulated. J/ψ events are reconstructed using selection criteria optimized to reach the best discrimination against background processes. The study shows that 6.5 million J/ψ can be reconstructed per pb-1 of data. The production cross section of prompt J/ψ and of J/ψ from b decays is measured in 28 bins in pT and η, covering the region 0 < pT < 7 GeV/c and 3 < η < 5. In each bin, a variable is defined to distinguish prompt J/ψ from b decays. The analysis also shows that the J/ψ polarization plays an important role in the cross-section determination: it can contribute a systematic error of up to 30% in some of the bins. Such an effect can be greatly reduced if a J/ψ polarization analysis is performed simultaneously. The measurement of the polarization parameters will also greatly help in understanding the J/ψ production mechanisms. The LHCb experiment has already recorded 14 nb-1 of data, so part of the analysis can already be carried out. Approximately 3000 J/ψ candidates are reconstructed. Using this sample, the cross section as a function of pT is measured. The preliminary measurement of the J/ψ cross section in the region 0 < pT < 9 GeV/c and 2.5 < y < 4 is 7.6 ± 0.3 µb, where only the statistical error is reported.
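The cross-section determination described above follows the generic formula σ = N / (ε · BR · L). In the sketch below, the candidate count and luminosity come from the abstract and the dimuon branching ratio is the PDG value, but the overall efficiency is a hypothetical placeholder used only to show the order of magnitude; it is not a number from the thesis:

```python
def cross_section_microbarn(n_signal, efficiency, branching_ratio, lumi_nb):
    """sigma = N / (eff * BR * L), with L in nb^-1; result in microbarn."""
    sigma_nb = n_signal / (efficiency * branching_ratio * lumi_nb)
    return sigma_nb * 1.0e-3  # 1 microbarn = 1000 nb

# ~3000 candidates in 14 nb^-1 (from the abstract);
# BR(J/psi -> mu+ mu-) ~ 5.93% (PDG); 47.5% efficiency is hypothetical.
sigma = cross_section_microbarn(3000, 0.475, 0.0593, 14.0)
```

With these illustrative inputs the result lands in the few-microbarn range of the quoted preliminary measurement; the real analysis applies per-bin efficiencies and background subtraction.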
APA, Harvard, Vancouver, ISO, and other styles
28

Guimarães, Carla da Costa. "Monitoração individual externa: experimentos e simulações com o método de Monte Carlo." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-06072009-185822/.

Full text
Abstract:
In this work, we evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry in external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target; the produced photon beam was then filtered through a beryllium window and additional filters to obtain radiation of the desired qualities. This procedure, used to simulate the radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half-value layer (also measured experimentally), the mean photon energy and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In modelling the thermoluminescent dosimeters, two approximations were introduced. The first was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, owing to the difference between the measured and calculated values of its density. Furthermore, comparison between simulated and experimental results showed that the self-attenuation of the emitted light in the readout process of the fluorite dosimeter must be taken into account; in the second approximation, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was therefore introduced. Conversion coefficients cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1] and also for the wide spectra implemented and used routinely at the Laboratório de Dosimetria. Simulations of the radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results.
Therefore, the PMMA slab water phantom, which can be easily constructed at low cost, is a convenient practical substitute for the tissue-equivalent slab. The conversion coefficients from air kerma to personal dose equivalent thus obtained were compared with published data. It was found that the quantity kerma in the medium, commonly used for the evaluation of conversion coefficients at depths of the order of 0.07 mm or less, does not provide good results for monoenergetic photon beams with energies between 200 and 1250 keV; in this range, the absorbed dose must be considered instead. We conclude that GEANT4 is a suitable toolkit not only for simulating thermoluminescent dosimeters and the experimental procedures employed in the routine of a dosimetry laboratory, but also for shedding light on experimental results obtained in external individual monitoring that are not always expected.
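The half-value layer used above to validate the simulated spectra can be computed numerically for a polyenergetic beam: the transmitted kerma fraction is a weighted sum of exponentials, and the HVL is the thickness at which it drops to one half. A minimal sketch (the weights and attenuation coefficients below are illustrative, not data from the work):

```python
import math

def kerma_fraction(t_mm, components):
    """Fraction of air kerma transmitted through t_mm of absorber.
    components: list of (relative_kerma_weight, mu_per_mm) pairs."""
    total = sum(w for w, _ in components)
    return sum(w * math.exp(-mu * t_mm) for w, mu in components) / total

def half_value_layer(components, hi_mm=1000.0, iters=60):
    """Thickness that halves the air kerma, found by bisection
    (kerma_fraction is monotonically decreasing in thickness)."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi_mm)
        if kerma_fraction(mid, components) > 0.5:
            lo = mid
        else:
            hi_mm = mid
    return 0.5 * (lo + hi_mm)

# Illustrative two-component beam (made-up weights and mu, in mm^-1):
beam = [(0.6, 0.25), (0.4, 0.08)]
hvl1 = half_value_layer(beam)
```

For a monoenergetic beam this reduces to the analytic HVL = ln 2 / mu, which makes a convenient sanity check of the routine.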
APA, Harvard, Vancouver, ISO, and other styles
29

Albaret, Claude. "Automated system for Monte Carlo determination of cutout factors of arbitrarily shaped electron beams and experimental verification of Monte Carlo calculated dose distributions." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81259.

Full text
Abstract:
Dose predictions by Monte Carlo (MC) techniques could alleviate the measurement load required in linac commissioning and clinical radiotherapy practice, where small or irregular electron fields are routinely encountered. In particular, this study focused on the MC calculation of cutout factors for clinical electron beams. An MC model of a Varian CL2300C/D linac was built and validated for all electron energies and applicators. An MC user code for the simulation of irregular cutouts was then developed and validated. Supported by a home-developed graphical user interface, it determines in situ cutout factors and depth-dose curves for arbitrarily shaped electron fields and collects phase space data. Overall, the agreement between simulations and measurements was excellent for fields larger than 2 cm.
The MC model was also used to calculate dose distributions with the fast MC code XVMC in CT images of phantoms of clinical interest. These dose distributions were compared with dose calculations performed by CadPlan, a treatment planning system based on a pencil-beam algorithm, and verified against measurements. Good agreement between calculations and measurements was achieved with both systems for phantoms containing one-dimensional heterogeneities, provided the CT images were of sufficient quality. In phantoms with three-dimensional heterogeneities, however, CadPlan was unable to predict the dose accurately, whereas MC provided a more satisfactory dose distribution, despite some local discrepancies.
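A cutout factor is the ratio of the dose for the shaped field to the dose for the reference (open applicator) field at the same point; when both doses come from MC runs, their statistical uncertainties combine in quadrature. A minimal sketch (function name and numbers are illustrative, not from the thesis):

```python
import math

def cutout_factor(dose_cutout, sigma_cutout, dose_ref, sigma_ref):
    """Ratio of MC dose for the shaped field to the reference field,
    with the relative statistical uncertainties added in quadrature."""
    cf = dose_cutout / dose_ref
    rel = math.hypot(sigma_cutout / dose_cutout, sigma_ref / dose_ref)
    return cf, cf * rel

# Illustrative MC doses (arbitrary units) with ~0.5% statistics each:
cf, err = cutout_factor(0.953, 0.005, 1.000, 0.005)
```

Keeping the per-run statistical uncertainty well below the ~1-2% level at which cutout factors are clinically relevant is what drives the number of simulated histories.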
APA, Harvard, Vancouver, ISO, and other styles
30

Vives i Santa-Eulàlia, Eduard. "Simulació Monte Carlo de sistemes amb acoblament de graus de llibertat." Doctoral thesis, Universitat de Barcelona, 1991. http://hdl.handle.net/10803/1594.

Full text
Abstract:
Els models microscòpics usuals que es proposen en Mecànica Estadística descriuen, únicament, un tipus de grau de llibertat. Aquests models permeten explicar algunes transicions de fase que es troben en els diagrames de fase de sistemes reals. Ara bé, aquests últims estan constituïts per elements que normalment presenten diferents tipus de graus de llibertat tals com els posicionals, orientacionals, magnètics, conformacionals, etc. L'estudi global de tot un diagrama de fases no pot fer-se mitjançant la simple superposició de models simples ja que, normalment, els diferents tipus de graus de llibertat interfereixen i s'acoblen. Cal, per tant, proposar i resoldre models microscòpics que descriguin la competició entre diferents graus de llibertat.

D'entre d'altres exemples de sistemes amb fenòmens d'acoblament destaquen els aliatges binaris amb àtoms magnètics, els cristalls líquids, els cristalls plàstics, les barreges de líquids moleculars, etc... D'altres fenòmens que també poden englobar-se dins d'aquest marc de l'acoblament són la dependència amb l'ordre atòmic d'algunes transicions estructurals en aliatges binaris i, fins i tot, els sistemes de partícules adsorbides sobre un substrat.

Desde un punt de vista fenomenològic la teoria de Landau amb dos paràmetres d'ordre posa de manifest els principals efectes que es poden donar. Entre d'altres destaquen el desplaçament o desaparició de fases que hom esperaria si no existissin termes d'acoblament, l'existència de fases reentrants, punts tricritics I multicrítics, etc ... D'entre tots els possibles termes d'acoblament entre dos paràmetres d'ordre "x" i "y" a l'energia lliure que hom pot imaginar el més estudiat ha estat l'acoblament biquadràtic x(2)y(2), encara que termes com x(2)y també s'han mostrat útils en alguns casos com per exemple en l'estudi de diagrames de fase de cristalls líquids.

La resolució exacta dels models complexos no pot fer-se analíticament. Els mètodes pertorbatius són adequats quan les energies d'acoblament són petites, però sovint aquest no és el cas. Per això la simulació de Monte CarIo és una eina indispensable per a aquests casos. Els principals problemes que presenta són que únicament podem simular sistemes finits durant un temps relativament curt. L'estudi de les transicions de fase, on el límit termodinàmic és indispensable i les correlacions temporals poden ésser molt llargues, requereix doncs de tècniques especifiques. Els efectes de mida finita es poden reduir mitjançant l'extrapolació a mida infinita a partir de l'estudi de sistemes de diferents mides o mitjançant la teoria del "Finite Size Scaling".

En aquest treball ens hem centrat en tres problemes concrets, relacionats amb l'acoblament de graus de llibertat.

En primer lloc hem proposat un model microscòpic per als cristalls líquids. Es basa en el model "lattice-gas" bidimensional i inclou graus de llibertat orientacionals de les partícules. La seva resolució s' ha fet mitjançant tècniques de camp mitjà i simulació de Monte Carlo. El model reprodueix qualitativament els diagrames de fase experimentals d'algunes barreges de cristalls líquids, així com l'existència d'un punt tricrític en la línea de transició Smèctica-Nemàtica i la variació dels exponents crítics efectius.

Un segon estudi s'ha centrat en el problema dels aliatges binaris amb estructura BCC que tenen una transició estructural cap a una fase més compacta a baixa temperatura. Aquesta transició involucra els graus de llibertat posicionals dels nusos de la zarza BCC que sofreixen l'acció d'una cisalla. Aquests aliatges presenten a temperatures més elevades fenòmens de reordenament dels àtoms en la.xarxa BCC. Aquests fenòmens de tipus difusiu poden estudiar-se prescindint dels detalls exactes de la dinàmica del moviment atòmic, mitjançant un model que inclogui graus de llibertat configuracionals (els nusos d'una xarxa poden ésser A o B). La temperatura a la qual es produeix la transició estructural (normalment de primer ordre) depèn de l'ordre configuracional dels àtoms. Aquesta ordenació pot vari.ar-se, de forma controlada, mitjançant trempes ràpides desde diferents temperatures dins la zona de reordenament atòmic. Mitjançant el mètode de Monte CarIo hem simulat amb un model molt simple els fenòmens de reordenament en un aliatge binari tipus BCC en funció de la temperatura. En particular s'han estudiat les transicions entre estructures D0(3) , B2 i A2. Estudiant com les constants elàstiques de la xarxa depenen de l'ordre configuracional hem pogut justificar qualitativament la dependència de la temperatura de transició estructural amb la temperatura des de la qual es fa la trempa. Hem estudiat també, mitjançant una energia lliure de Landau i un model microscòpic, com l'increment d'entropia de la transició estructural depèn de l'ordre configuracional.

Finalment hem estudiat el problema dels sistemes de partículas adsorbides sobre substrats. Hem proposat un model que separa els graus de llibertat posicionals de les partícules en dos: per un costat uns graus de llibertat discrets tipus "lattice-gas" que descriuen els salts de les partícules d'un pou de potencial ("corrugation potential") a un altre en el substrat i per altre uns graus de llibertat continus que descriuen el moviment de les partícules dins els pous. La simulació Monte CarIo d'aquest model ha permès estudiar la transició de fase sòlid-líquid en aquests sistemes per a diferents valors del "corrugation potential". En el límit de substrat pla els nostres resultats indiquen la presència d'una zona de coexistència entre la fase sòlida i la líquida amb propietats de tipus hexàtic. En el cas de que el "corrugation potential" sigui prou gran els factors d'estructura simulats coincideixen perfectament amb resultats teòrics trobats en la literatura. Ara bé, quan els pous del "corrugation potential" són molt petits es troben discrepàncies ja que les fluctuacions de les partícules són molt grans.
The particles that constitute the real systems have, normally, several degrees of freedom: positional, orientational, conformational, etc. The study of a complete phase diagram cannot be done by the mere superposition of simple models because the different degrees of freedom interfere and coupling phenomena appear. Several examples are: magnetic binary alloys, liquid crystals, etc.

Other systems whose behaviour can also be regarded as the result of coupling are alloys undergoing structural phase transitions and systems of adsorbed molecules on substrates. In this work we have focused our attention on three problems related to coupling between degrees of freedom: (a) First, we have developed a microscopic model for liquid crystals. It is based on a two-dimensional lattice-gas model that includes orientational degrees of freedom for the molecules. It reproduces qualitatively well the experimental phase diagrams of a number of liquid crystal mixtures, the existence of a tricritical point in the Smectic-Nematic transition line, and a continuous variation of the effective critical exponents. (b) A second work has been the study of BCC binary alloys that undergo structural phase transitions to close-packed phases at low temperature, and also exhibit atomic reordering phenomena at higher temperatures. Coupling phenomena between the structural degrees of freedom and the configurational atomic order can appear by means of quenches starting at temperatures in the range where atomic reordering is operative. We have studied how the elastic constants depend on the configurational order and we have justified the dependence of the structural transition temperature on the starting temperature of the quench. (c) Finally, we have studied systems of adsorbed molecules on substrates. We have proposed a model that splits the positional degrees of freedom of the particles: on the one hand it considers variables associated with the jumps of the particles between neighbouring wells of the corrugation potential, and on the other hand it considers continuous degrees of freedom associated with the movement of particles inside the wells. Monte Carlo simulation allows the study of the solid-liquid phase transition for different values of the corrugation potential.
In the limit of a flat substrate, our results show a coexistence zone with hexatic properties between the solid and liquid phases. For a large enough corrugation potential, the simulated structure factors are in agreement with the results of previous theories found in the literature.
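The lattice-gas simulations described in these abstracts rest on the standard Metropolis Monte Carlo scheme. A minimal two-dimensional sketch is given below; it is illustrative only (the thesis works on a BCC lattice with its own Hamiltonian and coupled structural variables), using a nearest-neighbour A/B occupancy model:

```python
import math
import random

def metropolis_sweep(lattice, L, J, T, rng):
    """One Metropolis sweep over an L x L lattice of +1/-1 occupancies
    (A/B atoms). J > 0 favours like neighbours; T is in units of J/k_B."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = lattice[i][j]
        # Sum of the four nearest neighbours (periodic boundaries).
        nb = (lattice[(i + 1) % L][j] + lattice[(i - 1) % L][j]
              + lattice[i][(j + 1) % L] + lattice[i][(j - 1) % L])
        dE = 2.0 * J * s * nb  # energy cost of flipping site (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            lattice[i][j] = -s

def order_parameter(lattice):
    """Mean occupancy: |m| near 1 when ordered, near 0 when disordered."""
    flat = [s for row in lattice for s in row]
    return sum(flat) / len(flat)
```

A simulation closer to the alloy problem would conserve composition (Kawasaki exchange moves rather than single-site flips) and couple the configurational order to the elastic degrees of freedom, as the abstract describes.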
APA, Harvard, Vancouver, ISO, and other styles
31

Medin, Joakim. "Studies of clinical proton dosimetry using Monte Carlo simulation and experimental techniques /." Online version, 1997. http://bibpurl.oclc.org/web/26808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Doucet, Robert. "Experimental verification of Monte Carlo calculated dose distributions for clinical electron beams." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33750.

Full text
Abstract:
Current electron beam treatment planning algorithms are inadequate to calculate dose distributions in heterogeneous phantoms. Fast Monte Carlo algorithms are accurate in general, but their clinical implementation needs validation. Calculations of electron beam dose distributions performed using the fast Monte Carlo system XVMC and the well-benchmarked general-purpose Monte Carlo code EGSnrc were compared with measurements. Irradiations were performed using the 9 MeV and 15 MeV beams from the Clinac 18 accelerator under standard conditions. Percent depth doses and lateral profiles were measured with thermoluminescent dosimeters and an electron diode, respectively. The accelerator was modelled using EGS4/BEAM and using an experiment-based beam model. All measurements were corrected by EGSnrc-calculated stopping-power ratios. Overall, the agreement between measurement and calculation is excellent. Small remaining discrepancies can be attributed to the non-equivalence between physical and simulated lung material, the precision of the energy tuning, the optimisation of the beam model parameters, and detector fluence perturbation effects.
APA, Harvard, Vancouver, ISO, and other styles
33

Rees, Vaughan P. "Evaluation of a novel neutron detector using experimental and Monte Carlo Techniques." Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580602.

Full text
Abstract:
Following a literature survey of neutron detection methods, a conceptual design for a novel dose meter was proposed, consisting of a pair of CdInTe (CIT) detectors with a thin layer of 6LiF sandwiched between them, achieved by coating one of the detectors with 6LiF. A detector was constructed using two 10 x 10 x 2 mm CIT detectors, one of which was coated with a 4 μm 6LiF layer. Coincidence counting of the alpha particle and triton arising from the 6Li(n,α)3H reaction was used to reduce the gamma background. Development of this design was carried out by undertaking experimental work on a simple prototype and by conducting Geant4 simulations. The Geant4 simulation work included comparison of simulations with experimental data to validate the modelling work carried out. Results from this work show that neutron detection is possible using this technique, with a 6LiF layer a few microns thick, and that inference of neutron energy from the detector response is possible in principle. The modelling work also identified drawbacks and limitations of the use of CIT, notably the opacity to thermal neutrons of the 2 mm thick detectors used in the experimental work. This affected detector performance significantly, with detector separation playing a key role in detector efficiency, and with efficiency also varying strongly with neutron field direction. The presence of neutrons was detected in a mixed neutron and gamma field, although 5 cm of lead shielding was required in order to detect a response above background. A review of the different methods of neutron detection is also presented, with a discussion of their applicability to monitoring worker dose in the workplace. Having reviewed the performance of available dose meters, it is concluded that there is scope for improved novel instruments, as the response of the existing instruments available has a strong energy dependence.
APA, Harvard, Vancouver, ISO, and other styles
34

He, Yufeng. "Experimental and Monte Carlo simulation investigations of adsorption heterogeneity in nanoporous materials." Thesis, University of Edinburgh, 2004. http://hdl.handle.net/1842/14036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Vegas, Lozano Esteban. "Optimización en estudios de Monte Carlo en Estadística: Aplicaciones al Contraste de Hipótesis." Doctoral thesis, Universitat de Barcelona, 1996. http://hdl.handle.net/10803/1565.

Full text
Abstract:
The main result is the presentation of an optimization technique for Monte Carlo studies in statistics. An estimator of the expectation of a dichotomous variable (Y) is obtained which has a smaller variance than the usual estimator, the relative frequency. This optimized estimator is based on the knowledge of another dichotomous (control) variable, C, correlated with Y and with known expectation E(C). The technique is simple to implement, and in Monte Carlo simulation it is relatively common to have such control variables available. Thus, for example, in simulation studies of the power of a new nonparametric test, a comparable parametric test of known power can sometimes be used.

The estimator is shown to be unbiased and an expression for its variance is obtained. Several estimators of this variance were studied, one of which was chosen as the most suitable. In addition, the percentage reduction in variance of the new estimator compared with the usual estimator (the relative frequency) is studied; values between 40% and 90% are observed as the correlation between the control variable (C) and the study variable (Y) increases.

To validate the above theoretical results and to illustrate the proposed technique, two simulation studies were carried out. The first obtains an estimate of the power of a new test, while the second is a general simulation study with no specific aim.

A new test for the Behrens-Fisher problem, based on the Rao distance, was proposed, and the above technique was applied to it to determine its power and robustness; both turn out to be optimal.

Finally, two real cases from the biomedical field in which the Behrens-Fisher problem arises are presented. In both studies a critical analysis is carried out, since the true error probabilities differ from the assumed ones as a result of ignoring probable differences between the variances.
The main purpose is the presentation of an optimization technique for Monte Carlo studies in statistics and the subsequent study of some statistical properties of the estimator associated with this technique. An estimator of the expectation of a dichotomous variable, Y, with smaller variance than the most obvious unbiased estimator, the relative frequency, is obtained. This new estimator is based on the availability of another dichotomous (control) variable, C, correlated with Y and with known expectation E(C). The availability of such a control variable is relatively common in Monte Carlo simulations. So, for example, simulation studies of the power of a new nonparametric test may sometimes use a comparable parametric test with known power.

Moreover, a new test for the Behrens-Fisher problem, based on geodesic distance criteria, is proposed. The power and robustness of this test are estimated through Monte-Carlo simulation using the previous optimization technique.
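The variance-reduction idea in this abstract is a control-variate construction. A minimal sketch of the standard regression-adjusted control-variate estimator follows (this is the textbook form, not the thesis's own optimised derivation):

```python
def control_variate_estimate(y, c, ec):
    """Estimate E[Y] from paired 0/1 samples (y, c) using the dichotomous
    control variable C with known mean ec.  The regression coefficient beta
    is estimated from the sample; when Y and C are uncorrelated, beta is
    near zero and the estimator reduces to the relative frequency."""
    n = len(y)
    ybar = sum(y) / n
    cbar = sum(c) / n
    cov = sum((yi - ybar) * (ci - cbar) for yi, ci in zip(y, c)) / (n - 1)
    var_c = sum((ci - cbar) ** 2 for ci in c) / (n - 1)
    beta = cov / var_c if var_c > 0 else 0.0
    # Correct the raw frequency by the observed deviation of C from E(C).
    return ybar - beta * (cbar - ec)
```

When C is perfectly correlated with Y, the estimator collapses to the known expectation E(C), which is the limiting case of the 40-90% variance reduction reported in the abstract.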
APA, Harvard, Vancouver, ISO, and other styles
36

Szameitat, Tobias [Verfasser], and Horst [Akademischer Betreuer] Fischer. "New geant4-based Monte Carlo software for the COMPASS-II experiment at CERN." Freiburg : Universität, 2017. http://d-nb.info/1125906251/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yeung, Alan B. (Alan Brian) Carleton University Dissertation Physics. "A Monte Carlo study of the Sudbury Neutrino Observatory small test detector experiment." Ottawa, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
38

Paczkowski, Remi. "Monte Carlo Examination of Static and Dynamic Student t Regression Models." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/38691.

Full text
Abstract:
This dissertation examines a number of issues related to Static and Dynamic Student t Regression Models. The Static Student t Regression Model is derived and transformed to an operational form. The operational form is then examined in a series of Monte Carlo experiments. The model is judged based on its usefulness for estimation and testing and its ability to model the heteroskedastic conditional variance. It is also compared with the traditional Normal Linear Regression Model. Subsequently the analysis is broadened to a dynamic setup. The Student t Autoregressive Model is derived and a number of its operational forms are considered. Three forms are selected for a detailed examination in a series of Monte Carlo experiments. The models’ usefulness for estimation and testing is evaluated, as well as their ability to model the conditional variance. The models are also compared with the traditional Dynamic Linear Regression Model.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Forster, Simon. "Nouveau matériau semi-conducteur à large bande interdite à base de carbures ternaires - Enquête sur Al4SiC4." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAI095.

Full text
Abstract:
Wide bandgap semiconductor materials can withstand harsh environments and operate over a wide temperature range, which makes them ideal for many applications such as sensors, high power and radio frequencies. However, newer materials are needed to achieve significant power efficiency in various applications, or to develop new applications complementing wide bandgap semiconductors such as GaN and SiC. In this thesis, three different methods are used to study one of these new materials, aluminium silicon carbide (Al4SiC4): (1) ensemble Monte Carlo simulations to study the electron transport properties of the new ternary carbide, (2) experimental studies to determine its material properties, and (3) device simulations of a heterostructure device made possible by this ternary carbide. All these methods are interconnected: data from each can feed into the others to obtain new results or to refine those already obtained, leading to attractive electrical properties such as a bandgap of 2.78 eV or a peak drift velocity of 1.35 × 10⁷ cm s⁻¹. The ensemble Monte Carlo toolbox, developed in-house for simulations of Si, Ge, GaAs, AlxGa1-xAs, AlAs and InSb, is adapted for simulations of the ternary carbide by adding a new valley transformation to account for the hexagonal structure of Al4SiC4. We predict a peak electron drift velocity of 1.35 × 10⁷ cm s⁻¹ at an electric field of 1400 kV cm⁻¹ and a maximum electron mobility of 82.9 cm² V⁻¹ s⁻¹. We observe a diffusion constant of 2.14 cm² s⁻¹ at low electric field and of 0.25 cm² s⁻¹ at high electric field.
Finally, we show that Al4SiC4 has a critical field of 1831 kV cm⁻¹. Semiconductor crystals previously grown at IMGP are used, one grown from solution and the other from a crucible melt. Three different experiments are performed on them: (1) UV, IR and visible spectroscopy, (2) X-ray photoelectron spectroscopy, and (3) two- and four-probe measurements in which metal contacts are formed on the crystals. We find a bandgap of 2.78 ± 0.02 eV from UV, IR and Vis spectroscopy, and a thick oxide layer on the samples using XPS. Unfortunately, the two- and four-probe measurements gave no results other than noise, most likely because of the thick oxide layer found on the samples. In the device simulations, the commercial software Atlas by Silvaco is used to predict the performance of heterostructure devices, with gate lengths of 5, 2 and 1 μm, made possible by the ternary carbide in combination with SiC. The 5 μm gate length SiC/Al4SiC4 heterostructure transistor delivers a maximum drain current of 1.68 × 10⁻⁴ A/μm, which increases to 2.44 × 10⁻⁴ A/μm and 3.50 × 10⁻⁴ A/μm for gate lengths of 2 μm and 1 μm, respectively. The device breakdown voltage is 59.0 V, which reduces to 31.0 V and 18.0 V for the scaled 2 μm and 1 μm gate length transistors. The scaled-down 1 μm gate length device switches faster because of its higher transconductance of 6.51 × 10⁻⁵ S/μm, compared with only 1.69 × 10⁻⁶ S/μm for the largest device. Finally, the sub-threshold slope of the devices is 197.3 mV/dec, 97.6 mV/dec and 96.1 mV/dec for gate lengths of 5 μm, 2 μm and 1 μm, respectively.
Wide bandgap semiconductor materials are able to withstand harsh environments and operate over a wide range of temperatures. These make them ideal for many applications such as sensors, high power and radio frequencies, to name a few. However, more novel materials are required to achieve significant power efficiency in various applications, or to develop new applications to complement current wide bandgap semiconductors such as GaN and SiC. In this dissertation, three different methods are used to study one of these novel materials, aluminium silicon carbide (Al4SiC4): (1) ensemble Monte Carlo simulations in order to study the electron transport properties of the novel ternary carbide, (2) experimental studies to determine its material properties, and (3) device simulations of a heterostructure device made possible by this ternary carbide. All these methods interlink with each other: data from each of them can feed into the others to acquire new results or refine obtained results, leading to attractive electrical properties such as a bandgap of 2.78 eV or a peak drift velocity of 1.35 × 10⁷ cm s⁻¹. An ensemble Monte Carlo toolbox, developed in-house for simulations of Si, Ge, GaAs, AlxGa1−xAs, AlAs and InSb, is adopted for simulations of the ternary carbide by adding a new valley transformation to account for the hexagonal structure of Al4SiC4. We predict a peak electron drift velocity of 1.35 × 10⁷ cm s⁻¹ at an electric field of 1400 kV cm⁻¹ and a maximum electron mobility of 82.9 cm² V⁻¹ s⁻¹. We have seen a diffusion constant of 2.14 cm² s⁻¹ at a low electric field and of 0.25 cm² s⁻¹ at a high electric field. Finally, we show that Al4SiC4 has a critical field of 1831 kV cm⁻¹. Semiconductor crystals that had previously been grown at IMGP are used, one grown from solution and the other from a crucible melt.
Three different experiments are performed on them: (1) UV, IR and Vis spectroscopy, (2) X-ray photoelectron spectroscopy, and (3) two- and four-probe measurements where metal contacts are grown on the crystals. Here we have found a bandgap of 2.78 ± 0.02 eV from UV, IR and Vis spectroscopy, and a thick oxide layer on the samples using XPS. Unfortunately the two- and four-probe measurements failed to give any results other than noise, most likely due to the thick oxide layer that was found on the samples. In the device simulations, the commercial software Atlas by Silvaco is utilized to predict the performance of heterostructure devices, with gate lengths of 5 μm, 2 μm and 1 μm, made possible by the ternary carbide in combination with SiC. The 5 μm gate length SiC/Al4SiC4 heterostructure transistor delivers a maximum drain current of 1.68 × 10⁻⁴ A/μm, which increases to 2.44 × 10⁻⁴ A/μm and 3.50 × 10⁻⁴ A/μm for gate lengths of 2 μm and 1 μm, respectively. The device breakdown voltage is 59.0 V, which reduces to 31.0 V and 18.0 V for the scaled 2 μm and 1 μm gate length transistors. The scaled-down 1 μm gate length device switches faster because of its higher transconductance of 6.51 × 10⁻⁵ S/μm, compared to only 1.69 × 10⁻⁶ S/μm for the largest device. Finally, the sub-threshold slope of the scaled devices is 197.3 mV/dec, 97.6 mV/dec and 96.1 mV/dec for gate lengths of 5 μm, 2 μm and 1 μm, respectively.
APA, Harvard, Vancouver, ISO, and other styles
40

Fleming, Austin. "Uncertainty Qualification of Photothermal Radiometry Measurements Using Monte Carlo Simulation and Experimental Repeatability." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/3299.

Full text
Abstract:
Photothermal Radiometry is a common thermal property measurement technique which is used to measure the properties of layered materials. It uses a modulated laser to heat a sample, whose thermal response can then be used to determine the thermal properties of the layers in the sample. The motivation for this work is to provide a better understanding of the accuracy and the repeatability of the Photothermal Radiometry measurement technique. Through this work the sensitivity of the results to input uncertainties is determined. Additionally, using numerical simulations, the overall uncertainty on a theoretical measurement is determined. The repeatability of Photothermal Radiometry measurements is tested with the use of a proton-irradiated zirconium carbide sample. Due to the proton irradiation, this sample contains two layers with a thermal resistance between them. This sample has been independently measured by three different researchers, in three different countries, and the results are compared to determine the repeatability of Photothermal Radiometry measurements. Finally, from the sensitivity and uncertainty analysis, experimental procedures and suggestions are provided to reduce the uncertainty in experimentally measured results.
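The Monte Carlo uncertainty-quantification step described here (sample the uncertain inputs, push each draw through the model, and read off the spread of the outputs) can be sketched generically. The code below is illustrative; the thesis applies the idea to a photothermal radiometry fitting model, which is not reproduced here:

```python
import random
import statistics

def mc_uncertainty(model, nominal, sigma, n=20000, seed=0):
    """Propagate independent Gaussian input uncertainties through an
    arbitrary model by direct Monte Carlo sampling.  Returns the mean
    and standard deviation of the model output."""
    rng = random.Random(seed)
    outputs = [
        model([rng.gauss(m, s) for m, s in zip(nominal, sigma)])
        for _ in range(n)
    ]
    return statistics.fmean(outputs), statistics.stdev(outputs)
```

For a linear model, the output standard deviation should match the quadrature sum of the input uncertainties, which makes a convenient sanity check for the sampler.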
APA, Harvard, Vancouver, ISO, and other styles
41

Lecina, Casas Daniel. "Studying protein-ligand interactions using a Monte Carlo procedure." Doctoral thesis, Universitat de Barcelona, 2017. http://hdl.handle.net/10803/459297.

Full text
Abstract:
Biomolecular simulations have been widely used in the study of protein-ligand interactions; understanding the mechanisms involved in the prediction of binding affinities would have a significant repercussion in the pharmaceutical industry. Notwithstanding the intrinsic difficulty of sampling the phase space, hardware and methodological developments make computer simulations a promising candidate for the resolution of biophysically relevant problems. In this context, the objective of the thesis is the development of a protocol that permits studying protein-ligand interactions, with a view to its application in drug discovery pipelines. The author contributed to rewriting PELE, our Monte Carlo sampling procedure, using good practices of software development: testing, and improved readability, modularity, encapsulation, maintenance and version control, to name a few. Importantly, the recoding resulted in a competitive cutting-edge software package that is able to integrate new algorithms and platforms, such as new force fields or a graphical user interface, while being reliable and efficient. The rest of the thesis is built upon this development. At this point, we established a protocol of unbiased all-atom simulations using PELE, often combined with Markov state models (MSMs) to characterize the exploration of the energy landscape. In the thesis, we have shown that PELE is a suitable tool to map complex mechanisms in an accurate and efficient manner. For example, we successfully conducted studies of ligand migration in prolyl oligopeptidases and nuclear hormone receptors (NHRs). Using PELE, we could map the ligand migration and binding pathway in such complex systems in less than 48 hours. On the other hand, with this technique we often run batches of hundreds of simulations to reduce the wall-clock time.
An MSM is a useful technique to join these independent simulations into a unique statistical model, as individual trajectories only need to characterize the energy landscape locally, and the global characterization can be extracted from the model. We successfully applied the combination of these two methodologies to quantify binding mechanisms and estimate the binding free energy in systems involving NHRs and tyrosinases. However, this technique represents a significant computational effort. To reduce the computational load, we developed a new methodology to overcome the sampling limitations caused by the ruggedness of the energy landscape. In particular, we used a procedure of iterative simulations with adaptive spawning points based on reinforcement learning ideas. This permits sampling binding mechanisms at a fraction of the cost, and represents a speedup of an order of magnitude in complex systems. Importantly, we show in a proof of concept that it can be used to estimate absolute binding free energies. Overall, we hope that the methodologies presented herein help streamline the drug design process.
Biomolecular simulations have been widely used in the study of protein-ligand interactions. Understanding the mechanisms involved in the prediction of binding affinities has a great repercussion in the pharmaceutical industry. Despite the intrinsic difficulties of sampling the phase space, hardware and methodological improvements make computer simulations a promising candidate for the resolution of highly relevant biophysical problems. In this context, the objective of the thesis is the development of a protocol that introduces a more efficient study of protein-ligand interactions, with a view to disseminating PELE, a Monte Carlo sampling procedure, in drug design. Our main focus has been to overcome the sampling limitations caused by the ruggedness of the energy landscape, applying our protocol to perform detailed atomistic analyses of nuclear hormone receptors, G-protein-coupled receptors, tyrosinases and prolyl oligopeptidases, in collaboration with a pharmaceutical company and several experimental laboratories. With all this, we hope that the methodologies presented in this thesis help to improve drug design.
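The Markov state model step described in this abstract, stitching many short independent trajectories into one statistical model, amounts to counting transitions at a fixed lag and row-normalising. A bare-bones sketch follows (hypothetical discrete-state input; a production MSM package would add lag selection, reversibility enforcement and other refinements):

```python
def msm_transition_matrix(trajs, n_states):
    """Build a row-stochastic transition matrix from many short,
    independent discrete-state trajectories (lag = 1 step)."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for traj in trajs:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    T = []
    for row in counts:
        tot = sum(row)
        T.append([x / tot if tot else 0.0 for x in row])
    return T

def stationary(T, iters=500):
    """Stationary distribution of a transition matrix by power iteration;
    this is the global equilibrium recovered from local trajectory data."""
    n = len(T)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * T[i][j] for i in range(n)) for j in range(n)]
    return pi
```

Note how no single trajectory needs to visit every state: the counts from all trajectories are pooled before normalisation, which is exactly why batches of short independent runs suffice.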
APA, Harvard, Vancouver, ISO, and other styles
42

Moutoussamy, Vincent. "Contributions à l'analyse de fiabilité structurale : prise en compte de contraintes de monotonie pour les modèles numériques." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30209/document.

Full text
Abstract:
This thesis is set in the context of structural reliability associated with numerical models representing a physical phenomenon. Reliability is taken to be represented by indicators in the form of a probability and a quantile. The numerical models studied are considered deterministic and of black-box type. Knowledge of the modelled physical phenomenon nevertheless makes it possible to formulate shape hypotheses about this model. Taking monotonicity properties into account when establishing the risk indicators constitutes the originality of this thesis work. The main interest of this hypothesis is the ability to control these indicators with certainty. This control takes the form of bounds obtained by choosing an appropriate design of experiments. The work of this thesis concentrates on two themes associated with this monotonicity hypothesis. The first is the study of these bounds for probability estimation: the influence of the dimension and of the design of experiments used on the quality of the bounds on the probability of events that can lead to the degradation of an industrial component or structure is studied. The second is to take advantage of the information in these bounds to estimate a probability or a quantile as well as possible. For probability estimation, the objective is to improve the existing methods specific to probability estimation under monotonicity constraints. The main probability estimation steps were then adapted to the bounding and estimation of a quantile. These methods were then put into practice on an industrial case.
This thesis takes place in a structural reliability context which involves numerical models implementing a physical phenomenon. The reliability of an industrial component is summarised by two failure indicators, a probability and a quantile. The studied numerical models are considered deterministic and black-box. Nonetheless, knowledge of the studied physical phenomenon allows some hypotheses to be made on this model. The original work of this thesis comes from considering monotonicity properties of the phenomenon for computing these indicators. The main interest of this hypothesis is to provide a sure control on these indicators. This control takes the form of bounds obtained by an appropriate design of numerical experiments. This thesis focuses on two themes associated with this monotonicity hypothesis. The first one is the study of these bounds for probability estimation; the influence of the dimension and the chosen design of experiments on the bounds is studied. The second one takes into account the information provided by these bounds to estimate a probability or a quantile as well as possible. For probability estimation, the aim is to improve the existing methods devoted to probability estimation under monotonicity constraints. The main steps built for probability estimation are then adapted to bound and estimate a quantile. These methods have then been applied to an industrial case.
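The certain bounds that monotonicity buys can be illustrated with a toy version: for an increasing black box, any input dominating a known failing design point must fail, and any input dominated by a known safe design point must be safe. The sketch below is illustrative only and does not reproduce the thesis's algorithms:

```python
def monotone_bounds(design, samples):
    """Bound the failure probability of a monotone increasing black box
    g on [0,1]^d, given evaluated design points [(x, failed)].  Each sample
    is classified as certainly-failing (it dominates a failed design point),
    certainly-safe (it is dominated by a safe design point), or unknown;
    the certain fractions give a guaranteed lower and upper bound over the
    sample cloud, without any further calls to g."""
    def dominates(a, b):
        return all(ai >= bi for ai, bi in zip(a, b))

    sure_fail = sure_safe = 0
    for x in samples:
        if any(failed and dominates(x, d) for d, failed in design):
            sure_fail += 1
        elif any(not failed and dominates(d, x) for d, failed in design):
            sure_safe += 1
    n = len(samples)
    return sure_fail / n, 1 - sure_safe / n
```

Adding design points shrinks the unknown region, which is why the choice of the design of experiments drives the quality of the bounds.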
APA, Harvard, Vancouver, ISO, and other styles
43

Martínez, Rovira Immaculada. "Monte Carlo and experimental small-field dosimetry applied to spatially fractionated synchrotron radiotherapy techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/81470.

Full text
Abstract:
Two innovative radiotherapy (RT) approaches are under development at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF): microbeam radiation therapy (MRT) and minibeam radiation therapy (MBRT). The two main distinct characteristics with respect to conventional RT are the use of submillimetric field sizes and spatial fractionation of the dose. This PhD work deals with different features related to small-field dosimetry involved in these techniques. Monte Carlo (MC) calculations and several experimental methods are used with this aim in mind. The core of this PhD Thesis consisted of the development and benchmarking of an MC-based computation engine for a treatment planning system devoted to MRT within the framework of the preparation of forthcoming MRT clinical trials. Additional achievements were the definition of safe MRT irradiation protocols, the assessment of scatter factors in MRT, the further improvement of the MRT therapeutic index by injecting a contrast agent into the tumour and the definition of a dosimetry protocol for preclinical trials in MBRT.
APA, Harvard, Vancouver, ISO, and other styles
44

Helgesson, Petter. "Experimental data and Total Monte Carlo : Towards justified, transparent and complete nuclear data uncertainties." Licentiate thesis, Uppsala universitet, Tillämpad kärnfysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265330.

Full text
Abstract:
The applications of nuclear physics are many, one important example being nuclear power, which can help decelerate climate change. In any of these applications, so-called nuclear data (ND, numerical representations of nuclear physics) are used in computations and simulations which are necessary for, e.g., design and maintenance. The ND is not perfectly known - there are uncertainties associated with it - and this thesis concerns the quantification and propagation of these uncertainties. In particular, methods are developed to include experimental data in the Total Monte Carlo methodology (TMC). The work goes in two directions. One is to include the experimental data by giving weights to the different "random files" used in TMC. This methodology is applied to practical cases using an automatic interpretation of an experimental database, including uncertainties and correlations. The weights are shown to give a consistent implementation of Bayes' theorem, such that the obtained uncertainty estimates can in theory be correct, given the experimental data. The practical implementation is more complicated, much due to the interpretation of experimental data, but also because of model defects - the methodology assumes that there are parameter choices such that the model of the physics reproduces reality perfectly. This assumption is not valid, and in future work, model defects should be taken into account. Experimental data should also be used to give feedback to the distribution of the parameters, and not only to provide weights at a later stage. The other direction is based on the simulation of the experimental setup as a means to analyze the experiments in a structured way, and to obtain the full joint distribution of several different data points. In practice, this methodology has been applied to the thermal (n,α), (n,p), (n,γ) and (n,tot) cross sections of 59Ni.
For example, the estimated expected value and standard deviation for the (n,α) cross section is (12.87 ± 0.72) b, which can be compared to the established value of (12.3 ± 0.6) b given in the work of Mughabghab. Note that the correlations to the other thermal cross sections, as well as other aspects of the distribution, are also obtained in this work - and this can be important when propagating the uncertainties. The careful evaluation of the thermal cross sections is complemented by a coarse analysis of the cross sections of 59Ni at other energies. The resulting nuclear data is used to study the propagation of the uncertainties through a model describing stainless steel in the spectrum of a thermal reactor. In particular, the helium production is studied. The distribution has a large uncertainty (a standard deviation of (17 ± 3) %), and it shows a strong asymmetry. Much of the uncertainty and its shape can be attributed to the coarser part of the uncertainty analysis, which therefore shall be refined in the future.
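The file-weighting direction of this work assigns each TMC "random file" a likelihood weight in the spirit of Bayes' theorem. Under the simplifying assumption of independent Gaussian experimental errors (the thesis also treats correlations and discusses model defects), the weights reduce to a normalised exp(-chi2/2), sketched here:

```python
import math

def tmc_weights(file_predictions, exp_values, exp_sigmas):
    """Weight each TMC random file by the likelihood of the experimental
    data given its predictions: w_k proportional to exp(-chi2_k / 2),
    assuming independent Gaussian measurement errors.  Returns weights
    normalised to sum to one."""
    weights = []
    for pred in file_predictions:
        chi2 = sum(((p - e) / s) ** 2
                   for p, e, s in zip(pred, exp_values, exp_sigmas))
        weights.append(math.exp(-0.5 * chi2))
    total = sum(weights)
    return [w / total for w in weights]
```

Weighted averages over the random files then yield posterior-like estimates of any derived quantity, which is how the experimental data constrain the propagated nuclear-data uncertainties.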
APA, Harvard, Vancouver, ISO, and other styles
45

Kang, Donghee. "Longitudinal lambda and anti-lambda polarization at the COMPASS experiment." [S.l. : s.n.], 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gschwender, Michael [Verfasser], and Tobias [Akademischer Betreuer] Lachenmaier. "Finite Element and Monte Carlo Simulations Accompanying the SOX Experiment / Michael Gschwender ; Betreuer: Tobias Lachenmaier." Tübingen : Universitätsbibliothek Tübingen, 2019. http://d-nb.info/1188613707/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

McClain, Christopher J. "A Monte Carlo simulation of the EEMC detector located in the STAR experiment at RHIC." Virtual Press, 2005. http://liblink.bsu.edu/uhtbin/catkey/1315173.

Full text
Abstract:
A Monte Carlo simulation program of the response of the Endcap Electromagnetic Calorimeter (EEMC) and Shower Maximum Detector (SMD) was developed to determine the ability of the detectors to provide γ/π° discrimination and to calculate the effects of crosstalk between readout channels of multianode photomultiplier tubes (MAPMT). The importance of this discrimination is that it allows a better measurement of the direct-photon asymmetries, which are then used to calculate the gluon contribution to the proton spin structure. These measurements arise from polarized-proton collisions provided by the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and are detected using the Solenoidal Tracker at RHIC (STAR), which includes the EEMC and SMD. In order to obtain accurate asymmetry measurements, the photons resulting from π° decay must be identified through pion-mass reconstruction to avoid mistaking them for direct photons. This Monte Carlo simulation and reconstruction algorithm successfully identified 60% of the pions from single-pion events and 40% of the pions from two-pion events. The effects of MAPMT crosstalk, as determined by the Monte Carlo, were less than 2% on π° identification and were therefore determined to be insignificant.
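The pion-mass reconstruction mentioned in this abstract rests on the standard two-photon invariant mass relation m² = 2·E₁·E₂·(1 − cos θ). The following is a small illustrative check with hypothetical energies, not the STAR reconstruction code:

```python
import math

M_PI0 = 0.1349768  # GeV, neutral pion mass (PDG value)

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass of two (massless) photons from their energies and
    the opening angle between them: m^2 = 2 * E1 * E2 * (1 - cos theta)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Hypothetical symmetric decay of a 10 GeV pi0 (E1 = E2 = 5 GeV):
# solving m^2 = 2 * (E/2)^2 * (1 - cos theta) gives the opening angle.
e_pi0 = 10.0
theta = math.acos(1.0 - 2.0 * M_PI0**2 / e_pi0**2)
m_rec = diphoton_mass(e_pi0 / 2, e_pi0 / 2, theta)
```

A reconstructed diphoton mass near 0.135 GeV tags the pair as a π° decay rather than a direct photon plus background, which is what allows the π° photons to be removed from the direct-photon sample.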
Department of Physics and Astronomy
APA, Harvard, Vancouver, ISO, and other styles
48

Àgueda, Costafreda Neus. "Near-relativistic electron events. Monte Carlo simulations of solar injection and interplanetary transport." Doctoral thesis, Universitat de Barcelona, 2008. http://hdl.handle.net/10803/749.

Full text
Abstract:
We have developed a Monte Carlo model to simulate the transport of solar near-relativistic (NR; 30-300 keV) electrons along the interplanetary magnetic field (IMF), including adiabatic focusing, pitch-angle dependent scattering, and solar wind effects. By taking into account the angular response of the LEFS60 telescope of the EPAM experiment on board the "Advanced Composition Explorer" spacecraft, we have been able to transform simulated pitch-angle distributions into sectored intensities measured by the telescope. We have developed an algorithm that allows us, for the first time, to infer the best-fit transport conditions and the underlying solar injection profile of NR electrons from the deconvolution of the effects of interplanetary transport on observational sectored intensities. We have studied seven NR electron events observed by the LEFS60 telescope between 1998 and 2004 with the aim of estimating the roles that solar flares and CME-driven shocks play in the acceleration and injection of NR electrons, as well as the conditions of the electron transport along the IMF.
In this set of seven NR electron events, we have identified two types of injection episodes in the derived injection profiles: short (< 20 min) and time-extended (> 1 h). The injection profile of three events shows both components: an initial short injection episode, followed by a second, much longer lasting episode. Two events show only a time-extended injection episode, while the others show an injection profile composed of several short injection episodes.
We have found that the timing of the prompt short injection episodes agrees with the timing of the hard X-rays and radio type III bursts. On the other hand, time-extended injection episodes seem to be related to intermittent radio emissions at the height of the CME leading edge or below, and sometimes to type II radio bursts. Thus, we conclude that short injection episodes are preferentially associated with the injection of flare-accelerated particles, while longer lasting episodes are provided by CME-driven shocks or post-eruptive reconnection phenomena at coronal heights lower than those of the CME-driven shocks.
From the fit of the events, we have derived the transport conditions of the electrons. We have found that the electron propagation was almost scatter-free (the radial mean free path of the electrons was ~0.9 AU) during two of the events, whereas during five of the events the propagation occurred under strong scattering conditions (the radial mean free path of the electrons was smaller than 0.2 AU). Those events showing a long radial mean free path reached the maximum intensity shortly (< 15 min) after the onset of the event; whereas those events showing a small radial mean free path reached the maximum intensity more than one hour after the onset.
The overall conclusion from this study is that there is a continuous spectrum of scenarios that allow for either flare or CME-driven shock NR electron injection, or for both, and that this can occur both under strong scattering and under almost "scatter-free" propagation conditions.

SUBJECT HEADINGS: Sun: coronal mass ejections (CMEs); Sun: flares; Sun: particle emission
APA, Harvard, Vancouver, ISO, and other styles
49

Prats, Garcia Hèctor. "Monte Carlo based methods applied to heterogeneous catalysis and gas separation." Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/666583.

Full text
Abstract:
The research work presented in this thesis is divided into two main topics: gas separation and heterogeneous catalysis. Although the systems studied in each part are quite different, they share two fundamental features: both topics have special industrial interest, and both have been studied through stochastic Monte Carlo based methods. The work on gas separation aims to assess the performance of several faujasite structures, a well-known family of zeolites, in CO2 capture processes. Concretely, ten faujasite structures with different Al content have been evaluated in the separation of post-combustion CO2 mixtures via simulation of swing adsorption processes. Through GCMC simulations performed over a wide range of pressures and temperatures, the pure and mixture adsorption isotherms and isobars for the different structures are obtained. These data are used to calculate several performance criteria such as purity, working capacity, selectivity and energy required per ton of CO2 captured. The results show that high Al content structures are suitable for operating under a TSA unit, while intermediate and low Al content structures show better performance in PSA and VSA units, respectively.
On the other hand, the research work on chemical reactivity focuses on the study of the water-gas shift reaction (WGSR) on copper surfaces, both from a thermodynamic and from a kinetic point of view. First, a kMC study is performed on the flat Cu(111) surface. The lattice model is quite simple, consisting of a hexagonal periodic grid of points. All sites are considered equivalent, and only repulsive lateral interactions between neighboring adsorbed CO species are included. However, even with this simple model, the kMC results agree quite well with the available experimental data, and demonstrate that the dominant reaction mechanism is the COOH-mediated associative mechanism.
The effect of van der Waals interactions is then studied by performing DFT calculations of the WGSR on Cu(321) using the Grimme D2 correction to account for dispersion forces. The results are compared with previous DFT results published in the literature where no van der Waals corrections were included. The comparison shows large differences in the adsorption energies of some gas species, as well as important differences in some energy barriers, demonstrating the importance of including dispersion terms in order to obtain a meaningful description of the energetics of the WGSR. New kMC simulations are then performed for the WGSR on the Cu(321) surface to study the effect of step sites on WGSR activity. The recently developed graph-theoretical kMC framework is used, coupled with cluster expansion Hamiltonians to account for the lateral interactions between neighboring species. The simulation results show that the activity is much lower on the stepped Cu(321) surface. Analysis of the kMC simulations suggests that the reason is the poisoning of step sites by CO species, as well as the presence of low energy barriers for some key steps in the reverse direction (e.g. water dissociation and COOH formation). Finally, the thesis ends with a brief tutorial on kMC simulations in which several issues are discussed, such as the importance of including diffusion processes and the effect of lateral interactions.
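As a companion to the kMC discussion in this abstract, here is a minimal kinetic Monte Carlo sketch of adsorption/desorption on equivalent sites. It is an illustrative toy, not the thesis code: it omits lateral interactions, diffusion, and the cluster-expansion Hamiltonians the abstract describes, and the rates and site counts are invented. At steady state the coverage should approach the Langmuir value k_ads / (k_ads + k_des).

```python
import math
import random

def kmc_coverage(n_sites=200, k_ads=1.0, k_des=1.0, n_steps=20000, seed=1):
    """Minimal kinetic Monte Carlo: adsorption/desorption on a lattice of
    equivalent sites, with Gillespie-style event selection and exponentially
    distributed waiting times. Returns the time-averaged coverage.
    (A real surface kMC tracks individual site occupations so that lateral
    interactions and diffusion can be included.)"""
    rng = random.Random(seed)
    n_occ = 0
    total_time = 0.0
    coverage_time = 0.0
    for _ in range(n_steps):
        r_ads = k_ads * (n_sites - n_occ)  # total adsorption rate
        r_des = k_des * n_occ              # total desorption rate
        r_tot = r_ads + r_des
        # exponential waiting time until the next event
        dt = -math.log(1.0 - rng.random()) / r_tot
        coverage_time += (n_occ / n_sites) * dt
        total_time += dt
        # choose the next event with probability proportional to its rate
        if rng.random() * r_tot < r_ads:
            n_occ += 1
        else:
            n_occ -= 1
    return coverage_time / total_time

theta = kmc_coverage()
```

With equal adsorption and desorption rates, the time-averaged coverage converges to 0.5; introducing site-dependent rates or repulsive CO-CO interactions, as in the thesis, is what makes phenomena like step-site poisoning appear.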
APA, Harvard, Vancouver, ISO, and other styles
50

Hertel, Ida Marlene [Verfasser]. "Schätzung des optimalen Designs eines nichtlinearen parametrischen Regressionsproblems mittels Monte-Carlo-Experimenten / Ida Marlene Hertel." Aachen : Shaker, 2014. http://d-nb.info/1053361564/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles