Dissertations / Theses on the topic 'Monte Carlo simulation model'

Consult the top 50 dissertations / theses for your research on the topic 'Monte Carlo simulation model.'


1

Hanlon, Peter E. "A retirement planning model using Monte Carlo simulation." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA386389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Dong-Mei. "Monte Carlo simulations for complex option pricing." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/monte-carlo-simulations-for-complex-option-pricing(a908ec86-2fb2-4d5d-83e5-9bff78033edd).html.

Full text
Abstract:
The thesis focuses on pricing complex options using Monte Carlo simulations. Due to the versatility of the Monte Carlo method, we are able to evaluate option prices with various underlying asset models: jump diffusion models, illiquidity models, stochastic volatility and so on. Both European options and Bermudan options are studied in this thesis. For the jump diffusion model in Merton (1973), we demonstrate European and Bermudan option pricing by the Monte Carlo scheme and extend this to multiple underlying assets; furthermore, we analyse the effect of stochastic volatility. For the illiquidity model in the spirit of Glover (2008), we model the illiquidity impact on option pricing in the simulation study. The four models considered are: the first-order feedback model with constant and with stochastic illiquidity, and the full feedback model with constant and with stochastic illiquidity. We provide detailed explanations for the presence of path failures when simulating the underlying asset price movement and suggest some measures to overcome these difficulties.
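The jump-diffusion pricing described in this abstract can be sketched with plain Monte Carlo on the terminal price. This is a generic illustration, not the thesis's code: the Merton-style dynamics, the helper name `merton_call_mc`, and every parameter value are assumptions made for the example.

```python
import numpy as np

def merton_call_mc(s0, k, r, sigma, lam, mu_j, sig_j, t, n_paths, seed=0):
    """European call under a Merton-style jump diffusion, plain Monte Carlo.
    lam is the jump intensity; log jump sizes are Normal(mu_j, sig_j)."""
    rng = np.random.default_rng(seed)
    kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0      # mean relative jump size
    n_jumps = rng.poisson(lam * t, n_paths)
    # Sum of N i.i.d. normal log-jumps, drawn in one shot per path.
    jumps = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
    z = rng.standard_normal(n_paths)
    log_st = (np.log(s0) + (r - 0.5 * sigma**2 - lam * kappa) * t
              + sigma * np.sqrt(t) * z + jumps)
    payoff = np.maximum(np.exp(log_st) - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

price = merton_call_mc(s0=100, k=100, r=0.05, sigma=0.2,
                       lam=0.5, mu_j=-0.1, sig_j=0.15, t=1.0, n_paths=200_000)
```

With the jump intensity set to zero the estimator reduces to a plain Black-Scholes Monte Carlo, which is a convenient sanity check.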
3

Kheirollah, Amir. "Monte Carlo Simulation of Heston Model in MATLAB GUI." Thesis, Mälardalen University, Department of Mathematics and Physics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-4253.

Full text
Abstract:

In the Black-Scholes model, the volatility is assumed to be deterministic, which causes inefficiencies and systematic trends in option prices. Many authors have proposed that volatility should instead be modelled by a stochastic process; the Heston model is one solution to this problem. To simulate the Heston model, one must handle the correlation between the asset price and the stochastic volatility, and this paper presents a solution to that issue. A review of the Heston model is given, and after the modelling, some investigations are carried out on the applet. The application of this model to several types of options has also been programmed in a MATLAB Graphical User Interface (GUI).
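The price-volatility correlation mentioned above is typically handled by building correlated normal increments from independent draws. The sketch below uses a full-truncation Euler discretisation, one common choice rather than necessarily the paper's scheme; NumPy stands in for the MATLAB GUI, and every parameter value is illustrative.

```python
import numpy as np

def heston_paths(s0, v0, r, kappa, theta, xi, rho, t, n_steps, n_paths, seed=0):
    """Terminal prices under Heston dynamics, full-truncation Euler scheme.
    rho correlates the price and variance Brownian increments."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dw_s = np.sqrt(dt) * z1                                      # price shock
        dw_v = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)   # correlated vol shock
        v_pos = np.maximum(v, 0.0)        # full truncation: clip v only where it enters
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos) * dw_s)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos) * dw_v
    return s

st = heston_paths(s0=100, v0=0.04, r=0.05, kappa=2.0, theta=0.04,
                  xi=0.3, rho=-0.7, t=1.0, n_steps=200, n_paths=100_000)
call = np.exp(-0.05) * np.maximum(st - 100.0, 0.0).mean()
```

A useful check is that the discounted terminal price keeps its martingale mean of `s0` regardless of the correlation.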

4

Smith, Graham. "The measurement of free energy by Monte Carlo computer simulation." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/6466.

Full text
Abstract:
One of the most important problems in statistical mechanics is the measurement of free energies, these being the quantities that determine the direction of chemical reactions and--the concern of this thesis--the location of phase transitions. While Monte Carlo (MC) computer simulation is a well-established and invaluable aid in statistical mechanical calculations, it is well known that, in its most commonly-practised form (where samples are generated from the Boltzmann distribution), it fails if applied directly to the free energy problem. This failure occurs because the measurement of free energies requires a much more extensive exploration of the system's configuration space than do most statistical mechanical calculations: configurations which have a very low Boltzmann probability make a substantial contribution to the free energy, and the important regions of configuration space may be separated by potential barriers. We begin the thesis with an introduction, and then give a review of the very substantial literature that the problem of the MC measurement of free energy has produced, explaining and classifying the various different approaches that have been adopted. We then proceed to present the results of our own investigations. First, we investigate methods in which the configurations of the system are sampled from a distribution other than the Boltzmann distribution, concentrating in particular on a recently developed technique known as the multicanonical ensemble. The principal difficulty in using the multicanonical ensemble is the difficulty of constructing it: implicit in it is at least partial knowledge of the very free energy that we are trying to measure, and so to produce it requires an iterative process. 
Therefore we study this iterative process, using Bayesian inference to extend the usual method of MC data analysis, and introducing a new MC method in which inferences are made based not on the macrostates visited by the simulation but on the transitions made between them. We present a detailed comparison between the multicanonical ensemble and the traditional method of free energy measurement, thermodynamic integration, and use the former to make a high-accuracy investigation of the critical magnetisation distribution of the 2d Ising model from the scaling region all the way to saturation. We also make some comments on the possibility of going beyond the multicanonical ensemble to 'optimal' MC sampling. Second, we investigate an isostructural solid-solid phase transition in a system consisting of hard spheres with a square-well attractive potential. Recent work, which we have confirmed, suggests that this transition exists when the range of the attraction is very small (width of attractive potential / hard core diameter ~ 0.01). First we study this system using a method of free energy measurement in which the square-well potential is smoothly transformed into that of the Einstein solid. This enables a direct comparison of a multicanonical-like method with thermodynamic integration. Then we perform extensive simulations using a different, purely multicanonical approach, which enables the direct connection of the two coexisting phases. It is found that the measurement of transition probabilities is again advantageous for the generation of the multicanonical ensemble, and can even be used to produce the final estimators. Some of the work presented in this thesis has been published or accepted for publication: G. R. Smith & A. D. Bruce, A Study of the Multicanonical Monte Carlo Method, J. Phys. A 28, 6623 (1995), doi:10.1088/0305-4470/28/23/015; G. R. Smith & A. D. Bruce, Multicanonical Monte Carlo Study of a Structural Phase Transition, Europhys. Lett. 34, 91 (1996), doi:10.1209/epl/i1996-00421-1; G. R. Smith & A. D. Bruce, Multicanonical Monte Carlo Study of Solid-Solid Phase Coexistence in a Model Colloid, Phys. Rev. E 53, 6530-6543 (1996), doi:10.1103/PhysRevE.53.6530.
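The multicanonical idea of sampling from a non-Boltzmann distribution and reweighting afterwards can be illustrated on a toy double-well landscape. In this sketch the flattening weights are known exactly, whereas, as the abstract stresses, constructing them iteratively is the hard part in practice; the landscape, lattice size and sweep count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 21
x_grid = np.linspace(-2.0, 2.0, n)
energy = 4.0 * (x_grid**2 - 1.0)**2        # double well, barrier at x = 0

def metropolis(log_w, n_sweeps=500_000):
    """Nearest-neighbour Metropolis chain targeting weights exp(log_w)."""
    x = n // 2
    visits = np.zeros(n)
    steps = rng.choice([-1, 1], size=n_sweeps)
    u = rng.random(n_sweeps)
    for s, ui in zip(steps, u):
        prop = x + s
        if 0 <= prop < n and ui < np.exp(log_w[prop] - log_w[x]):
            x = prop
        visits[x] += 1
    return visits

# Multicanonical sampling: weights w(x) = pi(x) * exp(+E(x)) make the chain see
# a flat landscape (log_w = 0 here), so it crosses the barrier freely.
muca_visits = metropolis(np.zeros(n))
boltz = muca_visits * np.exp(-energy)      # reweight back to the Boltzmann ensemble
boltz /= boltz.sum()
exact = np.exp(-energy) / np.exp(-energy).sum()
```

A Boltzmann chain would rarely cross the barrier at x = 0; the flat-ensemble chain visits every state, and the reweighted histogram recovers the two-peaked Boltzmann distribution.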
5

Ventura, Marcelo dos Santos. "Monte Carlo simulation studies in log-symmetric regressions." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/8278.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
This work deals with two Monte Carlo simulation studies in log-symmetric regression models, which are particularly useful when the response variable is continuous, strictly positive and asymmetric, with the possibility of atypical observations. In log-symmetric regression models, the distribution of the multiplicative random errors belongs to the log-symmetric class, which encompasses the log-normal, log-Student-t, log-power-exponential, log-slash and log-hyperbolic distributions, among others. The first simulation study examines the performance of the maximum-likelihood estimators of the model parameters, where various scenarios are considered. The objective of the second simulation study is to investigate the accuracy of popular information criteria such as AIC, BIC and HQIC, and their respective corrected versions. As an illustration, a movie data set obtained and assembled for this dissertation is analysed to compare log-symmetric models with the normal linear model and to select the best model by using the mentioned information criteria.
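For the log-normal member of the log-symmetric class, the maximum-likelihood fit reduces to least squares on the log scale, so the design of a parameter-recovery study like the first one can be sketched as follows. The data-generating parameters, sample size and replication count are invented, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
beta_true = np.array([1.0, 0.5])
n_obs, n_rep = 200, 500
x = np.linspace(0.0, 2.0, n_obs)
X = np.column_stack([np.ones(n_obs), x])
estimates = np.empty((n_rep, 2))
for rep in range(n_rep):
    # Multiplicative log-normal errors: log(y) follows a normal linear model,
    # so the maximum-likelihood estimator of beta is least squares on log(y).
    eps = np.exp(rng.normal(0.0, 0.3, n_obs))
    y = np.exp(X @ beta_true) * eps
    estimates[rep] = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
bias = estimates.mean(axis=0) - beta_true
```

Averaging the estimates over replications and subtracting the truth gives the Monte Carlo bias, which should be near zero in this well-specified case.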
6

Johansson, Sam. "Efficient Monte Carlo Simulation for Counterparty Credit Risk Modeling." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252566.

Full text
Abstract:
In this paper, Monte Carlo simulation for CCR (Counterparty Credit Risk) modeling is investigated. A jump-diffusion model, Bates' model, is used to describe the price process of an asset, and the counterparty default probability is described by a stochastic intensity model with constant intensity. In combination with Monte Carlo simulation, the variance reduction technique importance sampling is used in an attempt to make the simulations more efficient. Importance sampling is used for simulation of both the asset price and, for CVA (Credit Valuation Adjustment) estimation, the default time. CVA is simulated for both European and Bermudan options. It is shown that a significant variance reduction can be achieved by utilizing importance sampling for asset price simulations. It is also shown that a significant variance reduction for CVA simulation can be achieved for counterparties with small default probabilities by employing importance sampling for the default times. This holds for both European and Bermudan options. Furthermore, the regression based method least squares Monte Carlo is used to estimate the price of a Bermudan option, resulting in CVA estimates that lie within an interval of feasible values. Finally, some topics of further research are suggested.
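The default-time importance sampling reported above can be illustrated on the simplest case, a constant-intensity (exponential) default time: sample from a proposal with a much larger intensity and correct each sample with the likelihood ratio. The intensities, horizon and sample size below are illustrative, not the thesis's settings.

```python
import numpy as np

def default_prob_is(lam, lam_prop, t, n, seed=0):
    """Estimate P(tau <= t) for tau ~ Exp(lam), sampling tau ~ Exp(lam_prop)."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(1.0 / lam_prop, n)
    # Likelihood ratio of the target Exp(lam) density to the proposal density.
    w = (lam / lam_prop) * np.exp((lam_prop - lam) * tau)
    vals = w * (tau <= t)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

lam, t = 1e-4, 1.0                          # small hazard: default is a rare event
exact = 1.0 - np.exp(-lam * t)
est_is, se_is = default_prob_is(lam, 1.0, t, 50_000)   # tilted proposal
est_pl, se_pl = default_prob_is(lam, lam, t, 50_000)   # plain Monte Carlo
```

Under the tilted proposal most samples default before `t` and carry small, stable weights, so the standard error drops by orders of magnitude relative to plain sampling of the rare event.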
7

Flores, Garth. "A stochastic model for sewer base flows using Monte Carlo simulation." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96692.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: This thesis deals with understanding and quantifying the components that make up sewage base flows (SBF). SBF is a steady flow that is ubiquitous in sewers, and is clearly seen when measuring the flow rate in the sewer between 03:00 and 04:00. The components of SBF are: ● return flow from residential night use, ● return flow from leaking plumbing, ● groundwater infiltration, and ● stormwater inflow. By quantifying each component of SBF, this research answers the burning question of how much of the SBF is due to plumbing leaks on residential properties. Previous work on SBF focused on groundwater ingress and stormwater inflows, and thus not much had been said about plumbing leaks. Furthermore, previous work treated SBF as an isolated sewer-related topic, whereas this research integrated SBF as both a sewer-related topic and a water conservation and demand management (WCDM) topic. Due to the high variability in each of the SBF components, a method of quantifying each component was developed using residential end-use modelling and Monte Carlo simulations. The author constructed the Leakage, Infiltration and Inflow Technique Model (LIFT Model). This stochastic model was built in MS Excel using the @Risk software add-on. The LIFT Model uses probability distributions to model the inflow variability. The results of the stochastic model were analysed and the findings discussed. This research can be used by water utilities as a tool to better understand the SBF in networks. Armed with this knowledge, water utilities could make informed decisions about how best to reduce the high SBF encountered in networks.
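The overall structure of the LIFT Model, sampling each SBF component from a probability distribution and summing, can be sketched in a few lines; NumPy stands in for @Risk here, and the component distributions, parameters and units are purely illustrative, not the thesis's calibrated inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# Illustrative component distributions (flow units arbitrary); the actual
# LIFT Model calibrates distributions per network inside MS Excel / @Risk.
night_use = rng.lognormal(mean=0.0, sigma=0.5, size=n)       # residential night use
plumbing_leaks = rng.gamma(shape=2.0, scale=0.8, size=n)     # leaking plumbing
gw_infiltration = rng.triangular(0.5, 1.5, 3.0, size=n)      # groundwater ingress
sw_inflow = rng.exponential(0.3, size=n)                     # stormwater inflow
sbf = night_use + plumbing_leaks + gw_infiltration + sw_inflow
p5, p50, p95 = np.percentile(sbf, [5, 50, 95])
leak_share = plumbing_leaks.mean() / sbf.mean()   # share attributable to leaks
```

The percentile band summarises SBF variability, and the mean share of the leak component is one way to frame the "how much is due to plumbing leaks" question.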
8

Steinke, Tanja. "Ein Monte-Carlo-Modell zur Simulation plasmagespritzter Wärmedämmschichten /." Tönning ; Lübeck Marburg : Der Andere Verl, 2008. http://d-nb.info/989939944/04.

Full text
9

Rodgers, Anthony C., and Michael P. Bailey. "ML-Recon simulation model : a Monte Carlo planning aid for Magic Lantern." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA304223.

Full text
10

Basik, Beata-Marie. "Direct simulation Monte Carlo model of a Couette flow of granular materials." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60433.

Full text
Abstract:
Since life-threatening natural phenomena, such as snow avalanches and lava flows, and many industrial and agricultural material handling processes may be classified as granular flows, establishing constitutive relationships which model granular flow behaviour is of prime importance. While laboratory experiments attempting to support granular flow theory have been plagued by poor instrumentation, numerical simulations are becoming increasingly helpful in understanding the nature of these flows. The present investigation describes such a simulation developed within the framework of the Direct Simulation Monte Carlo model for rarefied gases presented in Bird (1976) and granular flow kinetic theory according to Lun, et al. (1984). More specifically, the model generates a Couette flow of smooth, inelastic, homogeneous, spherical granular particles. Two different boundary condition models are used to model the flow field's upper and lower boundaries: the Periodic Boundary Condition (PBC) model and the Finite Shear Layer (FSL) model. An essentially uniform shear flow with virtually no slip at the boundaries results from both boundary conditions. Stress and granular temperature results obtained with the PBC and FSL models for the lower range of solids fractions (ν < 0.3) compare very well with the Lun, et al. (1984) theory. At higher solids fractions, while the total stresses generated with both boundary models are in reasonable agreement with the latter theory and results from other numerical work, higher than expected streaming stresses appear to be compensating for lower than expected collisional stresses; as a result, granular temperature in this range of solids fractions proves to be higher than predicted.
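The collision rule for smooth, inelastic spheres of the kind described above is standard in granular kinetic theory: the normal component of the relative velocity is reversed and scaled by a coefficient of restitution e. A minimal equal-mass sketch (the function name and numbers are mine, not from the thesis):

```python
import numpy as np

def inelastic_collision(v1, v2, k, e):
    """Post-collision velocities for smooth inelastic spheres of equal mass.
    k is the unit vector along the line of centres at contact; e is the
    coefficient of restitution (e = 1 recovers an elastic collision)."""
    g_n = np.dot(v1 - v2, k)            # normal component of relative velocity
    dv = 0.5 * (1.0 + e) * g_n * k      # impulse per unit mass exchanged along k
    return v1 - dv, v2 + dv

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([-1.0, 0.0, 0.0])
k = np.array([1.0, 0.0, 0.0])
w1, w2 = inelastic_collision(v1, v2, k, e=0.9)   # head-on; e < 1 dissipates energy
```

Momentum is conserved exactly while kinetic energy decreases for e < 1, which is the dissipation mechanism that makes granular temperature depart from elastic-gas behaviour.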
11

Rodgers, Anthony C., and Michael P. Bailey. "ML-Recon simulation model: a Monte Carlo planning aid for Magic Lantern." Thesis, Monterey, California. Naval Postgraduate School, 1995. http://hdl.handle.net/10945/35188.

Full text
Abstract:
The U.S. Navy currently has no means to conduct sea mine reconnaissance with assets that are organic to Aircraft Carrier Battle Groups or Amphibious Ready Groups. Magic Lanten is an Airborne Laser Mine Detection System (ALMDS) under development, that is designed to search for floating and shallow moored mines using a helicopter- mounted laser-optic sensor. It is the only ALMDS operationally tested by the Navy to date. This thesis develops a Monte Carlo simulation model called ML-Recon, which is intended for use as a tool to plan mine reconnaissance searches using the Magic Lantern system. By entering fundamental initial planning information, the user can determine the number of uniformly-spaced tracks to fly with a Magic Lantern-equipped helicopter to achieve a certain level of assurance that the area contains no floating or shallow moored mines. By employing Monte Carlo methods, ML-Recon models the three primary stochastic processes that take place during a typical search: the location of the mines, the cross-track error of the helicopter, and the detection/non-detection process of the sensor. By running ML-Recon, the user is given performance statistics for many replications of the search plan that he chooses. This approach is unique in that it provides the user with information indicating how much worse than the mean performance his plan may perform. ML- Recon also gives the user an Opportunity to view an animation of his lan, which he can use to look for tendencies in the lan to contain holes, or holidays. (AN)
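The three stochastic processes named in the abstract (mine locations, cross-track error, sensor detection) can be sketched in a toy replication loop. Everything numeric below, the area width, swath, error and detection parameters, and the clearance criterion, is invented for illustration and is not ML-Recon's actual input set.

```python
import numpy as np

def search_replication(rng, width, n_tracks, swath, sigma_xte, p_detect, n_mines):
    """One Monte Carlo replication of a uniformly spaced track search."""
    tracks = (np.arange(n_tracks) + 0.5) * width / n_tracks
    flown = tracks + rng.normal(0.0, sigma_xte, n_tracks)  # cross-track error
    mines = rng.uniform(0.0, width, n_mines)               # mine locations
    in_swath = np.abs(mines[:, None] - flown[None, :]) <= swath / 2
    # Each pass over a mine detects it independently with probability p_detect.
    detected = (in_swath & (rng.random(in_swath.shape) < p_detect)).any(axis=1)
    return detected.sum()

rng = np.random.default_rng(3)
n_rep, n_mines = 2_000, 5
hits = np.array([search_replication(rng, width=10_000, n_tracks=12, swath=900,
                                    sigma_xte=100, p_detect=0.9, n_mines=n_mines)
                 for _ in range(n_rep)])
clearance = (hits == n_mines).mean()   # replications in which every mine was found
worst = hits.min()                     # how much worse than the mean a plan can do
```

Reporting the full distribution of `hits` across replications, not just its mean, is exactly the "how much worse than the mean" information the abstract emphasises.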
12

Weir, Brian S. "Monte Carlo Simulations of Shape Dependence in Magnetic Antidot Arrays." Miami University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=miami1155065484.

Full text
13

Furrer, Marc. "Numerical Accuracy of Least Squares Monte Carlo." St. Gallen, 2008. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/01650217002/$FILE/01650217002.pdf.

Full text
14

Marcks, von Würtemberg Klas. "Monte Carlo simulation of low energy electrons and positrons in liquid water." Thesis, Stockholms universitet, Fysikum, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-67389.

Full text
Abstract:
An advanced simulation code, LEEPS (Low Energy Electron Positron Simulation), has been adapted to the simulation of electrons and positrons in liquid water for energies down to 50 eV. Different scattering parameters and results from simulations are compared with existing data in the literature. Several programs, including a subroutine package for the simulation of secondary electrons created in binary-like collisions, have been developed for the purpose of charting different characteristics of the energy deposition. A toy model for DNA damage is presented as an example of how LEEPS could be used for future investigation of cellular damage due to radiation.
15

Gigg, Martyn Anthony. "Monte Carlo simulations of physics beyond the standard model." Thesis, Durham University, 2008. http://etheses.dur.ac.uk/2301/.

Full text
Abstract:
The Large Hadron Collider, currently under construction at CERN, will give direct access to physics at the TeV scale for the first time. The lack of certainty over the type of physics that will be revealed has produced a wealth of ideas for so-called Beyond the Standard Model physics, all with the aim of solving the problems possessed by the Standard Model. The oldest and most well studied is supersymmetry, but new ideas based on extra dimensions and collective symmetry breaking have been proposed more recently. In order to study these models most effectively, we argue that they must be implemented within the framework of a Monte Carlo event generator so that their signals can be studied in a real-world setting. In this thesis we develop a general approach for the simulation of new physics models with the aim of reducing the effort in implementing a new model into the Herwig++ event generator. The approach is based upon the external spin structures of production and decay matrix elements, so that the information required to input a new model is simply a set of Feynman rules and a mass spectrum. The first method uses an on-shell approximation throughout, but this is later refined to include the effects of finite widths, as these are found to be important when processes occur close to threshold. In all of the discussions regarding our new approach we make specific reference to two models of new physics, the Minimal Supersymmetric Standard Model and the Minimal Universal Extra Dimensions model. Our general matrix elements and approach to finite widths are all demonstrated and tested using examples from these two models. The concluding discussion makes use of a third model, the Littlest Higgs model with T-parity, such that signals from the three models are compared and contrasted using the general framework developed here.
16

Yang, Yang. "Risk-based assessment for distribution network via an efficient Monte Carlo simulation model." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/40497.

Full text
Abstract:
Given the fact that Smart Grid technologies are implemented mainly in distribution networks, it is essential to build a risk-based assessment tool which can model the operational characteristics of distribution networks. This thesis presents a distribution network model which captures the features of distribution network restoration, based on approximations of real-time switching actions. It enables the evaluation of complex distribution network reliability with active network control. The development of an explicit switching model which better reflects actual network switching actions allows for deliberate accuracy and efficiency trade-offs. Combined with an importance sampling approach, a significant improvement in computational efficiency has been achieved with both simplified and detailed network switching models. The assessment model also provides flexibility for users to analyse system reliability with various levels of complexity and efficiency. With the proposed assessment tool, different network improvement technologies were investigated for their value in substituting for traditional network construction and their impacts on network reliability performance. It has been found that a combination of different technologies, according to specific network requirements, provides the best solution to network investments. Models of customer interruption cost were analysed and compared. The study shows that using different cost models will result in large differences in results and lead to different investment decisions. A single value of lost load is not appropriate to achieve an accurate interruption cost quantification. A chronological simulation model was also built for evaluating the implications of High Impact Low Probability events on distribution network planning. This model provides insight into the cost of such events and helps network planners justify the cost-effectiveness of post-fault corrections and preventive solutions. 
Finally, the overall security of supply for the GB system was assessed to investigate the impacts of a recent demand reduction at grid supply points (for transmission networks) resulting from the fast growth of generation capacity in distribution networks. It has been found that the current security standard may not be able to guarantee an acceptable reliability performance with the increasing penetration of distributed generation, if further balancing service investment is not available.
17

Alhadabi, Amal Mohammed. "Automated Growth Mixture Model Fitting and Classes Heterogeneity Deduction: Monte Carlo Simulation Study." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1615986232296185.

Full text
18

Jiang, Hui. "Missing Data Treatments in Multilevel Latent Growth Model: A Monte Carlo Simulation Study." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398830597.

Full text
19

Pitt, Michael K. "Bayesian inference for non-Gaussian state space model using simulation." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389211.

Full text
20

He, Xingxi. "Monte Carlo simulation of ion transport of high strain ionomeric polymer transducers." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/26068.

Full text
Abstract:
Ionomeric polymer transducers exhibit electromechanical coupling capabilities. The transport of charge due to electric stimulus is the primary mechanism of actuation for a class of polymeric active materials known as ionomeric polymer transducers (IPTs). The research presented in this dissertation focuses on modeling the cation transport and cation steady-state distribution due to the actuation of an IPT. Ion transport in the IPT depends on the morphology of the hydrated Nafion membrane and the morphology of the metal electrodes. Recent experimental findings show that adding conducting powders at the polymer-conductor interface increases the displacement output. However, it is difficult for a traditional continuum model based on transport theory to include morphology in the model. In this dissertation, a two-dimensional Monte Carlo simulation of ion hopping has been developed to describe ion transport in materials that have fixed and mobile charge similar to the structure of the ionic polymer transducer. In the simulation, cations can hop around on a square lattice. A step voltage is applied between the electrodes of the IPT, causing thermally-activated hopping between multiwell energy structures. By sampling the ion transition time interval as a random variable, the system evolution is obtained. Conducting powder spheres have been incorporated into the Monte Carlo simulation. Simulation results demonstrate that conducting powders increase the ion conductivity. Successful implementation of parallel computation makes it possible for the simulation to include more powder spheres to find the saturation percentage of conducting powders for the ion conductivity. To compare simulation results with experimental data, a multiscale model has been developed to increase the scale of the Monte Carlo simulation. Both transient responses and steady-state responses show good agreement with experimental measurements.
Ph. D.
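Hopping with thermally activated rates and randomly sampled transition times, as described above, is the kinetic Monte Carlo method. A one-dimensional sketch with an Arrhenius rate law and a field bias follows; the barrier, bias, attempt frequency and temperature are illustrative stand-ins, not the dissertation's 2D lattice model.

```python
import numpy as np

def kmc_hop(n_steps, e_barrier, bias, kT=0.025, nu0=1e13, seed=0):
    """1D kinetic Monte Carlo of field-biased, thermally activated ion hopping.
    Arrhenius rates; the applied field lowers the forward barrier and raises
    the backward barrier by bias/2 each (all values in eV)."""
    rng = np.random.default_rng(seed)
    r_fwd = nu0 * np.exp(-(e_barrier - bias / 2) / kT)   # hop with the field
    r_bwd = nu0 * np.exp(-(e_barrier + bias / 2) / kT)   # hop against the field
    total = r_fwd + r_bwd
    x, t = 0, 0.0
    for _ in range(n_steps):
        t += rng.exponential(1.0 / total)   # transition time sampled as a random variable
        x += 1 if rng.random() < r_fwd / total else -1
    return x, t

x, t = kmc_hop(20_000, e_barrier=0.4, bias=0.05)
```

With a positive bias the cation drifts steadily toward one electrode, which is the elementary mechanism behind the charge accumulation that actuates the transducer.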
21

Jegourel, Cyrille. "Rare event simulation for statistical model checking." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.

Full text
Abstract:
In this thesis, we consider two problems that statistical model checking must cope with. The first concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second concerns rare properties, which are difficult to observe and therefore to quantify. On the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows the recourse to stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We present an original importance sampling algorithm for models described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue a common goal: to reduce the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
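The importance-splitting idea mentioned in the abstract can be sketched on a toy rare event: the probability that a Gaussian random walk crosses a high level is estimated as a product of conditional crossing probabilities between intermediate levels. The walk model, levels, and sample sizes below are illustrative assumptions, not those of the thesis.

```python
import random

def splitting_estimate(levels, n_steps=30, n_per_level=2000, seed=2):
    """Fixed-effort multilevel splitting estimate of the rare event
    P(max_k S_k >= levels[-1]) for a Gaussian random walk of n_steps steps.

    At each level, trajectories that reached it are resampled (with
    replacement) and continued from their crossing state; the product of
    the conditional crossing probabilities estimates the rare-event
    probability with far fewer runs than crude Monte Carlo.
    """
    rng = random.Random(seed)
    states = [(0, 0.0)]          # (step index, walk value) at each crossing
    estimate = 1.0
    for level in levels:
        hits = []
        for _ in range(n_per_level):
            k, s = rng.choice(states)          # restart from an entry state
            while k < n_steps and s < level:
                s += rng.gauss(0.0, 1.0)
                k += 1
            if s >= level:
                hits.append((k, s))
        p = len(hits) / n_per_level
        if p == 0.0:                           # no trajectory made it further
            return 0.0
        estimate *= p
        states = hits
    return estimate

p_rare = splitting_estimate(levels=[4.0, 8.0, 12.0])
```

Each intermediate level keeps the conditional probabilities moderate, which is exactly the variance-reduction mechanism the abstract contrasts with importance sampling.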
APA, Harvard, Vancouver, ISO, and other styles
22

Jung, Dosub. "The model risk of option pricing models when volatility is stochastic : a Monte Carlo simulation approach /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Dubreus, Terrance Maurice. "Monte Carlo simulations for small-world stochastic processes." Diss., Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/content/templates/?a=72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Smit, Jacobus Petrus Johannes. "The quantification of prediction uncertainty associated with water quality models using Monte Carlo Simulation." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85814.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Water Quality Models are mathematical representations of ecological systems and they play a major role in the planning and management of water resources and aquatic environments. Important decisions concerning capital investment and environmental consequences often rely on the results of Water Quality Models and it is therefore very important that decision makers are aware of and understand the uncertainty associated with these models. The focus of this study was on the use of Monte Carlo Simulation for the quantification of prediction uncertainty associated with Water Quality Models. Two types of uncertainty exist: Epistemic Uncertainty and Aleatory Uncertainty. Epistemic uncertainty is a result of a lack of knowledge and aleatory uncertainty is due to the natural variability of an environmental system. It is very important to distinguish between these two types of uncertainty because the analysis of a model’s uncertainty depends on it. Three different configurations of Monte Carlo Simulation in the analysis of uncertainty were discussed and illustrated: Single Phase Monte Carlo Simulation (SPMCS), Two Phase Monte Carlo Simulation (TPMCS) and Parameter Monte Carlo Simulation (PMCS). Each configuration of Monte Carlo Simulation has its own objective in the analysis of a model’s uncertainty and depends on the distinction between the types of uncertainty. As an experiment, a hypothetical river was modelled using the Streeter-Phelps model and synthetic data was generated for the system. The generation of the synthetic data allowed for the experiment to be performed under controlled conditions. The modelling protocol followed in the experiment included two uncertainty analyses. All three configurations of Monte Carlo Simulation were used in these uncertainty analyses to quantify the model’s prediction uncertainty in fulfilment of their different objectives.
The first uncertainty analysis, known as the preliminary uncertainty analysis, was performed to take stock of the model’s situation concerning uncertainty before any effort was made to reduce the model’s prediction uncertainty. The idea behind the preliminary uncertainty analysis was that it would help in further modelling decisions with regards to calibration and parameter estimation experiments. Parameter uncertainty was reduced by the calibration of the model. Once parameter uncertainty was reduced, the second uncertainty analysis, known as the confirmatory uncertainty analysis, was performed to confirm that the uncertainty associated with the model was indeed reduced. The two uncertainty analyses were conducted in exactly the same way. To conclude the experiment, it was illustrated how the quantification of the model’s prediction uncertainty aided in the calculation of a Total Maximum Daily Load (TMDL). The Margin of Safety (MOS) included in the TMDL could be determined based on scientific information provided by the uncertainty analysis. The total MOS assigned to the TMDL was -35% of the mean load allocation for the point source. For the sake of simplicity, load allocations from non-point sources were disregarded.
AFRIKAANSE OPSOMMING: Watergehalte modelle is wiskundige voorstellings van ekologiese sisteme en speel ’n belangrike rol in die beplanning en bestuur van waterhulpbronne en wateromgewings. Belangrike besluite rakende finansiële beleggings en besluite rakende die omgewing maak dikwels staat op die resultate van watergehalte modelle. Dit is dus baie belangrik dat besluitnemers bewus is van die onsekerhede verbonde met die modelle en dit verstaan. Die fokus van hierdie studie het berus op die gebruik van die Monte Carlo Simulasie om die voorspellingsonsekerhede van watergehalte modelle te kwantifiseer. Twee tipes onsekerhede bestaan: Epistemologiese onsekerheid en toeval afhangende onsekerheid. Epistemologiese onsekerheid is die oorsaak van ‘n gebrek aan kennis terwyl toeval afhangende onsekerheid die natuurlike wisselvalligheid in ’n natuurlike omgewing behels. Dit is belangrik om te onderskei tussen hierdie twee tipes onsekerhede aangesien die analise van ’n model se onsekerheid hiervan afhang. Drie verskillende rangskikkings van Monte Carlo Simulasies in die analise van die onsekerhede word bespreek en geïllustreer: Enkel Fase Monte Carlo Simulasie (SPMCS), Dubbel Fase Monte Carlo Simulasie (TPMCS) en Parameter Monte Carlo Simulasie (PMCS). Elke rangskikking van Monte Carlo Simulasie het sy eie doelwit in die analise van ’n model se onsekerheid en hang af van die onderskeiding tussen die twee tipes onsekerhede. As eksperiment is ’n hipotetiese rivier gemodelleer deur gebruik te maak van die Streeter-Phelps teorie en sintetiese data is vir die rivier gegenereer. Die sintetiese data het gesorg dat die eksperiment onder beheerde toestande kon plaasvind. Die protokol in die eksperiment het twee onsekerheids analises ingesluit. Al drie die rangskikkings van die Monte Carlo Simulasie is gebruik in hierdie analises om die voorspellingsonsekerheid van die model te kwantifiseer en hul doelwitte te bereik. 
Die eerste analise, die voorlopige onsekerheidsanalise, is uitgevoer om die model se situasie met betrekking tot die onsekerheid op te som voor enige stappe geneem is om die model se voorspellings onsekerheid te probeer verminder. Die idee agter die voorlopige onsekerheidsanalise was dat dit sou help in verdere modelleringsbesluite ten opsigte van kalibrasie en die skatting van parameters. Onsekerhede binne die parameters is verminder deur die model te kalibreer, waarna die tweede onsekerheidsanalise uitgevoer is. Hierdie analise word die bevestigingsonsekerheidsanalise genoem en word uitgevoer met die doel om vas te stel of die onsekerheid geassosieer met die model wel verminder is. Die twee tipes analises word op presies dieselfde manier toegepas. In die afloop tot die eksperiment, is gewys hoe die resultate van ’n onsekerheidsanalise gebruik is in die berekening van ’n totale maksimum daaglikse belading (TMDL) vir die rivier. Die veiligheidgrens (MOS) ingesluit in die TMDL kon vasgestel word deur die gebruik van wetenskaplike kennis wat voorsien is deur die onsekerheidsanalise. Die MOS het bestaan uit -35% van die gemiddelde toegekende lading vir puntbelasting van besoedeling in die rivier. Om die eksperiment eenvoudig te hou is verspreide laste van besoedeling nie gemodelleer nie.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Li-Fang Ph D. Massachusetts Institute of Technology. "Monte Carlo simulation model for electromagnetic scattering from vegetation and inversion of vegetation parameters." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38923.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 171-185).
In this thesis research, a coherent scattering model for microwave remote sensing of vegetation canopies is developed on the basis of Monte Carlo simulations. An accurate model of vegetation structure is essential for the calculation of scattering from vegetation, especially vegetation with closely spaced elements in clusters. The Monte Carlo approach has an advantage over the conventional wave theory in dealing with complex vegetation structures because it is not necessary to find the probability density functions and the pair-distribution functions required in the analytic formulation and usually difficult to obtain for natural vegetation. To achieve a realistic description of the vegetation structure under consideration, two methods may be employed. One method requires the specification of the number of each type of component and the relative orientations of the components. In a structural model which incorporates this method, the detailed features can be preserved to the desired level of accuracy. This structural model is applied to two types of vegetation: rice crops and sunflowers.
The developed structural model for rice crops takes into account the coherent wave interactions made prominent by the clustered and closely spaced structure of rice crops, and is validated with the ERS-1 and RADARSAT data. It is utilized to interpret the experimental observations from the JERS-1 data, such as the effects of the structure of rice fields, and to predict the temporal response of rice growth. The structural model developed for sunflowers is validated using the airborne Remote Sensing Campaign Mac-Europe 91 multi-frequency and multi-polarization data acquired for sunflower fields at the Montespertoli test site in Italy. Another method to characterize vegetation structure uses growth rules. This is especially useful in modeling trees, which are structurally more complex. The Lindenmayer systems (L-systems) are utilized to fully capture the architecture of trees and describe their growth. Monte Carlo simulation results of the scattering returns from trees with different structures and at different growth stages are calculated and analyzed. The concept of the "structure factor", which extracts the structural information of a tree and provides a measure of the spatial distribution of branches, is defined and computed for trees with different architectures.
After studying the forward scattering problem, in which the scattering coefficients are determined on the basis of known physical characteristics of the scattering objects or medium, the inverse scattering problem is considered, in which the characteristics of the scattering objects or medium are to be calculated from the scattering data. In this thesis research, neural networks are applied to the inversion of geophysical parameters including soil moisture and surface parameters, sunflower biomass, as well as forest age (or equivalently, forest biomass). They are found to be especially useful for multi-dimensional inputs such as multi-frequency polarimetric scattering data. For the inversion of soil moisture and surface parameters, neural networks are trained with theoretical surface scattering models. To retrieve the sunflower biomass, neural networks are trained with the scattering returns obtained from the developed vegetation scattering model based on the Monte Carlo approach. To assess the performance of the use of experimental data to train the neural networks, the polarimetric radar data acquired by the Spaceborne Imaging Radar-C (SIR-C) over the Landes Forest in France are utilized as the training data to retrieve the forest age. Different combinations of backscattering data are used as input to the neural net in order to determine the combination which yields the best inversion result.
by Li-Fang Wang.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
26

Greberg, Felix. "Debt Portfolio Optimization at the Swedish National Debt Office: : A Monte Carlo Simulation Model." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275679.

Full text
Abstract:
It can be difficult for a sovereign debt manager to see the implications on expected costs and risk of a specific debt management strategy; a simulation model can therefore be a valuable tool. This study investigates how future economic data such as yield curves, foreign exchange rates and CPI can be simulated and how a portfolio optimization model can be used for a sovereign debt office that mainly uses financial derivatives to alter its strategy. The programming language R is used to develop bespoke software for the Swedish National Debt Office; however, the method that is used can be useful for any debt manager. The model performs well when calculating risk implications of different strategies, but debt managers who use this software to find optimal strategies must understand the model's limitations in calculating expected costs. The part of the code that simulates economic data is developed as a separate module and can thus be used for other studies; key parts of the code are available in the appendix of this paper. Foreign currency exposure is the factor that had the largest effect on both expected cost and risk; moreover, the model does not find any cost advantage of issuing inflation-protected debt. The opinions expressed in this thesis are the sole responsibility of the author and should not be interpreted as reflecting the views of the Swedish National Debt Office.
Det kan vara svårt för en statsskuldsförvaltare att se påverkan på förväntade kostnader och risk när en skuldförvaltningsstrategi väljs, en simuleringsmodell kan därför vara ett värdefullt verktyg. Den här studien undersöker hur framtida ekonomiska data som räntekurvor, växelkurser ock KPI kan simuleras och hur en portföljoptimeringsmodell kan användas av ett skuldkontor som främst använder finansiella derivat för att ändra sin strategi. Programmeringsspråket R används för att utveckla en specifik mjukvara åt Riksgälden, men metoden som används kan vara användbar för andra skuldförvaltare. Modellen fungerar väl när den beräknar risk i olika portföljer men skuldförvaltare som använder modellen för att hitta optimala strategier måste förstå modellens begränsningar i att beräkna förväntade kostnader. Delen av koden som simulerar ekonomiska data utvecklas som en separat modul och kan därför användas för andra studier, de viktigaste delarna av koden finns som en bilaga till den här rapporten. Valutaexponering är den faktor som hade störst påverkan på både förväntade kostnader och risk och modellen hittar ingen kostnadsfördel med att ge ut inflationsskyddade lån. Åsikterna som uttrycks i den här uppsatsen är författarens egna ansvar och ska inte tolkas som att de reflekterar Riksgäldens syn.
APA, Harvard, Vancouver, ISO, and other styles
27

Sallans, Brian, Alexander Pfister, Alexandros Karatzoglou, and Georg Dorffner. "Simulation and validation of an integrated markets model." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2003. http://epub.wu.ac.at/1454/1/document.pdf.

Full text
Abstract:
The behavior of boundedly rational agents in two interacting markets is investigated. A discrete-time model of coupled financial and consumer markets is described. The integrated model consists of heterogenous consumers, financial traders, and production firms. The production firms operate in the consumer market, and offer their shares to be traded on the financial market. The model is validated by comparing its output to known empirical properties of real markets. In order to better explore the influence of model parameters on behavior, a novel Markov chain Monte Carlo method is introduced. This method allows for the efficient exploration of large parameter spaces, in order to find which parameter regimes lead to reproduction of empirical phenomena. It is shown that the integrated markets model can reproduce a number of empirical "stylized facts", including learning-by-doing effects, fundamental price effects, low autocorrelations, volatility clustering, high kurtosis, and volatility-volume correlations. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Sungjoo. "Pricing Path-Dependent Derivative Securities Using Monte Carlo Simulation and Intra-Market Statistical Trading Model." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4914.

Full text
Abstract:
This thesis is composed of two parts. The first part deals with a technique for pricing American-style contingent claims. The second part details a statistical arbitrage model using statistical process control approaches. We propose a novel simulation approach for pricing American-style contingent claims. We develop an adaptive policy search algorithm for obtaining the optimal policy in exercising an American-style option. The option price is first obtained by estimating the optimal option exercising policy and then evaluating the option with the estimated policy through simulation. Both high-biased and low-biased estimators of the option price are obtained. We show that the proposed algorithm leads to convergence to the true optimal policy with probability one. This policy search algorithm requires little knowledge about the structure of the optimal policy and can be naturally implemented using parallel computing methods. As illustrative examples, computational results on pricing regular American options and American-Asian options are reported, and they indicate that our algorithm is faster than certain alternative American option pricing algorithms reported in the literature. Secondly, we investigate arbitrage opportunities arising from continuous monitoring of the price difference of highly correlated assets. By differentiating between two assets, we can separate common macroeconomic factors that influence the asset price movements from an idiosyncratic condition that can be monitored very closely by itself. Since price movements are in line with macroeconomic conditions such as interest rates and economic cycles, deviations from normal behavior in the price changes are easy to detect. We apply a statistical process control approach for monitoring time series with serially correlated data. We use various variance estimators to set up and establish trading strategy thresholds.
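The policy-search idea in the first part (estimate an exercise policy, then price by simulating with the fixed policy, which yields a low-biased estimator) can be illustrated with a deliberately simple parametric policy: a constant exercise boundary for a Bermudan put under geometric Brownian motion. This is not the thesis's adaptive algorithm; the model and all parameters below are illustrative assumptions.

```python
import math
import random

def simulate_paths(n_paths, n_steps, s0, r, sigma, T, rng):
    """Risk-neutral geometric Brownian motion price paths."""
    dt = T / n_steps
    paths = []
    for _ in range(n_paths):
        s, path = s0, []
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            path.append(s)
        paths.append(path)
    return paths

def value_with_boundary(paths, b, strike, r, T):
    """Discounted payoff of the policy 'exercise the put when S < b'
    (with forced exercise, possibly worthless, at maturity)."""
    n_steps = len(paths[0])
    dt = T / n_steps
    total = 0.0
    for path in paths:
        for k, s in enumerate(path):
            if s < b or k == n_steps - 1:
                total += math.exp(-r * dt * (k + 1)) * max(strike - s, 0.0)
                break
    return total / len(paths)

def policy_search_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=12, seed=4):
    rng = random.Random(seed)
    # step 1: search the exercise boundary on a training set of paths
    train = simulate_paths(2000, n_steps, s0, r, sigma, T, rng)
    best_b = max(range(70, 101),
                 key=lambda b: value_with_boundary(train, b, strike, r, T))
    # step 2: evaluate the fixed policy on fresh paths (low-biased estimate)
    test = simulate_paths(2000, n_steps, s0, r, sigma, T, rng)
    return value_with_boundary(test, float(best_b), strike, r, T)

price = policy_search_put()
```

Evaluating the estimated policy on paths not used for the search avoids the optimistic bias of in-sample valuation, the same separation the abstract's high- and low-biased estimators formalize.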
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Geng. "Monte Carlo simulation of MeV ion implantation with computationally efficient models." Access restricted to users with UT Austin EID, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3036609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Shourabi, Neda Bazyar. "A Model for Cyber Attack Risks in Telemetry Networks." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596409.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
This paper develops a method for analyzing, modeling and simulating cyber threats in a networked telemetry environment as part of a risk management model. The paper includes an approach for incorporating a Monte Carlo computer simulation into this modeling, with sample results.
APA, Harvard, Vancouver, ISO, and other styles
31

Cheung, Wing-Keung. "Monte Carlo simulation on 2D random point pattern : Potts model and its application to econophysics /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202005%20CHEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yeh, Chun hung. "Diffusion Microscopist Simulator - The Development and Application of a Monte Carlo Simulation System for Diffusion MRI." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00660279.

Full text
Abstract:
Diffusion magnetic resonance imaging (dMRI) has made a significant breakthrough in neurological disorders and brain research thanks to its exquisite sensitivity to tissue cytoarchitecture. However, as the water diffusion process in neuronal tissues is a complex biophysical phenomenon at the molecular scale, it is difficult to infer tissue microscopic characteristics on a voxel scale from dMRI data. The major methodological contribution of this thesis is the development of an integrated and generic Monte Carlo simulation framework, 'Diffusion Microscopist Simulator' (DMS), which has the capacity to create 3D biological tissue models of various shapes and properties, as well as to synthesize dMRI data for a large variety of MRI methods, pulse sequence designs and parameters. DMS aims at bridging the gap between the elementary diffusion processes occurring at a micrometric scale and the resulting diffusion signal measured at millimetric scale, providing better insights into the features observed in dMRI, as well as offering ground-truth information for optimization and validation of dMRI acquisition protocols for different applications. We have verified the performance and validity of DMS through various benchmark experiments, and applied it to address particular research topics in dMRI. Based on DMS, there are two major application contributions in this thesis. First, we use DMS to investigate the impact of finite diffusion gradient pulse duration (delta) on fibre orientation estimation in dMRI. We propose that the current practice of using long delta, which is enforced by the hardware limitations of clinical MRI scanners, is actually beneficial for mapping fibre orientations, even though it violates the underlying assumption made in q-space theory. Second, we employ DMS to investigate the feasibility of estimating axon radius using a clinical MRI system.
The results suggest that the algorithm for direct microstructure mapping is applicable to dMRI data acquired with standard MRI scanners.
APA, Harvard, Vancouver, ISO, and other styles
33

Zárubová, Radka. "Simulační model vývoje penzijního připojištění." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-73034.

Full text
Abstract:
First, this thesis introduces the system of pension insurance with state contribution, including the amendment proposed in 2009. Its aim is to forecast and to analyse the expected development of pension insurance with state contribution. The main part of the thesis is focused on the simulation model of this insurance product. Within this model, annual interest on contributions is randomly generated and the amount of money a client of a hypothetical pension fund would receive is calculated. To facilitate this simulation, I programmed and attached (as a part of the thesis) an application in VBA which runs the simulation for a preset number of replications. The thesis gives four examples of simulation experiments: a simulation of pension insurance and a simulation of pension saving, both with and without contributions from the client's employer. The comparison of the expected efficiency of both systems from the point of view of the government and a client is drawn at the end of the thesis.
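The simulation this abstract describes (randomly generated annual interest credited to a stream of contributions, replicated many times) can be sketched as follows. The contribution level, state-contribution rate, and return distribution are illustrative assumptions, not the actual rules of the Czech scheme or the thesis's parameters.

```python
import random

def simulate_pension(years=30, annual_contrib=24000.0, state_rate=0.15,
                     mean_return=0.03, sd_return=0.06, n_runs=5000, seed=5):
    """Monte Carlo sketch of a pension-saving account.

    Each replication credits the year's contribution plus a proportional
    state contribution, then applies a randomly drawn annual return.
    Returns the mean and median final balance over all replications.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(n_runs):
        balance = 0.0
        for _ in range(years):
            balance += annual_contrib * (1.0 + state_rate)
            balance *= 1.0 + rng.gauss(mean_return, sd_return)
        finals.append(balance)
    finals.sort()
    return sum(finals) / n_runs, finals[n_runs // 2]

mean_final, median_final = simulate_pension()
```

Comparing such distributions under different contribution rules is the kind of government-versus-client efficiency comparison the thesis draws; a spreadsheet/VBA implementation follows the same replication loop.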
APA, Harvard, Vancouver, ISO, and other styles
34

Canepa, Alessandra. "Bootstrap inference in cointegrated VAR models." Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ward, Patrick L. "An evaluation of the Production Recruiting Incentive Model vs quota-based recruiting using Monte Carlo simulation." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA343434.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, December 1997.
"December 1997." Thesis advisor(s): Katsuaki L. Terasawa, William R. Gates. Includes bibliographical references (p. 81-82). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
36

Zöllner, Dana [Verfasser]. "Monte Carlo Potts Model Simulation and Statistical Mean-Field Theory of Normal Grain Growth / Dana Zöllner." Aachen : Shaker, 2006. http://d-nb.info/1166514811/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chappell, Isaac Samuel 1972. "Calculation of interface tension and stiffness in a two dimensional Ising Model by Monte Carlo simulation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/49672.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Physics, 1998.
Includes bibliographical references (p. 55).
The two dimensional Ising Model is important because it describes various condensed matter systems. At low temperatures, spontaneous symmetry breaking occurs such that two coexisting phases are separated by interfaces. These interfaces can be described as vibrating strings and are characterized by their tension and stiffness. Then the partition function can be calculated as a function of the magnetization with the interface tension and stiffness as parameters. By simulating the two dimensional Ising Model on square lattices of various sizes, the partition function is determined in order to extract the interface tension. The configurations being studied have low probability of actual occurrence and would require a large number of Monte Carlo steps before a good sampling is obtained. By using improved estimators and a trial distribution, fewer steps are needed. Improved estimators decrease the number of steps needed to achieve a given level of accuracy. The trial distribution allows increased statistics once the general shape of the probability distribution is calculated from a Monte Carlo simulation. For small lattice sizes, it is easy to run Monte Carlo simulations to generate the trial distribution. At larger lattice sizes, it is necessary to build the trial distribution from a combination of a Monte Carlo simulation and an Ansatz from theory due to lower statistics. The extracted values of the interface tension agree with the analytical solution by Onsager.
by Isaac Samuel Chappell, II.
S.M.
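The Metropolis sampling underlying this work can be sketched for the two-dimensional Ising model. The lattice size, temperature, and sweep count below are illustrative, and the interface-tension machinery the abstract describes (improved estimators, trial distribution) is omitted; the sketch only shows the basic single-spin-flip sampler.

```python
import math
import random

def ising_metropolis(L=16, beta=0.6, n_sweeps=400, seed=6):
    """Metropolis simulation of the 2D Ising model (J=1, periodic boundaries).

    At beta=0.6, below the critical point beta_c ~ 0.4407, the system
    orders, so the absolute magnetization per spin approaches Onsager's
    spontaneous value (~0.97 at this temperature).
    """
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]        # ordered start
    for _ in range(n_sweeps):
        for _ in range(L * L):                 # one sweep = L*L attempted flips
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nn        # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        # (a production run would accumulate magnetization histograms here
        #  for the interface-tension analysis described in the abstract)
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

m = ising_metropolis()
```

The interface configurations the thesis targets are exactly the rare, intermediate-magnetization states this plain sampler almost never visits, which is why the abstract introduces improved estimators and a trial distribution.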
APA, Harvard, Vancouver, ISO, and other styles
38

CHIESA, DAVIDE. "Development and experimental validation of a Monte Carlo simulation model for the Triga Mark II reactor." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50064.

Full text
Abstract:
In recent years, many computer codes, based on Monte Carlo methods or deterministic calculations, have been developed to separately analyze different aspects regarding nuclear reactors. Nuclear reactors are very complex systems, which require an integrated analysis of all the variables which are intrinsically correlated: neutron fluxes, reaction rates, neutron moderation and absorption, thermal and power distributions, heat generation and transfer, criticality coefficients, fuel burnup, etc. For this reason, one of the main challenges in the analysis of nuclear reactors is the coupling of neutronics and thermal-hydraulics simulation codes, with the purpose of achieving a good modeling and comprehension of the mechanisms which rule the transient phases and the dynamic behavior of the reactor. This is very important to guarantee the control of the chain reaction, for a safe operation of the reactor. In developing simulation tools, benchmark analyses are needed to prove the reliability of the simulations. The experimental measurements conceived to be compared with the results produced by the simulations are particularly valuable and can provide useful information to improve the description of the physics phenomena in the simulation models. My PhD research activity was carried out in this framework, as part of the research project Analysis of Reactor COre (ARCO, promoted by INFN), whose task was the development of modern, flexible and integrated tools for the analysis of nuclear reactors, relying on the experimental data collected at the research reactor TRIGA Mark II, installed at the Applied Nuclear Energy Laboratory (LENA) at the University of Pavia. In this way, once the effectiveness and the reliability of these tools for modeling an experimental reactor have been demonstrated, they could be applied to develop new generation systems.
In this thesis, I present the complete neutronic characterization of the TRIGA Mark II reactor, which was analyzed in different operating conditions through experimental measurements and the development of a Monte Carlo simulation tool (based on the MCNP code) able to take into account the ever increasing complexity of the conditions to be simulated. First of all, after giving an overview of some theoretical concepts which are fundamental for the nuclear reactor analysis, a model that reconstructs the first working period of the TRIGA Mark II reactor, in which the “fresh” fuel was not heavily contaminated with fission reaction products, is described. In particular, all the geometries and the materials are described in the MCNP simulation model with good detail, in order to reconstruct the reactor criticality and all the effects on the neutron distributions. The very good results obtained from the simulations of the reactor at the low power condition, in which the fuel elements can be considered to be in thermal equilibrium with the water around them, are then used to implement a model for simulating the full power condition (250 kW), in which the effects arising from the temperature increase in the fuel-moderator must be taken into account. The MCNP simulation model was exploited to evaluate the reactor power distribution and a dedicated experimental campaign was performed to measure the water temperature within the reactor core. In this way, through a thermal-hydraulic calculation tool, it has been possible to determine the temperature distribution within the fuel elements and to include the description of the thermal effects in the MCNP simulation model. Thereafter, since the neutron flux is a crucial parameter affecting the reaction rates and thus the fuel burnup, its energy and space distributions are analyzed, presenting the results of several neutron activation measurements.
In particular, the neutron flux was first measured in the reactor's irradiation facilities through the neutron activation of many different isotopes. Then, in order to analyze the energy flux spectra, I implemented an analysis tool, based on Bayesian statistics, which makes it possible to combine the experimental data from the different activated isotopes and to reconstruct a multi-group flux spectrum. Subsequently, the spatial neutron flux distribution within the core was measured by activating several aluminum-cobalt samples in different core positions, thus allowing the determination of the integral and fast flux distributions from the analysis of cobalt and aluminum, respectively. Finally, I present the results of the fuel burnup calculations, which were performed to simulate the current core configuration after 48 years of operation. The good accuracy that was reached in the simulation of the neutron fluxes, as confirmed by the experimental measurements, has made it possible to evaluate the burnup of each fuel element from the knowledge of the operating hours and the different positions occupied in the core over the years. In this way, it has been possible to exploit the MCNP simulation model to determine a new optimized core configuration which could ensure, at the same time, a higher reactivity and the use of fewer fuel elements. This configuration was realized in September 2013 and the experimental results confirm the high quality of the work done. The results of this Ph.D. thesis highlight that it is possible to implement analysis tools, ranging from Monte Carlo simulations to fuel burnup time-evolution software, from neutron activation measurements to the Bayesian statistical analysis of flux spectra, and from temperature measurements to thermal-hydraulic models, which can be appropriately exploited to describe and comprehend the complex mechanisms ruling the operation of a nuclear reactor.
In particular, the effectiveness and reliability of these tools were demonstrated in the case of an experimental reactor, where it was possible to collect a wealth of valuable data for benchmark analyses. Now that these tools have been developed and implemented, they can be used to analyze other reactors and, possibly, to design and develop new-generation systems, which will make it possible to decrease the production of high-level nuclear waste and to exploit the nuclear fuel with improved efficiency.
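The flux determination from aluminum-cobalt activation described above reduces, in its simplest form, to solving a small linear system relating measured reaction rates to group fluxes. The sketch below shows a two-group version; every cross section and reaction rate in it is an illustrative placeholder rather than a value from the thesis, and units are only assumed to be mutually consistent.

```python
def unfold_two_group(rates, cross_sections):
    """Solve R_i = sum_g sigma[i][g] * phi[g] for a two-group
    (thermal, fast) flux from two measured reaction rates,
    using a direct 2x2 solve (Cramer's rule)."""
    (a, b), (c, d) = cross_sections
    r1, r2 = rates
    det = a * d - b * c
    phi_thermal = (r1 * d - b * r2) / det
    phi_fast = (a * r2 - c * r1) / det
    return phi_thermal, phi_fast

# Hypothetical group cross sections (arbitrary consistent units):
# cobalt activation responds mostly to the thermal group, while the
# aluminum (n,alpha) reaction has a threshold and sees only fast flux.
sigma = [[37.0, 0.5],    # Co-59 response in (thermal, fast) groups
         [0.0, 0.6]]     # Al-27 (n,alpha): fast group only
rates = [3.705e-10, 6.0e-13]   # hypothetical measured reaction rates
phi_th, phi_f = unfold_two_group(rates, sigma)
```

In the thesis a Bayesian multi-group treatment plays this role, combining data from many activated isotopes instead of a two-by-two solve.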
APA, Harvard, Vancouver, ISO, and other styles
39

Kong, Chi-Wah. "Monte-Carlo simulation on a 2-D random point pattern : Ising model and its application to econophysics /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202002%20KONG.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 81-82). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
40

Kang, Hway-Chuan Weinberg Henry. "Model studies of adsorbate ordering, adsorption and reaction using Monte-Carlo simulations /." Diss., Pasadena, Calif. : California Institute of Technology, 1990. http://resolver.caltech.edu/CaltechETD:etd-05152007-125719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Shrestha, Surendra Prakash. "An effective medium approximation and Monte Carlo simulation in subsurface flow modeling." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/38642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Spencer, Paul E. "Continuous-time quantum Monte Carlo studies of lattice polarons." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/33799.

Full text
Abstract:
The polaron problem is studied, on an infinite lattice, using the continuous-time path-integral quantum Monte Carlo scheme. The method is based on the Feynman technique of analytically integrating out the phonon degrees of freedom. The transformed problem is that of a single electron with retarded self-interaction in imaginary time. The Metropolis algorithm is used to sample an ensemble of electron trajectories with twisted (rather than periodic) boundary conditions in imaginary time, which allows dynamic properties of the system to be measured directly. The method is numerically "exact", in the sense that there are no systematic errors due to finite system size, Trotter decomposition or finite temperature. The implementation of the algorithm in continuous imaginary time dramatically increases computational efficiency compared with the traditional discrete imaginary-time algorithms.
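The Metropolis sampling at the heart of the scheme can be illustrated, in a deliberately stripped-down setting, by a random-walk Metropolis sampler for a one-dimensional target density; the standard-normal target, step size and chain length below are illustrative choices, not parameters from the thesis.

```python
import math
import random

def metropolis(log_prob, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + U(-step, step) and
    accept with probability min(1, p(x') / p(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        lp_new = log_prob(x_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):  # accept test
            x, lp = x_new, lp_new
        chain.append(x)        # on rejection the old state is repeated
    return chain

# Illustrative target: standard normal, log p(x) = -x^2/2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
```

The empirical mean and variance of `chain` approach 0 and 1; in the thesis the same accept/reject logic acts on whole electron trajectories in imaginary time rather than on a scalar.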
APA, Harvard, Vancouver, ISO, and other styles
43

Vocke, Carsten. "Hedging with multi-factor interest rate models /." [St. Gallen] : [s.n.], 2005. http://www.gbv.de/dms/zbw/503121223.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hatton, Marc. "Requirements specification for the optimisation function of an electric utility's energy flow simulator." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96956.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: Efficient and reliable energy generation capability is vital to any country's economic growth. Many strategic, tactical and operational decisions take place along the energy supply chain. Shortcomings in South Africa's electricity production industry have led to the development of an energy flow simulator. The energy flow simulator is claimed to incorporate all significant factors involved in the energy flow process, from primary energy to end-use consumption. The energy flow simulator thus provides a decision support system for electric utility planners. The original aim of this study was to develop a global optimisation model and integrate it into the existing energy flow simulator. After gaining an understanding of the architecture of the energy flow simulator and scrutinising a large number of variables, it was concluded that global optimisation was infeasible. The energy flow simulator is made up of four modules and is operated on a module-by-module basis, with inputs and outputs flowing between modules. One of the modules, namely the primary energy module, lends itself well to optimisation. The primary energy module simulates coal stockpile levels through Monte Carlo simulation. Classic inventory management policies were adapted to fit the structure of the primary energy module, which is treated as a black box. The coal stockpile management policies that are introduced provide a prescriptive means of dealing with the stochastic nature of the coal stockpiles. As the planning horizon continuously changes and the entire energy flow simulator has to be re-run, an efficient algorithm is required to optimise stockpile management policies. Optimisation is achieved through the rapidly converging cross-entropy method. By integrating the simulation and optimisation models, a prescriptive capability is added to the primary energy module. Furthermore, this study shows that coal stockpile management policies can be improved.
An integrated solution is developed by nesting the primary energy module within the optimisation model. Scalability is incorporated into the optimisation model through a coding approach that automatically adjusts to an ever-changing planning horizon as well as the commissioning and decommissioning of power stations. As this study is the first of several research projects to come, it paves the way for future research on the energy flow simulator by proposing future areas of investigation.
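The cross-entropy method referred to above can be sketched in a few lines: repeatedly sample candidate policy parameters from a parametric distribution, keep an elite fraction with the lowest simulated cost, and refit the distribution to that elite. The one-dimensional quadratic cost below is a stand-in for the simulated cost of a stockpile policy; all names and numbers are illustrative, not taken from the thesis.

```python
import random
import statistics

def cross_entropy_minimize(cost, mu=0.0, sigma=5.0, n_samples=100,
                           n_elite=10, n_iters=40, seed=1):
    """Cross-entropy minimisation of a scalar parameter: sample from
    N(mu, sigma), refit (mu, sigma) to the n_elite best candidates,
    and repeat; the sampling distribution concentrates on the optimum."""
    rng = random.Random(seed)
    for _ in range(n_iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(xs, key=cost)[:n_elite]      # lowest-cost samples
        mu = statistics.mean(elite)                 # refit location
        sigma = statistics.pstdev(elite) + 1e-9     # refit spread
    return mu

# Toy cost with its minimum at 3.0, standing in for a Monte Carlo
# estimate of the expected cost of a stockpile reorder-point policy.
best = cross_entropy_minimize(lambda x: (x - 3.0) ** 2)
```

In the actual simulator, the cost of each candidate policy would come from a run of the primary energy module rather than from a closed-form function.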
APA, Harvard, Vancouver, ISO, and other styles
45

Bonfrate, Anthony. "Développement d'un modèle analytique dédié au calcul des doses secondaires neutroniques aux organes sains des patients en protonthérapie." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS408/document.

Full text
Abstract:
Stray neutron doses are currently not evaluated during treatment planning in proton therapy centers, since treatment planning systems (TPS) do not offer this feature, while Monte Carlo (MC) simulations and measurements are unsuitable for routine practice. This PhD thesis aims at developing an analytical model dedicated to the estimation of stray neutron doses to healthy organs that remains easy to use in clinical routine. First, the existing MCNPX model of the gantry installed at the Curie institute proton therapy center of Orsay (CPO) was extended to three additional treatment configurations (energies at the beam line entrance of 162, 192 and 220 MeV). Then, simulations and measurements were compared to verify the ability of the MC model to reproduce primary proton dose distributions (in depth and laterally) as well as the stray neutron field. Errors within 2 mm were observed for primary protons. For stray neutrons, simulations overestimated measurements by up to a factor of ~2 and ~6 for spectrometry and dose equivalent in a solid phantom, respectively. The analysis of these results made it possible to identify the source of the errors and to motivate new studies to improve both the experimental measurements and the MC simulations. Secondly, MC simulations were used to calculate neutron doses to healthy organs of one-year-old, ten-year-old and adult voxelized phantoms treated for a craniopharyngioma. Treatment parameters were varied individually to study their respective influence on neutron doses. In decreasing order of influence, the parameters are: beam incidence, organ-to-collimator and organ-to-treatment-field distances, patient size/age, treatment energy, modulation width, collimator aperture, etc. Based on these calculations, recommendations were given to reduce neutron doses. Thirdly, an analytical model was developed that complies with use in clinical routine, for all tumor localizations and proton therapy facilities.
The model was trained to reproduce calculated neutron doses to healthy organs and showed errors within ~30% and ~60 µGy Gy⁻¹ between the learning data and the predicted values; this was done separately for each beam incidence. Next, the analytical model was validated against neutron dose calculations not considered during the training step. Overall, satisfactory errors within ~30% and ~100 µGy Gy⁻¹ were observed, highlighting the flexibility and reliability of the analytical model. Finally, training the analytical model on neutron dose equivalents measured in a solid phantom at the Centre Antoine Lacassagne confirmed its universality, while also indicating that additional modifications are required to enhance its accuracy.
APA, Harvard, Vancouver, ISO, and other styles
46

Khanal, Kiran. "Liquid-Crystalline Ordering in Semiflexible Polymer Melts and Blends: A Monte Carlo Simulation Study." University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1373901748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fu, Jianlin. "A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1969.

Full text
Abstract:
Unlike the traditional two-stage methods, a conditional and inverse-conditional simulation approach may directly generate independent, identically distributed realizations honoring both static data and state data in one step. The Markov chain Monte Carlo (McMC) method has proved a powerful tool for performing this type of stochastic simulation. One of the main advantages of McMC over the traditional sensitivity-based optimization methods for inverse problems is its power, flexibility and well-posedness in incorporating observation data from different sources. In this work, an improved version of the McMC method is presented to perform the stochastic simulation of reservoirs and aquifers in the framework of multi-Gaussian geostatistics. First, a blocking scheme is proposed to overcome the limitations of the classic single-component Metropolis-Hastings-type McMC. One of the main characteristics of the blocking McMC (BMcMC) scheme is that, depending on the inconsistency between the prior model and reality, it can preserve the prior spatial structure and statistics as the user specifies. At the same time, it improves the mixing of the Markov chain and hence enhances the computational efficiency of the McMC. Furthermore, the exploration ability and the mixing speed of McMC are efficiently improved by coupling multiscale proposals, i.e., the coupled multiscale McMC method. In order to make the BMcMC method capable of dealing with high-dimensional cases, a multi-scale scheme is introduced to accelerate the computation of the likelihood, which greatly improves the computational efficiency of the McMC, since most of the computational effort is spent on the forward simulations. To this end, a flexible-grid full-tensor finite-difference simulator, which is widely compatible with the outputs from various upscaling subroutines, is developed to solve the flow equations and a constant-displacement random-walk particle-tracking method, which enhances the com
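The blocking idea (proposing a joint update for a whole block of variables, instead of updating one component at a time) can be illustrated with a minimal block Metropolis-Hastings sampler on a strongly correlated two-parameter Gaussian target. The target, the correlation of 0.9, the step size and the chain length are all illustrative choices, not quantities from the thesis.

```python
import math
import random

def block_mh(log_post, x0, n_steps, step=0.3, seed=2):
    """Block Metropolis-Hastings: perturb the whole parameter vector
    jointly with one Gaussian proposal and accept/reject it as a unit."""
    rng = random.Random(seed)
    x, lp = list(x0), log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = [xi + rng.gauss(0.0, step) for xi in x]   # joint proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Illustrative target: bivariate normal with unit variances and
# correlation rho = 0.9, log-density written up to a constant.
RHO = 0.9
def log_post(v):
    x, y = v
    return -(x * x - 2.0 * RHO * x * y + y * y) / (2.0 * (1.0 - RHO * RHO))

chain = block_mh(log_post, x0=[0.0, 0.0], n_steps=30000)
```

The sample correlation of the two components recovers rho; in the thesis the blocks are spatially correlated property fields rather than two scalars.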
Fu, J. (2008). A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1969
APA, Harvard, Vancouver, ISO, and other styles
48

Ma, Genuo. "JACKKNIFE MODEL AVERAGING ON FUNCTIONAL LOGISTIC MODEL." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Almutairi, Amani. "Investigation of sexithiophene properties with Monte Carlo simulations of a coarse-grained model." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1481301927214166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gu, Chenchen. "Option Pricing Using MATLAB." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/382.

Full text
Abstract:
This paper describes methods for pricing European and American options. Monte Carlo simulation and control variate methods are employed to price call options. The binomial model is employed to price American put options. Using daily stock data, I am able to compare the model price and the market price and speculate as to the cause of the difference. Lastly, I build a portfolio in an Interactive Brokers paper trading [1] account using the prices I calculate. This project was done as part of the masters capstone course Math 573: Computational Methods of Financial Mathematics.
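The binomial pricing of American puts mentioned above is commonly implemented as a Cox-Ross-Rubinstein tree with backward induction and an early-exercise check at every node; the sketch below uses illustrative parameters (spot 100, strike 100, 5% rate, 20% volatility, one year), not the thesis's market data.

```python
import math

def american_put_binomial(s0, k, r, sigma, t, n=500):
    """Cox-Ross-Rubinstein binomial tree for an American put."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))        # up factor
    d = 1.0 / u                                # down factor
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Payoffs at maturity: node j has j up-moves out of n steps.
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # Roll back through the tree, allowing early exercise at each node.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            values[j] = max(cont, k - s0 * u**j * d**(i - j))
    return values[0]

price = american_put_binomial(100.0, 100.0, 0.05, 0.2, 1.0)
```

The early-exercise premium shows up as the excess of this price over the corresponding European (Black-Scholes) put value of about 5.57 for the same parameters.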
APA, Harvard, Vancouver, ISO, and other styles
