Dissertations / Theses on the topic 'Variance reduction'

Consult the top 50 dissertations / theses for your research on the topic 'Variance reduction.' Full texts and abstracts are listed below where available.

1

Chouinard, Hayley Helene. "Reduction of yield variance through crop insurance." Thesis, Montana State University, 1994. http://etd.lib.montana.edu/etd/1994/chouinard/ChouinardH1994.pdf.

Full text
Abstract:
The variance of a producer's yield creates uncertainty and may be considered the risk a producer faces. Crop insurance may provide protection against yield variability. If yields are sufficiently low, an insured producer may receive an indemnity payment. Currently, crop insurance is based on each individual's yield: if the individual's yield falls below a specified level, the individual receives an indemnity. An alternative crop insurance program bases indemnities on an area yield: if the yield of the predetermined area falls below a specific level, all insured producers receive an indemnity. This thesis examines the yield variability reduction obtained by purchasing various forms of area yield and individual yield crop insurance, and the actuarially fair premium costs associated with them. When a producer purchases insurance, two decisions are made. First, the producer selects a trigger level, which determines the critical yield that generates an indemnity payment. Second, the producer may be able to select a coverage level, which is the amount of acreage covered by the contract. Each contract examined allows different trigger and coverage levels. The variance reduction provided by each contract is the variance of the yield without insurance less the variance of the yield with an insurance contract. The results indicate that most producers receive some variance reduction from the area yield contracts, and producers whose yields are closely correlated with the area yield receive more variance reduction from the area yield insurance than from the individual yield insurance contracts. However, the area yield contracts that provide, on average, more yield variance reduction than the individual yield contracts also have much higher actuarially fair premium costs. The area yield insurance contracts should be considered as an alternative to individual yield insurance, but the premium costs must also be evaluated.
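
The variance comparison described in the abstract (variance without insurance minus variance with a contract, alongside the actuarially fair premium) can be illustrated with a small simulation. The sketch below is not the thesis's data or model: the yield distributions, the correlation between individual and area yield, and the trigger level are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical yields: an area yield and an individual yield correlated with it.
area = rng.normal(100.0, 15.0, n)
individual = 0.8 * (area - 100.0) + 100.0 + rng.normal(0.0, 9.0, n)

def indemnity(yield_, trigger):
    # Pay the shortfall below the trigger yield, otherwise nothing.
    return np.maximum(trigger - yield_, 0.0)

trigger = 90.0
# Net outcome = own yield plus the indemnity paid by each contract type.
net_individual = individual + indemnity(individual, trigger)
net_area = individual + indemnity(area, trigger)

for label, net in [("no insurance", individual),
                   ("individual-yield contract", net_individual),
                   ("area-yield contract", net_area)]:
    print(f"{label:28s} variance = {net.var():7.1f}")

# Actuarially fair premium = expected indemnity under each contract.
print("fair premium, individual:", round(float(indemnity(individual, trigger).mean()), 2))
print("fair premium, area:      ", round(float(indemnity(area, trigger).mean()), 2))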
APA, Harvard, Vancouver, ISO, and other styles
2

Gagakuma, Bertelsen. "Variance Reduction in Wind Farm Layout Optimization." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7758.

Full text
Abstract:
As demand for wind power continues to grow, it is becoming increasingly important to minimize the risk, characterized by the variance, associated with long-term power forecasts. This thesis investigated variance reduction in power forecasts through wind farm layout optimization. The problem was formulated as a multi-objective optimization problem of maximizing mean plant power and minimizing its variance. The ε-constraint method was used to solve the bi-objective problem in a two-step optimization framework in which two sequential optimizations are performed: the first maximizes mean wind farm power alone, and the second minimizes variance with a constraint on the mean power equal to the value from the first optimization. The results show that the variance in power estimates can be reduced by up to 30% without sacrificing mean plant power for the different farm sizes and wind conditions studied. This reduction is attributed to the multi-modality of the design space, which allows for distinct solutions of high mean plant power at different power variances. Thus, wind farms can be designed to maximize power capture with greater confidence.
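
The two-step ε-constraint procedure described above can be sketched generically. The toy surrogate functions for mean power and power variance below are invented, and the constraint is relaxed slightly (eps) so the unimodal toy has room to trade variance for mean power; the thesis instead relies on the multi-modality of the real design space and constrains at the step-1 value itself.

import numpy as np
from scipy.optimize import minimize

# Invented surrogate model: two layout variables x.
def mean_power(x):
    return -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2 + 10.0

def power_variance(x):
    return (x[0] - 2.0) ** 2 + 0.5 * (x[1] - 2.0) ** 2 + 1.0

x0 = np.zeros(2)

# Step 1: maximize mean power alone (minimize its negative).
step1 = minimize(lambda x: -mean_power(x), x0, method="SLSQP")
p_star = mean_power(step1.x)

# Step 2: minimize variance subject to a floor on mean power (epsilon-constraint).
eps = 0.99 * p_star
step2 = minimize(power_variance, step1.x, method="SLSQP",
                 constraints=[{"type": "ineq", "fun": lambda x: mean_power(x) - eps}])

print("step 1: mean power", round(p_star, 3), " variance", round(power_variance(step1.x), 3))
print("step 2: mean power", round(mean_power(step2.x), 3), " variance", round(power_variance(step2.x), 3))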
APA, Harvard, Vancouver, ISO, and other styles
3

Greensmith, Evan. "Policy Gradient Methods: Variance Reduction and Stochastic Convergence." The Australian National University. Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20060106.193712.

Full text
Abstract:
In a reinforcement learning task an agent must learn a policy for performing actions so as to perform well in a given environment. Policy gradient methods consider a parameterized class of policies and, using a policy from the class and a trajectory through the environment taken by the agent under this policy, estimate the gradient of the policy's performance with respect to the parameters. Policy gradient methods avoid some of the problems of value function methods, such as policy degradation, where inaccuracy in the value function leads to the choice of a poor policy. However, the estimates produced by policy gradient methods can have high variance.

In Part I of this thesis we study the estimation variance of policy gradient algorithms, in particular when augmenting the estimate with a baseline, a common method for reducing estimation variance, and when using actor-critic methods. A baseline adjusts the reward signal supplied by the environment and can be used to reduce the variance of a policy gradient estimate without adding any bias. We find the baseline that minimizes the variance. We also consider the class of constant baselines and find the constant baseline that minimizes the variance. We compare this to the common technique of adjusting the rewards by an estimate of the performance measure. Actor-critic methods usually attempt to learn a value function accurate enough to be used in a gradient estimate without adding much bias. In this thesis we propose that in learning the value function we should also consider the variance. We show how considering the variance of the gradient estimate when learning a value function can be beneficial, and we introduce a new optimization criterion for selecting a value function.

In Part II of this thesis we consider online versions of policy gradient algorithms, where we update our policy for selecting actions at each step in time, and study the convergence of these online algorithms. For such online gradient-based algorithms, convergence results aim to show that the gradient of the performance measure approaches zero. Such a result has been shown for an algorithm which is based on observing trajectories between visits to a special state of the environment. However, the algorithm is not suitable in a partially observable setting, where we are unable to access the full state of the environment, and its variance depends on the time between visits to the special state, which may be large even when only a few samples are needed to estimate the gradient. To date, convergence results for algorithms that do not rely on a special state are weaker. We show that, for a certain algorithm that does not rely on a special state, the gradient of the performance measure approaches zero. We show that this continues to hold when using certain baseline algorithms suggested by the results of Part I.
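
The effect of a constant baseline on estimation variance can be seen on a toy problem. The sketch below is a generic single-parameter REINFORCE illustration on an invented two-armed bandit, not the thesis's algorithms; the "variance-minimizing" entry uses the standard score-weighted constant baseline E[s²R]/E[s²].

import numpy as np

rng = np.random.default_rng(1)

theta = 0.5                          # single policy parameter, two actions
rewards = np.array([1.0, 0.0])       # expected reward of each action (toy bandit)
grad_logits = np.array([1.0, -1.0])  # logits are (+theta, -theta)

def policy(theta):
    logits = np.array([theta, -theta])
    p = np.exp(logits - logits.max())
    return p / p.sum()

n = 200_000
p = policy(theta)
actions = rng.choice(2, size=n, p=p)
R = rewards[actions] + rng.normal(0.0, 0.5, n)        # noisy returns
s = grad_logits[actions] - p @ grad_logits            # score d/dtheta log pi(a)

baselines = {
    "no baseline": 0.0,
    "average reward": R.mean(),
    "variance-minimizing": (s**2 * R).mean() / (s**2).mean(),
}
for name, b in baselines.items():
    g = s * (R - b)                                   # unbiased gradient estimate
    print(f"{name:22s} E[g] = {g.mean():+.4f}   Var[g] = {g.var():.4f}")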
APA, Harvard, Vancouver, ISO, and other styles
4

Tjärnström, Fredrik. "Variance expressions and model reduction in system identification /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek730s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wise, Michael Anthony. "A variance reduction technique for production cost simulation." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182181023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rowland, Kelly L. "Advanced Quadrature Selection for Monte Carlo Variance Reduction." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817512.

Full text
Abstract:

Neutral particle radiation transport simulations are critical for radiation shielding and deep penetration applications. Arriving at a solution for a given response of interest can be computationally difficult because of the magnitude of particle attenuation often seen in these shielding problems. Hybrid methods, which aim to synergize the individual favorable aspects of deterministic and stochastic solution methods for solving the steady-state neutron transport equation, are commonly used in radiation shielding applications to achieve statistically meaningful results in a reduced amount of computational time and effort. The current state of the art in hybrid calculations is the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods, which generate Monte Carlo variance reduction parameters based on deterministically-calculated scalar flux solutions. For certain types of radiation shielding problems, however, results produced using these methods suffer from unphysical oscillations in scalar flux solutions that are a product of angular discretization. These aberrations are termed “ray effects”.

The Lagrange Discrete Ordinates (LDO) equations retain the formal structure of the traditional discrete ordinates formulation of the neutron transport equation and mitigate ray effects at high angular resolution. In this work, the LDO equations have been implemented in the Exnihilo parallel neutral particle radiation transport framework, with the deterministic scalar flux solutions passed to the Automated Variance Reduction Generator (ADVANTG) software and the resultant Monte Carlo variance reduction parameters’ efficacy assessed based on results from MCNP5. Studies were conducted in both the CADIS and FW-CADIS contexts, with the LDO equations’ variance reduction parameters seeing their best performance in the FW-CADIS method, especially for photon transport.
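
The CADIS relations that turn a deterministic adjoint (importance) solution into Monte Carlo variance reduction parameters can be written in a few lines. The sketch below only shows that algebra on an invented 1-D adjoint flux and source; it is not the Exnihilo/ADVANTG/MCNP5 workflow described in the abstract.

import numpy as np

# Hypothetical 1-D problem: forward source q and deterministically computed
# adjoint (importance) flux phi_adj in each of 6 spatial cells.
q = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
phi_adj = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6])

R = np.sum(q * phi_adj)                 # estimated detector response

# CADIS: biased source proportional to q * phi_adj, with consistent starting
# weights / weight-window centers w = R / phi_adj, so source particles are
# born inside their windows.
q_biased = q * phi_adj / R
ww_center = np.where(phi_adj > 0, R / phi_adj, np.inf)

print("biased source pdf    :", np.round(q_biased, 4))
print("weight-window centers:", ww_center)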

APA, Harvard, Vancouver, ISO, and other styles
7

Greensmith, Evan. "Policy gradient methods : variance reduction and stochastic convergence /." View thesis entry in Australian Digital Theses Program, 2005. http://thesis.anu.edu.au/public/adt-ANU20060106.193712/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Yani. "Dimension reduction in the regressions through weighted variance estimation." HKBU Institutional Repository, 2009. http://repository.hkbu.edu.hk/etd_ra/1073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Höök, Lars Josef. "Variance reduction methods for numerical solution of plasma kinetic diffusion." Licentiate thesis, KTH, Fusionsplasmafysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91332.

Full text
Abstract:
Performing detailed simulations of plasma kinetic diffusion is a challenging task and currently requires the largest computational facilities in the world. The reason for this is that the physics in a confined heated plasma occur on a broad range of temporal and spatial scales. It is therefore of interest to improve the computational algorithms together with the development of more powerful computational resources. Kinetic diffusion processes in plasmas are commonly simulated with the Monte Carlo method, where a discrete set of particles is sampled from a distribution function and advanced in a Lagrangian frame according to a set of stochastic differential equations. The Monte Carlo method introduces computational error in the form of statistical random noise produced by a finite number of particles (or markers) N, and the error scales as αN^(-β), where β = 1/2 for the standard Monte Carlo method. This requires a large number of simulated particles in order to obtain a sufficiently low numerical noise level. It is therefore essential to use techniques that reduce the numerical noise. Such methods are commonly called variance reduction methods. In this thesis, we have developed new variance reduction methods with application to plasma kinetic diffusion. The methods are suitable for simulation of RF-heating and transport, but are not limited to these types of problems. We have derived a novel variance reduction method that minimizes the number of required particles from an optimization model. This implicitly reduces the variance when calculating the expected value of the distribution, since for a fixed error the optimization model ensures that a minimal number of particles is needed. Techniques that reduce the noise by improving the order of convergence have also been considered. Two different methods have been tested on a neutral beam injection scenario: the scrambled Brownian bridge method and a method here called the sorting and mixing method of Lécot and Khettabi [1999]. Both methods converge faster than the standard Monte Carlo method for a modest number of time steps, but fail to converge correctly for a large number of time steps, a range required for detailed plasma kinetic simulations. Different techniques are discussed that have the potential of improving the convergence to this range of time steps.
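
The αN^(-β) scaling with β = 1/2 quoted above is easy to demonstrate on a toy expectation with a known answer (nothing plasma-specific is assumed here): the root-mean-square error multiplied by sqrt(N) stays roughly constant.

import numpy as np

rng = np.random.default_rng(2)

# True expectation of f(X) = X**2 for X ~ N(0, 1) is 1, so the error is known.
def mc_estimate(n):
    x = rng.normal(size=n)
    return np.mean(x**2)

for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
    errors = [abs(mc_estimate(n) - 1.0) for _ in range(50)]
    rmse = np.sqrt(np.mean(np.square(errors)))
    print(f"N = {n:>8d}   RMSE = {rmse:.5f}   RMSE * sqrt(N) = {rmse * np.sqrt(n):.3f}")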
APA, Harvard, Vancouver, ISO, and other styles
10

Aghedo, Maurice Enoghayinagbon. "Variance reduction in Monte Carlo methods of estimating distribution functions." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Larsson, Julia. "Optimization of option pricing : - Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Företagsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173093.

Full text
Abstract:
In recent years, the importance of and interest in financial instruments, especially derivatives, have increased. The 1997 Nobel Prize in Economics recognized Black & Scholes' work on a new method for estimating prices of plain vanilla options. Since the dynamics of the underlying assets can be very complex, it is preferable to use numerical methods such as Monte Carlo simulation to relax the assumptions of the Black & Scholes model. Monte Carlo simulation is a preferable pricing method and is often the only method available for pricing options with complex trajectory dependencies. Simulations can moreover give further insight into how the options actually work. A major advantage is that the convergence rate is independent of the dimension of the simulated problem; the drawbacks are slow convergence, the need for significant computational power, and time-consuming simulations. Hence, the aim of this research is to find an optimized pricing method for both plain vanilla options and exotic options. In order to optimize option pricing, different variance reduction and low-discrepancy methods are examined and compared based on the following criteria: convergence to the option price, execution time, and the uncertainty in the estimated price. The findings of this research are that there are both strengths and weaknesses with each of the described methods, depending on whether one wants to optimize only the convergence/uncertainty or the execution time. In conclusion, there is a silver bullet in option pricing which performed well on each criterion; this method is Importance Sampling.
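
Importance sampling for option pricing can be illustrated on a deep out-of-the-money European call, where most plain Monte Carlo paths contribute nothing. The parameters below are invented and the drift shift is one common textbook choice (shift so the average path lands at the strike), not necessarily the thesis's configuration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Deep out-of-the-money European call under risk-neutral GBM (toy parameters).
S0, K, r, sigma, T = 100.0, 170.0, 0.02, 0.2, 1.0
n = 100_000
disc = np.exp(-r * T)

def payoff(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return disc * np.maximum(ST - K, 0.0)

# Standard Monte Carlo.
z = rng.normal(size=n)
plain = payoff(z)

# Importance sampling: draw z from N(mu, 1) so more paths finish in the money,
# and reweight each sample by the likelihood ratio exp(-mu*z + mu^2/2).
mu = (np.log(K / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
z_is = rng.normal(loc=mu, size=n)
weights = np.exp(-mu * z_is + 0.5 * mu**2)
shifted = payoff(z_is) * weights

# Black-Scholes reference price.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs = S0 * norm.cdf(d1) - K * disc * norm.cdf(d1 - sigma * np.sqrt(T))

for name, est in [("plain MC", plain), ("importance sampling", shifted)]:
    print(f"{name:20s} price = {est.mean():.4f}  std error = {est.std(ddof=1) / np.sqrt(n):.4f}")
print(f"Black-Scholes        price = {bs:.4f}")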
APA, Harvard, Vancouver, ISO, and other styles
12

Whittle, Joss. "Quality assessment and variance reduction in Monte Carlo rendering algorithms." Thesis, Swansea University, 2018. https://cronfa.swan.ac.uk/Record/cronfa40271.

Full text
Abstract:
Over the past few decades much work has been focused on the area of physically based rendering, which attempts to produce images that are indistinguishable from natural images such as photographs. Physically based rendering algorithms simulate the complex interactions of light with physically based material, light source, and camera models by structuring them as complex high-dimensional integrals [Kaj86] which do not have a closed-form solution. Stochastic processes such as Monte Carlo methods can be structured to approximate the expectation of these integrals, producing algorithms which converge to the true rendering solution as the amount of computation is increased in the limit. When a finite amount of computation is used to approximate the rendering solution, images will contain undesirable distortions in the form of noise from under-sampling in image regions with complex light interactions. An important aspect of developing algorithms in this domain is to have a means of accurately comparing and contrasting the relative performance gains between different approaches. Image Quality Assessment (IQA) measures provide a way of condensing the high dimensionality of image data to a single scalar value which can be used as a representative measure of image quality and fidelity. These measures are largely developed in the context of image datasets containing natural images (photographs) coupled with their synthetically distorted versions, and quality assessment scores given by human observers under controlled viewing conditions. Inference using these measures therefore relies on whether the synthetic distortions used to develop the IQA measures are representative of the natural distortions that will be seen in images from the domain being assessed. When we consider images generated through stochastic rendering processes, the structure of visible distortions that are present in un-converged images is highly complex and spatially varying based on lighting and scene composition. In this domain the simple synthetic distortions commonly used to train and evaluate IQA measures are not representative of the complex natural distortions from the rendering process. This raises the question of how robust IQA measures are when applied to physically based rendered images. In this thesis we summarize the classical and recent works in the area of physically based rendering using stochastic approaches such as Monte Carlo methods. We develop a modern C++ framework wrapping MPI for managing and running code on large-scale distributed computing environments. With this framework we use high performance computing to generate a dataset of Monte Carlo images. From this we provide a study on the effectiveness of modern and classical IQA measures and their robustness when evaluating images generated through stochastic rendering processes. Finally, we build on the strengths of these IQA measures and apply modern deep-learning methods to the No Reference IQA problem, where we wish to assess the quality of a rendered image without knowing its true value.
APA, Harvard, Vancouver, ISO, and other styles
13

Stockbridge, Rebecca. "Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301665.

Full text
Abstract:
Stochastic programming combines ideas from deterministic optimization with probability and statistics to produce more accurate models of optimization problems involving uncertainty. However, due to their size, stochastic programming problems can be extremely difficult to solve and instead approximate solutions are used. Therefore, there is a need for methods that can accurately identify optimal or near optimal solutions. In this dissertation, we focus on improving Monte-Carlo sampling-based methods that assess the quality of potential solutions to stochastic programs by estimating optimality gaps. In particular, we aim to reduce the bias and/or variance of these estimators. We first propose a technique to reduce the bias of optimality gap estimators which is based on probability metrics and stability results in stochastic programming. This method, which requires the solution of a minimum-weight perfect matching problem, can be run in polynomial time in sample size. We establish asymptotic properties and present computational results. We then investigate the use of sampling schemes to reduce the variance of optimality gap estimators, and in particular focus on antithetic variates and Latin hypercube sampling. We also combine these methods with the bias reduction technique discussed above. Asymptotic properties of the resultant estimators are presented, and computational results on a range of test problems are discussed. Finally, we apply methods of assessing solution quality using antithetic variates and Latin hypercube sampling to a sequential sampling procedure to solve stochastic programs. In this setting, we use Latin hypercube sampling when generating a sequence of candidate solutions that is input to the procedure. We prove that these procedures produce a high-quality solution with high probability, asymptotically, and terminate in a finite number of iterations. Computational results are presented.
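
The two sampling schemes named in the abstract, antithetic variates and Latin hypercube sampling, can be illustrated generically. The sketch below compares them against plain Monte Carlo on an invented smooth integrand with a known mean; it is not the optimality-gap estimation framework of the dissertation.

import numpy as np

rng = np.random.default_rng(4)
d, n, reps = 3, 256, 200

def f(u):
    # Smooth toy integrand on [0,1]^d; the true mean is (e - 1)^d.
    return np.prod(np.exp(u), axis=1)

def plain(n):
    return f(rng.random((n, d))).mean()

def antithetic(n):
    u = rng.random((n // 2, d))
    return 0.5 * (f(u).mean() + f(1.0 - u).mean())

def latin_hypercube(n):
    # One stratum per sample in each coordinate, with independent permutations.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    u = (strata + rng.random((n, d))) / n
    return f(u).mean()

for name, est in [("plain MC", plain), ("antithetic", antithetic),
                  ("Latin hypercube", latin_hypercube)]:
    vals = np.array([est(n) for _ in range(reps)])
    print(f"{name:16s} mean = {vals.mean():.5f}   std dev = {vals.std(ddof=1):.5f}")
print("true value      ", (np.e - 1.0) ** d)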
APA, Harvard, Vancouver, ISO, and other styles
14

Flaspoehler, Timothy Michael. "FW-CADIS variance reduction in MAVRIC shielding analysis of the VHTR." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45743.

Full text
Abstract:
In the following work, the MAVRIC sequence of the Scale6.1 code package was tested for its efficacy in calculating a wide range of shielding parameters with respect to HTGRs. One of the NGNP designs that has gained large support internationally is the VHTR. The development of the Scale6.1 code package at ORNL has been primarily directed towards supporting the current United States reactor fleet of LWR technology. Since plans have been made to build a prototype VHTR, it is important to verify that the MAVRIC sequence can adequately meet the simulation needs of a different reactor technology. This was accomplished by creating a detailed model of the VHTR power plant; identifying important, relevant radiation indicators; and implementing methods using MAVRIC to simulate those indicators in the VHTR model. The graphite moderator used in the design shapes a different flux spectrum than that of water-moderated reactors. The different flux spectrum could lead to new considerations when quantifying shielding characteristics and possibly a different gamma-ray spectrum escaping the core and surrounding components. One key portion of this study was obtaining personnel dose rates in accessible areas within the power plant from both neutron and gamma sources. Additionally, building from professional and regulatory standards, a surveillance capsule monitoring program was designed to mimic those used in the nuclear industry. The high temperatures are intended to supply heat for industrial purposes and not just for power production. Since tritium, a heavier radioactive isotope of hydrogen, is produced in the reactor, it is important to know the distribution of tritium production and the subsequent diffusion from the core to secondary systems to prevent contamination outside of the nuclear island. Accurately modeling these indicators using MAVRIC is the main goal. However, it is almost equally important for simulations to be carried out in a timely manner. MAVRIC uses the discrete ordinates method to solve the fixed-source transport equation for both neutrons and gamma rays on a crude geometric representation of the detailed model. This deterministic forward solution is used to solve an adjoint equation with the adjoint source specified by the user. The adjoint solution is then used to create an importance map that weights particles in a stochastic Monte Carlo simulation. The goal of using this hybrid methodology is to provide complete accuracy with high precision while decreasing overall simulation times by orders of magnitude. The MAVRIC sequence provides a platform to quickly alter inputs so that vastly different shielding studies can be simulated using one model with minimal effort by the user. Each separate shielding study required unique strategies while looking at different regions in the VHTR plant. MAVRIC proved to be effective for each case.
APA, Harvard, Vancouver, ISO, and other styles
15

Adewunmi, Adrian. "Selection of simulation variance reduction techniques through a fuzzy expert system." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11260/.

Full text
Abstract:
In this thesis, the design and development of a decision support system for the selection of a variance reduction technique for discrete event simulation studies is presented. In addition, the performance of variance reduction techniques as stand-alone and combined applications has been investigated. The aim of this research is to mimic the process of human decision making through an expert system and also to handle the ambiguity associated with representing human expert knowledge through fuzzy logic. The result is a fuzzy expert system which was subjected to three different validation tests, the main objective being to establish the reasonableness of the system's output. Although these validation tests are among the most widely accepted tests for fuzzy expert systems, the overall results were not in agreement with expectations. In addition, results from the stand-alone and combined application of variance reduction techniques demonstrated that more instances of stand-alone applications performed better at reducing variance than the combined application. The design and development of a fuzzy expert system as an advisory tool to aid simulation users constitutes a significant contribution to the selection of variance reduction technique(s) for discrete event simulation studies. This is a novelty because it demonstrates the practicalities involved in the design and development process, which can be used on similar decision-making problems by discrete event simulation researchers and practitioners using their own knowledge and experience. In addition, the application of a fuzzy expert system to this particular discrete event simulation problem demonstrates the flexibility and usability of an alternative to the existing algorithmic approach. Under current experimental conditions, a new specific class of systems, in particular the Crossdocking Distribution System, has been identified for which the application of variance reduction techniques, i.e. Antithetic Variates and Control Variates, is beneficial for variance reduction.
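
The stand-alone versus combined use of variance reduction techniques mentioned above can be illustrated generically. The sketch below uses an invented toy expectation (E[exp(U)] with U uniform), not the thesis's crossdocking simulation; it also shows one simple way a combination can add nothing, which is only an illustration of the general point, not the thesis's finding mechanism.

import numpy as np

rng = np.random.default_rng(5)
n, reps = 2_000, 500

def crude():
    u = rng.random(n)
    return np.exp(u).mean()

def antithetic():
    u = rng.random(n // 2)
    return 0.5 * (np.exp(u) + np.exp(1.0 - u)).mean()

def control_variate():
    u = rng.random(n)
    y, c = np.exp(u), u                      # control C = U has known mean 0.5
    beta = np.cov(y, c)[0, 1] / c.var(ddof=1)
    return y.mean() - beta * (c.mean() - 0.5)

def combined():
    # Antithetic pairs first, then attempt a control variate on the pair averages.
    u = rng.random(n // 2)
    y = 0.5 * (np.exp(u) + np.exp(1.0 - u))
    c = 0.5 * (u + (1.0 - u))                # the paired control collapses to 0.5,
    return y.mean()                          # so the regression step adds nothing here

for name, est in [("crude", crude), ("antithetic", antithetic),
                  ("control variate", control_variate), ("combined", combined)]:
    vals = np.array([est() for _ in range(reps)])
    print(f"{name:16s} mean = {vals.mean():.5f}   variance = {vals.var(ddof=1):.2e}")
print("true value       ", np.e - 1.0)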
APA, Harvard, Vancouver, ISO, and other styles
16

Yang, Wei-Ning. "Multivariate estimation and variance reduction for terminating and steady-state simulation /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672631602302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Song, Chenxiao. "Monte Carlo Variance Reduction Methods with Applications in Structural Reliability Analysis." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29801.

Full text
Abstract:
Monte Carlo variance reduction methods have attracted significant interest due to the continuous demand for reducing computational costs in various fields of application. This thesis is based on the content of a collection of six papers contributing to the theory and application of Monte Carlo methods and variance reduction techniques. On the theoretical side, we establish a novel framework for Monte Carlo integration over simplices, covering everything from sampling to variance reduction. We also investigate the effect of batching for adaptive variance reduction, which aims at running the Monte Carlo simulation simultaneously with the parameter search algorithm using a common sequence of random realizations. Such adaptive variance reduction is moreover employed by strata in a newly proposed stratified sampling framework with dynamic budget allocation. For application to estimating the probability of failure in the context of structural reliability analysis, we formulate adaptive frameworks of stratified sampling with variance reduction by strata as well as stratified directional importance sampling, and survey a variety of numerical approaches employing Monte Carlo methods.
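
Stratified sampling with a reallocated budget can be sketched on a toy failure-probability problem. Everything below is invented for illustration (the limit state, the strata, and the pilot-based Neyman-style reallocation, which only stands in for the dynamic budget allocation developed in the thesis).

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Toy structural reliability problem: failure iff the standard normal load X
# exceeds 2.2, so p_f = 1 - Phi(2.2) is known exactly.
def fails(x):
    return (x > 2.2).astype(float)

strata = [(-np.inf, 1.0), (1.0, 2.0), (2.0, np.inf)]       # partition of the x-axis
weights = np.array([norm.cdf(b) - norm.cdf(a) for a, b in strata])

def sample_stratum(bounds, m):
    a, b = bounds
    u = rng.uniform(norm.cdf(a), norm.cdf(b), m)            # inverse CDF on the stratum
    return norm.ppf(u)

def stratified(allocation):
    est, var = 0.0, 0.0
    for bounds, w, m in zip(strata, weights, allocation):
        y = fails(sample_stratum(bounds, m))
        est += w * y.mean()
        var += w**2 * y.var(ddof=1) / m
    return est, var

budget = 30_000
proportional = np.round(weights * budget).astype(int)

# Pilot-based reallocation: spend more where w_i * std_i is largest (Neyman-style).
pilot_std = np.array([fails(sample_stratum(b, 1_000)).std() for b in strata])
neyman = np.round(budget * (weights * pilot_std) / np.sum(weights * pilot_std)).astype(int)
neyman = np.maximum(neyman, 10)

for name, alloc in [("proportional", proportional), ("Neyman-style", neyman)]:
    est, var = stratified(alloc)
    print(f"{name:13s} p_f = {est:.5f}   std error = {np.sqrt(var):.6f}")
print(f"exact         p_f = {1 - norm.cdf(2.2):.5f}")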
APA, Harvard, Vancouver, ISO, and other styles
18

Balasubramanian, Vijay. "Variance reduction and outlier identification for IDDQ testing of integrated chips using principal component analysis." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4766.

Full text
Abstract:
Integrated circuits manufactured in current technology consist of millions of transistors with dimensions shrinking into the nanometer range. These small transistors have quiescent (leakage) currents that are increasingly sensitive to process variations, which have increased the variation in good-chip quiescent current and consequently reduced the effectiveness of IDDQ testing. This research proposes the use of a multivariate statistical technique known as principal component analysis for the purpose of variance reduction. Outlier analysis is applied to the reduced leakage current values as well as the good chip leakage current estimate, to identify defective chips. The proposed idea is evaluated using IDDQ values from multiple wafers of an industrial chip fabricated in 130 nm technology. It is shown that the proposed method achieves significant variance reduction and identifies many outliers that escape identification by other established techniques. For example, it identifies many of the absolute outliers in bad neighborhoods, which are not detected by Nearest Neighbor Residual and Nearest Current Ratio. It also identifies many of the spatial outliers that pass when using Current Ratio. The proposed method also identifies both active and passive defects.
APA, Harvard, Vancouver, ISO, and other styles
19

Caples, Jerry Joseph. "Variance reduction and variable selection methods for Alho's logistic capture recapture model with applications to census data /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

NASRO-ALLAH, ABDELAZIZ. "Simulation de chaines de markov et techniques de reduction de la variance." Rennes 1, 1991. http://www.theses.fr/1991REN10025.

Full text
Abstract:
The need for fault-tolerant computer systems is steadily increasing. Because of the complexity and stiffness of the Markov chains modelling these systems, the quantitative evaluation of performance measures by standard Monte Carlo simulation generally requires a prohibitive amount of time. So-called variance reduction methods make it possible to mitigate this problem and to provide reasonably satisfactory answers. The aim of this thesis is to contribute to reducing the simulation time of stiff and/or complex Markovian models. First, several variance reduction methods suited to Markovian models are studied. Three new techniques are then proposed. The first two lead to a significant reduction in simulation time compared with standard simulation, and the third yields a clear improvement over the most recent techniques. All of these methods are intended for the simulation of steady-state performance measures. In the transient case, we have also obtained an algorithm that is more efficient than standard simulation. Finally, this work concluded with the development of a simulation software package integrating the most recent variance reduction techniques and the algorithms developed in this thesis. On the graphical side, this software includes an editor and supports certain graphical outputs.
APA, Harvard, Vancouver, ISO, and other styles
21

Pupashenko, Mykhailo [Verfasser], and Ralf [Akademischer Betreuer] Korn. "Variance Reduction Procedures for Market Risk Estimation / Mykhailo Pupashenko. Betreuer: Ralf Korn." Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1059109360/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.

Full text
Abstract:
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for the Monte Carlo simulation of Markov chains. This Ph.D. work intends to implement this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, dedicated among others to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so that they fall within the scope of rare events analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations; for example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
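
The core AMS mechanism, repeatedly killing the lowest-scoring trajectory and re-branching it from a surviving one while accumulating the splitting factors, can be sketched on an invented rare event (a drifting random walk exceeding a level). This toy sketch is not the TRIPOLI-4 implementation and ignores branching processes and on-the-fly scoring.

import numpy as np

rng = np.random.default_rng(7)

# Rare event: a random walk with negative drift exceeds level a within T steps.
T, drift, a = 100, -0.25, 20.0
n_particles, max_iters = 200, 10_000

def simulate(start, steps):
    return start + np.cumsum(drift + rng.normal(size=steps))

def fresh_path():
    return np.concatenate(([0.0], simulate(0.0, T)))

paths = [fresh_path() for _ in range(n_particles)]
scores = np.array([path.max() for path in paths])     # score = running maximum
p_est = 1.0

for _ in range(max_iters):
    z = scores.min()
    if z >= a:
        break
    killed = np.flatnonzero(scores <= z)
    survivors = np.flatnonzero(scores > z)
    if survivors.size == 0:                            # extinction: estimate is zero
        p_est = 0.0
        break
    p_est *= (n_particles - killed.size) / n_particles
    for i in killed:
        donor = paths[rng.choice(survivors)]
        tau = int(np.argmax(donor > z))                # first index where the donor exceeds z
        new_path = np.concatenate((donor[:tau + 1], simulate(donor[tau], T - tau)))
        paths[i] = new_path
        scores[i] = new_path.max()

print("AMS estimate of P(max over T steps >= a):", p_est)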
APA, Harvard, Vancouver, ISO, and other styles
23

Åberg, K. Magnus. "Variance Reduction in Analytical Chemistry : New Numerical Methods in Chemometrics and Molecular Simulation." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.

Full text
Abstract:

This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods.

Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules.

Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments.

Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV.

Paper V addresses a generic problem in classification; namely, how to measure the goodness of different data representations, so that the best classifier may be constructed.

Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects on variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data are affected by the method of data analysis (papers III-V).

APA, Harvard, Vancouver, ISO, and other styles
24

Åberg, K. Magnus. "Variance reduction in analytical chemistry : new numerical methods in chemometrics and molecular simulation /." Stockholm : Institutionen för analytisk kemi, Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.

Full text
Abstract:
In this thesis, the credit worthiness of a company is modelled using a stochastic process. Two credit models are considered; Merton's model, which models the value of a firm's assets using geometric Brownian motion, and the distance to default model, which is driven by a two factor jump diffusion process. The probability of default and the default time are simulated using Monte Carlo and the number of scenarios needed to obtain convergence in the simulations is investigated. The simulations are performed using the probability matrix method (PMM), which means that a transition probability matrix describing the process is created and used for the simulations. Besides this, two variance reduction techniques are investigated; importance sampling and antithetic variates.
APA, Harvard, Vancouver, ISO, and other styles
26

Besirevic, Edin, and Anders Dahl. "Variance reduction of product parameters in wire rope production by optimisation of process parameters." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63634.

Full text
Abstract:
The usage of statistical methods in the production industry has resulted in quality improvements for several organisations during the last decade. However, advanced statistical methods are still underutilised and underappreciated in quality improvement programs and projects in many companies. Therefore it is of interest to investigate how these methods can be used for quality improvements in the production industry. A case study was conducted at Teufelberger’s wire rope production plant in Wels, Austria. Wire rope type BS 909 was studied by utilising the arsenal of tools and methods that Six Sigma provides, with an emphasis on statistical methods -- especially Design of Experiments. Teufelberger is currently (2016) experiencing diameter issues along the rope and it has been found through customer reclamations and quality controls in the production that the variation in a production lot can be substantial. Furthermore, there is no master setting of process parameters and each operator has their own way of setting and adjusting them, as there are different ways to achieve a product which is within given tolerances. The purpose of this study is to investigate how statistical tools can be applied to minimise variance in a Teufelberger rope manufacturing process, by conducting a case study utilising the quality improvement methodology DMAIC. Experiments were conducted in the following four sub processes which produce input components used during the manufacturing of BS909: KL-A, KL-B, IL and Al. In KL-A the following main effects were identified as significant: Postformers Spin and Postformers Diameter. In KL-B the main effect Postformers Spin was significant. In IL the following main effects were identified as significant: Compacting device, Postformers Spin and Postformers Diameter. In AL the main effect Compacting device was significant. Based on the conclusion derived from analysing these experiments theoretical master setups were created in order to address the variance issue and standardise process parameters. Further verification testing must be conducted in order to verify and tune the proposed master setups before they can be utilised.
APA, Harvard, Vancouver, ISO, and other styles
27

Hall, Nathan E. "A Radiographic Analysis of Variance in Lower Incisor Enamel Thickness." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/887.

Full text
Abstract:
The purpose of this study was to help predict the enamel thickness of mandibular incisors. At least two direct digital periapical radiographs were made for each of the 80 subjects. Radiographs were scaled to control for magnification errors using dental study models and computer software. Mesiodistal incisor width and mesial and distal enamel thicknesses were measured. Lateral incisors were determined to be wider than central incisors and distal enamel thicknesses were larger than mesial enamel thicknesses on average. The African American group demonstrated wider incisors and enamel thicknesses than the Caucasian group on average. Enamel thickness positively correlated with tooth width for all incisors. No statistically significant differences were detected between male and female groups. Some conclusions relating to enamel thickness can be made based on race, incisor position, and incisor width, but correlations were not considered strong enough to accurately determine enamel width, without the aid of radiographs.
APA, Harvard, Vancouver, ISO, and other styles
28

Van der Walt de Kock, Marisa. "Variance reduction techniques for MCNP applied to PBMR / by Marisa van der Walt de Kock." Thesis, North-West University, 2009. http://hdl.handle.net/10394/3841.

Full text
Abstract:
The applicability of the Monte Carlo N-Particle code (MCNP) to evaluate reactor shielding applications is greatly improved through the use of variance reduction techniques. This study deals with the analysis of variance reduction methods, more specifically, variance reduction methods applied in MCNP such as weight windows, geometry splitting and source biasing consistent with weight windows. Furthermore, different cases are presented to show how to improve the Figure of Merit (FOM) of an MCNP calculation when weight windows and source biasing consistent with weight windows are used. Various methodologies to generate weight windows are clearly defined in this dissertation. All the above-mentioned concepts are used to analyse a system similar to the upper part of the Pebble Bed Modular Reactor’s (PBMR) bottom reflector.
Thesis (M.Sc. (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2009.
APA, Harvard, Vancouver, ISO, and other styles
29

Ressler, Richard L. "An investigation of nonlinear controls and regression-adjusted estimators for variance reduction in computer simulation." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Zhijian. "On Applications of Semiparametric Methods." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1534258291194968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Vasiloglou, Nikolaos. "Isometry and convexity in dimensionality reduction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28120.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: David Anderson; Committee Co-Chair: Alexander Gray; Committee Member: Anthony Yezzi; Committee Member: Hongyuan Zha; Committee Member: Justin Romberg; Committee Member: Ronald Schafer.
APA, Harvard, Vancouver, ISO, and other styles
32

Solomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.

Full text
Abstract:
Doctor of Philosophy
Department of Mechanical and Nuclear Engineering
J. Kenneth Shultis
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D 3-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction. However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
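
The figure of merit and its inverse, the calculation cost, referred to above are simple functions of the relative error and run time. The sketch below computes them from invented per-history tally scores and invented run times; it is not an MCNP5 calculation, only the FOM arithmetic.

import numpy as np

rng = np.random.default_rng(8)

# Hypothetical per-history tally scores from two runs of the same problem,
# one analog and one with (imagined) weight windows, plus their run times.
analog_scores = rng.exponential(1.0, 200_000) * (rng.random(200_000) < 0.01)
ww_scores = rng.exponential(0.01, 200_000)               # same mean, lower spread
runs = {"analog": (analog_scores, 30.0), "weight windows": (ww_scores, 55.0)}

for name, (x, minutes) in runs.items():
    mean = x.mean()
    rel_err = x.std(ddof=1) / (np.sqrt(x.size) * mean)    # R, the relative error
    fom = 1.0 / (rel_err**2 * minutes)                    # figure of merit
    print(f"{name:15s} mean = {mean:.5f}  R = {rel_err:.4f}  "
          f"FOM = {fom:.1f}  cost = 1/FOM = {1.0 / fom:.2e}")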
APA, Harvard, Vancouver, ISO, and other styles
33

Ramström, Alexander. "Pricing of European and Asian options with Monte Carlo simulations : Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Nationalekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-145942.

Full text
Abstract:
This thesis evaluates the accuracy of different models for option pricing by Monte Carlo simulation when changing parameter values and the number of simulations. By simulating the asset movements thousands of times and using well-established theory, one can approximate the price of one-year financial options and, for the European options, also compare it to the price from the exact Black-Scholes pricing formula. The models in this thesis comprise two direct variance-reduction models, a low-discrepancy model, and the standard model. The results show that the models that control the generation of random numbers give the best estimates of the price, with Quasi-MC performing better than the others. A hybrid model was constructed from two established models, and it proved accurate in estimating the option price, even beating Quasi-MC most of the time.
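
A low-discrepancy estimate of a European call can be contrasted with standard Monte Carlo in a few lines. The sketch below uses scrambled Sobol points from scipy and invented option parameters; it is only a generic quasi-MC illustration, not the thesis's models or Hybrid method.

import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(9)

# One-year European call under risk-neutral GBM (illustrative parameters only).
S0, K, r, sigma, T = 100.0, 110.0, 0.01, 0.25, 1.0
disc = np.exp(-r * T)

def price_from_normals(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return disc * np.maximum(ST - K, 0.0)

m = 13                                   # 2**13 = 8192 paths
n = 2**m

# Standard Monte Carlo.
standard = price_from_normals(rng.normal(size=n)).mean()

# Quasi-Monte Carlo: scrambled Sobol points mapped through the normal inverse CDF.
u = qmc.Sobol(d=1, scramble=True, seed=9).random_base2(m)
quasi = price_from_normals(norm.ppf(u[:, 0])).mean()

# Black-Scholes reference.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
exact = S0 * norm.cdf(d1) - K * disc * norm.cdf(d1 - sigma * np.sqrt(T))

print(f"standard MC        : {standard:.4f}")
print(f"quasi-MC (Sobol)   : {quasi:.4f}")
print(f"Black-Scholes exact: {exact:.4f}")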
APA, Harvard, Vancouver, ISO, and other styles
34

Landon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared with good agreement to simple analytical solutions, and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored and our final simulation method, (VR)2-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
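
The correlated-difference idea described above (estimate the non-equilibrium result as a known equilibrium value plus the difference between two correlated simulations) can be shown on a toy pair of quantities driven by common random numbers. This is a generic sketch, not the DSMC/BGK particle code; the toy observables and the small perturbation are invented.

import numpy as np

rng = np.random.default_rng(10)

# Toy stand-in: an "equilibrium" quantity with known mean 1, and a
# "non-equilibrium" quantity that differs from it only slightly (low signal).
eps = 1e-3
n, reps = 5_000, 400

def direct():
    x = rng.normal(size=n)
    return np.mean(x**2 + eps * x**4)           # simulate the non-equilibrium case alone

def variance_reduced():
    x = rng.normal(size=n)                       # common random numbers
    noneq = x**2 + eps * x**4
    eq = x**2                                    # correlated equilibrium simulation
    return 1.0 + np.mean(noneq - eq)             # known equilibrium mean + difference

for name, est in [("direct", direct), ("correlated difference", variance_reduced)]:
    vals = np.array([est() for _ in range(reps)])
    print(f"{name:22s} mean = {vals.mean():.6f}   std dev = {vals.std(ddof=1):.2e}")
print("exact value            ", 1.0 + 3.0 * eps)    # E[x^2] = 1, E[x^4] = 3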
by Colin Donald Landon.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
35

Loggins, William Conley 1953. "The development and evaluation of an expert system for identification of variance reduction techniques in simulation." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277131.

Full text
Abstract:
The subject of this thesis is the development of an expert system to offer advice on variance reduction technique (VRT) selection in simulation. Simulation efficiency is increased by appropriate use of variance reduction techniques. The process of selecting VRTs brings a sharper focus to issues of experimental design and thus to the very purpose and objectives to be attained by the simulation. Students in the University of Arizona Systems and Industrial Engineering Department graduate courses are the intended users of this expert system, with the expectation that their practice of simulation will be facilitated.
APA, Harvard, Vancouver, ISO, and other styles
36

Leblond, Rémi. "Asynchronous optimization for machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE057/document.

Full text
Abstract:
The impressive breakthroughs of the last two decades in the field of machine learning can be in large part attributed to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enable correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state-of-art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm for RNNs – based on maximum likelihood estimation (MLE) – suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the “learning to search” approach to structured prediction. SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work
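As a rough illustration of the variance-reduced gradient update that Saga-type methods rely on, the following Python sketch implements a plain sequential SAGA step for a ridge-regularized least-squares problem. The asynchronous, sparse Asaga/ProxAsaga variants analyzed in the thesis are considerably more involved; all function names, data and parameters below are illustrative only.

import numpy as np

def saga_least_squares(A, b, lam=0.1, step=0.01, epochs=20, seed=0):
    """Minimal sequential SAGA sketch for (1/2n)||Ax - b||^2 + (lam/2)||x||^2.
    Illustrative only; not the asynchronous/sparse variant from the thesis."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Table of the most recently evaluated per-sample gradients.
    grad_table = (A * (A @ x - b)[:, None]) + lam * x   # shape (n, d)
    grad_avg = grad_table.mean(axis=0)
    for _ in range(epochs * n):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i]) + lam * x       # fresh gradient of sample i
        # Variance-reduced direction: new gradient minus stored gradient plus running average.
        x -= step * (g_new - grad_table[i] + grad_avg)
        grad_avg += (g_new - grad_table[i]) / n          # keep the average consistent
        grad_table[i] = g_new
    return x

# Tiny usage check on synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=200)
print(saga_least_squares(A, b))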
APA, Harvard, Vancouver, ISO, and other styles
37

Schaden, Daniel [Verfasser], Elisabeth [Akademischer Betreuer] Ullmann, Elisabeth [Gutachter] Ullmann, Stefan [Gutachter] Vandewalle, and Benjamin [Gutachter] Peherstorfer. "Variance reduction with multilevel estimators / Daniel Schaden ; Gutachter: Elisabeth Ullmann, Stefan Vandewalle, Benjamin Peherstorfer ; Betreuer: Elisabeth Ullmann." München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1234149133/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Park, Misook. "Design and Analysis Methods for Cluster Randomized Trials with Pair-Matching on Baseline Outcome: Reduction of Treatment Effect Variance." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/2195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chin, Philip Allen. "Sensor Networks: Studies on the Variance of Estimation, Improving Event/Anomaly Detection, and Sensor Reduction Techniques Using Probabilistic Models." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/33645.

Full text
Abstract:
Sensor network performance is governed by the physical placement of sensors and their geometric relationship to the events they measure. To illustrate this, the entirety of this thesis covers the following interconnected subjects: 1) graphical analysis of the variance of the estimation error caused by physical characteristics of an acoustic target source and its geometric location relative to sensor arrays, 2) event/anomaly detection method for time aggregated point sensor data using a parametric Poisson distribution data model, 3) a sensor reduction or placement technique using Bellman optimal estimates of target agent dynamics and probabilistic training data (Goode, Chin, & Roan, 2011), and 4) transforming event monitoring point sensor data into event detection and classification of the direction of travel using a contextual, joint probability, causal relationship, sliding window, and geospatial intelligence (GEOINT) method.
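As a loose illustration of the Poisson-based event/anomaly detection idea mentioned above (the fitted model, threshold and data below are hypothetical and not taken from the thesis), one might flag time-aggregated counts that are improbable under a Poisson model fitted to baseline data:

import numpy as np
from scipy.stats import poisson

def flag_anomalies(counts, baseline_counts, alpha=0.01):
    """Flag time bins whose counts are unlikely under a Poisson model
    fitted to baseline data. Illustrative sketch only."""
    lam = np.mean(baseline_counts)              # MLE of the Poisson rate
    threshold = poisson.ppf(1.0 - alpha, lam)   # upper-tail cutoff
    return np.asarray(counts) > threshold

baseline = np.random.default_rng(0).poisson(4.0, size=1000)
new_counts = [3, 5, 4, 15, 2]                   # the value 15 should be flagged
print(flag_anomalies(new_counts, baseline))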
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
40

Sampson, Andrew. "Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3063.

Full text
Abstract:
This dissertation describes the application of two principled variance reduction strategies to increase efficiency for two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied for two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between the highly correlated homogeneous and heterogeneous geometries. The particle transport in the heterogeneous geometry assumed a purely homogeneous environment, and altered particle weights accounted for the bias. Average gains of 37 and 60 are reported from using CMC, relative to un-correlated Monte Carlo (UMC) calculations, for the prostate and breast CTVs, respectively. To further increase the efficiency up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC performs CMC on a low-resolution (LR) spatial grid and then interpolates the result to a high-resolution (HR) voxel grid. The interpolated HR dose difference is then summed with a pre-computed HR homogeneous dose map. ICMC thus computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 sec and 0.39 sec, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution to the adjoint Boltzmann transport equation computed either via the discrete ordinates method (DOM) or via a MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested for a single elliptical water cylinder using a primary point source (PPS) and a phase-space source (PSS). The best gains were found by using the PSS, yielding average efficiency gains of 250 relative to non-weight-windowed MC utilizing the PPS. Furthermore, computing 360 projections on a 40 by 30 pixel grid requires only 48 min on a single CPU core, allowing clinical use via parallel processing techniques.
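The correlated Monte Carlo idea of estimating a small difference between two highly correlated quantities can be conveyed with a generic common-random-numbers toy in Python. The actual photon transport and dose scoring of the dissertation are far beyond this sketch; the two integrands below stand in for the "homogeneous" and "heterogeneous" cases and are purely illustrative.

import numpy as np

def difference_estimates(f_hom, f_het, n=100_000, seed=0):
    """Compare estimating E[f_het - f_hom] with independent samples versus
    with common random numbers (the correlated-MC idea). Toy sketch only."""
    rng = np.random.default_rng(seed)
    u_indep_1, u_indep_2 = rng.random(n), rng.random(n)
    u_common = rng.random(n)
    indep = f_het(u_indep_1) - f_hom(u_indep_2)      # uncorrelated estimator
    corr = f_het(u_common) - f_hom(u_common)         # correlated estimator
    return indep.var(ddof=1) / n, corr.var(ddof=1) / n

# Two "geometries" that differ only slightly, as with a small heterogeneity.
f_hom = lambda u: np.exp(-3.0 * u)
f_het = lambda u: np.exp(-3.2 * u)
print(difference_estimates(f_hom, f_het))   # the correlated variance is far smaller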
APA, Harvard, Vancouver, ISO, and other styles
41

Abbas-Turki, Lokman. "Calcul parallèle pour les problèmes linéaires, non-linéaires et linéaires inverses en finance." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1055/document.

Full text
Abstract:
Handling multidimensional parabolic linear, nonlinear and linear inverse problems is the main objective of this work. It is the multidimensional character of these problems that makes the use of Monte Carlo simulation methods virtually inevitable, and that also makes parallel architectures necessary. Indeed, problems dealing with a large number of assets are major resource consumers, and only parallelization is able to reduce their execution times. Consequently, the first goal of our work is to propose "appropriate" random number generators for parallel and massively parallel architectures implemented on CPU/GPU clusters. We quantify the speedup and the energy consumption of the parallel execution of a European pricing. The second objective is to reformulate the nonlinear problem of pricing American options in order to obtain the same parallelization gains as those obtained for linear problems. In addition to its suitability for parallelization, the proposed method based on Malliavin calculus has other practical advantages. Continuing with parallel algorithms, the last point of this work is dedicated to the uniqueness of the solution of some linear inverse problems in finance. This theoretical study enables the use of simple methods based on Monte Carlo.
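For reference, the linear benchmark mentioned above, European option pricing by Monte Carlo, can be sketched in a few lines of vectorized Python. The dedicated parallel random number generators and the CPU/GPU cluster implementation that are the actual subject of the thesis are not shown, and all parameter values below are placeholders.

import numpy as np

def mc_european_call(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                     n_paths=1_000_000, seed=0):
    """Plain vectorized Monte Carlo price of a European call under
    Black-Scholes dynamics. Parameters are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

print(mc_european_call())   # estimate and its standard error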
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Quan. "Risk Management of Cascading Failure in Composite Reliability of a Deregulated Power System with Microgrids." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/52980.

Full text
Abstract:
Due to power system deregulation, transmission expansion not keeping up with load growth, and the higher frequency of natural hazards resulting from climate change, major blackouts are becoming more frequent and are spreading over larger regions, entailing higher losses and costs to the economy and society of many countries in the world. Large-scale blackouts typically result from cascading failure originating from a local event, as typified by the 2003 U.S.-Canada blackout. Their mitigation in power system planning calls for the development of methods and algorithms that assess the risk of cascading failures due to relay over-tripping, short-circuits induced by overgrown vegetation, voltage sags, line and transformer overloading, transient instabilities, and voltage collapse, to cite a few. How to control the economic losses of blackouts is gaining a lot of attention among power researchers. In this research work, we develop new Monte Carlo methods and algorithms that assess and manage the risk of cascading failure in composite reliability of deregulated power systems. To reduce the large computational burden involved in the simulations, we make use of importance sampling techniques utilizing the Weibull distribution when modeling power generator outages. A further reduction in computing time is achieved by applying importance sampling together with antithetic variates. It is shown that both methods noticeably reduce the number of samples that need to be investigated while maintaining the accuracy of the results at a desirable level. With the advent of microgrids, the assessment of their benefits in power systems is becoming a prominent research topic. In this research work, we investigate their potential positive impact on power system reliability while performing an optimal coordination among three energy sources within microgrids, namely renewable energy conversion, energy storage and micro-turbine generation. This coordination is modeled when applying sequential Monte Carlo simulations, which seek the best placement and sizing of microgrids in composite reliability of a deregulated power system that minimize the risk of cascading failure leading to blackouts, subject to a fixed investment budget. The performance of the approach is evaluated on the Roy Billinton Test System (RBTS) and the IEEE Reliability Test System (RTS). Simulation results show that in both power systems, microgrids contribute to the improvement of system reliability and the decrease of the risk of cascading failure.
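The antithetic-variates ingredient mentioned above can be illustrated with a generic Python sketch on a toy expectation, unrelated to the reliability model itself; pairing each uniform draw with its mirror image produces the negative correlation that reduces variance. Both estimators below use the same total number of function evaluations.

import numpy as np

def plain_estimate(g, n=100_000, seed=1):
    """Crude Monte Carlo estimate of E[g(U)] for U ~ Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    x = g(rng.random(n))
    return x.mean(), x.std(ddof=1) / np.sqrt(n)

def antithetic_estimate(g, n_pairs=50_000, seed=0):
    """Antithetic-variates estimate of E[g(U)]. Generic sketch only."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_pairs)
    pairs = 0.5 * (g(u) + g(1.0 - u))        # average each antithetic pair
    return pairs.mean(), pairs.std(ddof=1) / np.sqrt(n_pairs)

g = lambda u: np.exp(u)                      # monotone, so antithetics help
print(plain_estimate(g))
print(antithetic_estimate(g))                # same budget, smaller standard error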
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Costaouec, Ronan. "Numerical methods for homogenization : applications to random media." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1012/document.

Full text
Abstract:
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The work of Part II focuses on the case when the material can be seen as a small random perturbation of a periodic material. We then show both numerically and theoretically that, in this case, computing the effective properties is much less costly than in the general case.
APA, Harvard, Vancouver, ISO, and other styles
44

Dehaye, Benjamin. "Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112332/document.

Full text
Abstract:
Fields such as criticality studies require the computation of certain quantities of interest in neutron physics. Two kinds of codes may be used: deterministic ones and stochastic ones. The stochastic codes do not require approximations and are thus more exact; however, they may require a lot of time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient convergence acceleration strategy in the TRIPOLI-4® code. We wish to implement the zero-variance game. To do so, the method requires computing the adjoint flux. The originality of this work is to compute the adjoint flux directly from a Monte Carlo simulation, without using external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation.
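The zero-variance game and the fission-matrix estimate of the adjoint flux are specific to TRIPOLI-4® and are not reproduced here. As a generic, hypothetical sketch of how an importance map is typically used to bias a particle population through splitting and Russian roulette, one might write:

import numpy as np

rng = np.random.default_rng(0)

def weight_window(particles, importance, w_target=1.0):
    """Generic splitting / Russian-roulette step driven by an importance map.
    'particles' is a list of (position, weight) pairs; 'importance' maps a
    position to a positive importance value. Hypothetical sketch only."""
    out = []
    for pos, w in particles:
        w_eff = w * importance(pos)
        if w_eff > 2.0 * w_target:                    # split heavy particles
            n_copies = int(w_eff / w_target)
            out.extend([(pos, w / n_copies)] * n_copies)
        elif w_eff < 0.5 * w_target:                  # roulette light particles
            survival = w_eff / w_target
            if rng.random() < survival:
                out.append((pos, w / survival))
        else:
            out.append((pos, w))
    return out

importance = lambda x: np.exp(-abs(x - 5.0))          # toy importance map
particles = [(x, 1.0) for x in np.linspace(0.0, 10.0, 11)]
print(len(weight_window(particles, importance)))      # low-importance particles are culled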
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Liu. "Stochastic Optimization in Machine Learning." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19982.

Full text
Abstract:
Stochastic optimization has received extensive attention in recent years due to its great potential for solving large-scale optimization problems. However, classical optimization algorithms and the original stochastic methods might prove inefficient because: 1) the cost per iteration is a computational challenge, and 2) the convergence rate and complexity can be poor. In this thesis, we exploit stochastic optimization across three kinds of "order" of optimization to address the problem. For stochastic zero-order optimization, we introduce a novel variance-reduction-based method under Gaussian smoothing and establish the complexity for optimizing non-convex problems. With variance reduction on both the sample space and the search space, the complexity of our algorithm is sublinear in d and is strictly better than current approaches, in both smooth and non-smooth cases. Moreover, we extend the proposed method to the mini-batch version. For stochastic first-order optimization, we consider two kinds of functions, with one finite sum and with two finite sums. For the first structure, we apply dual coordinate ascent and acceleration to propose a general scheme for a doubly accelerated stochastic method to deal with ill-conditioned problems. For the second structure, we apply variance reduction to the stochastic composition of inner and outer finite-sum functions with a large number of component functions, which significantly improves the query complexity when the number of inner component functions is sufficiently large. For stochastic second-order optimization, we study a family of stochastic trust-region and cubic-regularization methods when gradient, Hessian and function values are computed inexactly, and show that the iteration complexity to achieve $\epsilon$-approximate second-order optimality is of the same order as in previous work for which gradient and function values are computed exactly. The mild conditions on inexactness can be achieved in finite-sum minimization using random sampling.
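A plain (non-variance-reduced) version of the Gaussian-smoothing gradient estimator that underlies the zero-order part of the thesis can be sketched as follows; the variance-reduced estimators studied in the thesis additionally reuse reference points across iterations, and the objective, step size and smoothing radius below are toy choices.

import numpy as np

def gaussian_smoothed_grad(f, x, mu=1e-2, n_dirs=20, rng=None):
    """Two-point Gaussian-smoothing gradient estimator used in zeroth-order
    optimization. Plain estimator only, without the variance reduction of
    the thesis."""
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x)) / mu * u    # finite-difference along a random direction
    return g / n_dirs

f = lambda x: np.sum((x - 1.0) ** 2)            # toy smooth objective
x = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(200):                            # basic zeroth-order descent
    x -= 0.05 * gaussian_smoothed_grad(f, x, rng=rng)
print(x)                                        # should approach the vector of ones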
APA, Harvard, Vancouver, ISO, and other styles
46

Xu, Yushun. "Asymptotique suramortie de la dynamique de Langevin et réduction de variance par repondération." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2024/document.

Full text
Abstract:
This dissertation is devoted to studying two different problems: the over-damped asymptotics of Langevin dynamics, and a new variance reduction technique based on an optimal reweighting of samples. In the first problem, the convergence in distribution of Langevin processes in the over-damped asymptotic is proven. The proof relies on the classical perturbed test function (or corrector) method, which is used (i) to show tightness in path space, and (ii) to identify the extracted limit with a martingale problem. The result holds assuming the continuity of the gradient of the potential energy and a mild control of the initial kinetic energy. In the second problem, we devise methods of variance reduction for the Monte Carlo estimation of an expectation of the type E[φ(X, Y)], when the distribution of X is exactly known. The key idea is to give each individual sample a weight, so that the resulting weighted empirical distribution has a marginal with respect to the variable X as close as possible to its target. We prove several theoretical results on the method, identifying settings where the variance reduction is guaranteed, and also illustrate the use of the weighting method on a Langevin stochastic differential equation. We perform numerical tests comparing the methods and demonstrating their efficiency.
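The reweighting idea, giving each sample a weight so that the weighted empirical marginal of X matches its known target, can be illustrated with a simplified moment-matching scheme in Python. This is only in the spirit of the method, not the optimal weighting analyzed in the thesis; phi, the data and the constraints below are invented for the example.

import numpy as np

def moment_matching_weights(x, target_mean):
    """Give each sample a weight, close to uniform, such that the weighted
    sample mean of X matches its exactly known mean. Simplified sketch of
    reweighting-based variance reduction."""
    n = x.size
    w0 = np.full(n, 1.0 / n)
    C = np.vstack([np.ones(n), x])           # constraints: sum w = 1, sum w*x = known mean
    c = np.array([1.0, target_mean])
    # Minimum-norm correction of the uniform weights that enforces C w = c.
    return w0 + C.T @ np.linalg.solve(C @ C.T, c - C @ w0)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                   # X ~ N(0,1), mean known exactly
y = x + rng.normal(scale=0.5, size=x.size)
phi = np.exp(0.3 * x) * np.cos(y)             # toy phi(X, Y)
w = moment_matching_weights(x, target_mean=0.0)
print(phi.mean(), np.sum(w * phi))            # plain versus weighted estimate of E[phi(X,Y)]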
APA, Harvard, Vancouver, ISO, and other styles
47

Larsson, Stefan. "Mixing Processes for Ground Improvement by Deep Mixing." Doctoral thesis, KTH, Civil and Architectural Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3667.

Full text
Abstract:

The thesis deals with mixing processes having application to ground improvement by deep mixing. The main objective of the thesis is to contribute to knowledge of the basic mechanisms of mixing binding agents into soil and to improve the knowledge concerning factors that influence the uniformity of stabilised soil.

A great part of the work consists of a literature survey with particular emphasis on literature from the process industries. This review forms a basis for a profound description and discussion of the mixing process and the factors affecting the process in connection with deep mixing methods.

The thesis presents a method for a simple field test for the study of influential factors in the mixing process. A number of factors in the installation process of lime-cement columns have been studied in two field tests using statistical multifactor experiment design. The effects of retrieval rate, number of mixing blades, rotation speed, air pressure in the storage tank, and diameter of the binder outlet on the stabilisation effect and on the coefficient of variation determined by hand-operated penetrometer tests for excavated lime-cement columns were studied.

The literature review, the description of the mixing process, and the results from the field tests provide a more balanced picture of the mixing process and are expected to be useful in connection with ground improvement projects and the development of mixing equipment.

The concept of sufficient mixture quality, i.e. the interaction between the mixing process and the mechanical system, is discussed in the last section. By means of geostatistical methods, the analysis considers the volume-variability relationship with reference to strength properties. According to the analysis, the design values for strength properties depend on the mechanical system, the scale of scrutiny, the spatial correlation structure, and the concept of safety, i.e. the concept of sufficient mixture quality is problem specific.

Key words: Deep Mixing, Lime cement columns, Mixing mechanisms, Mixture quality, Field test, ANOVA, Variance reduction.

APA, Harvard, Vancouver, ISO, and other styles
48

Luna, René Edgardo. "Mathematical modelling and simulation for tumour growth and angiogenesis." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85077.

Full text
Abstract:
Cancer is a complex illness that affects millions of people every year. Amongst the most frequently encountered variants of this illness are solid tumours. The growth of solid tumours depends on a large number of factors such as oxygen concentration, cell reproduction, cell movement, cell death, and the vascular environment. The aim of this thesis is to provide further insight into the interconnections between these factors by means of numerical simulations. We present a multiscale model for tumour growth by coupling a microscopic, agent-based model for normal and tumour cells with macroscopic mean-field models for oxygen and extracellular concentrations. We assume the cell movement to be dominated by Brownian motion. The temporal and spatial evolution of the oxygen concentration is governed by a reaction-diffusion equation that mimics a balance law. To complement this macroscopic oxygen evolution with microscopic information, we propose a lattice-free approach that extends the vascular distribution of oxygen. We employ a Markov chain to estimate the sprout probability of new vessels. The extension of the new vessels is modeled by enhancing the agent-based cell model with chemotactic sensitivity. Our results include finite-volume discretizations of the resulting partial differential equations and suitable approaches to approximate the stochastic differential equations governing the agent-based motion. We provide a simulation framework that evaluates the effect of the various parameters on, for instance, the spread of oxygen. We also show results of numerical experiments where we allow new vessels to sprout, i.e. we explore angiogenesis. In the case of a static vasculature, we simulate the full multiscale model using a coupled stochastic/deterministic discretization approach which is able to reduce variance, at least for a chosen computable indicator, leading to improved efficiency and a potential increase in the reliability of models of this type.
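As a minimal sketch of the Brownian, agent-based cell movement assumed in the model (an Euler-Maruyama step with made-up diffusivity and time step; chemotaxis and the oxygen coupling are omitted), one could write:

import numpy as np

def brownian_cell_step(positions, dt=1e-3, diffusivity=0.05, rng=None):
    """One Euler-Maruyama step for agents whose movement is dominated by
    Brownian motion. Sketch only; parameters are invented."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(positions.shape)
    return positions + np.sqrt(2.0 * diffusivity * dt) * noise

rng = np.random.default_rng(0)
cells = rng.uniform(0.0, 1.0, size=(100, 2))     # 100 cells in a unit square
for _ in range(50):
    cells = brownian_cell_step(cells, rng=rng)
print(cells.mean(axis=0), cells.std(axis=0))     # cloud spreads around its initial centre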
APA, Harvard, Vancouver, ISO, and other styles
49

Burešová, Jana. "Oceňování derivátů pomocí Monte Carlo simulací." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-11476.

Full text
Abstract:
Pricing of more complex derivatives is very often based on Monte Carlo simulations. The estimates given by these simulations are derived from thousands of scenarios for the development of the underlying asset price. These estimates can be made more precise by increasing the number of scenarios or by the modifications of the simulation discussed in this master thesis. The first part of the thesis gives a theoretical description of variance reduction techniques; the second part implements all of the techniques in pricing a barrier option and compares them. We conclude the thesis with two statements: the first says that the usefulness of each technique depends on the specifics of the simulation, and the second recommends using MC simulations even in cases where a closed-form formula has been derived.
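One standard modification of the kind compared in the thesis, a control variate built from the plain European payoff whose Black-Scholes price is known in closed form, can be sketched as follows for an up-and-out call. All market parameters are placeholders, and the exact set of techniques implemented in the thesis is not reproduced here.

import numpy as np
from scipy.stats import norm

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call (the control's mean)."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d1 - sigma * np.sqrt(t))

def up_and_out_call_cv(s0=100.0, k=100.0, barrier=130.0, r=0.05, sigma=0.2, t=1.0,
                       n_steps=100, n_paths=50_000, seed=0):
    """Monte Carlo price of an up-and-out call with the vanilla call as a
    control variate. Illustrative sketch; parameters are placeholders."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    knocked_out = (s >= barrier).any(axis=1)
    disc = np.exp(-r * t)
    barrier_payoff = disc * np.where(knocked_out, 0.0, np.maximum(s[:, -1] - k, 0.0))
    vanilla_payoff = disc * np.maximum(s[:, -1] - k, 0.0)        # control variate
    beta = np.cov(barrier_payoff, vanilla_payoff)[0, 1] / vanilla_payoff.var(ddof=1)
    adjusted = barrier_payoff - beta * (vanilla_payoff - bs_call(s0, k, r, sigma, t))
    return barrier_payoff.mean(), adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(n_paths)

print(up_and_out_call_cv())   # crude estimate, control-variate estimate, standard error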
APA, Harvard, Vancouver, ISO, and other styles
50

Farah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112183/document.

Full text
Abstract:
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to realize a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equation, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of realizing several spectrometry measurements with different detector positionings. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements realized at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination.
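As a schematic of the final unmixing step, recovering the contribution of each contaminated organ from measurements at several detector positions, the following Python sketch solves a small least-squares system. In practice each efficiency entry would come from a Monte Carlo simulation of the counting geometry; all numbers below are invented for illustration.

import numpy as np

# Hypothetical counting efficiencies: rows = detector positions, columns = organs
# (e.g. lungs, liver, skeleton). Values are made up, not from the thesis.
efficiency = np.array([
    [0.012, 0.003, 0.001],
    [0.004, 0.010, 0.002],
    [0.002, 0.004, 0.008],
    [0.006, 0.006, 0.003],
])

true_activity = np.array([500.0, 200.0, 80.0])        # Bq, made-up ground truth
counts = efficiency @ true_activity
counts += np.random.default_rng(0).normal(scale=0.05 * counts)   # measurement noise

# Recover the activity of each organ from the measured counts (non-negative
# least squares would be a natural refinement; plain least squares shown here).
activity_hat, *_ = np.linalg.lstsq(efficiency, counts, rcond=None)
print(activity_hat)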
APA, Harvard, Vancouver, ISO, and other styles