Dissertations / Theses on the topic 'Variance reduction'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Variance reduction.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chouinard, Hayley Helene. "Reduction of yield variance through crop insurance." Thesis, Montana State University, 1994. http://etd.lib.montana.edu/etd/1994/chouinard/ChouinardH1994.pdf.
Gagakuma, Bertelsen. "Variance Reduction in Wind Farm Layout Optimization." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7758.
Greensmith, Evan. "Policy Gradient Methods: Variance Reduction and Stochastic Convergence." The Australian National University. Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20060106.193712.
Tjärnström, Fredrik. "Variance expressions and model reduction in system identification /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek730s.pdf.
Wise, Michael Anthony. "A variance reduction technique for production cost simulation." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182181023.
Rowland, Kelly L. "Advanced Quadrature Selection for Monte Carlo Variance Reduction." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817512.
Neutral particle radiation transport simulations are critical for radiation shielding and deep penetration applications. Arriving at a solution for a given response of interest can be computationally difficult because of the magnitude of particle attenuation often seen in these shielding problems. Hybrid methods, which aim to synergize the individual favorable aspects of deterministic and stochastic solution methods for solving the steady-state neutron transport equation, are commonly used in radiation shielding applications to achieve statistically meaningful results in a reduced amount of computational time and effort. The current state of the art in hybrid calculations is the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods, which generate Monte Carlo variance reduction parameters based on deterministically-calculated scalar flux solutions. For certain types of radiation shielding problems, however, results produced using these methods suffer from unphysical oscillations in scalar flux solutions that are a product of angular discretization. These aberrations are termed “ray effects”.
The Lagrange Discrete Ordinates (LDO) equations retain the formal structure of the traditional discrete ordinates formulation of the neutron transport equation and mitigate ray effects at high angular resolution. In this work, the LDO equations have been implemented in the Exnihilo parallel neutral particle radiation transport framework, with the deterministic scalar flux solutions passed to the Automated Variance Reduction Generator (ADVANTG) software and the resultant Monte Carlo variance reduction parameters’ efficacy assessed based on results from MCNP5. Studies were conducted in both the CADIS and FW-CADIS contexts, with the LDO equations’ variance reduction parameters seeing their best performance in the FW-CADIS method, especially for photon transport.
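The deep-penetration setting described in this abstract can be illustrated with a deliberately simple sketch of importance sampling for slab transmission: analog sampling almost never scores, while sampling free paths from a "stretched" exponential and carrying a likelihood-ratio weight keeps the estimator unbiased at much lower variance. This is a generic illustration only (all parameter values are invented), not the CADIS/FW-CADIS machinery itself:

```python
import math
import random

def analog_estimate(sigma, depth, n, rng):
    # Analog Monte Carlo: score 1 when a sampled free path exceeds the slab depth.
    hits = sum(1 for _ in range(n) if rng.expovariate(sigma) > depth)
    return hits / n

def importance_estimate(sigma, sigma_b, depth, n, rng):
    # Biased sampling from a "stretched" exponential (rate sigma_b < sigma);
    # the likelihood ratio w = f(x)/g(x) keeps the estimator unbiased.
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(sigma_b)
        if x > depth:
            total += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
    return total / n

rng = random.Random(42)
exact = math.exp(-5.0)                      # transmission probability, about 6.7e-3
a = analog_estimate(1.0, 5.0, 20000, rng)
b = importance_estimate(1.0, 0.2, 5.0, 20000, rng)
```

In CADIS-style hybrid methods the biasing parameters are derived from a deterministic adjoint solution rather than chosen by hand, but the weight bookkeeping follows the same principle.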
Greensmith, Evan. "Policy gradient methods : variance reduction and stochastic convergence /." View thesis entry in Australian Digital Theses Program, 2005. http://thesis.anu.edu.au/public/adt-ANU20060106.193712/index.html.
Yang, Yani. "Dimension reduction in the regressions through weighted variance estimation." HKBU Institutional Repository, 2009. http://repository.hkbu.edu.hk/etd_ra/1073.
Höök, Lars Josef. "Variance reduction methods for numerical solution of plasma kinetic diffusion." Licentiate thesis, KTH, Fusionsplasmafysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91332.
Full textQC 20120314
Aghedo, Maurice Enoghayinagbon. "Variance reduction in Monte Carlo methods of estimating distribution functions." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37385.
Full textLarsson, Julia. "Optimization of option pricing : - Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Företagsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173093.
Whittle, Joss. "Quality assessment and variance reduction in Monte Carlo rendering algorithms." Thesis, Swansea University, 2018. https://cronfa.swan.ac.uk/Record/cronfa40271.
Stockbridge, Rebecca. "Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/301665.
Flaspoehler, Timothy Michael. "FW-CADIS variance reduction in MAVRIC shielding analysis of the VHTR." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45743.
Adewunmi, Adrian. "Selection of simulation variance reduction techniques through a fuzzy expert system." Thesis, University of Nottingham, 2010. http://eprints.nottingham.ac.uk/11260/.
Yang, Wei-Ning. "Multivariate estimation and variance reduction for terminating and steady-state simulation /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487672631602302.
Song, Chenxiao. "Monte Carlo Variance Reduction Methods with Applications in Structural Reliability Analysis." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29801.
Balasubramanian, Vijay. "Variance reduction and outlier identification for IDDQ testing of integrated chips using principal component analysis." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4766.
Caples, Jerry Joseph. "Variance reduction and variable selection methods for Alho's logistic capture recapture model with applications to census data /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p9992762.
Full textNASRO-ALLAH, ABDELAZIZ. "Simulation de chaines de markov et techniques de reduction de la variance." Rennes 1, 1991. http://www.theses.fr/1991REN10025.
Pupashenko, Mykhailo [Verfasser], and Ralf [Akademischer Betreuer] Korn. "Variance Reduction Procedures for Market Risk Estimation / Mykhailo Pupashenko. Betreuer: Ralf Korn." Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1059109360/34.
Louvin, Henri. "Development of an adaptive variance reduction technique for Monte Carlo particle transport." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS351/document.
The Adaptive Multilevel Splitting algorithm (AMS) has recently been introduced to the field of applied mathematics as a variance reduction scheme for Monte Carlo Markov chain simulations. This Ph.D. work implements this adaptive variance reduction method in the particle transport Monte Carlo code TRIPOLI-4, which is dedicated, among other applications, to radiation shielding and nuclear instrumentation studies. Those studies are characterized by strong radiation attenuation in matter, so that they fall within the scope of rare-event analysis. In addition to its unprecedented implementation in the field of particle transport, two new features were developed for the AMS. The first is an on-the-fly scoring procedure, designed to optimize the estimation of multiple scores in a single AMS simulation. The second is an extension of the AMS to branching processes, which are common in radiation shielding simulations. For example, in coupled neutron-photon simulations, the neutrons have to be transported alongside the photons they produce. The efficiency and robustness of AMS in this new framework have been demonstrated in physically challenging configurations (particle flux attenuations larger than 10 orders of magnitude), which highlights the promising advantages of the AMS algorithm over existing variance reduction techniques.
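As a flavor of the algorithm (a minimal sketch for a toy problem, not the TRIPOLI-4 implementation): adaptive multilevel splitting estimates a small tail probability by repeatedly killing the lowest-scoring particle, resplitting from a survivor with a Metropolis kernel that respects the current level, and multiplying the estimate by (1 - 1/n) per level crossing. Here the "score" is simply a standard normal variable and the rare event is {X > 3}; all parameter values are invented:

```python
import math
import random

def mh_above(x, level, rng, steps=20, scale=0.5):
    # Metropolis kernel leaving N(0,1) restricted to (level, inf) invariant.
    for _ in range(steps):
        y = x + scale * rng.gauss(0.0, 1.0)
        if y > level and rng.random() < math.exp(0.5 * (x * x - y * y)):
            x = y
    return x

def ams_tail_prob(q, n, rng):
    # Adaptive multilevel splitting: kill the lowest particle, resplit from a
    # survivor above the current level, and multiply the estimate by (1 - 1/n).
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    while min(xs) < q:
        i = min(range(n), key=lambda j: xs[j])
        level = xs[i]
        j = rng.choice([k for k in range(n) if k != i])
        xs[i] = mh_above(xs[j], level, rng)
        prob *= 1.0 - 1.0 / n
    return prob

rng = random.Random(7)
est = ams_tail_prob(3.0, 200, rng)          # P(N(0,1) > 3) is about 1.35e-3
```

With only 200 particles the estimate is typically within a few tens of percent of the true tail probability, far better than analog sampling at comparable cost.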
Åberg, K. Magnus. "Variance Reduction in Analytical Chemistry : New Numerical Methods in Chemometrics and Molecular Simulation." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods.
Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules.
Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments.
Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV.
Paper V addresses a generic problem in classification; namely, how to measure the goodness of different data representations, so that the best classifier may be constructed.
Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to decide which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).
Åberg, K. Magnus. "Variance reduction in analytical chemistry : new numerical methods in chemometrics and molecular simulation /." Stockholm : Institutionen för analytisk kemi, Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-283.
Järnberg, Emelie. "Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-197322.
In this thesis, the creditworthiness of a company is modelled using a stochastic process. Two credit models are considered: Merton's model, which models the value of a company's assets with a geometric Brownian motion, and "distance to default", which is driven by a two-dimensional stochastic process with both diffusion and jumps. The probability of default and the expected time of default are simulated using Monte Carlo, and the number of scenarios needed for convergence is examined. The simulation uses the "probability matrix method", in which a transition probability matrix describing the process is employed. In addition, two methods of variance reduction are examined: importance sampling and antithetic variates.
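A minimal sketch of the first of the two models mentioned above (hypothetical parameter values; plain Monte Carlo with optional antithetic draws, not the thesis's probability-matrix method): under Merton's model the firm value follows a geometric Brownian motion, and default at horizon T occurs when the terminal value falls below the face value of debt:

```python
import math
import random

def default_prob(v0, debt, mu, sigma, t, n, rng, antithetic=False):
    # Merton-style default: V_T = v0 * exp((mu - sigma^2/2) t + sigma sqrt(t) Z);
    # default occurs when V_T ends up below the face value of the debt.
    count, total = 0, 0.0
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        for zz in ((z, -z) if antithetic else (z,)):
            vt = v0 * math.exp(drift + vol * zz)
            total += 1.0 if vt < debt else 0.0
            count += 1
    return total / count

rng = random.Random(1)
p = default_prob(100.0, 70.0, 0.05, 0.25, 1.0, 50000, rng, antithetic=True)
# closed form for comparison: Phi((ln(debt/v0) - drift) / vol), about 0.0666 here
```

For indicator-type payoffs the antithetic gain is modest; importance sampling, the other technique mentioned above, targets precisely this weakness by pushing paths toward the default region.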
Besirevic, Edin, and Anders Dahl. "Variance reduction of product parameters in wire rope production by optimisation of process parameters." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63634.
Over the past decade, the use of statistical methods in the manufacturing industry has resulted in quality improvements for several organisations, but these methods are still undervalued and not fully exploited in quality improvement programmes and projects. It is therefore of interest to investigate how such methods can be used for quality improvement in manufacturing. At one of Teufelberger's wire rope production facilities in Wels, Austria, a case study was carried out using the DMAIC quality improvement methodology. Wire rope type BS 909 was studied using the arsenal of tools and methods encompassed by Six Sigma, with emphasis on statistical methods and in particular design of experiments. Teufelberger was at the time having problems primarily with the diameter of the wire rope. Customer complaints and quality controls in production had shown that the variation within a production run can be considerable. Moreover, there were no documented optimal process parameter settings, so each machine operator had his or her own way of setting and adjusting the process parameters. This is possible because different combinations of parameter settings can yield a product within the given tolerances. The purpose of this study is to investigate how statistical tools can be used to minimise the variance in a wire rope manufacturing process at Teufelberger, by carrying out a case study with the DMAIC quality improvement methodology. Experiments were performed in the following four processes, which produce components used in the manufacture of BS 909: KL-A, KL-B, IL and AL. In the KL-A process, the following main effects were identified as active: Postformers-Spin and Postformers-Diameter. The only main effect identified as active for KL-B was Postformers-Spin. For IL, the following main effects were active: Compacting device, Postformers-Spin and Postformers-Diameter.
In the AL process, only the main effect Compacting device was active. Based on the results of the analysis of these experiments, new theoretically optimal settings have been calculated, which are expected to reduce the variation in the response variable diameter. The new recommended settings should for the time being serve as a new standard for production, but verification trials should nevertheless be carried out to confirm and fine-tune the settings.
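The main-effect screening described in this abstract follows the standard two-level factorial recipe: each factor's effect is the mean response at its high level minus the mean at its low level. A minimal sketch with invented response data (not the study's measurements):

```python
from itertools import product

def main_effects(design, responses):
    # Main effect of each factor in a two-level full factorial design:
    # mean response at the high (+1) level minus mean at the low (-1) level.
    k = len(design[0])
    effects = []
    for f in range(k):
        hi = [y for x, y in zip(design, responses) if x[f] == +1]
        lo = [y for x, y in zip(design, responses) if x[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

design = list(product([-1, +1], repeat=2))   # 2^2 design in standard order
y = [10.2, 11.8, 9.9, 11.7]                  # invented diameter responses
effects = main_effects(design, y)            # the second factor dominates here
```

In a real study, effects are then judged against replication noise (e.g. via ANOVA, as in the thesis) before a factor is declared active.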
Hall, Nathan E. "A Radiographic Analysis of Variance in Lower Incisor Enamel Thickness." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/887.
Van der Walt de Kock, Marisa. "Variance reduction techniques for MCNP applied to PBMR / by Marisa van der Walt de Kock." Thesis, North-West University, 2009. http://hdl.handle.net/10394/3841.
Thesis (M.Sc. (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2009.
Ressler, Richard L. "An investigation of nonlinear controls and regression-adjusted estimators for variance reduction in computer simulation." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26602.
Li, Zhijian. "On Applications of Semiparametric Methods." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1534258291194968.
Vasiloglou, Nikolaos. "Isometry and convexity in dimensionality reduction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28120.
Committee Chair: David Anderson; Committee Co-Chair: Alexander Gray; Committee Member: Anthony Yezzi; Committee Member: Hongyuan Zha; Committee Member: Justin Romberg; Committee Member: Ronald Schafer.
Solomon, Clell J. Jr. "Discrete-ordinates cost optimization of weight-dependent variance reduction techniques for Monte Carlo neutral particle transport." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/7014.
Department of Mechanical and Nuclear Engineering
J. Kenneth Shultis
A method for deterministically calculating the population variances of Monte Carlo particle transport calculations involving weight-dependent variance reduction has been developed. This method solves a set of equations developed by Booth and Cashwell [1979], but extends them to consider the weight-window variance reduction technique. Furthermore, equations that calculate the duration of a single history in an MCNP5 (RSICC version 1.51) calculation have been developed as well. The calculation cost, defined as the inverse figure of merit, of a Monte Carlo calculation can be deterministically minimized from calculations of the expected variance and expected calculation time per history. The method has been applied to one- and two-dimensional multi-group and mixed-material problems for optimization of weight-window lower bounds. With the adjoint (importance) function as a basis for optimization, an optimization mesh is superimposed on the geometry. Regions of weight-window lower bounds contained within the same optimization mesh element are optimized together with a scaling parameter. Using this additional optimization mesh restricts the size of the optimization problem, thereby eliminating the need to optimize each individual weight-window lower bound. Application of the optimization method to a one-dimensional problem, designed to replicate the variance reduction iron-window effect, obtains a gain in efficiency by a factor of 2 over standard deterministically generated weight windows. The gain in two-dimensional problems varies. For a 2-D block problem and a 2-D two-legged duct problem, the efficiency gain is a factor of about 1.2. The top-hat problem sees an efficiency gain of 1.3, while a 2-D 3-legged duct problem sees an efficiency gain of only 1.05. This work represents the first attempt at deterministic optimization of Monte Carlo calculations with weight-dependent variance reduction.
However, the current work is limited in the size of problems that can be run by the amount of computer memory available in computational systems. This limitation results primarily from the added discretization of the Monte Carlo particle weight required to perform the weight-dependent analyses. Alternate discretization methods for the Monte Carlo weight should be a topic of future investigation. Furthermore, the accuracy with which the MCNP5 calculation times can be calculated deterministically merits further study.
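The efficiency gains quoted in this abstract are ratios of the Monte Carlo figure of merit, FOM = 1/(R^2 T), where R is the relative error and T the run time; since cost is the inverse FOM, minimizing cost is the same as maximizing FOM. A small sketch with invented numbers:

```python
def figure_of_merit(rel_error, time_s):
    # FOM = 1 / (R^2 * T): roughly constant in T for a given Monte Carlo
    # setup, so a higher FOM means a cheaper calculation at fixed error.
    return 1.0 / (rel_error ** 2 * time_s)

def efficiency_gain(fom_new, fom_old):
    # Cost is the inverse FOM, so the efficiency gain is the FOM ratio.
    return fom_new / fom_old

base = figure_of_merit(0.010, 600.0)   # invented analog run
opt = figure_of_merit(0.008, 470.0)    # invented optimized weight-window run
gain = efficiency_gain(opt, base)      # about a factor of 2
```

The FOM is the standard yardstick for comparing variance reduction schemes because halving the error at fixed time and halving the time at fixed error both double it.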
Ramström, Alexander. "Pricing of European and Asian options with Monte Carlo simulations : Variance reduction and low-discrepancy techniques." Thesis, Umeå universitet, Nationalekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-145942.
Landon, Colin Donald. "Weighted particle variance reduction of Direct Simulation Monte Carlo for the Bhatnagar-Gross-Krook collision operator." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61882.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67-69).
Direct Simulation Monte Carlo (DSMC), the prevalent stochastic particle method for high-speed rarefied gas flows, simulates the Boltzmann equation using distributions of representative particles. Although very efficient in producing samples of the distribution function, the slow convergence associated with statistical sampling makes DSMC simulation of low-signal situations problematic. In this thesis, we present a control-variate-based approach to obtain a variance-reduced DSMC method that dramatically enhances statistical convergence for low-signal problems. Here we focus on the Bhatnagar-Gross-Krook (BGK) approximation, which, as we show, exhibits special stability properties. The BGK collision operator, an approximation common in a variety of fields involving particle-mediated transport, drives the system towards a local equilibrium at a prescribed relaxation rate. Variance reduction is achieved by formulating desired (non-equilibrium) simulation results in terms of the difference between a non-equilibrium and a correlated equilibrium simulation. Subtracting the two simulations results in substantial variance reduction, because the two simulations are correlated. Correlation is achieved using likelihood weights which relate the relative probability of occurrence of an equilibrium particle compared to a non-equilibrium particle. The BGK collision operator lends itself naturally to the development of unbiased, stable weight evaluation rules. Our variance-reduced solutions are compared, with good agreement, to simple analytical solutions, and to solutions obtained using a variance-reduced BGK-based particle method that does not resemble DSMC as strongly. A number of algorithmic options are explored and our final simulation method, (VR)²-BGK-DSMC, emerges as a simple and stable version of DSMC that can efficiently resolve arbitrarily low-signal flows.
by Colin Donald Landon.
S.M.
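The control-variate idea at the heart of the abstract above, subtracting a correlated quantity whose expectation is known, can be shown in a scalar toy problem (a generic sketch, unrelated to the DSMC implementation): estimate E[exp(X)] for X ~ N(0,1) using X itself, whose mean is known to be 0, as the control:

```python
import math
import random

def cv_estimate(n, rng):
    # Control variates: estimate E[exp(X)], X ~ N(0,1), subtracting the
    # correlated control g(X) = X whose mean E[g] = 0 is known exactly.
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    fs = [math.exp(x) for x in xs]
    fbar = sum(fs) / n
    xbar = sum(xs) / n
    cov = sum((f - fbar) * (x - xbar) for f, x in zip(fs, xs)) / (n - 1)
    var = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    beta = cov / var                    # estimated optimal coefficient
    return fbar - beta * (xbar - 0.0)   # 0.0 is the known E[X]

rng = random.Random(3)
est = cv_estimate(20000, rng)           # exact value is e^0.5, about 1.6487
```

The estimated optimal coefficient beta = cov/var removes the correlated part of the fluctuations; for this toy problem it cuts the variance by roughly a factor of 2.4, and the gain grows with the correlation between target and control.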
Loggins, William Conley 1953. "The development and evaluation of an expert system for identification of variance reduction techniques in simulation." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277131.
Leblond, Rémi. "Asynchronous optimization for machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE057/document.
The impressive breakthroughs of the last two decades in the field of machine learning can be in large part attributed to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the “learning to search” approach to structured prediction.
SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work.
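Saga, the sequential ancestor of the Asaga algorithm above, keeps the last gradient evaluated for every sample and uses it to cancel most of the stochastic-gradient noise. A minimal scalar least-squares sketch (invented data; the parallel, sparse machinery of the thesis is omitted):

```python
import random

def saga(data, lam, lr, steps, rng):
    # SAGA: keep the last gradient seen for each sample; the update direction
    # g_i(w) - table[i] + mean(table) is an unbiased gradient estimate whose
    # variance vanishes at the optimum, allowing a constant step size.
    n = len(data)
    w = 0.0
    table = [0.0] * n
    g_avg = 0.0
    for _ in range(steps):
        i = rng.randrange(n)
        a, b = data[i]
        g_new = a * (a * w - b) + lam * w   # grad of 0.5*(a*w-b)^2 + lam/2*w^2
        w -= lr * (g_new - table[i] + g_avg)
        g_avg += (g_new - table[i]) / n
        table[i] = g_new
    return w

rng = random.Random(0)
data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0)]
w = saga(data, lam=0.1, lr=0.03, steps=1000, rng=rng)
# closed-form optimum: (mean of a*b) / (mean of a^2 + lam), about 1.6084
```

Plain SGD with a constant step size would stall at a noise floor; the variance-reduced correction is what permits the linear convergence the abstract refers to.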
Schaden, Daniel [Verfasser], Elisabeth [Akademischer Betreuer] Ullmann, Elisabeth [Gutachter] Ullmann, Stefan [Gutachter] Vandewalle, and Benjamin [Gutachter] Peherstorfer. "Variance reduction with multilevel estimators / Daniel Schaden ; Gutachter: Elisabeth Ullmann, Stefan Vandewalle, Benjamin Peherstorfer ; Betreuer: Elisabeth Ullmann." München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1234149133/34.
Park, Misook. "Design and Analysis Methods for Cluster Randomized Trials with Pair-Matching on Baseline Outcome: Reduction of Treatment Effect Variance." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/2195.
Chin, Philip Allen. "Sensor Networks: Studies on the Variance of Estimation, Improving Event/Anomaly Detection, and Sensor Reduction Techniques Using Probabilistic Models." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/33645.
Master of Science
Sampson, Andrew. "Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3063.
Abbas-Turki, Lokman. "Calcul parallèle pour les problèmes linéaires, non-linéaires et linéaires inverses en finance." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1055/document.
Handling multidimensional parabolic linear, nonlinear and linear inverse problems is the main objective of this work. It is the multidimensional character of these problems that makes the use of simulation methods based on Monte Carlo virtually inevitable, and that also makes the use of parallel architectures necessary. Indeed, problems dealing with a large number of assets are major resource consumers, and only parallelization is able to reduce their execution times. Consequently, the first goal of our work is to propose "appropriate" random number generators for parallel and massively parallel architectures implemented on CPU/GPU clusters. We quantify the speedup and the energy consumption of the parallel execution of a European option pricing. The second objective is to reformulate the nonlinear problem of pricing American options in order to obtain the same parallelization gains as those obtained for linear problems. In addition to its suitability for parallelization, the proposed method, based on Malliavin calculus, has other practical advantages. Continuing with parallel algorithms, the last part of this work is dedicated to the uniqueness of the solution of some linear inverse problems in finance. This theoretical study enables the use of simple methods based on Monte Carlo.
Chen, Quan. "Risk Management of Cascading Failure in Composite Reliability of a Deregulated Power System with Microgrids." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/52980.
Ph. D.
Costaouec, Ronan. "Numerical methods for homogenization : applications to random media." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1012/document.
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. Part II focuses on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that in this case computing the effective properties is much less costly than in the general case.
Dehaye, Benjamin. "Accélération de la convergence dans le code de transport de particules Monte-Carlo TRIPOLI-4® en criticité." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112332/document.
Fields such as criticality studies need to compute certain values of interest in neutron physics. Two kinds of codes may be used: deterministic ones and stochastic ones. The stochastic codes do not require approximation and are thus more exact. However, they may require a lot of time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient acceleration strategy in the TRIPOLI-4® code. We wish to implement the zero-variance game. To do so, the method requires computing the adjoint flux. The originality of this work is to compute the adjoint flux directly from a Monte Carlo simulation, without using external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation.
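The zero-variance game referred to above rests on a classical fact of importance sampling: if the sampling density is proportional to f(x)p(x), every sample carries exactly the same score, so the variance is zero; in transport problems the adjoint flux plays the role of this ideal importance function. A toy demonstration (generic, not the TRIPOLI-4® implementation):

```python
import math
import random

def zero_variance_demo(n, rng):
    # Target I = E[f(X)] with X ~ Exp(1) and f(x) = x, so I = 1 exactly.
    # The zero-variance importance density is q*(x) proportional to
    # f(x) p(x) = x e^{-x}, i.e. a Gamma(2, 1): every sample then carries
    # the identical score, and the estimator's spread collapses.
    scores = []
    for _ in range(n):
        x = rng.gammavariate(2.0, 1.0)    # sample from q*
        p = math.exp(-x)                  # Exp(1) density at x
        q = x * math.exp(-x)              # Gamma(2, 1) density at x
        scores.append(x * p / q)          # f(x) * p(x) / q*(x)
    return sum(scores) / n, max(scores) - min(scores)

rng = random.Random(5)
mean, spread = zero_variance_demo(1000, rng)
```

In practice the ideal density depends on the unknown answer, so one can only approach it; this is exactly why the thesis estimates the adjoint flux and uses it as an importance map.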
Liu, Liu. "Stochastic Optimization in Machine Learning." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/19982.
Xu, Yushun. "Asymptotique suramortie de la dynamique de Langevin et réduction de variance par repondération." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2024/document.
This dissertation is devoted to studying two different problems: the over-damped asymptotics of Langevin dynamics and a new variance reduction technique based on an optimal reweighting of samples. In the first problem, the convergence in distribution of Langevin processes in the over-damped asymptotic is proven. The proof relies on the classical perturbed test function (or corrector) method, which is used (i) to show tightness in path space, and (ii) to identify the extracted limit with a martingale problem. The result holds assuming the continuity of the gradient of the potential energy, and a mild control of the initial kinetic energy. In the second problem, we devise methods of variance reduction for the Monte Carlo estimation of an expectation of the type E[φ(X, Y)], when the distribution of X is exactly known. The key general idea is to give each individual sample a weight, so that the resulting weighted empirical distribution has a marginal with respect to the variable X as close as possible to its target. We prove several theoretical results on the method, identifying settings where the variance reduction is guaranteed, and also illustrate the use of the weighting method in a Langevin stochastic differential equation. We perform numerical tests comparing the methods and demonstrating their efficiency.
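A minimal sketch of the reweighting idea (post-stratification on X, a simple special case with invented data; the thesis constructs optimal weights more generally): when the marginal distribution of X is known exactly, samples can be reweighted so that the weighted empirical marginal of X matches it, which removes the X-related part of the Monte Carlo noise in estimating E[φ(X, Y)]:

```python
import bisect
import random

def poststratified_mean(xs, ys, edges, probs, phi):
    # Reweighting: a sample whose X falls in stratum s gets weight
    # probs[s] / n_s, so the weighted marginal of X matches the known
    # stratum probabilities exactly (assumes every stratum is hit).
    strata = [bisect.bisect_right(edges, x) - 1 for x in xs]
    counts = [strata.count(s) for s in range(len(probs))]
    return sum(probs[s] / counts[s] * phi(x, y)
               for s, x, y in zip(strata, xs, ys))

rng = random.Random(11)
xs = [rng.random() for _ in range(4000)]            # X ~ Uniform(0,1), marginal known
ys = [2.0 * x + rng.gauss(0.0, 0.1) for x in xs]    # Y depends on X plus noise
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
probs = [0.25, 0.25, 0.25, 0.25]                    # exact stratum probabilities of X
est = poststratified_mean(xs, ys, edges, probs, lambda x, y: y)   # E[Y] = 1
```

The guaranteed-reduction settings studied in the thesis generalize this: the closer the weighted marginal of X is to its target, the less the fluctuations of X contaminate the estimate.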
Larsson, Stefan. "Mixing Processes for Ground Improvement by Deep Mixing." Doctoral thesis, KTH, Civil and Architectural Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3667.
The thesis deals with mixing processes having application to ground improvement by deep mixing. The main objectives of the thesis are to contribute to knowledge of the basic mechanisms of mixing binding agents into soil and to improve the knowledge concerning factors that influence the uniformity of stabilised soil.
A great part of the work consists of a literature survey with particular emphasis on literature from the process industries. This review forms the basis for a thorough description and discussion of the mixing process and of the factors affecting the process in connection with deep mixing methods.
The thesis presents a method for a simple field test for studying influential factors in the mixing process. A number of factors in the installation process of lime-cement columns have been studied in two field tests using statistical multifactor experiment design. The effects of retrieval rate, number of mixing blades, rotation speed, air pressure in the storage tank, and diameter of the binder outlet on the stabilisation effect and on the coefficient of variation, determined by hand-operated penetrometer tests on excavated lime-cement columns, were studied.
The literature review, the description of the mixing process, and the results from the field tests provide a more balanced picture of the mixing process and are expected to be useful in connection with ground improvement projects and the development of mixing equipment.
The concept of sufficient mixture quality, i.e. the interaction between the mixing process and the mechanical system, is discussed in the last section. By means of geostatistical methods, the analysis considers the volume-variability relationship with reference to strength properties. According to the analysis, the design values for strength properties depend on the mechanical system, the scale of scrutiny, the spatial correlation structure, and the concept of safety; that is, the concept of sufficient mixture quality is problem specific.
Key words: Deep Mixing, Lime-cement columns, Mixing mechanisms, Mixture quality, Field test, ANOVA, Variance reduction.
Luna, René Edgardo. "Mathematical modelling and simulation for tumour growth and angiogenesis." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85077.
Burešová, Jana. "Oceňování derivátů pomocí Monte Carlo simulací." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-11476.
Farah, Jad. "Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112183/document.
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to realize a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were expressed as equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positionings. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination.