To see the other types of publications on this topic, follow the link: Echantillonnage et estimation Monte Carlo.

Journal articles on the topic 'Echantillonnage et estimation Monte Carlo'

Consult the top 50 journal articles for your research on the topic 'Echantillonnage et estimation Monte Carlo.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Kaixuan, Jingxian Wang, Daizong Tian, and Thrasyvoulos N. Pappas. "Film Grain Rendering and Parameter Estimation." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–14. http://dx.doi.org/10.1145/3592127.

Abstract:
We propose a realistic film grain rendering algorithm based on statistics derived analytically from a physics-based Boolean model that Newson et al. adopted for Monte Carlo simulations of film grain. We also propose formulas for estimating the model parameters from scanned film grain images. The proposed rendering is computationally efficient and can be used for real-time film grain simulation for a wide range of film grain parameters when the individual film grains are not visible. Experimental results demonstrate the effectiveness of the proposed approach for both constant and real-world images, with a speed-up of six orders of magnitude compared with the Monte Carlo simulations of the Newson et al. approach.
2

Filinov, V. S., P. R. Levashov, and A. S. Larkin. "The density–temperature range of exchange–correlation exciton existence by the fermionic path integral Monte Carlo method." Physics of Plasmas 29, no. 5 (May 2022): 052106. http://dx.doi.org/10.1063/5.0089836.

Abstract:
A recently developed fermionic path integral Monte Carlo approach [Filinov et al., Phys. Rev. E 102, 033203 (2020) and Filinov et al., J. Phys. A 55, 035001 (2021)] has been applied for the estimation of the density–temperature range of exchange–correlation exciton existence in a strongly coupled degenerate uniform electron gas. The approach allows us to reduce the “fermionic sign problem” by taking into account the interference effects of the Coulomb and exchange interaction of electrons in the basic Monte Carlo cell and its periodic images. Our results for radial distribution functions demonstrate the formation and decay of a short-range quantum ordering of electrons associated with exchange–correlation excitons in the literature. Such excitons have never been observed earlier in standard path integral Monte Carlo simulations.
3

Richard, Jean-François. "Conférence François-Albert Angers (1999). Enchères : théorie économique et réalité." Articles 76, no. 2 (February 5, 2009): 173–98. http://dx.doi.org/10.7202/602320ar.

Abstract:
ABSTRACT This article presents a synthesis of work on empirical models in game theory. The main topics covered are: structural models, identification, equilibrium solutions, solution by Monte Carlo simulation, estimation, and applications.
4

Abubakar Sadiq, Ibrahim, S. I. S. Doguwa, Abubakar Yahaya, and Abubakar Usman. "Development of New Generalized Odd Fréchet-Exponentiated-G Family of Distribution." UMYU Scientifica 2, no. 4 (December 30, 2023): 169–78. http://dx.doi.org/10.56919/usci.2324.021.

Abstract:
The study examines the limitations of existing parametric distributional models in accommodating various real-world datasets and proposes an extension termed the New Generalized Odd Fréchet-Exponentiated-G (NGOF-Et-G) family. Building upon prior work, this new distribution model aims to enhance flexibility across datasets by employing the direct substitution method. Mathematical properties including moments, entropy, moment generating function (mgf), and order statistics of the NGOF-Et-G family are analyzed, while parameters are estimated using the maximum likelihood technique. Furthermore, the study introduces the NGOF-Et-Rayleigh and NGOF-Et-Weibull models, evaluating their performance using lifetime datasets. A Monte Carlo simulation is employed to assess the consistency and accuracy of parameter estimation methods, comparing maximum likelihood estimation (MLE) and maximum product spacing (MPS). Results indicate the superiority of MLE in estimating parameters for the introduced distribution, alongside the enhanced flexibility of the new models in fitting positive data compared to existing distributions. In conclusion, the research establishes the potential of the proposed NGOF-Et-G family and its variants as promising alternatives in modelling positive data, offering greater flexibility and improved parameter estimation accuracy, as evidenced by Monte Carlo simulations and real-world dataset applications.
5

Xu, Xiaosi, and Ying Li. "Quantum-assisted Monte Carlo algorithms for fermions." Quantum 7 (August 3, 2023): 1072. http://dx.doi.org/10.22331/q-2023-08-03-1072.

Abstract:
Quantum computing is a promising way to systematically solve the longstanding computational problem, the ground state of a many-body fermion system. Many efforts have been made to realise certain forms of quantum advantage in this problem, for instance, the development of variational quantum algorithms. A recent work by Huggins et al. [1] reports a novel candidate, i.e. a quantum-classical hybrid Monte Carlo algorithm with a reduced bias in comparison to its fully-classical counterpart. In this paper, we propose a family of scalable quantum-assisted Monte Carlo algorithms where the quantum computer is used at its minimal cost and still can reduce the bias. By incorporating a Bayesian inference approach, we can achieve this quantum-facilitated bias reduction with a much smaller quantum-computing cost than taking empirical mean in amplitude estimation. Besides, we show that the hybrid Monte Carlo framework is a general way to suppress errors in the ground state obtained from classical algorithms. Our work provides a Monte Carlo toolkit for achieving quantum-enhanced calculation of fermion systems on near-term quantum devices.
6

Al-Nasse, Amjad D. "An information-theoretic approach to the measurement error model." Statistics in Transition new series 11, no. 1 (July 16, 2010): 9–24. http://dx.doi.org/10.59170/stattrans-2010-001.

Abstract:
In this paper, the generalized maximum entropy (GME) estimation approach (Golan et al. 1996) is used to fit the general linear measurement error model. A Monte Carlo comparison is made with the classical maximum likelihood estimation (MLE) method. The results show that the GME estimator outperforms the MLE estimators in terms of mean squared error. A real data analysis is also presented.
7

Plekhanov, Kirill, Matthias Rosenkranz, Mattia Fiorentini, and Michael Lubasch. "Variational quantum amplitude estimation." Quantum 6 (March 17, 2022): 670. http://dx.doi.org/10.22331/q-2022-03-17-670.

Abstract:
We propose to perform amplitude estimation with the help of constant-depth quantum circuits that variationally approximate states during amplitude amplification. In the context of Monte Carlo (MC) integration, we numerically show that shallow circuits can accurately approximate many amplitude amplification steps. We combine the variational approach with maximum likelihood amplitude estimation [Y. Suzuki et al., Quantum Inf. Process. 19, 75 (2020)] in variational quantum amplitude estimation (VQAE). VQAE typically has larger computational requirements than classical MC sampling. To reduce the variational cost, we propose adaptive VQAE and numerically show in 6 to 12 qubit simulations that it can outperform classical MC sampling.
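For orientation, the classical baseline that these quantum algorithms are compared against is plain Monte Carlo sampling of a probability (a squared amplitude), whose standard error shrinks only as 1/√N. The sketch below illustrates that baseline only; it involves nothing quantum, and the probability and sample sizes are invented.

```python
# Classical Monte Carlo baseline for estimating a probability p:
# the standard error decreases like 1/sqrt(N), which is the scaling
# that amplitude-estimation algorithms aim to improve upon.
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.3                                    # made-up target probability
for n_samples in (10**3, 10**4, 10**5):
    draws = rng.random(n_samples) < p_true      # Bernoulli(p) samples
    p_hat = draws.mean()
    std_err = np.sqrt(p_hat * (1 - p_hat) / n_samples)
    print(f"N={n_samples:>6}: p_hat = {p_hat:.4f} +/- {std_err:.4f}")
```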
8

Vida, Denis, Peter S. Gural, Peter G. Brown, Margaret Campbell-Brown, and Paul Wiegert. "Estimating trajectories of meteors: an observational Monte Carlo approach – I. Theory." Monthly Notices of the Royal Astronomical Society 491, no. 2 (November 15, 2019): 2688–705. http://dx.doi.org/10.1093/mnras/stz3160.

Abstract:
ABSTRACT It has recently been shown by Egal et al. that some types of existing meteor in-atmosphere trajectory estimation methods may be less accurate than others, particularly when applied to high-precision optical measurements. The comparative performance of trajectory solution methods has previously only been examined for a small number of cases. Besides the radiant, orbital accuracy depends on the estimation of pre-atmosphere velocities, which have both random and systematic biases. Thus, it is critical to understand the uncertainty in velocity measurement inherent to each trajectory estimation method. In this first of a series of two papers, we introduce a novel meteor trajectory estimation method that uses the observed dynamics of meteors across stations as a global optimization function and that does not require either a theoretical or an empirical flight model to solve for velocity. We also develop a 3D observational meteor trajectory simulator that uses a meteor ablation model to replicate the dynamics of meteoroid flight, as a means to validate different trajectory solvers. We both test this new method and compare it to other methods, using synthetic meteors from three major showers spanning a wide range of velocities and geometries (Draconids, Geminids, and Perseids). We determine which meteor trajectory solving algorithm performs better for all-sky, moderate field-of-view, and high-precision narrow-field optical meteor detection systems. The results are presented in the second paper in this series. Finally, we give detailed equations for estimating meteor trajectories and analytically computing meteoroid orbits, and provide the python code of the methodology as open-source software.
9

Chen, Jau-er, Chien-Hsun Huang, and Jia-Jyun Tien. "Debiased/Double Machine Learning for Instrumental Variable Quantile Regressions." Econometrics 9, no. 2 (April 2, 2021): 15. http://dx.doi.org/10.3390/econometrics9020015.

Abstract:
In this study, we investigate the estimation and inference on a low-dimensional causal parameter in the presence of high-dimensional controls in an instrumental variable quantile regression. Our proposed econometric procedure builds on the Neyman-type orthogonal moment conditions of a previous study (Chernozhukov et al. 2018) and is thus relatively insensitive to the estimation of the nuisance parameters. The Monte Carlo experiments show that the estimator copes well with high-dimensional controls. We also apply the procedure to empirically reinvestigate the quantile treatment effect of 401(k) participation on accumulated wealth.
10

Takeuchi, Tsutomu T., Kohji Yoshikawa, and Takako T. Ishii. "Galaxy Luminosity Function: Applications and Cosmological Implications." Symposium - International Astronomical Union 201 (2005): 519–20. http://dx.doi.org/10.1017/s007418090021694x.

Abstract:
We studied the statistical methods for the estimation of the luminosity function (LF) of galaxies by Monte Carlo simulations. After examining the performance of these methods, we analyzed the photometric redshift data of the Hubble Deep Field prepared by Fernández-Soto et al. (1999). We also derived the luminosity density ρ_L in the B and I bands. Our B-band estimate is roughly consistent with that of Sawicki, Lin, & Yee (1997), but a few times lower at 2.0 < z < 3.0. The evolution of ρ_L(I) is found to be less prominent.
11

Sambou, S. "Comparaison par simulation de Monte-Carlo des propriétés de deux estimateurs du paramètre d'échelle de la loi exponentielle : méthode du maximum de vraisemblance (MV) et méthode des moindres carrés (MC)." Revue des sciences de l'eau 17, no. 1 (April 12, 2005): 23–47. http://dx.doi.org/10.7202/705521ar.

Abstract:
The exponential distribution is very widely used in hydrology: it has few parameters and is easy to implement. Two methods are commonly used to estimate its parameter: the maximum likelihood method and the method of moments, which yield the same estimate. Alongside these two methods is the least squares method, which is very rarely used for this distribution. In this article, we compare the asymptotic behaviour of the least squares estimator with that of the maximum likelihood estimator, starting from a one-parameter exponential distribution with known parameter a, and then generalising the results obtained from the derivation of the analytical expressions. Since the historical sample available in practice is unique, and generally short relative to the information one wishes to extract from it, the statistical properties of the estimators can only be studied from samples of random variables representing virtual realisations of the hydrological phenomenon of interest, obtained by Monte Carlo simulation. The Monte Carlo simulation study shows that for small samples the expectation of both estimators tends towards the true parameter, and that the variance of the least squares estimator is larger than that of the maximum likelihood estimator.
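A minimal Python sketch of this kind of Monte Carlo comparison for the exponential scale parameter is given below. The least-squares estimator is implemented here as an ordinary least-squares fit of the theoretical CDF to the plotting positions i/(n+1), which is one common formulation and may differ in detail from the paper's; the true parameter value, sample size and replicate count are arbitrary.

```python
# Monte Carlo comparison of the maximum likelihood (= moment) estimator and a
# least-squares estimator of the exponential scale parameter on small samples.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_scale = 2.0            # exponential scale parameter ("a")
n, n_rep = 30, 5_000        # small-sample setting, number of Monte Carlo replicates

def ls_estimate(sample):
    """Least-squares fit of the exponential CDF to the plotting positions i/(n+1)."""
    x = np.sort(sample)
    p = np.arange(1, x.size + 1) / (x.size + 1)
    objective = lambda a: np.sum((1.0 - np.exp(-x / a) - p) ** 2)
    return minimize_scalar(objective, bounds=(1e-6, 50.0), method="bounded").x

mle = np.empty(n_rep)
lsq = np.empty(n_rep)
for r in range(n_rep):
    sample = rng.exponential(true_scale, size=n)
    mle[r] = sample.mean()          # maximum likelihood estimator of the scale
    lsq[r] = ls_estimate(sample)

for name, est in (("MLE", mle), ("Least squares", lsq)):
    print(f"{name}: mean = {est.mean():.3f}, variance = {est.var():.4f}")
```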
12

Sen, Subhradev, Morad Alizadeh, Mohamed Aboraya, M. Masoom Ali, Haitham M. Yousof, and Mohamed Ibrahim. "On Truncated Versions of Xgamma Distribution: Various Estimation Methods and Statistical modelling." Statistics, Optimization & Information Computing 12, no. 4 (December 19, 2023): 943–61. http://dx.doi.org/10.19139/soic-2310-5070-1660.

Abstract:
In this article, we introduce the truncated versions (lower, upper and double) of the xgamma distribution (Sen et al. 2016). In particular, different structural and distributional properties such as moments, popular entropy measures, order statistics and survival characteristics of the upper truncated xgamma distribution are discussed in detail. We briefly describe different estimation methods, namely maximum likelihood, ordinary least squares, weighted least squares and L-moments. Monte Carlo simulation experiments are performed to compare the performance of the proposed estimation methods for both small and large samples under the lower, upper and double versions. Two applications are provided, the first comparing estimation methods and the other illustrating the applicability of the new model.
13

Stanley, Michael, Mikael Kuusela, Brendan Byrne, and Junjie Liu. "Technical note: Posterior uncertainty estimation via a Monte Carlo procedure specialized for 4D-Var data assimilation." Atmospheric Chemistry and Physics 24, no. 16 (August 28, 2024): 9419–33. http://dx.doi.org/10.5194/acp-24-9419-2024.

Abstract:
Abstract. Through the Bayesian lens of four-dimensional variational (4D-Var) data assimilation, uncertainty in model parameters is traditionally quantified through the posterior covariance matrix. However, in modern settings involving high-dimensional and computationally expensive forward models, posterior covariance knowledge must be relaxed to deterministic or stochastic approximations. In the carbon flux inversion literature, Chevallier et al. (2007) proposed a stochastic method capable of approximating posterior variances of linear functionals of the model parameters that is particularly well suited for large-scale Earth-system 4D-Var data assimilation tasks. This note formalizes this algorithm and clarifies its properties. We provide a formal statement of the algorithm, demonstrate why it converges to the desired posterior variance quantity of interest, and provide additional uncertainty quantification allowing incorporation of the Monte Carlo sampling uncertainty into the method's Bayesian credible intervals. The methodology is demonstrated using toy simulations and a realistic carbon flux inversion observing system simulation experiment.
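The idea can be imitated with a toy linear-Gaussian analogue of 4D-Var (this is an illustrative sketch, not the authors' code): perturb the observations and the background with draws from their error covariances, redo the variational analysis for each perturbed pair, and take the empirical variance of a linear functional of the analyses. In this linear-Gaussian toy case the spread matches the exact posterior variance, which the script checks; all dimensions and matrices are invented.

```python
# Monte Carlo approximation of the posterior variance of a linear functional g'x
# in a toy linear-Gaussian "4D-Var" problem, by re-analysing perturbed inputs.
import numpy as np

rng = np.random.default_rng(42)
n_x, n_y, n_mc = 5, 8, 2_000
H = rng.normal(size=(n_y, n_x))          # linear forward/observation operator
B = 0.5 * np.eye(n_x)                    # background-error covariance
R = 0.1 * np.eye(n_y)                    # observation-error covariance
g = np.ones(n_x)                         # linear functional of interest

def analysis(y, x_b):
    """Minimiser of the quadratic 4D-Var-like cost for this linear toy problem."""
    lhs = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B)
    rhs = H.T @ np.linalg.inv(R) @ y + np.linalg.inv(B) @ x_b
    return np.linalg.solve(lhs, rhs)

x_b0 = np.zeros(n_x)
y0 = H @ rng.normal(size=n_x)            # some nominal observations

samples = []
for _ in range(n_mc):
    y_pert = y0 + rng.multivariate_normal(np.zeros(n_y), R)
    xb_pert = x_b0 + rng.multivariate_normal(np.zeros(n_x), B)
    samples.append(g @ analysis(y_pert, xb_pert))

exact = g @ np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B)) @ g
print(f"Monte Carlo variance of g'x: {np.var(samples):.4f}   exact posterior: {exact:.4f}")
```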
14

Nik, A. Saadati, A. Asgharzadeh, and A. Baklizi. "Inference Based on New Pareto-Type Records With Applications to Precipitation and Covid-19 Data." Statistics, Optimization & Information Computing 11, no. 2 (February 26, 2023): 243–57. http://dx.doi.org/10.19139/soic-2310-5070-1591.

Abstract:
We consider estimation and prediction of future records based on observed records from the new Pareto-type distribution proposed recently by Bourguignon et al. (2016): “M. Bourguignon, H. Saulo, R. N. Fernandez, A new Pareto-type distribution with applications in reliability and income data, Physica A, 457 (2016), 166-175.” We derive several point predictors for a future record on the basis of the first n records. Two real data sets on precipitation and Covid-19 are analysed, and a Monte Carlo simulation study is performed to evaluate the statistical performance of the point predictors presented in this paper.
15

Ala-aho, P., P. M. Rossi, and B. Kløve. "Estimation of temporal and spatial variations in groundwater recharge in unconfined sand aquifers using Scots pine inventories." Hydrology and Earth System Sciences 19, no. 4 (April 23, 2015): 1961–76. http://dx.doi.org/10.5194/hess-19-1961-2015.

Abstract:
Abstract. Climate change and land use are rapidly changing the amount and temporal distribution of recharge in northern aquifers. This paper presents a novel method for distributing Monte Carlo simulations of 1-D sandy sediment profile spatially to estimate transient recharge in an unconfined esker aquifer. The modelling approach uses data-based estimates for the most important parameters controlling the total amount (canopy cover) and timing (thickness of the unsaturated zone) of groundwater recharge. Scots pine canopy was parameterized to leaf area index (LAI) using forestry inventory data. Uncertainty in the parameters controlling sediment hydraulic properties and evapotranspiration (ET) was carried over from the Monte Carlo runs to the final recharge estimates. Different mechanisms for lake, soil, and snow evaporation and transpiration were used in the model set-up. Finally, the model output was validated with independent recharge estimates using the water table fluctuation (WTF) method and baseflow estimation. The results indicated that LAI is important in controlling total recharge amount. Soil evaporation (SE) compensated for transpiration for areas with low LAI values, which may be significant in optimal management of forestry and recharge. Different forest management scenarios tested with the model showed differences in annual recharge of up to 100 mm. The uncertainty in recharge estimates arising from the simulation parameters was lower than the interannual variation caused by climate conditions. It proved important to take unsaturated thickness and vegetation cover into account when estimating spatially and temporally distributed recharge in sandy unconfined aquifers.
16

Bernardelli, Michał, and Barbara Kowalczyk. "Optimal Allocation of the Sample in the Poisson Item Count Technique." Acta Universitatis Lodziensis. Folia Oeconomica 3, no. 335 (May 16, 2018): 35–47. http://dx.doi.org/10.18778/0208-6018.335.03.

Abstract:
Indirect methods of questioning are of utmost importance when dealing with sensitive questions. This paper refers to the new indirect method introduced by Tian et al. (2014) and examines the optimal allocation of the sample to control and treatment groups. If the optimal allocation is determined from the variance formula for the method of moments (difference in means) estimator of the sensitive proportion, the solution is quite straightforward and was given in Tian et al. (2014). However, maximum likelihood (ML) estimation is known to have much better properties, so determining the optimal allocation based on ML estimators is of greater practical importance. This problem is nontrivial because in the Poisson item count technique the sensitive study variable is latent and not directly observable. Thus ML estimation is carried out using the expectation-maximisation (EM) algorithm, and therefore no explicit analytical formula for the variance of the ML estimator of the sensitive proportion is obtained. To determine the optimal allocation of the sample based on ML estimation, comprehensive Monte Carlo simulations and the EM algorithm have been employed.
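To make the role of the EM algorithm concrete, the sketch below implements an EM of this general form for a Poisson item count model in which control respondents report a Poisson count and treatment respondents report that count plus the sensitive indicator. It is a generic formulation written for this note, not Tian et al.'s code, and all parameter values are arbitrary.

```python
# EM estimation of (lambda, pi) in a Poisson item count model:
# control group reports Z ~ Poisson(lambda); treatment group reports Y = Z + S,
# with S ~ Bernoulli(pi) the (latent) sensitive indicator.
import numpy as np
from scipy.stats import poisson

def em_poisson_item_count(y_control, y_treat, n_iter=200):
    lam = np.mean(np.concatenate([y_control, y_treat]))   # crude starting values
    pi = 0.5
    for _ in range(n_iter):
        # E-step: posterior probability that each treatment respondent has S = 1
        num = pi * poisson.pmf(y_treat - 1, lam)           # pmf is 0 when y - 1 < 0
        den = num + (1 - pi) * poisson.pmf(y_treat, lam)
        w = num / den
        # M-step: update the sensitive proportion and the Poisson mean
        pi = w.mean()
        lam = (y_control.sum() + (y_treat - w).sum()) / (y_control.size + y_treat.size)
    return lam, pi

rng = np.random.default_rng(5)
n_c, n_t, lam_true, pi_true = 500, 500, 2.0, 0.15
y_c = rng.poisson(lam_true, n_c)
y_t = rng.poisson(lam_true, n_t) + (rng.random(n_t) < pi_true)
print("EM estimates (lambda, pi):", np.round(em_poisson_item_count(y_c, y_t), 3))
```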
17

Jasra, Ajay, and Fangyuan Yu. "Central limit theorems for coupled particle filters." Advances in Applied Probability 52, no. 3 (September 2020): 942–1001. http://dx.doi.org/10.1017/apr.2020.27.

Abstract:
Abstract In this article we prove new central limit theorems (CLTs) for several coupled particle filters (CPFs). CPFs are used for the sequential estimation of the difference of expectations with respect to filters which are in some sense close. Examples include the estimation of the filtering distribution associated with different parameters (finite difference estimation), filters associated with partially observed discretized diffusion processes (PODDP), and the implementation of the multilevel Monte Carlo (MLMC) identity. We develop new theory for CPFs and, based upon several results, propose a new CPF which approximates the maximal coupling (MCPF) of a pair of predictor distributions. In the context of ML estimation associated with PODDP with time-discretization $\Delta_l=2^{-l}$, $l\in\{0,1,\dots\}$, we show that the MCPF and the approach of Jasra, Ballesio, et al. (2018) have, under certain assumptions, an asymptotic variance that is bounded above by an expression that is of (almost) the order of $\Delta_l$ ($\mathcal{O}(\Delta_l)$), uniformly in time. The $\mathcal{O}(\Delta_l)$ bound preserves the so-called forward rate of the diffusion in some scenarios, which is not the case for the CPF in Jasra et al. (2017).
18

Sraj, Ihab, Mohamed Iskandarani, W. Carlisle Thacker, Ashwanth Srinivasan, and Omar M. Knio. "Drag Parameter Estimation Using Gradients and Hessian from a Polynomial Chaos Model Surrogate." Monthly Weather Review 142, no. 2 (January 24, 2014): 933–41. http://dx.doi.org/10.1175/mwr-d-13-00087.1.

Abstract:
Abstract A variational inverse problem is solved using polynomial chaos expansions to infer several critical variables in the Hybrid Coordinate Ocean Model’s (HYCOM’s) wind drag parameterization. This alternative to the Bayesian inference approach in Sraj et al. avoids the complications of constructing the full posterior with Markov chain Monte Carlo sampling. It focuses instead on identifying the center and spread of the posterior distribution. The present approach leverages the polynomial chaos series to estimate, at very little extra cost, the gradients and Hessian of the cost function during minimization. The Hessian’s inverse yields an estimate of the uncertainty in the solution when the latter’s probability density is approximately Gaussian. The main computational burden is an ensemble of realizations to build the polynomial chaos expansion; no adjoint code or additional forward model runs are needed once the series is available. The ensuing optimal parameters are compared to those obtained in Sraj et al. where the full posterior distribution was constructed. The similarities and differences between the new methodology and a traditional adjoint-based calculation are discussed.
19

DIEBOLT, J., M. A. EL-AROUI, V. DURBEC, and B. VILLAIN. "ESTIMATION OF EXTREME QUANTILES: EMPIRICAL TOOLS FOR METHODS ASSESSMENT AND COMPARISON." International Journal of Reliability, Quality and Safety Engineering 07, no. 01 (March 2000): 75–94. http://dx.doi.org/10.1142/s0218539300000079.

Abstract:
When extreme quantiles have to be estimated from a given data set, the classical parametric approach can lead to very poor estimates. This has led to the introduction of specific methods for estimating extreme quantiles (MEEQ's) in a nonparametric spirit, e.g., Pickands excess method, methods based on Hill's estimate of the Pareto index, exponential tail (ET) and quadratic tail (QT) methods. However, no practical technique for assessing and comparing these MEEQ's when they are to be used on a given data set is available. This paper is a first attempt to provide such techniques. We first compare the estimates given by the main MEEQ's on several simulated data sets. Then we suggest goodness-of-fit (Gof) tests to assess the MEEQ's by measuring the quality of their underlying approximations. It is shown that Gof techniques bring very relevant tools to assess and compare ET and excess methods. Other empirical criteria for comparing MEEQ's are also proposed and studied through Monte-Carlo analyses. Finally, these assessment and comparison techniques are tested on real data sets drawn from an industrial context where extreme quantiles are needed to define maintenance policies.
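As a concrete example of one of the methods named above, here is a minimal sketch of an exponential-tail (ET) peaks-over-threshold quantile estimator, under the assumption that excesses over a high threshold are approximately exponential. The threshold rule and other details are generic and may differ from the paper's formulation; the test data are synthetic.

```python
# Exponential-tail (ET) extreme-quantile estimator: model excesses over a high
# threshold u as exponential with mean sigma, so P(X > x) ~ (k/n) exp(-(x-u)/sigma)
# and the p-quantile is x_p = u + sigma * log(k / (n (1 - p))).
import numpy as np

def et_quantile(data, p, threshold_frac=0.9):
    x = np.asarray(data, dtype=float)
    n = x.size
    u = np.quantile(x, threshold_frac)       # high threshold
    excesses = x[x > u] - u
    k = excesses.size
    sigma = excesses.mean()                  # MLE of the exponential mean excess
    return u + sigma * np.log(k / (n * (1.0 - p)))

rng = np.random.default_rng(7)
sample = rng.gumbel(loc=10.0, scale=2.0, size=2_000)
true_q = 10.0 - 2.0 * np.log(-np.log(0.999))  # exact 0.999 quantile of this Gumbel
print(f"ET estimate of the 0.999 quantile: {et_quantile(sample, 0.999):.2f} "
      f"(true value: {true_q:.2f})")
```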
20

Pasha-Zanoosi, Hossein, Ahmad Pourdarvish, and Akbar Asgharzadeh. "Multicomponent Stress-strength Reliability with Exponentiated Teissier Distribution." Austrian Journal of Statistics 51, no. 4 (August 26, 2022): 35–59. http://dx.doi.org/10.17713/ajs.v51i4.1327.

Abstract:
This article deals with the problem of reliability in a multicomponent stress-strength (MSS) model when both stress and strength variables are from exponentiated Teissier (ET) distributions. The reliability of the system is determined using both classical and Bayesian methods, based on two scenarios where the common scale parameter is unknown or known. In the first scenario, where the common scale parameter is unknown, the maximum likelihood estimation (MLE) and the approximate Bayes estimation are derived. In the second scenario, where the scale parameter is known, the MLE, the uniformly minimum variance unbiased estimator (UMVUE) and the exact Bayes estimation are obtained. In the both scenarios, the asymptotic confidence interval and the highest probability density credible interval are established. Furthermore, two other asymptotic confidence intervals are computed based on the Logit and Arcsin transformations. Monte Carlo simulations are implemented to compare the different proposed methods. Finally, one real example is presented in support of suggested procedures.
21

Jun, Sung Jae, Joris Pinkse, and Yuanyuan Wan. "INTEGRATED SCORE ESTIMATION." Econometric Theory 33, no. 6 (December 5, 2016): 1418–56. http://dx.doi.org/10.1017/s0266466616000463.

Abstract:
We study the properties of the integrated score estimator (ISE), which is the Laplace version of Manski’s maximum score estimator (MMSE). The ISE belongs to a class of estimators whose basic asymptotic properties were studied in Jun, Pinkse, and Wan (2015, Journal of Econometrics 187(1), 201–216). Here, we establish that the MMSE, or more precisely $\sqrt[3]{n}\,|\hat\theta_M - \theta_0|$, (locally first order) stochastically dominates the ISE under the conditions necessary for the MMSE to attain its $\sqrt[3]{n}$ convergence rate and that the ISE has the same convergence rate as Horowitz’s smoothed maximum score estimator (SMSE) under somewhat weaker conditions. An implication of the stochastic dominance result is that the confidence intervals of the MMSE are for any given coverage rate wider than those of the ISE, provided that the input parameter $\alpha_n$ is not chosen too large. Further, we introduce an inference procedure that is not only rate adaptive as established in Jun et al. (2015), but also uniform in the choice of $\alpha_n$. We propose three different first order bias elimination procedures and we discuss the choice of input parameters. We develop a computational algorithm for the ISE based on the Gibbs sampler and we examine implementational issues in detail. We argue in favor of normalizing the norm of the parameter vector as opposed to fixing one of the coefficients. Finally, we evaluate the computational efficiency of the ISE and the performance of the ISE and the proposed inference procedure in an extensive Monte Carlo study.
22

Duchaine, Jasmine, Daniel Markel, and Hugo Bouchard. "A probabilistic approach for determining Monte Carlo beam source parameters: I. Modeling of a CyberKnife M6 unit." Physics in Medicine & Biology 67, no. 4 (February 10, 2022): 045007. http://dx.doi.org/10.1088/1361-6560/ac4ef7.

Abstract:
Abstract Objective. During Monte Carlo modeling of external radiotherapy beams, models must be adjusted to reproduce the experimental measurements of the linear accelerator being considered. The aim of this work is to propose a new method for the determination of the energy and spot size of the electron beam incident on the target of a linear accelerator using a maximum likelihood estimation. Approach. For that purpose, the method introduced by Francescon et al (2008 Med. Phys. 35 504–13) is expanded upon in this work. Simulated tissue-phantom ratios and uncorrected output factors using a set of different detector models are compared to experimental measurements. A probabilistic formalism is developed and a complete uncertainty budget, which includes a detailed simulation of positioning errors, is evaluated. The method is applied to a CyberKnife M6 unit using four detectors (PTW 60012, PTW 60019, Exradin A1SL and IBA CC04), with simulations being performed using the EGSnrc suite. Main results. The likelihood distributions of the electron beam energy and spot size are evaluated, leading to $\hat{E} = 7.42 \pm 0.17$ MeV and $\hat{F} = 2.15 \pm 0.06$ mm. Using these results and a 95% confidence region, simulations reproduce measurements in 13 out of the 14 considered setups. Significance. The proposed method allows an accurate beam parameter optimization and uncertainty evaluation during the Monte Carlo modeling of a radiotherapy unit.
23

Chen, Xianbin, and Juliang Yin. "Simultaneous variable selection and estimation for longitudinal ordinal data with a diverging number of covariates." AIMS Mathematics 7, no. 4 (2022): 7199–211. http://dx.doi.org/10.3934/math.2022402.

Abstract:
In this paper, we study the problem of simultaneous variable selection and estimation for longitudinal ordinal data with high-dimensional covariates. Using the penalized generalized estimating equation (GEE) method, we obtain some asymptotic properties for these types of data in the case that the dimension of the covariates $p_n$ tends to infinity as the number of clusters $n$ approaches infinity. More precisely, under appropriate regularity conditions, all the covariates with zero coefficients can be identified simultaneously with probability tending to 1, and the estimator of the non-zero coefficients exhibits the asymptotic oracle properties. Finally, we also perform some Monte Carlo studies to illustrate the theoretical analysis. The main result in this paper extends the elegant work of Wang et al. [1] to the multinomial response variable case.
24

Li, Ying, Chunhui Zou, Liping Feng, Xiaochuang Yao, Tongxiao Li, Xianguan Chen, and Jin Zhao. "Estimation of winter wheat yield in Hebi city based on crop models and remote sensing data assimilation." Journal of Physics: Conference Series 2791, no. 1 (July 1, 2024): 012079. http://dx.doi.org/10.1088/1742-6596/2791/1/012079.

Abstract:
Abstract Assimilating remote sensing data with crop models is an effective approach to improve the accuracy of crop model applications at the regional scale. Sobol global sensitivity analysis and Markov Chain Monte Carlo methods were used to calibrate parameters in the WOFOST and WheatSM models within a gridded model framework in the study area. Time series data reconstruction techniques were employed to correct MODIS LAI and ET data. Four assimilation scenarios were tested, including assimilation of only leaf area index (LAI), assimilation of only evapotranspiration (ET), simultaneous assimilation of LAI and ET and a control scenario without assimilation. These scenarios were applied to model winter wheat yields in Hebi City, China, from 2013 to 2018. Statistical validation demonstrated that assimilating ET or LAI individually significantly improved model accuracy compared to the control scenario with similar levels of improvement. The highest model accuracy was achieved when assimilating both ET and LAI simultaneously, showing the highest correlation coefficients and the lowest root mean square errors among the four scenarios. This research provides a basis for selecting assimilation variables when applying crop models at the regional scale. The coupling of crop growth models with remote sensing data empowers governments and agricultural producers to devise more effective agricultural strategies, allocate resources efficiently, and implement disaster response measures. This enhances scientific management within the agricultural sector, promotes increased food production and elevates farmers’ incomes.
25

Barbosa, Josino José, Tiago Martins Pereira, and Fernando Luiz Pereira de Oliveira. "Uma proposta para identificação de outliers multivariados." Ciência e Natura 40 (July 6, 2018): 40. http://dx.doi.org/10.5902/2179460x29535.

Abstract:
In recent years several probability distributions have been proposed in the literature, especially with the aim of obtaining models that are more flexible with respect to the behaviour of the density and hazard rate functions. For instance, Ghitany et al. (2013) proposed a new generalization of the Lindley distribution, called the power Lindley distribution, whereas Sharma et al. (2015a) proposed the inverse Lindley distribution. From these two generalizations, Barco et al. (2017) studied the inverse power Lindley distribution, also referred to by Sharma et al. (2015b) as the generalized inverse Lindley distribution. Considering the inverse power Lindley distribution, this paper evaluates, through Monte Carlo simulations, the bias and consistency of nine different estimation methods (the maximum likelihood method and eight others based on the distance between the empirical and theoretical cumulative distribution functions). The numerical results showed a better performance of the estimation method based on the Anderson-Darling test statistic. This conclusion is also observed in the analysis of two real data sets.
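The best-performing method here is a minimum-distance (maximum goodness-of-fit) estimator based on the Anderson-Darling statistic. The sketch below illustrates the principle on a Weibull model rather than the paper's inverse power Lindley distribution, purely to keep the CDF simple; the statistic and the numerical minimisation are generic.

```python
# Minimum Anderson-Darling distance estimation: choose the parameters that
# minimise the A^2 statistic between the fitted CDF and the ordered sample.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def anderson_darling_stat(params, data):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    u = weibull_min.cdf(np.sort(data), c=shape, scale=scale)
    u = np.clip(u, 1e-12, 1 - 1e-12)
    n = u.size
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

rng = np.random.default_rng(3)
data = weibull_min.rvs(c=1.8, scale=2.5, size=200, random_state=rng)
fit = minimize(anderson_darling_stat, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("Minimum-AD estimates (shape, scale):", np.round(fit.x, 3))
```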
26

Canitz, Felix, Panagiotis Ballis-Papanastasiou, Christian Fieberg, Kerstin Lopatta, Armin Varmaz, and Thomas Walker. "Estimates and inferences in accounting panel data sets: comparing approaches." Journal of Risk Finance 18, no. 3 (May 15, 2017): 268–83. http://dx.doi.org/10.1108/jrf-11-2016-0145.

Abstract:
Purpose The purpose of this paper is to review and evaluate the methods commonly used in accounting literature to correct for cointegrated data and data that are neither stationary nor cointegrated. Design/methodology/approach The authors conducted Monte Carlo simulations according to Baltagi et al. (2011), Petersen (2009) and Gow et al. (2010), to analyze how regression results are affected by the possible nonstationarity of the variables of interest. Findings The results of this study suggest that biases in regression estimates can be reduced and valid inferences can be obtained by using robust standard errors clustered by firm, clustered by firm and time or Fama–MacBeth t-statistics based on the mean and standard errors of the cross section of coefficients from time-series regressions. Originality/value The findings of this study are suited to guide future researchers regarding which estimation methods are the most reliable given the possible nonstationarity of the variables of interest.
27

Xing, Jiankai, Zengyu Li, Fujun Luan, and Kun Xu. "Differentiable Photon Mapping using Generalized Path Gradients." ACM Transactions on Graphics 43, no. 6 (November 19, 2024): 1–15. http://dx.doi.org/10.1145/3687958.

Abstract:
Photon mapping is a fundamental and practical Monte Carlo rendering technique for efficiently simulating global illumination effects, especially for caustics and specular-diffuse-specular (SDS) paths. In this paper, we present the first differentiable rendering method for photon mapping. The core of our method is a newly introduced concept named generalized path gradients. Based on the extended path space manifolds (EPSMs) [Xing et al. 2023], the generalized path gradients define the derivatives of the vertex positions and color contributions of a path with respect to scene parameters under given geometric constraints. By formalizing photon mapping as a path sampling technique through vertex merging [Georgiev et al. 2012] and incorporating a smooth differentiable density estimation kernel, we enable the differentiation of the photon mapping algorithms based on the theoretical results of generalized path gradients. Experiments demonstrate that our method is more effective than state-of-the-art physics-based differentiable rendering methods in inverse rendering applications involving difficult illumination paths, especially SDS paths.
28

Watanabe, Masaru, and Naoki Yasuda. "Estimation of the Internal Extinction of Spiral Galaxies for Multi-color Tully-Fisher Relations." Symposium - International Astronomical Union 183 (1999): 73. http://dx.doi.org/10.1017/s0074180900132188.

Abstract:
We calculate B-, R- and I-band internal extinction A(λ)_i (absorption + scattering) for spirals consisting of an exponential dust layer, a stellar disk and a bulge. The result is applied to local calibrators and cluster spirals (Virgo and Ursa Major) to examine whether or not the wavelength dependence of the relative zero point difference of Tully-Fisher (TF) relations between local calibrators and cluster spirals (Pierce & Tully, 1992) could be accounted for by a variation of A(λ)_i with the optical depth of galaxies. The extinction is calculated using Monte-Carlo simulations as prescribed by Bianchi et al. (1996). For the extinction curve we adopted that of Cardelli et al. (1989). It is found that the differential extinction A(R or I)_i − A(B)_i as a function of the optical depth σ(B) has finite upper limits of ∼ 0.3–0.5 mag, depending on the inclination of the spiral. These limits are generally smaller than the offset of the TF relative zero point difference. This indicates that the offset may be fully due to an intrinsic color difference between local calibrators and cluster galaxies, or else that the current extinction model is yet to capture the actual extinction process or geometrical configuration of spirals.
29

Duchesne, Sophie, and Jean-Pierre Villeneuve. "Estimation du coût total associé à la production d’eau potable : cas d’application de la ville de Québec." Revue des sciences de l'eau 19, no. 2 (June 9, 2006): 69–85. http://dx.doi.org/10.7202/013042ar.

Abstract:
Abstract. A range of probable costs for the drinking water distributed by Quebec City, Canada, is determined by summing the annualised costs of the investments required to rebuild the city's water infrastructure to new condition (water mains and sewers, drinking water production plants and wastewater treatment plants) and the annual operation and maintenance costs associated with this infrastructure, and then dividing the total cost by the average annual drinking water production in Quebec City (106 Mm3/year). The range of costs is obtained from 50,000 Monte Carlo simulations, taking into account the uncertainties in the cost of the various elements making up the total cost of water. In this way, a mean total cost of $2.85/m3 with a standard deviation of $0.47/m3 is calculated. Overall, $0.70/m3 and $2.15/m3 are associated, on average, with operating costs and capital expenditure respectively. A sensitivity analysis of the results shows that the interest rate and the pipe construction cost are the parameters with the greatest impact on the calculated cost. This cost is, moreover, much higher than the average price charged for water in Canada and Quebec, which was $1.00/m3 and $0.49/m3 respectively in 1999, but is close to the average price charged for drinking water in France in 2000 (about $3.33/m3 before taxes), a country where the water bill covers all the expenses of the water and sanitation services. If municipalities recovered, in whatever form, $2.85 for each m3 of water produced, adequate maintenance and renewal of the municipal water infrastructure could be ensured.
30

Coenen, M., F. Rottensteiner, and C. Heipke. "RECOVERING THE 3D POSE AND SHAPE OF VEHICLES FROM STEREO IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (May 28, 2018): 73–80. http://dx.doi.org/10.5194/isprs-annals-iv-2-73-2018.

Abstract:
The precise reconstruction and pose estimation of vehicles plays an important role, e.g. for autonomous driving. We tackle this problem on the basis of street level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations and we achieve encouraging results with up to 82 % correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.
31

de Oliveira Peres, Marcos Vinicius, Ricardo Puziol de Oliveira, Edson Zangiacomi Martinez, and Jorge Alberto Achcar. "Different inference approaches for the estimators of the sushila distribution." Model Assisted Statistics and Applications 16, no. 4 (December 20, 2021): 251–60. http://dx.doi.org/10.3233/mas-210539.

Abstract:
In this paper, we evaluate via Monte Carlo simulations the sample properties of the estimators for the Sushila distribution, introduced by Shanker et al. (2013). We consider estimates obtained by six estimation methods: the well-known maximum likelihood, moment and Bayesian approaches, and other less traditional methods: L-moments, ordinary least-squares and weighted least-squares. As comparison criteria, the biases and the root mean-squared errors were used across nine scenarios with sample sizes ranging from 30 to 300 (in steps of 30). In addition, we also considered a simulation and a real data application to illustrate the applicability of the proposed estimators as well as the computation time needed to obtain the estimates. In this case, the Bayesian method was also considered. The aim of the study was to find an estimation method that can be considered a better alternative to, or at least interchangeable with, the traditional maximum likelihood method for small or large sample sizes and with low computational cost.
32

Cappa, Eduardo P., and Rodolfo JC Cantet. "Bayesian inference for normal multiple-trait individual-tree models with missing records via full conjugate Gibbs." Canadian Journal of Forest Research 36, no. 5 (May 1, 2006): 1276–85. http://dx.doi.org/10.1139/x06-024.

Abstract:
In forest genetics, restricted maximum likelihood (REML) estimation of (co)variance components from normal multiple-trait individual-tree models is affected by the absence of observations in any trait and individual. Missing records affect the form of the distribution of REML estimates of genetics parameters, or of functions of them, and the estimating equations are computationally involved when several traits are analysed. An alternative to REML estimation is a fully Bayesian approach through Markov chain Monte Carlo. The present research describes the use of the full conjugate Gibbs algorithm proposed by Cantet et al. (R.J.C. Cantet, A.N. Birchmeier, and J.P. Steibel. 2004. Genet. Sel. Evol. 36: 49–64) to estimate (co)variance components in multiple-trait individual-tree models. This algorithm converges faster to the marginal posterior densities of the parameters than regular data augmentation from multivariate normal data with missing records. An expression to calculate the deviance information criterion for the selection of linear parameters in normal multiple-trait models is also given. The developments are illustrated by means of data from different crosses of two species of Pinus.
33

Hahn, ChangHoon, and Peter Melchior. "Accelerated Bayesian SED Modeling Using Amortized Neural Posterior Estimation." Astrophysical Journal 938, no. 1 (October 1, 2022): 11. http://dx.doi.org/10.3847/1538-4357/ac7b84.

Abstract:
Abstract State-of-the-art spectral energy distribution (SED) analyses use a Bayesian framework to infer the physical properties of galaxies from observed photometry or spectra. They require sampling from a high-dimensional space of SED model parameters and take >10–100 CPU hr per galaxy, which renders them practically infeasible for analyzing the billions of galaxies that will be observed by upcoming galaxy surveys (e.g., the Dark Energy Spectroscopic Instrument, the Prime Focus Spectrograph, the Vera C. Rubin Observatory, the James Webb Space Telescope, and the Roman Space Telescope). In this work, we present an alternative scalable approach to rigorous Bayesian inference using Amortized Neural Posterior Estimation (ANPE). ANPE is a simulation-based inference method that employs neural networks to estimate posterior probability distributions over the full range of observations. Once trained, it requires no additional model evaluations to estimate the posterior. We present, and publicly release, SEDflow, an ANPE method for producing the posteriors of the recent Hahn et al. SED model from optical photometry and redshift. SEDflow takes ∼1 s per galaxy to obtain the posterior distributions of 12 model parameters, all of which are in excellent agreement with traditional Markov Chain Monte Carlo sampling results. We also apply SEDflow to 33,884 galaxies in the NASA–Sloan Atlas and publicly release their posteriors.
34

Speagle, Joshua S. "dynesty: a dynamic nested sampling package for estimating Bayesian posteriors and evidences." Monthly Notices of the Royal Astronomical Society 493, no. 3 (February 3, 2020): 3132–58. http://dx.doi.org/10.1093/mnras/staa278.

Abstract:
ABSTRACT We present dynesty, a public, open-source, python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using the dynamic nested sampling methods developed by Higson et al. By adaptively allocating samples based on posterior structure, dynamic nested sampling has the benefits of Markov chain Monte Carlo (MCMC) algorithms that focus exclusively on posterior estimation while retaining nested sampling’s ability to estimate evidences and sample from complex, multimodal distributions. We provide an overview of nested sampling, its extension to dynamic nested sampling, the algorithmic challenges involved, and the various approaches taken to solve them in this and previous work. We then examine dynesty’s performance on a variety of toy problems along with several astronomical applications. We find in particular problems dynesty can provide substantial improvements in sampling efficiency compared to popular MCMC approaches in the astronomical literature. More detailed statistical results related to nested sampling are also included in the appendix.
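A minimal usage sketch on a toy problem (isotropic Gaussian likelihood, uniform prior) is given below. The call names follow dynesty's documented public interface, but versions differ, so consult the package documentation for current signatures.

```python
# Toy dynamic nested sampling run with dynesty: estimate the evidence and
# sample the posterior of a 3-dimensional isotropic Gaussian likelihood.
import numpy as np
from dynesty import DynamicNestedSampler

ndim = 3

def loglike(theta):
    # isotropic Gaussian log-likelihood (unnormalised), centred at the origin
    return -0.5 * np.sum((theta / 0.5) ** 2)

def prior_transform(u):
    # map the unit cube to a uniform prior on [-5, 5]^ndim
    return 10.0 * u - 5.0

sampler = DynamicNestedSampler(loglike, prior_transform, ndim)
sampler.run_nested()
res = sampler.results
print("log-evidence estimate:", res.logz[-1], "+/-", res.logzerr[-1])
```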
35

England, P. D., and R. J. Verrall. "Predictive Distributions of Outstanding Liabilities in General Insurance." Annals of Actuarial Science 1, no. 2 (September 2006): 221–70. http://dx.doi.org/10.1017/s1748499500000142.

Abstract:
ABSTRACT This paper extends the methods introduced in England & Verrall (2002), and shows how predictive distributions of outstanding liabilities in general insurance can be obtained using bootstrap or Bayesian techniques for clearly defined statistical models. A general procedure for bootstrapping is described, by extending the methods introduced in England & Verrall (1999), England (2002) and Pinheiro et al. (2003). The analogous Bayesian estimation procedure is implemented using Markov-chain Monte Carlo methods, where the models are constructed as Bayesian generalised linear models using the approach described by Dellaportas & Smith (1993). In particular, this paper describes a way of obtaining a predictive distribution from recursive claims reserving models, including the well known model introduced by Mack (1993). Mack's model is useful, since it can be used with data sets which exhibit negative incremental amounts. The techniques are illustrated with examples, and the resulting predictive distributions from both the bootstrap and Bayesian methods are compared.
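The bootstrap idea can be caricatured with a deliberately simplified sketch that is unrelated to the paper's chain-ladder structure: resampling the data captures estimation error, and adding a resampled residual adds process error, so the simulated values approximate a predictive distribution rather than a distribution of the mean. All numbers are invented.

```python
# Toy bootstrap predictive distribution for a single future observation:
# parameter uncertainty from resampling the data, process error from a
# resampled (centred) residual.
import numpy as np

rng = np.random.default_rng(11)
claims = rng.gamma(shape=2.0, scale=500.0, size=40)     # made-up incremental claims

n_boot = 10_000
predictive = np.empty(n_boot)
for b in range(n_boot):
    boot = rng.choice(claims, size=claims.size, replace=True)   # estimation error
    mu_hat = boot.mean()
    process = rng.choice(claims - claims.mean())                # process error draw
    predictive[b] = mu_hat + process

print("predictive mean:", round(predictive.mean(), 1))
print("75th / 95th / 99.5th percentiles:", np.percentile(predictive, [75, 95, 99.5]).round(1))
```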
36

Venkatraman, Padma, and Wynn Jacobson-Galán. "shock_cooling_curve: A Python-based Package for Extensive and Efficient Modeling of Shock Cooling Emission in Supernovae." Research Notes of the AAS 8, no. 1 (January 30, 2024): 33. http://dx.doi.org/10.3847/2515-5172/ad2265.

Abstract:
Abstract The light-curve evolution of a supernova contains information on the exploding star. Early-time photometry of a variety of explosive transients, including Calcium-rich transients and type IIb/Ibc and IIP supernovae, shows evidence for an early light curve peak as a result of the explosion’s shock wave passing through extended material (i.e., shock cooling emission (SCE)). Analytic modeling of the SCE allows us to estimate progenitor properties such as the radius and mass of extended material (e.g., the stellar envelope) as well as the shock velocity. In this work, we present a Python-based open-source code that implements four analytic models, originally developed in Piro, Piro et al. and Sapir & Waxman, applied to photometric data to obtain progenitor properties via different modeling techniques (including nonlinear optimization and Markov chain Monte Carlo sampling). Our software is easily extendable to other analytic models for SCE and different methods of parameter estimation.
37

Ganss, R., J. L. Pledger, A. E. Sansom, P. A. James, J. Puls, and S. M. Habergham-Mawson. "Metallicity estimation of core-collapse Supernova H ii regions in galaxies within 30 Mpc." Monthly Notices of the Royal Astronomical Society 512, no. 1 (March 9, 2022): 1541–56. http://dx.doi.org/10.1093/mnras/stac625.

Abstract:
ABSTRACT This work presents measurements of the local H ii environment metallicities of core-collapse supernovae (SNe) within a luminosity distance of 30 Mpc. 76 targets were observed at the Isaac Newton Telescope and environment metallicities could be measured for 65 targets using the N2 and O3N2 strong emission line method. The cumulative distribution functions (CDFs) of the environment metallicities of Type Ib and Ic SNe tend to higher metallicity than Type IIP; however, Type Ic are also present at lower metallicities whereas Type Ib are not. The Type Ib frequency distribution is narrower (standard deviation ∼0.06 dex) than the Ic and IIP distributions (∼0.15 dex), giving some evidence for a significant fraction of single massive progenitor stars; the low metallicity of Type Ic suggests a significant fraction of compact binary progenitors. However, both the Kolmogorov–Smirnov test and the Anderson–Darling test indicate no statistical significance for a difference in the local metallicities of the three SN types. Monte Carlo simulations reveal a strong sensitivity of these tests to the uncertainties of the derived metallicities. Given the uncertainties of the strong emission methods, the applicability of the tests seems limited. We extended our analysis with the data of the Type Ib/Ic/IIP SN sample from Galbany et al. The CDFs created with their sample confirm our CDFs very well. The statistical tests, combining our sample and the Galbany et al. sample, indicate a significant difference between Type Ib and Type IIP with <5 per cent probability that they are drawn from the same parent population.
38

Zhao, Gang, Paul Bates, Jeffrey Neal, and Bo Pang. "Design flood estimation for global river networks based on machine learning models." Hydrology and Earth System Sciences 25, no. 11 (November 22, 2021): 5981–99. http://dx.doi.org/10.5194/hess-25-5981-2021.

Abstract:
Abstract. Design flood estimation is a fundamental task in hydrology. In this research, we propose a machine-learning-based approach to estimate design floods globally. This approach involves three stages: (i) estimating at-site flood frequency curves for global gauging stations using the Anderson–Darling test and a Bayesian Markov chain Monte Carlo (MCMC) method; (ii) clustering these stations into subgroups using a K-means model based on 12 globally available catchment descriptors; and (iii) developing a regression model in each subgroup for regional design flood estimation using the same descriptors. A total of 11 793 stations globally were selected for model development, and three widely used regression models were compared for design flood estimation. The results showed that (1) the proposed approach achieved the highest accuracy for design flood estimation when using all 12 descriptors for clustering; and the performance of the regression was improved by considering more descriptors during training and validation; (2) a support vector machine regression provided the highest prediction performance amongst all regression models tested, with a root mean square normalised error of 0.708 for 100-year return period flood estimation; (3) 100-year design floods in tropical, arid, temperate, cold and polar climate zones could be reliably estimated (i.e. <±25 % error), with relative mean bias (RBIAS) values of −0.199, −0.233, −0.169, 0.179 and −0.091 respectively; (4) the machine-learning-based approach developed in this paper showed considerable improvement over the index-flood-based method introduced by Smith et al. (2015, https://doi.org/10.1002/2014WR015814) for design flood estimation at global scales; and (5) the average RBIAS in estimation is less than 18 % for 10-, 20-, 50- and 100-year design floods. We conclude that the proposed approach is a valid method to estimate design floods anywhere on the global river network, improving our prediction of the flood hazard, especially in ungauged areas.
39

Vrugt, J. A. "DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems." Hydrology and Earth System Sciences Discussions 8, no. 2 (April 26, 2011): 4025–52. http://dx.doi.org/10.5194/hessd-8-4025-2011.

Abstract:
Abstract. Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a Sudoku puzzle and a rainfall–runoff model calibration problem are used to illustrate DREAM(D).
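The basic notion of sampling a posterior that is only defined at fixed points can be illustrated with a plain Metropolis sampler on a grid. The sketch below is a generic discrete-space MCMC in Python, not the DREAM(D) algorithm; the target density, grid and proposal width are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

# Parameter space resolved at fixed points: a regular grid on [0, 10] with spacing 0.25.
grid = np.arange(0.0, 10.0 + 1e-9, 0.25)

def log_post(theta):
    """Toy unnormalised log-posterior (illustrative only)."""
    return -0.5 * ((theta - 4.25) / 0.8) ** 2

# Metropolis on grid indices: propose a symmetric integer jump, accept with the usual ratio.
n_iter, idx = 5000, len(grid) // 2
samples = np.empty(n_iter)
for t in range(n_iter):
    prop = idx + rng.integers(-3, 4)
    if 0 <= prop < len(grid):
        if np.log(rng.random()) < log_post(grid[prop]) - log_post(grid[idx]):
            idx = prop
    samples[t] = grid[idx]

print("posterior mean on the grid:", samples[1000:].mean())
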
40

Wilson, P. David, Sung-Cheng Huang, and Randall A. Hawkins. "Single-Scan Bayes Estimation of Cerebral Glucose Metabolic Rate: Comparison with Non-Bayes Single-Scan Methods Using FDG PET Scans in Stroke." Journal of Cerebral Blood Flow & Metabolism 8, no. 3 (June 1988): 418–25. http://dx.doi.org/10.1038/jcbfm.1988.78.

Abstract:
Three single-scan (SS) methods are currently available for estimating the local cerebral metabolic rate of glucose (LCMRG) from F-18 deoxyglucose (FDG) positron emission tomography (PET) scan data: SS(SPH), named for Sokoloff, Phelps, and Huang; SS(B), named for Brooks; and SS(H), named for Hutchins and Holden et al. All three of these SS methods make use of prior information in the form of mean values of rate constants from the normal population. We have developed a Bayes estimation (BE) method that uses prior information in the form of rate constant means, variances, and correlations in both the normal and ischemic tissue populations. The BE method selects, based only on the data, whether the LCMRG estimate should be computed using prior information from normal or ischemic tissue. The ability of BE to make this selection gives it an advantage over the other methods. The BE method can be used as a SS method or can use any number of PET scans. We conducted Monte Carlo studies comparing BE as a SS method with the other SS methods, all using a single scan at 60 min. We found SS(H) to be strongly superior to SS(SPH) and SS(B), and we found BE to be definitely superior to SS(H).
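The idea of letting the data select which prior population to use can be illustrated with a deliberately simplified one-parameter conjugate Gaussian model: each candidate prior is scored by its marginal likelihood for the observed value, and the posterior mean is computed under the winning prior. The numbers and the Gaussian likelihood are assumptions for illustration only and do not reproduce the authors' kinetic model.

import numpy as np
from scipy.stats import norm

def log_marginal(y, sigma_y, mu0, sigma0):
    # Marginal likelihood of one observation under y ~ N(theta, sigma_y^2), theta ~ N(mu0, sigma0^2).
    return norm.logpdf(y, loc=mu0, scale=np.sqrt(sigma_y**2 + sigma0**2))

def posterior_mean(y, sigma_y, mu0, sigma0):
    w = sigma0**2 / (sigma0**2 + sigma_y**2)
    return w * y + (1 - w) * mu0

# Hypothetical priors on a scalar "metabolic rate" parameter (illustrative values).
priors = {"normal tissue": (30.0, 3.0), "ischemic tissue": (15.0, 5.0)}

y_obs, sigma_y = 18.0, 4.0   # single-scan derived measurement and its assumed noise level

# Pick the prior population that explains the data best, then estimate under it.
best = max(priors, key=lambda k: log_marginal(y_obs, sigma_y, *priors[k]))
print("selected prior:", best)
print("Bayes estimate:", posterior_mean(y_obs, sigma_y, *priors[best]))
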
41

Schneider-Zapp, K., O. Ippisch, and K. Roth. "Numerical study of the evaporation process and parameter estimation analysis of an evaporation experiment." Hydrology and Earth System Sciences Discussions 6, no. 6 (December 3, 2009): 7385–427. http://dx.doi.org/10.5194/hessd-6-7385-2009.

Abstract:
Abstract. Evaporation is an important process in soil-atmosphere interaction. The determination of hydraulic properties is one of the crucial parts in the simulation of water transport in porous media. Schneider et al. (2006) developed a new evaporation method to improve the estimation of hydraulic properties in the dry range. In this study we used numerical simulations of the experiment to study the physical dynamics in more detail, to optimise the boundary conditions and to choose the optimal combination of measurements. The physical analysis revealed, in accordance with experimental findings in the literature, two different evaporation regimes: a soil-atmosphere boundary layer dominated regime (regime I) in the saturated region and a hydraulically dominated regime (regime II). During this second regime, a drying front forms that penetrates deeper into the soil as time passes. The sensitivity analysis showed that the result is especially sensitive at the transition between the two regimes. By changing the boundary conditions it is possible to force the system to switch between the two regimes, e.g. from II back to I. Based on these findings, a multistep experiment was developed. The response surfaces for all parameter combinations are flat and have a unique, localised minimum. Best parameter estimates are obtained if the evaporation flux and a potential measurement at 2 cm depth are used as target variables. Parameter estimation from simulated experiments with realistic measurement errors with a two-stage Monte-Carlo Levenberg-Marquardt procedure and manual rejection of obvious misfits leads to acceptable results for three different soil textures.
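A generic Python sketch of a Monte Carlo multistart combined with local Levenberg-Marquardt refinement, which is the flavour of the two-stage procedure mentioned above; the toy exponential flux model, the noise level and the start-value ranges are assumptions and are unrelated to the actual soil-hydraulic parameterisation.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def model(params, t):
    """Toy two-parameter evaporation-flux model (illustrative, not a Richards-equation solution)."""
    a, b = params
    return a * np.exp(-b * t)

# Synthetic "measurements" with realistic noise.
t = np.linspace(0.0, 10.0, 50)
obs = model((2.0, 0.4), t) + rng.normal(0.0, 0.05, t.size)

def residuals(params):
    return model(params, t) - obs

# Stage 1: Monte Carlo sampling of start values; stage 2: local Levenberg-Marquardt refinement.
best = None
for _ in range(25):
    start = rng.uniform([0.1, 0.01], [5.0, 2.0])
    fit = least_squares(residuals, start, method="lm")
    if best is None or fit.cost < best.cost:
        best = fit

print("estimated parameters:", best.x)
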
42

Efroni, Yonathan, Gal Dalal, Bruno Scherrer, and Shie Mannor. "How to Combine Tree-Search Methods in Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3494–501. http://dx.doi.org/10.1609/aaai.v33i01.33013494.

Abstract:
Finite-horizon lookahead policies are abundantly used in Reinforcement Learning and demonstrate impressive empirical success. Usually, the lookahead policies are implemented with specific planning methods such as Monte Carlo Tree Search (e.g. in AlphaZero (Silver et al. 2017b)). Referring to the planning problem as tree search, a reasonable practice in these implementations is to back up the value only at the leaves while the information obtained at the root is not leveraged other than for updating the policy. Here, we question the potency of this approach. Namely, the latter procedure is non-contractive in general, and its convergence is not guaranteed. Our proposed enhancement is straightforward and simple: use the return from the optimal tree path to back up the values at the descendants of the root. This leads to a γ^h-contracting procedure, where γ is the discount factor and h is the tree depth. To establish our results, we first introduce a notion called multiple-step greedy consistency. We then provide convergence rates for two algorithmic instantiations of the above enhancement in the presence of noise injected into both the tree search stage and the value estimation stage.
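A toy Python sketch of the proposed backup: for every state, the best discounted return over all h-step action sequences is propagated back together with γ^h times the current value at the reached state, which makes the operator a γ^h-contraction. The tiny deterministic MDP is invented for illustration, and exhaustive enumeration replaces any actual tree-search machinery.

import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Tiny deterministic MDP: 6 states, 2 actions, random rewards (illustrative only).
n_states, n_actions, gamma, h = 6, 2, 0.9, 3
next_state = rng.integers(0, n_states, (n_states, n_actions))
reward = rng.random((n_states, n_actions))

def h_step_backup(v):
    """For every state, back up the best discounted return over all h-step action sequences
    plus gamma**h times the current value estimate at the reached state."""
    v_new = np.empty_like(v)
    for s in range(n_states):
        best = -np.inf
        for seq in product(range(n_actions), repeat=h):
            ret, cur = 0.0, s
            for i, a in enumerate(seq):
                ret += gamma**i * reward[cur, a]
                cur = next_state[cur, a]
            best = max(best, ret + gamma**h * v[cur])
        v_new[s] = best
    return v_new

v = np.zeros(n_states)
for _ in range(30):          # repeated h-step backups converge at rate gamma**h
    v = h_step_backup(v)
print("value estimates:", np.round(v, 3))
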
43

Vrugt, J. A., and C. J. F. Ter Braak. "DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, and combinatorial posterior parameter estimation problems." Hydrology and Earth System Sciences 15, no. 12 (December 13, 2011): 3701–13. http://dx.doi.org/10.5194/hess-15-3701-2011.

Abstract:
Abstract. Formal and informal Bayesian approaches have found widespread implementation and use in environmental modeling to summarize parameter and predictive uncertainty. Successful implementation of these methods relies heavily on the availability of efficient sampling methods that approximate, as closely and consistently as possible, the (evolving) posterior target distribution. Much of this work has focused on continuous variables that can take on any value within their prior defined ranges. Here, we introduce theory and concepts of a discrete sampling method that resolves the parameter space at fixed points. This new code, entitled DREAM(D), uses the recently developed DREAM algorithm (Vrugt et al., 2008, 2009a, b) as its main building block but implements two novel proposal distributions to help solve discrete and combinatorial optimization problems. This novel MCMC sampler maintains detailed balance and ergodicity, and is especially designed to resolve the emerging class of optimal experimental design problems. Three different case studies involving a Sudoku puzzle, a soil water retention curve, and a rainfall–runoff model calibration problem are used to benchmark the performance of DREAM(D). The theory and concepts developed herein can be easily integrated into other (adaptive) MCMC algorithms.
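One simple way to adapt a differential-evolution-style jump to a discrete parameter space is to round the proposed move back onto the grid; the Python sketch below shows such a generic adaptation and is not one of the actual DREAM(D) proposal distributions (the jump factor, noise level and lattice are assumptions).

import numpy as np

rng = np.random.default_rng(5)

def discrete_de_proposal(chains, i, gamma_de=0.8):
    """Differential-evolution-style jump for chain i, rounded back onto an integer grid.
    `chains` holds the current integer-valued states of all parallel chains."""
    others = [j for j in range(len(chains)) if j != i]
    r1, r2 = rng.choice(others, size=2, replace=False)
    jump = gamma_de * (chains[r1] - chains[r2]) + rng.normal(0, 0.1, chains.shape[1])
    return np.rint(chains[i] + jump).astype(int)

# Example: 5 parallel chains on a 3-dimensional integer lattice.
chains = rng.integers(0, 10, (5, 3))
print("current state of chain 0:", chains[0])
print("proposed state:          ", discrete_de_proposal(chains, 0))
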
44

Schneider-Zapp, K., O. Ippisch, and K. Roth. "Numerical study of the evaporation process and parameter estimation analysis of an evaporation experiment." Hydrology and Earth System Sciences 14, no. 5 (May 17, 2010): 765–81. http://dx.doi.org/10.5194/hess-14-765-2010.

Abstract:
Abstract. Evaporation is an important process in soil-atmosphere interaction. The determination of hydraulic properties is one of the crucial parts in the simulation of water transport in porous media. Schneider et al. (2006) developed a new evaporation method to improve the estimation of hydraulic properties in the dry range. In this study we used numerical simulations of the experiment to study the physical dynamics in more detail, to optimise the boundary conditions and to choose the optimal combination of measurements. The physical analysis revealed, in accordance with experimental findings in the literature, two different evaporation regimes: (i) a soil-atmosphere boundary layer dominated regime (regime I) close to saturation and (ii) a hydraulically dominated regime (regime II). During this second regime a drying front (interface between unsaturated and dry zone with very steep gradients) forms that penetrates deeper into the soil as time passes. The sensitivity analysis showed that the result is especially sensitive at the transition between the two regimes. By changing the boundary conditions it is possible to force the system to switch between the two regimes, e.g. from II back to I. Based on these findings, a multistep experiment was developed. The response surfaces for all parameter combinations are flat and have a unique, localised minimum. Best parameter estimates are obtained if the evaporation flux and a potential measurement at 2 cm depth are used as target variables. Parameter estimation from simulated experiments with realistic measurement errors with a two-stage Monte-Carlo Levenberg-Marquardt procedure and manual rejection of obvious misfits leads to acceptable results for three different soil textures.
45

Dahmen, G., and A. Ziegler. "Independence Estimating Equations for Controlled Clinical Trials with Small Sample Sizes." Methods of Information in Medicine 45, no. 04 (2006): 430–34. http://dx.doi.org/10.1055/s-0038-1634100.

Abstract:
Summary Objectives: The application of independence estimating equations (IEE) for controlled clinical trials (CCTs) has recently been discussed, and recommendations for its use have been derived for testing hypotheses. The robust estimator of variance has been shown to be liberal for small sample sizes. Therefore, a series of modifications has been proposed. In this paper we systematically compare confidence intervals (CIs) proposed in the literature for situations that are common in CCTs. Methods: Using Monte-Carlo simulation studies, we compared the coverage probabilities of CIs and non-convergence probabilities for the parameters of the mean structure for small samples using modifications of the variance estimator proposed by Mancl and DeRouen [7], Morel et al. [8] and Pan [3]. Results: None of the proposed modifications behaves well in every investigated situation. For parallel group designs with repeated measurements and binary response, the method proposed by Pan maintains the nominal level. We observed non-convergence of the IEE algorithm in up to 10% of the replicates depending on response probabilities in the treatment groups. For comparing slopes with continuous responses, the approach of Morel et al. can be recommended. Conclusions: Results of non-convergence probabilities show that IEE should not be used in parallel group designs with binary endpoints and response probabilities close to 0 or 1. Modifications of the robust variance estimator should be used for sample sizes up to 100 clusters for CI estimation.
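For reference, the uncorrected cluster-robust ("sandwich") variance that the cited modifications adjust can be computed as follows for a linear mean structure under an independence working correlation; the data are synthetic, and the Mancl-DeRouen, Morel et al. and Pan corrections themselves are not reproduced here.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic clustered data: 30 clusters ("patients"), 4 repeated measurements each.
n_clusters, n_rep = 30, 4
cluster_effect = rng.normal(0, 0.5, n_clusters)
X, y, cid = [], [], []
for c in range(n_clusters):
    x_c = np.column_stack([np.ones(n_rep), rng.normal(size=n_rep)])   # intercept + covariate
    y_c = x_c @ np.array([1.0, 0.5]) + cluster_effect[c] + rng.normal(0, 1, n_rep)
    X.append(x_c)
    y.append(y_c)
    cid.append(np.full(n_rep, c))
X, y, cid = np.vstack(X), np.concatenate(y), np.concatenate(cid)

# Independence estimating equations with a linear mean structure give the OLS point estimate.
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Classical robust sandwich covariance, summing score contributions cluster by cluster.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for c in range(n_clusters):
    Xc, ec = X[cid == c], resid[cid == c]
    score = Xc.T @ ec
    meat += np.outer(score, score)
cov_robust = bread @ meat @ bread
print("beta:", beta, "robust SEs:", np.sqrt(np.diag(cov_robust)))
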
46

Cho, Hee-Suk. "Improvement of the parameter measurement accuracy by the third-generation gravitational wave detector Einstein Telescope." Classical and Quantum Gravity 39, no. 8 (March 24, 2022): 085006. http://dx.doi.org/10.1088/1361-6382/ac5b31.

Abstract:
Abstract The Einstein Telescope (ET) has been proposed as one of the third-generation gravitational wave (GW) detectors. The sensitivity of ET would be a factor of 10 better than the second-generation GW detector, advanced LIGO (aLIGO); thus, the GW source parameters could be measured with much better accuracy. In this work, we show how the precision in parameter estimation can be improved between aLIGO and ET by comparing the measurement errors. We apply the TaylorF2 waveform model defined in the frequency domain to the Fisher matrix method, which is a semi-analytic approach for estimating GW parameter measurement errors. We adopt as our sources low-mass binary black holes with total masses of M ⩽ 16 M⊙ and effective spins of −0.9 ⩽ χ_eff ⩽ 0.9 and calculate the measurement errors of the mass and the spin parameters using 10^4 Monte-Carlo samples randomly distributed in our mass and spin parameter space. We find that for the same sources ET can achieve ∼14 times better signal-to-noise ratio than aLIGO and the error ratios (σ_λ,ET/σ_λ,aLIGO) for the chirp-mass, symmetric mass ratio, and effective spin parameters can be lower than 7% for all binaries. We also consider the equal-mass binary neutron stars with the component masses of 1, 1.4, and 2 M⊙ and find that the error ratios for the mass and the spin parameters can be lower than 1.5%. In particular, the measurement error of the tidal deformability Λ̃ can also be significantly reduced by ET, with an error ratio of 3.6%–6.1%. We investigate the effect of prior information by applying the Gaussian prior on the coalescence phase ϕ_c to the Fisher matrix and find that the error of the intrinsic parameters can be reduced to ∼70% of the original priorless error (σ_λ^priorless) if the standard deviation of the prior is similar to σ_ϕc^priorless.
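The Fisher matrix recipe itself (noise-weighted inner products of numerical waveform derivatives, inverted to give 1σ errors) can be sketched compactly. The Python code below uses a leading-order Newtonian stand-in for TaylorF2 and an idealised flat noise spectrum rather than the real aLIGO or ET sensitivity curves, so the numbers it prints are purely illustrative.

import numpy as np

GMSUN_SEC = 4.925491e-6          # G*Msun/c^3 in seconds

def waveform(f, log_amp, mc_solar, t_c=0.0, phi_c=0.0):
    """Toy Newtonian-order frequency-domain waveform h(f) = A f^(-7/6) exp(i psi(f))."""
    mc = mc_solar * GMSUN_SEC
    psi = 2 * np.pi * f * t_c - phi_c + 3.0 / 128.0 * (np.pi * mc * f) ** (-5.0 / 3.0)
    return np.exp(log_amp) * f ** (-7.0 / 6.0) * np.exp(1j * psi)

def inner(a, b, psd, df):
    """Noise-weighted inner product 4 Re sum(a b* / Sn) df."""
    return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

# Frequency grid and an idealised flat noise PSD (assumption, not a real detector curve).
f = np.arange(20.0, 1024.0, 0.25)
df, psd = 0.25, np.full(f.size, 1e-46)

# Fisher matrix over (log amplitude, chirp mass, t_c, phi_c) via central finite differences.
theta0 = np.array([np.log(1e-22), 1.2, 0.0, 0.0])     # illustrative source parameters
eps = np.array([1e-6, 1e-6, 1e-7, 1e-6])
derivs = []
for i in range(4):
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += eps[i]
    tm[i] -= eps[i]
    derivs.append((waveform(f, *tp) - waveform(f, *tm)) / (2 * eps[i]))

fisher = np.array([[inner(derivs[i], derivs[j], psd, df) for j in range(4)] for i in range(4)])
sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))
print("1-sigma errors on (lnA, Mc [Msun], t_c [s], phi_c):", sigma)
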
47

Guo, Rui, Jiaoyue Wang, Longfei Bing, Dan Tong, Philippe Ciais, Steven J. Davis, Robbie M. Andrew, Fengming Xi, and Zhu Liu. "Global CO2 uptake by cement from 1930 to 2019." Earth System Science Data 13, no. 4 (April 30, 2021): 1791–805. http://dx.doi.org/10.5194/essd-13-1791-2021.

Abstract:
Abstract. Because of the alkaline nature and high calcium content of cements in general, they serve as a CO2-absorbing agent through carbonation processes, resembling silicate weathering in nature. This carbon uptake capacity of cements could abate some of the CO2 emitted during their production. Given the scale of cement production worldwide (4.10 Gt in 2019), a life-cycle assessment is necessary to determine the actual net carbon impacts of this industry. We adopted a comprehensive analytical model to estimate the amount of CO2 that had been absorbed from 1930 to 2019 in four types of cement materials, including concrete, mortar, construction waste, and cement kiln dust (CKD). In addition, the process CO2 emission during the same period based on the same datasets was also estimated. The results show that 21.02 Gt CO2 (95 % confidence interval, CI: 18.01–24.41 Gt CO2) had been absorbed in the cements produced from 1930 to 2019, with the 2019 annual figure amounting to 0.89 Gt CO2 yr^−1 (95 % CI: 0.76–1.06 Gt CO2). The cumulative uptake is equivalent to approximately 55 % of the process emission based on our estimation. In particular, China's dominant position in cement production or consumption in recent decades also gives rise to its uptake being the greatest, with a cumulative sink of 6.21 Gt CO2 (95 % CI: 4.59–8.32 Gt CO2) since 1930. Among the four types of cement materials, mortar is estimated to be the greatest contributor (approximately 59 %) to the total uptake. Potentially, our cement emission and uptake estimation system can be updated annually and modified when necessary for future low-carbon transitions in the cement industry. All the data described in this study, including the Monte Carlo uncertainty analysis results, are accessible at https://doi.org/10.5281/zenodo.4459729 (Wang et al., 2021).
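A stripped-down example of the kind of Monte Carlo uncertainty propagation mentioned at the end of the abstract: a single year's uptake is written as production x clinker ratio x CaO fraction x carbonated fraction x (44/56), with every factor drawn from an assumed distribution. All distributions and values below are invented and far simpler than the paper's analytical model.

import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Illustrative input distributions (assumptions, not the paper's values).
production = rng.normal(4.1, 0.1, n)                  # Gt cement produced
clinker_ratio = rng.triangular(0.60, 0.66, 0.72, n)   # clinker-to-cement ratio
cao_fraction = rng.triangular(0.60, 0.65, 0.68, n)    # CaO content of clinker
carbonated = rng.triangular(0.10, 0.20, 0.30, n)      # fraction of CaO carbonated this year

# Mass of CO2 taken up, converting CaO to CO2 with the molar-mass ratio 44/56.
uptake = production * clinker_ratio * cao_fraction * carbonated * (44.0 / 56.0)

lo, med, hi = np.percentile(uptake, [2.5, 50, 97.5])
print(f"CO2 uptake: {med:.2f} Gt (95% CI {lo:.2f}-{hi:.2f} Gt)")
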
48

Xu, Jay J., Jarvis T. Chen, Thomas R. Belin, Ronald S. Brookmeyer, Marc A. Suchard, and Christina M. Ramirez. "Racial and Ethnic Disparities in Years of Potential Life Lost Attributable to COVID-19 in the United States: An Analysis of 45 States and the District of Columbia." International Journal of Environmental Research and Public Health 18, no. 6 (March 12, 2021): 2921. http://dx.doi.org/10.3390/ijerph18062921.

Abstract:
The coronavirus disease 2019 (COVID-19) epidemic in the United States has disproportionately impacted communities of color across the country. Focusing on COVID-19-attributable mortality, we expand upon a national comparative analysis of years of potential life lost (YPLL) attributable to COVID-19 by race/ethnicity (Bassett et al., 2020), estimating percentages of total YPLL for non-Hispanic Whites, non-Hispanic Blacks, Hispanics, non-Hispanic Asians, and non-Hispanic American Indian or Alaska Natives, contrasting them with their respective percent population shares, as well as age-adjusted YPLL rate ratios—anchoring comparisons to non-Hispanic Whites—in each of 45 states and the District of Columbia using data from the National Center for Health Statistics as of 30 December 2020. Using a novel Monte Carlo simulation procedure to perform estimation, our results reveal substantial racial/ethnic disparities in COVID-19-attributable YPLL across states, with a prevailing pattern of non-Hispanic Blacks and Hispanics experiencing disproportionately high and non-Hispanic Whites experiencing disproportionately low COVID-19-attributable YPLL. Furthermore, estimated disparities are generally more pronounced when measuring mortality in terms of YPLL compared to death counts, reflecting the greater intensity of the disparities at younger ages. We also find substantial state-to-state variability in the magnitudes of the estimated racial/ethnic disparities, suggesting that they are driven in large part by social determinants of health whose degree of association with race/ethnicity varies by state.
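The essence of a Monte Carlo YPLL estimation with bracketed death counts can be illustrated as follows: exact ages at death are repeatedly sampled within each reported age bracket, YPLL is accumulated against a reference age, and percentile intervals summarise the uncertainty. The groups, counts, brackets and the 75-year reference below are assumptions for illustration, not the study's data or exact procedure.

import numpy as np

rng = np.random.default_rng(8)

# Deaths reported only by age bracket for two hypothetical groups (counts are invented).
brackets = [(25, 34), (35, 44), (45, 54), (55, 64), (65, 74)]
deaths = {"group A": [5, 12, 30, 60, 90], "group B": [2, 6, 15, 40, 80]}
reference_age, n_mc = 75, 5000

shares = {g: [] for g in deaths}
for _ in range(n_mc):
    ypll = {}
    for g, counts in deaths.items():
        total = 0.0
        for (lo, hi), c in zip(brackets, counts):
            # Sample exact ages at death uniformly within the bracket, then sum the lost years.
            ages = rng.uniform(lo, hi + 1, c)
            total += np.sum(np.maximum(reference_age - ages, 0.0))
        ypll[g] = total
    grand = sum(ypll.values())
    for g in deaths:
        shares[g].append(ypll[g] / grand)

for g, s in shares.items():
    lo, med, hi = np.percentile(s, [2.5, 50, 97.5])
    print(f"{g}: YPLL share {med:.3f} (95% interval {lo:.3f}-{hi:.3f})")
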
49

Bocher, Marie, Alexandre Fournier, and Nicolas Coltice. "Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation." Nonlinear Processes in Geophysics 25, no. 1 (February 16, 2018): 99–123. http://dx.doi.org/10.5194/npg-25-99-2018.

Abstract:
Abstract. Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and where uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solve high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D-spherical annulus model and compared it with the method developed previously. The EnKF performs on average better and is more stable than the former method. Less than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
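A generic stochastic EnKF analysis step with multiplicative covariance inflation (localization omitted) looks as follows; the toy state, observation operator and noise levels are assumptions, and a real application would propagate the ensemble through a mantle-convection model between analyses.

import numpy as np

rng = np.random.default_rng(9)

n_state, n_obs, n_ens = 50, 10, 30
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0   # observe every 5th state variable
R = 0.1 * np.eye(n_obs)

truth = np.sin(np.linspace(0, 2 * np.pi, n_state))
y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

# Forecast ensemble (here just a noisy prior; a real application would come from the model).
X_f = rng.normal(0.0, 1.0, (n_state, n_ens))

def enkf_analysis(X_f, y, H, R, inflation=1.05):
    """Stochastic EnKF analysis step with multiplicative covariance inflation."""
    mean = X_f.mean(axis=1, keepdims=True)
    A = inflation * (X_f - mean)                    # inflated anomalies
    X_infl = mean + A
    P_HT = A @ (H @ A).T / (n_ens - 1)              # P_f H^T estimated from the ensemble
    S = H @ P_HT + R                                # innovation covariance
    K = P_HT @ np.linalg.inv(S)                     # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X_infl + K @ (Y_pert - H @ X_infl)

X_a = enkf_analysis(X_f, y, H, R)
print("forecast RMSE:", np.sqrt(np.mean((X_f.mean(axis=1) - truth) ** 2)))
print("analysis RMSE:", np.sqrt(np.mean((X_a.mean(axis=1) - truth) ** 2)))
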
50

White, Jeremy, Victoria Stengel, Samuel Rendon, and John Banta. "The importance of parameterization when simulating the hydrologic response of vegetative land-cover change." Hydrology and Earth System Sciences 21, no. 8 (August 4, 2017): 3975–89. http://dx.doi.org/10.5194/hess-21-3975-2017.

Abstract:
Abstract. Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km² watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to the Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
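A compact illustration of GLUE-style conditioning with a behavioural threshold on the Nash-Sutcliffe efficiency, using a toy one-parameter linear-reservoir model in place of SWAT; all data and the threshold are invented, and the likelihood weighting of full GLUE is omitted for brevity.

import numpy as np

rng = np.random.default_rng(10)

def linear_reservoir(rain, k):
    """Toy rainfall-runoff model: a single linear store with recession coefficient k."""
    q, store = np.zeros_like(rain), 0.0
    for t, r in enumerate(rain):
        store += r
        q[t] = k * store
        store -= q[t]
    return q

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "observed" streamflow from a known parameter plus noise.
rain = rng.gamma(0.6, 4.0, 365)
obs = linear_reservoir(rain, 0.3) + rng.normal(0.0, 0.3, 365)

# GLUE-style conditioning: Monte Carlo sampling of the parameter, a run is behavioural if its
# NSE exceeds a threshold, and predictive bounds are quantiles of the behavioural simulations.
k_samples = rng.uniform(0.05, 0.95, 2000)
sims = np.array([linear_reservoir(rain, k) for k in k_samples])
scores = np.array([nse(s, obs) for s in sims])
behavioural = scores > 0.5
lower = np.quantile(sims[behavioural], 0.05, axis=0)
upper = np.quantile(sims[behavioural], 0.95, axis=0)
print(f"{behavioural.sum()} behavioural runs; mean 90% band width = {(upper - lower).mean():.2f}")
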