Academic literature on the topic "Bayesian Moment Matching"

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Select a source type:

Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Bayesian Moment Matching".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Journal articles on the topic "Bayesian Moment Matching"

1

Zhang, Qiong, and Yongjia Song. "Moment-Matching-Based Conjugacy Approximation for Bayesian Ranking and Selection." ACM Transactions on Modeling and Computer Simulation 27, no. 4 (December 20, 2017): 1–23. http://dx.doi.org/10.1145/3149013.

2

Franke, Reiner, Tae-Seok Jang, and Stephen Sacht. "Moment matching versus Bayesian estimation: Backward-looking behaviour in a New-Keynesian baseline model." North American Journal of Economics and Finance 31 (January 2015): 126–54. http://dx.doi.org/10.1016/j.najef.2014.11.001.

3

Cao, Zhixing, and Ramon Grima. "Accuracy of parameter estimation for auto-regulatory transcriptional feedback loops from noisy data." Journal of The Royal Society Interface 16, no. 153 (April 3, 2019): 20180967. http://dx.doi.org/10.1098/rsif.2018.0967.

Abstract:
Bayesian and non-Bayesian moment-based inference methods are commonly used to estimate the parameters defining stochastic models of gene regulatory networks from noisy single cell or population snapshot data. However, a systematic investigation of the accuracy of the predictions of these methods remains missing. Here, we present the results of such a study using synthetic noisy data of a negative auto-regulatory transcriptional feedback loop, one of the most common building blocks of complex gene regulatory networks. We study the error in parameter estimation as a function of (i) number of cells in each sample; (ii) the number of time points; (iii) the highest-order moment of protein fluctuations used for inference; (iv) the moment-closure method used for likelihood approximation. We find that for sample sizes typical of flow cytometry experiments, parameter estimation by maximizing the likelihood is as accurate as using Bayesian methods but with a much reduced computational time. We also show that the choice of moment-closure method is the crucial factor determining the maximum achievable accuracy of moment-based inference methods. Common likelihood approximation methods based on the linear noise approximation or the zero cumulants closure perform poorly for feedback loops with large protein–DNA binding rates or large protein bursts; this is exacerbated for highly heterogeneous cell populations. By contrast, approximating the likelihood using the linear-mapping approximation or conditional derivative matching leads to highly accurate parameter estimates for a wide range of conditions.
4

Nakagawa, Tomoyuki, and Shintaro Hashimoto. "On Default Priors for Robust Bayesian Estimation with Divergences." Entropy 23, no. 1 (December 27, 2020): 29. http://dx.doi.org/10.3390/e23010029.

Abstract:
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum γ-divergence estimator is well-known to work well in estimation against heavy contamination. The robust Bayesian methods by using quasi-posterior distributions based on divergences have been also proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the γ-divergence. In particular, we show that the proposed priors are approximately robust under the condition on the contamination distribution without assuming any conditions on the contamination ratio. Some simulation studies are also presented.
5

Yiu, A., R. J. B. Goudie, and B. D. M. Tom. "Inference under unequal probability sampling with the Bayesian exponentially tilted empirical likelihood." Biometrika 107, no. 4 (May 21, 2020): 857–73. http://dx.doi.org/10.1093/biomet/asaa028.

Abstract:
Fully Bayesian inference in the presence of unequal probability sampling requires stronger structural assumptions on the data-generating distribution than frequentist semiparametric methods, but offers the potential for improved small-sample inference and convenient evidence synthesis. We demonstrate that the Bayesian exponentially tilted empirical likelihood can be used to combine the practical benefits of Bayesian inference with the robustness and attractive large-sample properties of frequentist approaches. Estimators defined as the solutions to unbiased estimating equations can be used to define a semiparametric model through the set of corresponding moment constraints. We prove Bernstein–von Mises theorems which show that the posterior constructed from the resulting exponentially tilted empirical likelihood becomes approximately normal, centred at the chosen estimator with matching asymptotic variance; thus, the posterior has properties analogous to those of the estimator, such as double robustness, and the frequentist coverage of any credible set will be approximately equal to its credibility. The proposed method can be used to obtain modified versions of existing estimators with improved properties, such as guarantees that the estimator lies within the parameter space. Unlike existing Bayesian proposals, our method does not prescribe a particular choice of prior or require posterior variance correction, and simulations suggest that it provides superior performance in terms of frequentist criteria.
6

Dimas, Christos, Vassilis Alimisis, Nikolaos Uzunoglu, and Paul P. Sotiriadis. "A Point-Matching Method of Moment with Sparse Bayesian Learning Applied and Evaluated in Dynamic Lung Electrical Impedance Tomography." Bioengineering 8, no. 12 (November 25, 2021): 191. http://dx.doi.org/10.3390/bioengineering8120191.

Abstract:
Dynamic lung imaging is a major application of Electrical Impedance Tomography (EIT) due to EIT’s exceptional temporal resolution, low cost and absence of radiation. EIT however lacks in spatial resolution and the image reconstruction is very sensitive to mismatches between the actual object’s and the reconstruction domain’s geometries, as well as to the signal noise. The non-linear nature of the reconstruction problem may also be a concern, since the lungs’ significant conductivity changes due to inhalation and exhalation. In this paper, a recently introduced method of moment is combined with a sparse Bayesian learning approach to address the non-linearity issue, provide robustness to the reconstruction problem and reduce image artefacts. To evaluate the proposed methodology, we construct three CT-based time-variant 3D thoracic structures including the basic thoracic tissues and considering 5 different breath states from end-expiration to end-inspiration. The Graz consensus reconstruction algorithm for EIT (GREIT), the correlation coefficient (CC), the root mean square error (RMSE) and the full-reference (FR) metrics are applied for the image quality assessment. Qualitative and quantitative comparison with traditional and more advanced reconstruction techniques reveals that the proposed method shows improved performance in the majority of cases and metrics. Finally, the approach is applied to single-breath online in-vivo data to qualitatively verify its applicability.
7

Heath, Anna, Ioanna Manolopoulou, and Gianluca Baio. "Estimating the Expected Value of Sample Information across Different Sample Sizes Using Moment Matching and Nonlinear Regression." Medical Decision Making 39, no. 4 (May 2019): 347–59. http://dx.doi.org/10.1177/0272989x19837983.

Abstract:
Background. The expected value of sample information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty about the parameters underlying a health economic model. This has potential as a tool for trial design; the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimizing over a large number of different designs. Methods. This article develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment-matching approach to EVSI estimation to optimize over different sample sizes for the underlying trial while retaining a similar computational cost to a single EVSI estimate. This extension calculates the posterior variance of the net monetary benefit across alternative sample sizes and then uses Bayesian nonlinear regression to estimate the EVSI across these sample sizes. Results. A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. This example also highlights how different trial designs can be compared using the EVSI. Conclusion. The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realize the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models. Limitations. Our method involves rerunning the health economic model, which can be more computationally expensive than some recent alternatives, especially in complex models.
8

Browning, Alexander P., Christopher Drovandi, Ian W. Turner, Adrianne L. Jenner, and Matthew J. Simpson. "Efficient inference and identifiability analysis for differential equation models with random parameters." PLOS Computational Biology 18, no. 11 (November 28, 2022): e1010734. http://dx.doi.org/10.1371/journal.pcbi.1010734.

Abstract:
Heterogeneity is a dominant factor in the behaviour of many biological processes. Despite this, it is common for mathematical and statistical analyses to ignore biological heterogeneity as a source of variability in experimental data. Therefore, methods for exploring the identifiability of models that explicitly incorporate heterogeneity through variability in model parameters are relatively underdeveloped. We develop a new likelihood-based framework, based on moment matching, for inference and identifiability analysis of differential equation models that capture biological heterogeneity through parameters that vary according to probability distributions. As our novel method is based on an approximate likelihood function, it is highly flexible; we demonstrate identifiability analysis using both a frequentist approach based on profile likelihood, and a Bayesian approach based on Markov-chain Monte Carlo. Through three case studies, we demonstrate our method by providing a didactic guide to inference and identifiability analysis of hyperparameters that relate to the statistical moments of model parameters from independent observed data. Our approach has a computational cost comparable to analysis of models that neglect heterogeneity, a significant improvement over many existing alternatives. We demonstrate how analysis of random parameter models can aid better understanding of the sources of heterogeneity from biological data.
9

Habibi, Reza. "Conditional Beta Approximation: Two Applications." Indonesian Journal of Mathematics and Applications 2, no. 1 (March 31, 2024): 9–23. http://dx.doi.org/10.21776/ub.ijma.2024.002.01.2.

Abstract:
Suppose that X and Y are two independent positive continuous random variables. Let P = X/(X+Y) and Z = X+Y. If X and Y have gamma distributions with the same scale parameter, then P has a beta distribution and P and Z are independent. When the distributions of these two variables are not gamma, the distribution of P is still well approximated by a beta distribution; however, P and Z are then dependent. According to the moment-matching method, fitting the beta distribution requires computing the moments of the conditional distribution. In this paper, some new methods for computing moments of the conditional distribution of P given Z are proposed. First, a regression method is considered; then Monte Carlo simulation is advised. The Bayesian posterior distribution of P is suggested, and applications of differential equations are also reviewed. These results are applied in two settings: variance change-point detection and the winning percentage of a gambling game. The probability of a change in variance in a sequence of variables is proposed as a leading indicator of possible change. Similarly, the probability of winning in a sequential gambling framework is derived, along with the optimal time to exit the game and a game-theoretic approach to the optimal exit-time problem. In all cases, beta approximations are used. Finally, a conclusion section is given.
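The beta moment matching at the core of this abstract is a standard construction: given the mean and variance of a ratio P = X/(X+Y), solve for the Beta(α, β) parameters that reproduce both moments. A minimal sketch (a generic textbook derivation, not the paper's own code; the example numbers are invented):

```python
def beta_moment_match(mean, var):
    """Return (alpha, beta) for the Beta distribution whose first two
    moments equal `mean` and `var`.
    Requires 0 < mean < 1 and 0 < var < mean * (1 - mean)."""
    # From mean = a/(a+b) and var = ab/((a+b)^2 (a+b+1)):
    common = mean * (1 - mean) / var - 1  # this is a + b
    return mean * common, (1 - mean) * common

# Fit a beta to hypothetical empirical moments of P
alpha, beta = beta_moment_match(0.4, 0.03)  # ≈ (2.8, 4.2)
```

Plugging the returned (α, β) back into the beta mean and variance formulas recovers the input moments exactly, which is a convenient sanity check.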
10

Lu, Chi-Ken, and Patrick Shafto. "Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning." Entropy 23, no. 11 (October 23, 2021): 1387. http://dx.doi.org/10.3390/e23111387.

Abstract:
It is desirable to combine the expressive power of deep learning with Gaussian Process (GP) in one expressive Bayesian learning model. Deep kernel learning showed success as a deep network used for feature extraction. Then, a GP was used as the function model. Recently, it was suggested that, albeit training with marginal likelihood, the deterministic nature of a feature extractor might lead to overfitting, and replacement with a Bayesian network seemed to cure it. Here, we propose the conditional deep Gaussian process (DGP) in which the intermediate GPs in hierarchical composition are supported by the hyperdata and the exposed GP remains zero mean. Motivated by the inducing points in sparse GP, the hyperdata also play the role of function supports, but are hyperparameters rather than random variables. It follows our previous moment matching approach to approximate the marginal prior for conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood which implicitly depends on the hyperdata via the kernel. We show the equivalence with the deep kernel learning in the limit of dense hyperdata in latent space. However, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate expressive power from the depth of hierarchy by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference and deep kernel learning. We also address the non-Gaussian aspect of our model as well as way of upgrading to a full Bayes inference.

Theses on the topic "Bayesian Moment Matching"

1

Heath, A. "Bayesian computations for Value of Information measures using Gaussian processes, INLA and Moment Matching." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10050229/.

Abstract:
Value of Information measures quantify the economic benefit of obtaining additional information about the underlying model parameters of a health economic model. Theoretically, these measures can be used to understand the impact of model uncertainty on health economic decision making. Specifically, the Expected Value of Partial Perfect Information (EVPPI) can be used to determine which model parameters are driving decision uncertainty. This is useful as a tool to perform sensitivity analysis to model assumptions and to determine where future research should be targeted to reduce model uncertainty. Even more importantly, the Value of Information measure known as the Expected Value of Sample Information (EVSI) quantifies the economic value of undertaking a proposed scheme of research. This has clear applications in research prioritisation and trial design, where economically valuable studies should be funded. Despite these useful properties, these two measures have rarely been used in practice due to the large computational burden associated with estimating them in practical scenarios. Therefore, this thesis develops novel methodology to allow these two measures to be calculated in practice. For the EVPPI, the method is based on non-parametric regression using the fast Bayesian computation method INLA (Integrated Nested Laplace Approximations). This novel calculation method is fast, especially for high dimensional problems, greatly reducing the computational time for calculating the EVPPI in many practical settings. For the EVSI, the approximation is based on Moment Matching and using properties of the distribution of the preposterior mean. An extension to this method also uses Bayesian non-linear regression to calculate the EVSI quickly across different trial designs. 
All these methods have been developed and implemented in R packages to aid implementation by practitioners and allow Value of Information measures to inform both health economic evaluations and trial design.
2

Vallade, Vincent. "Contributions à la résolution parallèle du problème SAT." Electronic thesis or dissertation, Sorbonne Université, 2023. http://www.theses.fr/2023SORUS260.

Abstract:
This thesis presents multiple and orthogonal contributions to the improvement of the parallel resolution of the Boolean satisfiability problem (the SAT problem). An instance of the SAT problem is a propositional formula of a particular form (conjunctive normal form is the most common) representing, in general, the variables and constraints of a real-world problem, such as multi-constraint planning, hardware and software verification, or cryptography. Solving the SAT problem consists of determining whether there is an assignment of the variables that satisfies the formula. An algorithm capable of providing an answer to this problem is called a SAT solver. A simplified view of a SAT solver is an algorithm that traverses the set of possible combinations of values for each variable until it finds a combination that makes the formula true (the formula is SAT). If the solver has gone through all the possible combinations without finding a solution, the formula is UNSAT. This algorithm has exponential complexity; indeed, the SAT problem was the first problem proven to be NP-complete. Many algorithms and heuristics have been developed to accelerate the solving of this problem, mainly in a sequential context. The ubiquity of multi-core machines has encouraged considerable efforts in the parallel resolution of the SAT problem, and this thesis is a continuation of those efforts. Its contributions focus on the quality of information sharing between the workers of a parallel SAT solver. A first contribution presents an efficient method to implement an asynchronous algorithm for reducing the size of the shared information. A second contribution combines information extracted from the particular structure of the propositional formula with information extracted dynamically during solving in order to create a filter that maximizes the quality of the shared information. Finally, a last contribution deals with the integration of a component that determines, in a probabilistic way, the truth values of the variables that make a formula satisfiable; calling this component during the solving process guides the solver more quickly towards a solution (if one exists).

Book chapters on the topic "Bayesian Moment Matching"

1

Vallade, Vincent, Saeed Nejati, Julien Sopena, Souheib Baarir, and Vijay Ganesh. "Diversifying a Parallel SAT Solver with Bayesian Moment Matching." In Dependable Software Engineering. Theories, Tools, and Applications, 227–33. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21213-0_14.

2

Cowell, R. G., A. P. Dawid, and P. Sebastiani. "A Comparison of Sequential Learning Methods for Incomplete Data." In Bayesian Statistics 5, 533–42. Oxford: Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780198523567.003.0031.

Abstract:
Deterministic and stochastic methods to approximate a mixture distribution which arises in learning with incomplete data are compared in a simple problem. Simulation results suggest that a simple deterministic method based on moment matching gives a very good approximation of the exact mixture distribution. This also works well when combined with an initial stochastic updating.
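The deterministic moment matching the abstract refers to collapses a mixture posterior into a single tractable distribution by matching its overall moments. A generic Gaussian-mixture illustration of that step (not the authors' code) uses the law of total variance:

```python
def collapse_gaussian_mixture(weights, means, variances):
    """Approximate a Gaussian mixture by the single Gaussian matching
    the mixture's overall mean and variance.
    Mean:     E[X]   = sum_i w_i * mu_i
    Variance: Var[X] = sum_i w_i * (var_i + mu_i^2) - E[X]^2"""
    m = sum(w * mu for w, mu in zip(weights, means))
    v = sum(w * (var + mu ** 2)
            for w, mu, var in zip(weights, means, variances)) - m ** 2
    return m, v

# Two equally weighted unit-variance components centred at 0 and 2
m, v = collapse_gaussian_mixture([0.5, 0.5], [0.0, 2.0], [1.0, 1.0])  # -> (1.0, 2.0)
```

The collapsed variance (2.0) exceeds each component's variance (1.0) because the spread between the component means is absorbed into the single Gaussian.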
3

Donovan, Therese M., and Ruth M. Mickey. "The Shark Attack Problem Revisited: MCMC with the Metropolis Algorithm." In Bayesian Statistics for Beginners, 193–211. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841296.003.0013.

Abstract:
In this chapter, the "Shark Attack Problem" (Chapter 11) is revisited. Markov Chain Monte Carlo (MCMC) is introduced as another way to determine a posterior distribution of λ, the mean number of shark attacks per year. The MCMC approach is so versatile that it can be used to solve almost any kind of parameter estimation problem. The chapter highlights the Metropolis algorithm in detail and illustrates its application, step by step, for the "Shark Attack Problem." The posterior distribution generated in Chapter 11 using the gamma-Poisson conjugate is compared with the MCMC posterior distribution to show how successful the MCMC method can be. By the end of the chapter, the reader should also understand the following concepts: tuning parameter, MCMC inference, traceplot, and moment matching.
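For readers unfamiliar with the Metropolis algorithm the chapter teaches, a minimal random-walk sampler for a Poisson rate λ under a gamma prior might look like this (an illustrative sketch: the prior hyperparameters, data, and tuning parameter are invented, not taken from the book):

```python
import math
import random

def log_post(lam, data, a=2.0, b=1.0):
    """Unnormalised log-posterior: Gamma(a, b) prior on the rate `lam`
    times a Poisson likelihood (constant log k! terms are dropped)."""
    if lam <= 0:
        return -math.inf
    return ((a - 1) * math.log(lam) - b * lam
            + sum(k * math.log(lam) - lam for k in data))

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis; `step` is the tuning parameter."""
    random.seed(seed)
    lam, chain = 1.0, []
    for _ in range(n_iter):
        prop = lam + random.gauss(0.0, step)  # symmetric proposal
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < log_post(prop, data) - log_post(lam, data):
            lam = prop
        chain.append(lam)  # rejected proposals repeat the current value
    return chain

chain = metropolis([5, 3, 4, 6, 2])  # hypothetical yearly attack counts
```

With this gamma-Poisson setup the exact posterior is available in closed form, so (as in the chapter's comparison) the histogram of the chain can be checked directly against the conjugate answer.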

Conference papers on the topic "Bayesian Moment Matching"

1

Li, Ximing, Changchun Li, Jinjin Chi, and Jihong Ouyang. "Variance Reduction in Black-box Variational Inference by Adaptive Importance Sampling." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/333.

Abstract:
Overdispersed black-box variational inference employs importance sampling to reduce the variance of the Monte Carlo gradient in black-box variational inference. A simple overdispersed proposal distribution is used. This paper aims to investigate how to adaptively obtain better proposal distribution for lower variance. To this end, we directly approximate the optimal proposal in theory using a Monte Carlo moment matching step at each variational iteration. We call this adaptive proposal moment matching proposal (MMP). Experimental results on two Bayesian models show that the MMP can effectively reduce variance in black-box learning, and perform better than baseline inference algorithms.
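The moment-matching proposal (MMP) idea in this abstract refits the proposal distribution to the moments of the current weighted samples at each iteration. A one-dimensional sketch of that refitting step, under an assumed Gaussian proposal family (a simplification for illustration, not the authors' implementation):

```python
def match_proposal(samples, weights):
    """Fit a Gaussian proposal by matching the self-normalised weighted
    mean and variance of the current importance samples."""
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, samples)) / total
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, samples)) / total
    return mean, var

# Samples with unnormalised importance weights
mean, var = match_proposal([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])  # -> (2.25, 0.6875)
```

At the next variational iteration, sampling would draw from the Gaussian with these matched moments, concentrating the proposal where the weighted samples indicate the integrand has mass.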
