Scientific literature on the topic "MCMC optimization"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "MCMC optimization."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "MCMC optimization"

1

Rong, Teng Zhong, and Zhi Xiao. "MCMC Sampling Statistical Method to Solve the Optimization." Applied Mechanics and Materials 121–126 (October 2011): 937–41. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.937.

Full text
Abstract:
This paper designs a class of generalized density functions and, from it, proposes a solution method for multivariable nonlinear optimization problems based on MCMC statistical sampling. Theoretical analysis proves that the maximum statistic converges to the maximum point of the probability density, which establishes the link between optimization and MCMC sampling. The algorithm demonstrates the convergence of the maximum statistic in large samples, and its global search design avoids the restriction to local optimal solutions. The MCMC optimization algorithm retains few iterate variables, so its computing speed is relatively high. Finally, the MCMC sampling optimization algorithm is applied to the traveling salesman problem (TSP) and compared with genetic algorithms.
APA, Harvard, Vancouver, ISO, and other styles
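The core idea summarized in the abstract above (sample from a density whose mass concentrates at the maximizer, and track the best point visited) can be sketched with a random-walk Metropolis chain. This is an illustrative reconstruction under assumptions, not the paper's algorithm: the toy objective `g`, the temperature, and the step size are all invented for the example.

```python
import math
import random

def mcmc_maximize(f, x0, n_iters=20000, temperature=0.2, step=1.0, seed=0):
    """Sample a density proportional to exp(f(x)/T) with random-walk
    Metropolis and track the best point visited: the density's mass
    concentrates around the maximizer of f, so the running maximum
    converges toward it."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for _ in range(n_iters):
        y = x + rng.gauss(0.0, step)  # random-walk proposal
        fy = f(y)
        # Metropolis acceptance for the target density exp(f/T)
        if fy >= fx or rng.random() < math.exp((fy - fx) / temperature):
            x, fx = y, fy
            if fx > best_fx:
                best_x, best_fx = x, fx
    return best_x, best_fx

# Mildly multimodal toy objective with its global maximum near x = 2
g = lambda x: -(x - 2.0) ** 2 + 0.3 * math.cos(5.0 * x)
x_star, g_star = mcmc_maximize(g, x0=-5.0)
```

Because downhill moves are accepted with probability exp(-gap/T), the chain can escape the small local ripples created by the cosine term, which is the global-search property the abstract emphasizes.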
2

Zhang, Lihao, Zeyang Ye, and Yuefan Deng. "Parallel MCMC methods for global optimization." Monte Carlo Methods and Applications 25, no. 3 (September 1, 2019): 227–37. http://dx.doi.org/10.1515/mcma-2019-2043.

Full text
Abstract:
We introduce a parallel scheme for simulated annealing, a widely used Markov chain Monte Carlo (MCMC) method for optimization. Our method is constructed and analyzed under the classical framework of MCMC. Benchmark optimization functions are used for validation and verification of the parallel scheme. The experimental results, along with a proof based on statistical theory, provide insights into the mechanics of parallelizing simulated annealing to achieve high parallel efficiency and scalability on large parallel computers.
APA, Harvard, Vancouver, ISO, and other styles
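The simplest way to parallelize simulated annealing is to run independent replicas and keep the overall best result. The sketch below is an assumption-laden illustration of that embarrassingly parallel baseline, not the paper's scheme: the cooling schedule, objective, and all constants are made up for the example.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def anneal(f, x0, n_steps=5000, t0=1.0, t_end=1e-3, step=0.5, seed=0):
    """One simulated-annealing chain: Metropolis moves for minimization
    under a geometrically decreasing temperature schedule."""
    rng = random.Random(seed)
    cooling = (t_end / t0) ** (1.0 / n_steps)
    x, fx, t = x0, f(x0), t0
    best_x, best_fx = x, fx
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling
    return best_x, best_fx

def parallel_anneal(f, x0, n_chains=8):
    """Embarrassingly parallel scheme: independent replicas with
    different seeds; return the best minimizer found by any chain."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: anneal(f, x0, seed=s),
                                range(n_chains)))
    return min(results, key=lambda r: r[1])

# Rippled 1-D objective; the global minimum is at x = 0
obj = lambda x: x * x + 2.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_min, f_min = parallel_anneal(obj, x0=4.5)
```

Independent replicas give linear speedup but no information exchange between chains; schemes like the paper's aim to do better than this baseline by coordinating the chains.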
3

Martino, L., V. Elvira, D. Luengo, J. Corander, and F. Louzada. "Orthogonal parallel MCMC methods for sampling and optimization." Digital Signal Processing 58 (November 2016): 64–84. http://dx.doi.org/10.1016/j.dsp.2016.07.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yin, Long, Sheng Zhang, Kun Xiang, Yongqiang Ma, Yongzhen Ji, Ke Chen, and Dongyu Zheng. "A New Stochastic Process of Prestack Inversion for Rock Property Estimation." Applied Sciences 12, no. 5 (February 25, 2022): 2392. http://dx.doi.org/10.3390/app12052392.

Full text
Abstract:
To enrich current prestack stochastic inversion theory, we propose a prestack stochastic inversion method based on adaptive particle swarm optimization (APSO) combined with Markov chain Monte Carlo (MCMC). MCMC provides a stochastic optimization approach, and combining it with APSO yields better global optimization performance. The method uses logging data to define a preprocessed model space. It also uses Bayesian statistics and Markov chains with a state transition matrix to update and evolve each generation of the population in the data domain; adaptive particle swarm optimization is then used to find the global optimum in the finite model space. The method overcomes the over-fitting problem of deterministic inversion and improves the efficiency of stochastic inversion. Meanwhile, fusing multiple sources of information reduces the non-uniqueness of solutions and improves inversion accuracy. We derive the APSO algorithm in detail, give a specific workflow for prestack stochastic inversion, and verify the validity of the inversion theory through an inversion test on two-dimensional prestack data from real areas.
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Fan, and Jianwei Ren. "Reliability Analysis Based on Optimization Random Forest Model and MCMC." Computer Modeling in Engineering & Sciences 125, no. 2 (2020): 801–14. http://dx.doi.org/10.32604/cmes.2020.08889.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Glynn, Peter W., Andrey Dolgin, Reuven Y. Rubinstein, and Radislav Vaisman. "How to Generate Uniform Samples on Discrete Sets Using the Splitting Method." Probability in the Engineering and Informational Sciences 24, no. 3 (April 23, 2010): 405–22. http://dx.doi.org/10.1017/s0269964810000057.

Full text
Abstract:
The goal of this work is twofold. We show the following:
1. In spite of the common consensus on classic Markov chain Monte Carlo (MCMC) as a universal tool for generating samples on complex sets, it fails to generate points uniformly distributed on discrete ones, such as those defined by the constraints of integer programming. In fact, we demonstrate empirically that not only does it fail to generate uniform points on the desired set, but it typically misses some of the points of the set.
2. The splitting method, also called the cloning method (originally designed for combinatorial optimization and for counting on discrete sets, combining MCMC, such as the Gibbs sampler, with a specially designed splitting mechanism), can also be used efficiently for generating uniform samples on these sets. Without the appropriate splitting mechanism, MCMC fails. Although we do not have a formal proof, we conjecture that the main reason classic MCMC does not work is that its resulting chain is not irreducible. We provide valid statistical tests supporting the uniformity of samples generated by the splitting method and present supportive numerical results.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Chunyuan, Changyou Chen, Yunchen Pu, Ricardo Henao, and Lawrence Carin. "Communication-Efficient Stochastic Gradient MCMC for Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4173–80. http://dx.doi.org/10.1609/aaai.v33i01.33014173.

Full text
Abstract:
Learning probability distributions on the weights of neural networks has recently proven beneficial in many applications. Bayesian methods such as Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) offer an elegant framework to reason about model uncertainty in neural networks. However, these advantages usually come with a high computational cost. We propose accelerating SG-MCMC under the master-worker framework: workers asynchronously share responsibility for gradient computations in parallel, while the master collects the final samples. To reduce communication overhead, two protocols (downpour and elastic) are developed to allow periodic interaction between the master and workers. We provide a theoretical analysis of the finite-time estimation consistency of posterior expectations and establish connections to sample thinning. Our experiments on various neural networks demonstrate that the proposed algorithms can greatly reduce training time while achieving comparable (or better) test accuracy/log-likelihood levels, relative to traditional SG-MCMC. When applied to reinforcement learning, the approach naturally provides exploration for asynchronous policy optimization, with encouraging performance improvements.
APA, Harvard, Vancouver, ISO, and other styles
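The simplest member of the SG-MCMC family referenced above is stochastic gradient Langevin dynamics (SGLD): each update takes a minibatch gradient step on the log-posterior plus Gaussian noise whose variance matches the step size. The sketch below is a generic single-machine illustration on a toy Gaussian-mean model, not the paper's distributed algorithm; the model, prior, and step size are assumptions.

```python
import math
import random

def sgld_mean(data, n_iters=200, batch_size=10, lr=1e-3, seed=0):
    """SGLD for the mean of data ~ N(theta, 1) with a broad N(0, 100)
    prior. Each update uses an unbiased minibatch estimate of the
    gradient of the log-posterior, rescaled by n / batch_size, plus
    injected Gaussian noise with variance equal to the step size."""
    rng = random.Random(seed)
    n = len(data)
    theta, samples = 0.0, []
    for _ in range(n_iters):
        batch = rng.sample(data, batch_size)
        # minibatch estimate of the gradient of the log-posterior
        grad = -theta / 100.0 + (n / batch_size) * sum(x - theta for x in batch)
        theta += 0.5 * lr * grad + rng.gauss(0.0, math.sqrt(lr))
        samples.append(theta)
    return samples

rng = random.Random(1)
data = [rng.gauss(3.0, 1.0) for _ in range(200)]
samples = sgld_mean(data)
posterior_mean = sum(samples[100:]) / len(samples[100:])
```

Because only a minibatch is touched per update, the per-step cost is independent of the dataset size, which is what makes gradient computation the natural unit of work to distribute across workers.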
8

Yamaguchi, Kazuhiro, and Kensuke Okada. "Variational Bayes Inference for the DINA Model." Journal of Educational and Behavioral Statistics 45, no. 5 (March 31, 2020): 569–97. http://dx.doi.org/10.3102/1076998620911934.

Full text
Abstract:
In this article, we propose a variational Bayes (VB) inference method for the deterministic input noisy AND gate (DINA) model of cognitive diagnostic assessment. The proposed method, which applies an iterative algorithm for optimization, is derived from the optimal variational posteriors of the model parameters. The proposed VB inference enables much faster computation than the existing Markov chain Monte Carlo (MCMC) method, while still offering the benefits of a full Bayesian framework. A simulation study revealed that the proposed VB estimation adequately recovered the parameter values. Moreover, an example using real data revealed that the proposed VB inference method provided estimates similar to MCMC estimation with much faster computation.
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Haoyu, Tao Zhang, Yiqi Luo, Xin Huang, and Wei Xue. "Parameter calibration in global soil carbon models using surrogate-based optimization." Geoscientific Model Development 11, no. 7 (July 27, 2018): 3027–44. http://dx.doi.org/10.5194/gmd-11-3027-2018.

Full text
Abstract:
Soil organic carbon (SOC) has a significant effect on carbon emissions and climate change. However, the current SOC prediction accuracy of most models is very low. Most evaluation studies indicate that the prediction error mainly comes from parameter uncertainties, which can be reduced by parameter calibration. Data assimilation techniques have been successfully employed for the parameter calibration of SOC models. However, data assimilation algorithms, such as sampling-based Bayesian Markov chain Monte Carlo (MCMC), generally have high computation costs and are not appropriate for complex global land models. This study proposes a new parameter calibration method based on surrogate optimization techniques to improve the prediction accuracy of SOC. Experiments on three types of soil carbon cycle models, including the Community Land Model with the Carnegie–Ames–Stanford Approach biogeochemistry submodel (CLM-CASA') and two microbial models, show that the surrogate-based optimization method is effective and efficient in terms of both accuracy and cost. Compared to predictions using parameter values tuned through Bayesian MCMC, the root mean squared errors (RMSEs) between the predictions using the calibrated parameter values from surrogate-based optimization and the observations could be reduced by up to 12% for the different SOC models. Meanwhile, the corresponding computational cost is lower than that of other global optimization algorithms.
APA, Harvard, Vancouver, ISO, and other styles
10

Kitchen, James L., Jonathan D. Moore, Sarah A. Palmer, and Robin G. Allaby. "MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling." BMC Bioinformatics 13, no. 1 (2012): 287. http://dx.doi.org/10.1186/1471-2105-13-287.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "MCMC optimization"

1

Mahendran, Nimalan. "Bayesian optimization for adaptive MCMC." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/30636.

Full text
Abstract:
A new randomized strategy for adaptive Markov chain Monte Carlo (MCMC) using Bayesian optimization, called Bayesian-optimized MCMC, is proposed. This approach can handle non-differentiable objective functions and trades off exploration and exploitation to reduce the number of function evaluations. Bayesian-optimized MCMC is applied to the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters of the proposal mechanism automatically to ensure efficient mixing of the Markov chains. It is found that Bayesian-optimized MCMC is able to match or surpass manual tuning of the proposal mechanism by a domain expert.
APA, Harvard, Vancouver, ISO, and other styles
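To make "adjusting the parameters of the proposal mechanism" concrete, here is the simplest adaptive-MCMC scheme: a random-walk Metropolis sampler that tunes its proposal scale online toward a target acceptance rate with a Robbins-Monro rule. This is a deliberately much simpler adaptation than the thesis's Bayesian-optimization strategy, shown only for flavor; the target density and all constants are assumptions.

```python
import math
import random

def adaptive_rwm(logpi, x0, n_iters=5000, target_accept=0.44, seed=0):
    """Random-walk Metropolis with an adapted proposal scale: after each
    step the log-scale moves up if the proposal was accepted and down
    otherwise, with a decaying gain (Robbins-Monro), steering the
    empirical acceptance rate toward target_accept."""
    rng = random.Random(seed)
    x, lp = x0, logpi(x0)
    log_scale, n_accept = 0.0, 0
    samples = []
    for i in range(1, n_iters + 1):
        y = x + rng.gauss(0.0, math.exp(log_scale))
        lpy = logpi(y)
        accepted = lpy >= lp or rng.random() < math.exp(lpy - lp)
        if accepted:
            x, lp = y, lpy
            n_accept += 1
        # decaying-gain adaptation of the proposal scale
        log_scale += i ** -0.6 * ((1.0 if accepted else 0.0) - target_accept)
        samples.append(x)
    return samples, n_accept / n_iters

# Standard normal target; the tuned scale should settle near 2.4
samples, accept_rate = adaptive_rwm(lambda x: -0.5 * x * x, x0=3.0)
```

The decaying gain `i ** -0.6` is one standard choice satisfying the diminishing-adaptation condition, so the chain's asymptotic behavior is not disturbed by the tuning.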
2

Karimi, Belhal. "Non-Convex Optimization for Latent Data Models: Algorithms, Analysis and Applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX040/document.

Full text
Abstract:
Many problems in machine learning pertain to tackling the minimization of a possibly non-convex and non-smooth function defined on a Euclidean space. Examples include topic models, neural networks and sparse logistic regression. Optimization methods used to solve those problems have been widely studied in the literature for convex objective functions and are extensively used in practice. However, recent breakthroughs in statistical modeling, such as deep learning, coupled with an explosion of data samples, require improvements of non-convex optimization procedures for large datasets. This thesis is an attempt to address those two challenges by developing algorithms with cheaper updates, ideally independent of the number of samples, and improving the theoretical understanding of non-convex optimization, which remains rather limited. In this manuscript, we are interested in the minimization of such objective functions for latent data models, i.e., when the data is partially observed, which includes the conventional sense of missing data but is much broader than that. In the first part, we consider the minimization of a (possibly) non-convex and non-smooth objective function using incremental and online updates. To that end, we propose several algorithms exploiting the latent structure to efficiently optimize the objective and illustrate our findings with numerous applications. In the second part, we focus on the maximization of a non-convex likelihood using the EM algorithm and its stochastic variants. We analyze several faster and cheaper algorithms and propose two new EM-type variants aimed at speeding up the convergence of the estimated parameters.
APA, Harvard, Vancouver, ISO, and other styles
3

Chaari, Lotfi. "Parallel magnetic resonance imaging reconstruction problems using wavelet representations." PhD thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587410.

Full text
Abstract:
To reduce acquisition time or improve spatio-temporal resolution in some MRI applications, powerful parallel techniques using multiple receiver coils have emerged since the 1990s. In this context, MRI images must be reconstructed from undersampled data acquired in "k-space". Several reconstruction approaches have therefore been proposed, including the SENSitivity Encoding (SENSE) method. However, the reconstructed images are often corrupted by artifacts caused by noise in the observed data or by estimation errors in the coil sensitivity profiles. In this work, we present new reconstruction methods based on the SENSE algorithm that introduce regularization in the wavelet transform domain in order to promote sparsity of the solution. Under degraded experimental conditions, these methods deliver good reconstruction quality, unlike SENSE and other classical regularization techniques (e.g. Tikhonov). The proposed methods rely on parallel optimization algorithms that can handle convex but not necessarily differentiable criteria involving sparsity priors. Unlike most reconstruction methods, which operate slice by slice, one of the proposed methods enables 4D (3D + time) reconstruction by exploiting spatial and temporal correlations. The hyperparameter estimation problem underlying the regularization process is also addressed in a Bayesian framework using MCMC techniques. Validation on real anatomical and functional data shows that the proposed methods reduce reconstruction artifacts and improve statistical sensitivity/specificity in functional MRI.
APA, Harvard, Vancouver, ISO, and other styles
4

Park, Jee Hyuk. "On the separation of preferences among marked point process wager alternatives." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bardenet, Rémi. "Towards adaptive learning and inference: applications to hyperparameter tuning and astroparticle physics." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112307.

Full text
Abstract:
Inference and optimization algorithms usually have hyperparameters that need to be tuned in order to achieve efficiency. We consider here different approaches to efficiently automate the hyperparameter tuning step by learning online the structure of the addressed problem. The first half of this thesis is devoted to hyperparameter tuning in machine learning. After presenting and improving the generic sequential model-based optimization (SMBO) framework, we show that SMBO successfully applies to the task of tuning the numerous hyperparameters of deep belief networks. We then propose an algorithm that performs tuning across datasets, mimicking the memory that humans have of past experiments with the same algorithm on different datasets. The second half of this thesis deals with adaptive Markov chain Monte Carlo (MCMC) algorithms, sampling-based algorithms that explore complex probability distributions while self-tuning their internal parameters on the fly. We start by describing the Pierre Auger observatory, a large-scale particle physics experiment dedicated to the observation of atmospheric showers triggered by cosmic rays. The models involved in the analysis of Auger data motivated our study of adaptive MCMC. We derive the first part of the Auger generative model and introduce a procedure to perform inference on shower parameters that requires only this bottom part. Our model inherently suffers from label switching, a common difficulty in MCMC inference that makes marginal inference useless because of redundant modes of the target distribution. After reviewing existing solutions to label switching, we propose AMOR, the first adaptive MCMC algorithm with online relabeling. We give both an empirical and a theoretical study of AMOR, unveiling interesting links between relabeling algorithms and vector quantization.
APA, Harvard, Vancouver, ISO, and other styles
6

Cheng, Yougan. "Computational Models of Brain Energy Metabolism at Different Scales." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396534897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thouvenin, Pierre-Antoine. "Modeling spatial and temporal variabilities in hyperspectral image unmixing." PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19258/1/THOUVENIN_PierreAntoine.pdf.

Full text
Abstract:
Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing aims at identifying the reference spectral signatures composing the data (referred to as endmembers) and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption reveals a first limitation, since endmembers may vary locally within a single image, or from one image to another, due to varying acquisition conditions such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and amplitude of the acquired signatures, inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent when characterizing endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, later extended to account for time-varying signatures. The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
APA, Harvard, Vancouver, ISO, and other styles
8

Diabaté, Modibo. "Modélisation stochastique et estimation de la croissance tumorale." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM040.

Full text
Abstract:
This thesis is about mathematical modeling of cancer dynamics; it is divided into two research projects. In the first project, we estimate the parameters of the deterministic limit of a stochastic process modeling the dynamics of melanoma (skin cancer) treated by immunotherapy. The estimation is carried out with a nonlinear mixed-effects statistical model and the SAEM algorithm, using real data of tumor size measured over time in several patients. With this mathematical model, which fits the data well, we evaluate the relapse probability of melanoma (using the Importance Splitting algorithm) and optimize the treatment protocol (doses and injection times). In the second project, we propose a likelihood approximation method based on an approximation of the Belief Propagation algorithm by the Expectation-Propagation algorithm, for a diffusion approximation of the melanoma stochastic model noisily observed in a single individual. Since this diffusion approximation (defined by a stochastic differential equation) has no analytical solution, we approximate its solution using an Euler method (after testing the Euler method on the Ornstein-Uhlenbeck diffusion process). Moreover, a moment approximation method is used to manage the multidimensionality and non-linearity of the melanoma mathematical model. With the likelihood approximation method, we tackle the problem of parameter estimation in Hidden Markov Models.
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Tae Seon. "Modeling, optimization, and control of via formation by photosensitive polymers for MCM-D applications." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Texte intégral
Résumé :
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM are in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is minimizing the complexity of the MCM operation using common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension over the representations of minimum number of non-zero digits called minimum Hamming weight (MHW). Using the ZDS improves CSE algorithms' performance as compared with using the MHW representations. The disadvantage of using the ZDS is it increases the possibility of overlapping patterns (digit collisions). In this case, one or more digits are shared between a number of patterns. Eliminating a pattern results in losing other patterns because of eliminating the common digits. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. A tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with generating of the representation tree. A complexity measure is proposed to compare between the subexpressions at each node. The algorithm terminates generating the rest of the representation tree when it finds subexpressions with maximum sharing. 
This reduces the search space while minimizes the hardware complexity. A combinatoric model of the MCM problem is proposed in this work. The model is obtained by enumerating all the possible solutions of the MCM that resemble a graph called the demand graph. Arc routing on this graph gives the solutions of the MCM problem. A similar arc routing is found in the capacitated arc routing such as the winter salting problem. Ant colony optimization (ACO) meta-heuristics is proposed to traverse the demand graph. The ACO is simulated on a PC using Python programming language. This is to verify the model correctness and the work of the ACO. A parallel simulation of the ACO is carried out on a multi-core super computer using C++ boost graph library.
Citation styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "MCMC optimization"

1

Bhatnagar, Nayantara, Andrej Bogdanov, and Elchanan Mossel. "The Computational Complexity of Estimating MCMC Convergence Time." In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 424–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22935-0_36.

Full text
2

Szirányi, Tamás, and Zoltán Tóth. "Optimization of Paintbrush Rendering of Images by Dynamic MCMC Methods." In Lecture Notes in Computer Science, 201–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44745-8_14.

3

Fox, Colin. "Polynomial Accelerated MCMC and Other Sampling Algorithms Inspired by Computational Optimization." In Monte Carlo and Quasi-Monte Carlo Methods 2012, 349–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41095-6_15.

4

Sommer, Katrin, and Claus Weihs. "Using MCMC as a Stochastic Optimization Procedure for Monophonic and Polyphonic Sound." In Studies in Classification, Data Analysis, and Knowledge Organization, 645–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-70981-7_74.

5

Nijkamp, Erik, Bo Pang, Tian Han, Linqi Zhou, Song-Chun Zhu, and Ying Nian Wu. "Learning Multi-layer Latent Variable Model via Variational Optimization of Short Run MCMC for Approximate Inference." In Computer Vision – ECCV 2020, 361–78. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_22.

6

Peeters, Joris, and Eric Beyne. "Analysis and optimization of circuit interconnect performance." In MCM C/Mixed Technologies and Thick Film Sensors, 29–34. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-011-0079-3_4.

7

Kumm, Martin. "Optimally Solving MCM Related Problems Using Integer Linear Programming." In Multiple Constant Multiplication Optimizations for Field Programmable Gate Arrays, 87–111. Wiesbaden: Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-13323-8_5.

8

Saini, Suman, Jyoti Chawla, Rajeev Kumar, and Inderpreet Kaur. "Optimization of Lead Ions Adsorption onto C16-6-16 Incorporated Mesoporous MCM-41 Using Box-Behnken Design." In Environmental Biotechnology For Soil and Wastewater Implications on Ecosystems, 61–67. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6846-2_9.

9

Diekmann, Odo, Hans Heesterbeek, and Tom Britton. "A brief guide to computer intensive statistics." In Mathematical Tools for Understanding Infectious Disease Dynamics. Princeton University Press, 2012. http://dx.doi.org/10.23943/princeton/9780691155395.003.0015.

Abstract:
Chapters 5, 13 and 14 presented methods for making inference about infectious diseases from available data. This is of course one of the main motivations for modeling: learning about important features, such as R₀, the initial growth rate, potential outbreak sizes and what effect different control measures might have in the context of specific infections. The models considered in these chapters have all been simple enough to obtain more or less explicit estimates of just a few relevant parameters. In more complicated and parameter-rich models, and/or when analyzing large data sets, it is usually impossible to estimate key model parameters explicitly. In such situations there are (at least) two ways to proceed. One uses Bayesian statistical inference by means of Markov chain Monte Carlo methods (MCMC), and the other uses large-scale simulations along with numerical optimization to fit parameters to data. This chapter mainly describes Bayesian inference using MCMC, and only briefly some large-scale simulation methods.
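The Bayesian MCMC approach this abstract refers to can be illustrated with a minimal random-walk Metropolis sampler in Python; the Poisson model, Exponential prior, and data below are toy assumptions for the sketch, not taken from the chapter.

```python
import math
import random

random.seed(1)

def log_posterior(lam, data):
    """Log-posterior for a Poisson rate lam under an Exponential(1) prior
    (a toy stand-in for an epidemic-model likelihood)."""
    if lam <= 0:
        return -math.inf
    log_lik = sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in data)
    return log_lik - lam  # Exponential(1) log-prior is -lam, up to a constant

def metropolis(data, n_steps=5000, step=0.5):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    lam, samples = 1.0, []
    for _ in range(n_steps):
        proposal = lam + random.gauss(0.0, step)
        log_alpha = log_posterior(proposal, data) - log_posterior(lam, data)
        if random.random() < math.exp(min(0.0, log_alpha)):
            lam = proposal
        samples.append(lam)
    return samples

data = [3, 4, 2, 5, 4, 3]        # toy observed counts
samples = metropolis(data)
burned = samples[1000:]          # discard burn-in
posterior_mean = sum(burned) / len(burned)
print(round(posterior_mean, 2))
```

Because this toy model is conjugate, the posterior mean can be checked analytically ((1 + Σk)/(1 + n) ≈ 3.14 here), a useful sanity check before applying the same machinery to parameter-rich models.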
10

Diekmann, Odo, Hans Heesterbeek, and Tom Britton. "Elaborations for Part I." In Mathematical Tools for Understanding Infectious Disease Dynamics. Princeton University Press, 2012. http://dx.doi.org/10.23943/princeton/9780691155395.003.0016.

Abstract:
Chapters 5, 13 and 14 presented methods for making inference about infectious diseases from available data. This is of course one of the main motivations for modeling: learning about important features, such as R₀, the initial growth rate, potential outbreak sizes and what effect different control measures might have in the context of specific infections. The models considered in these chapters have all been simple enough to obtain more or less explicit estimates of just a few relevant parameters. In more complicated and parameter-rich models, and/or when analyzing large data sets, it is usually impossible to estimate key model parameters explicitly. In such situations there are (at least) two ways to proceed. One uses Bayesian statistical inference by means of Markov chain Monte Carlo methods (MCMC), and the other uses large-scale simulations along with numerical optimization to fit parameters to data. This chapter mainly describes Bayesian inference using MCMC, and only briefly some large-scale simulation methods.

Conference proceedings on the topic "MCMC optimization"

1

Acar, Erdem, and Gamze Bayrak. "Reliability Estimation Using MCMC Based Tail Modeling." In 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2016. http://dx.doi.org/10.2514/6.2016-4412.

2

Cannella, Chris, and Vahid Tarokh. "Semi-Empirical Objective Functions for MCMC Proposal Optimization." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956603.

3

Zhao, Zinan, and Mrinal Kumar. "A Split-Bernstein/MCMC Approach to Probabilistically Constrained Optimization." In AIAA Guidance, Navigation, and Control Conference. Reston, Virginia: American Institute of Aeronautics and Astronautics, 2015. http://dx.doi.org/10.2514/6.2015-1084.

4

Xiang, K., and L. Han. "Adaptive Particle Swarm Optimization Assisted MCMC for Stochastic Inversion." In 81st EAGE Conference and Exhibition 2019. European Association of Geoscientists & Engineers, 2019. http://dx.doi.org/10.3997/2214-4609.201901305.

5

Shadbakht, Sormeh, and Babak Hassibi. "MCMC methods for entropy optimization and nonlinear network coding." In 2010 IEEE International Symposium on Information Theory - ISIT. IEEE, 2010. http://dx.doi.org/10.1109/isit.2010.5513737.

6

Wu, Y.-T. (Justin). "MCMC-Based Simulation Method for Efficient Risk-Based Maintenance Optimization." In SAE World Congress & Exhibition. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2009. http://dx.doi.org/10.4271/2009-01-0566.

7

Kim, Wonsik, and Kyoung Mu Lee. "Scanline Sampler without Detailed Balance: An Efficient MCMC for MRF Optimization." In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.176.

8

Li Tang, Zheng Zhao, Xiu-Jun Gong, and Hua-Peng Zeng. "Optimization of MCMC sampling algorithm for the calculation of PAC-Bayes bound." In 2013 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2013. http://dx.doi.org/10.1109/icmlc.2013.6890745.

9

Santoso, Ryan, Xupeng He, Marwa Alsinan, Hyung Kwak, and Hussein Hoteit. "Bayesian Long-Short Term Memory for History Matching in Reservoir Simulations." In SPE Reservoir Simulation Conference. SPE, 2021. http://dx.doi.org/10.2118/203976-ms.

Abstract:
History matching is critical in subsurface flow modeling: its goal is to align the reservoir model with the measured data. It remains challenging, however, since the solution is not unique and the implementation is expensive. The traditional approach relies on trial and error, which is exhausting and labor-intensive. In this study, we propose a new workflow utilizing Bayesian Markov Chain Monte Carlo (MCMC) to perform history matching automatically and accurately. We deliver four novelties within the workflow: 1) the use of multi-resolution low-fidelity models to guarantee high-quality matching; 2) updating the ranges of the priors to assure convergence; 3) the use of a Long Short-Term Memory (LSTM) network as a low-fidelity model to produce a continuous time-response; and 4) the use of Bayesian optimization to obtain the optimum low-fidelity model for the Bayesian MCMC runs. We utilize the first SPE comparative model as the physical, high-fidelity model: a gas injection into an oil reservoir, which is a gravity-dominated process. The coarse low-fidelity model provides updated priors that increase the precision of the Bayesian MCMC. The Bayesian-optimized LSTM successfully captures the physics of the high-fidelity model, and the Bayesian-LSTM MCMC produces accurate predictions with narrow uncertainties. The posterior prediction through the high-fidelity model ensures the robustness and precision of the workflow. This approach provides efficient, high-quality history matching for subsurface flow modeling.
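The surrogate-assisted MCMC idea in this workflow can be sketched in Python; the exponential-decline "low-fidelity model" below is a hypothetical stand-in for the LSTM surrogate, and all names, parameters, and data are illustrative rather than taken from the paper.

```python
import math
import random

random.seed(0)

def surrogate_response(perm, times):
    """Hypothetical low-fidelity model: an exponential production decline
    whose rate plays the role of a permeability-like parameter."""
    return [math.exp(-perm * t) for t in times]

def log_likelihood(perm, times, observed, sigma=0.05):
    """Gaussian misfit between the surrogate response and the observations."""
    pred = surrogate_response(perm, times)
    return -sum((p - o) ** 2 for p, o in zip(pred, observed)) / (2 * sigma ** 2)

def history_match(times, observed, n_steps=4000, step=0.05):
    """Random-walk Metropolis over the parameter, calling the cheap surrogate
    instead of an expensive reservoir simulator."""
    perm, samples = 0.5, []
    for _ in range(n_steps):
        prop = perm + random.gauss(0.0, step)
        if prop > 0:  # reject non-physical (negative) parameters outright
            la = (log_likelihood(prop, times, observed)
                  - log_likelihood(perm, times, observed))
            if random.random() < math.exp(min(0.0, la)):
                perm = prop
        samples.append(perm)
    return samples

times = [0.5 * i for i in range(10)]
observed = [math.exp(-0.3 * t) for t in times]   # synthetic "measured" data
samples = history_match(times, observed)
burned = samples[1000:]
estimate = sum(burned) / len(burned)
print(round(estimate, 2))
```

The design point is that every MCMC step evaluates only the cheap surrogate; in the paper's workflow the expensive high-fidelity simulator is reserved for validating the posterior prediction.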
10

Matsumoto, Nobuyuki, Masafumi Fukuma, and Naoya Umeda. "Distance between configurations in MCMC simulations and the geometrical optimization of the tempering algorithms." In 37th International Symposium on Lattice Field Theory. Trieste, Italy: Sissa Medialab, 2020. http://dx.doi.org/10.22323/1.363.0168.


Organization reports on the topic "MCMC optimization"

1

Franzon, Paul D. Methodology, Tools and Demonstration of MCM System Optimization. Fort Belvoir, VA: Defense Technical Information Center, July 1997. http://dx.doi.org/10.21236/ada328732.

2

Brown, Richard B. Design Optimization of a GaAs RISC Microprocessor with Area-Interconnect MCM Packaging. Fort Belvoir, VA: Defense Technical Information Center, December 1999. http://dx.doi.org/10.21236/ada379011.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
