Ready-made bibliography on "Markov chain"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Markov chain".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, whenever these details are available in the record's metadata.
Journal articles on "Markov chain"
Verbeken, Brecht, and Marie-Anne Guerry. "Attainability for Markov and Semi-Markov Chains". Mathematics 12, no. 8 (April 19, 2024): 1227. http://dx.doi.org/10.3390/math12081227.
Arkoubi, Khadija. "MARKOV CHAIN". International Journal of Scientific and Engineering Research 7, no. 3 (March 25, 2016): 706–7. http://dx.doi.org/10.14299/ijser.2016.03.009.
Barker, Richard J., and Matthew R. Schofield. "Putting Markov Chains Back into Markov Chain Monte Carlo". Journal of Applied Mathematics and Decision Sciences 2007 (October 30, 2007): 1–13. http://dx.doi.org/10.1155/2007/98086.
Valenzuela, Mississippi. "Markov chains and applications". Selecciones Matemáticas 9, no. 1 (June 30, 2022): 53–78. http://dx.doi.org/10.17268/sel.mat.2022.01.05.
Guyon, X., and C. Hardouin. "Markov chain Markov field dynamics: models and statistics". Statistics 35, no. 4 (January 2001): 593–627. http://dx.doi.org/10.1080/02331880108802756.
Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. "Statistical Identification of Markov Chain on Trees". Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.
Takemura, Akimichi, and Hisayuki Hara. "Markov chain Monte Carlo test of toric homogeneous Markov chains". Statistical Methodology 9, no. 3 (May 2012): 392–406. http://dx.doi.org/10.1016/j.stamet.2011.10.004.
Qi-feng, Yao, Dong Yun, and Wang Zhong-Zhi. "An Entropy Rate Theorem for a Hidden Inhomogeneous Markov Chain". Open Statistics & Probability Journal 8, no. 1 (September 30, 2017): 19–26. http://dx.doi.org/10.2174/1876527001708010019.
Masuyama, Hiroyuki. "Error Bounds for Augmented Truncations of Discrete-Time Block-Monotone Markov Chains under Geometric Drift Conditions". Advances in Applied Probability 47, no. 1 (March 2015): 83–105. http://dx.doi.org/10.1239/aap/1427814582.
Doctoral dissertations on "Markov chain"
Bakra, Eleni. "Aspects of population Markov chain Monte Carlo and reversible jump Markov chain Monte Carlo". Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/1247/.
Holenstein, Roman. "Particle Markov chain Monte Carlo". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7319.
Byrd, Jonathan Michael Robert. "Parallel Markov Chain Monte Carlo". Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3634/.
Yildirak, Sahap Kasirga. "The Identification of a Bivariate Markov Chain Market Model". PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.
Martin, Russell Andrew. "Paths, sampling, and Markov chain decomposition". Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.
Estandia, Gonzalez Luna Antonio. "Stable approximations for Markov-chain filters". Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/38303.
Zhang, Yichuan. "Scalable geometric Markov chain Monte Carlo". Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20978.
Fang, Youhan. "Efficient Markov Chain Monte Carlo Methods". Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10809188.
Pełny tekst źródłaGenerating random samples from a prescribed distribution is one of the most important and challenging problems in machine learning, Bayesian statistics, and the simulation of materials. Markov Chain Monte Carlo (MCMC) methods are usually the required tool for this task, if the desired distribution is known only up to a multiplicative constant. Samples produced by an MCMC method are real values in N-dimensional space, called the configuration space. The distribution of such samples converges to the target distribution in the limit. However, existing MCMC methods still face many challenges that are not well resolved. Difficulties for sampling by using MCMC methods include, but not exclusively, dealing with high dimensional and multimodal problems, high computation cost due to extremely large datasets in Bayesian machine learning models, and lack of reliable indicators for detecting convergence and measuring the accuracy of sampling. This dissertation focuses on new theory and methodology for efficient MCMC methods that aim to overcome the aforementioned difficulties.
One contribution of this dissertation is a set of generalizations of hybrid Monte Carlo (HMC). An HMC method combines a discretized dynamical system in an extended space, called the state space, with an acceptance test based on the Metropolis criterion. The discretized dynamical system used in HMC is volume preserving, meaning that in the state space the absolute Jacobian of a map from one point on the trajectory to another is 1. Volume preservation is, however, not necessary for the general purpose of sampling. A general theory allowing the use of non-volume-preserving dynamics for proposing MCMC moves is proposed. Examples, including isokinetic dynamics and variable-mass Hamiltonian dynamics with an explicit integrator, are designed with fewer restrictions based on the general theory. Experiments show improved efficiency for sampling high-dimensional multimodal problems. A second contribution is stochastic gradient samplers with reduced bias. An in-depth analysis of the noise introduced by the stochastic gradient is provided, and two methods to reduce the bias in the distribution of samples are proposed: one corrects the dynamics using an estimate of the noise based on subsampled data, and the other introduces additional variables and corresponding dynamics to adaptively reduce the bias. Extensive experiments show that both methods outperform existing methods. A third contribution is quasi-reliable estimates of effective sample size. A more reliable indicator, the longest integrated autocorrelation time over all functions in the state space, is proposed for detecting convergence and measuring the accuracy of MCMC methods. The superiority of the new indicator is supported by experiments on both synthetic and real problems.
Minor contributions include a general framework for changing variables and a numerical integrator for Hamiltonian dynamics with fourth-order accuracy. The idea of changing variables is to transform the potential energy function from a function of the original variable into a function of a new variable, so that undesired properties can be removed. Two examples are provided, and preliminary experimental results support this idea. The fourth-order integrator is constructed by combining the idea of the simplified Takahashi-Imada method with a two-stage Hessian-based integrator. The proposed method, called the two-stage simplified Takahashi-Imada method, shows outstanding performance compared with existing methods in high-dimensional sampling problems.
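The standard HMC construction that the abstract above generalizes, a volume-preserving discretized dynamical system in the extended (position, momentum) state space followed by a Metropolis acceptance test, can be sketched in one dimension (a toy illustration under assumed parameter choices, not the dissertation's generalized methods):

```python
import math
import random

def hmc(log_density, grad_log_density, x0, n_samples, eps=0.2, n_leap=15, seed=0):
    """Hybrid Monte Carlo: leapfrog-integrate Hamiltonian dynamics in the
    extended (position, momentum) state space, then apply the Metropolis
    acceptance test. The leapfrog map is volume preserving (its Jacobian
    has absolute value 1), so no Jacobian factor enters the acceptance ratio."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                          # fresh momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_log_density(x_new)     # half step (momentum)
        for i in range(n_leap):
            x_new += eps * p_new                         # full step (position)
            if i < n_leap - 1:
                p_new += eps * grad_log_density(x_new)   # full step (momentum)
        p_new += 0.5 * eps * grad_log_density(x_new)     # final half step
        # Metropolis test on the change in total energy H = -log pi(x) + p^2/2
        h_old = -log_density(x) + 0.5 * p * p
        h_new = -log_density(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new
        samples.append(x)
    return samples

# Standard normal target: log-density -x^2/2 (up to a constant), gradient -x
samples = hmc(lambda x: -0.5 * x * x, lambda x: -x, x0=0.0, n_samples=5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the acceptance rate stays high even for long trajectories, which is what makes distant, weakly correlated proposals possible.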
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms: information on the function to be optimized is limited to the values it associates with points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without a constraint, are essential components of convergence proofs for ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. First, general mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific properties (irreducibility, aperiodicity, and the fact that compact sets are small sets for the Markov chain) of the Markov chains studied; obtaining these properties without such tools is an ad hoc, tedious, and technical process that can be very difficult. Then different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle infeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on the parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally, we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up and discussed, and perspectives for future work are explored.
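The divergence behavior on linear functions discussed in this abstract can be illustrated with a toy (1,\lambda)-ES with constant step-size on an unconstrained linear function (a simplified sketch under assumed parameter values, not the constrained algorithms analyzed in the thesis):

```python
import random

def one_lambda_es(lam=10, sigma=1.0, n_iter=2000, dim=5, seed=1):
    """Toy (1,lambda)-ES with constant step-size on the linear function
    f(x) = x[0], minimized: each generation samples lam Gaussian offspring
    around the parent and keeps the best. On a linear function the best of
    lam offspring improves f by a positive expected amount per generation,
    so f(x) decreases without bound, i.e. the algorithm diverges."""
    rng = random.Random(seed)
    x = [0.0] * dim
    for _ in range(n_iter):
        offspring = [[xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
                     for _ in range(lam)]
        x = min(offspring, key=lambda y: y[0])  # select on f(x) = x[0]
    return x[0]

final_f = one_lambda_es()
```

With lam = 10 the expected per-generation gain is the mean of the minimum of 10 standard normals (about -1.5 sigma), so after 2000 generations f has drifted far below its starting value; this steady drift is the elementary mechanism behind the divergence results stated above.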
Neuhoff, Daniel. "Reversible Jump Markov Chain Monte Carlo". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17461.
The four studies of this thesis are concerned predominantly with the dynamics of macroeconomic time series, both in the context of a simple DSGE model and from a pure time-series modeling perspective.
Books on "Markov chain"
Liang, Faming, Chuanhai Liu, and Raymond J. Carroll. Advanced Markov Chain Monte Carlo Methods. Chichester, UK: John Wiley & Sons, Ltd, 2010. http://dx.doi.org/10.1002/9780470669723.
Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov chain Monte Carlo in practice. London: Chapman & Hall, 1996.
Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov chain Monte Carlo in practice. Boca Raton, Fla.: Chapman & Hall, 1998.
Saad, Y., William J. Stewart, and Research Institute for Advanced Computer Science (U.S.), eds. Numerical methods in Markov chain modeling. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1989.
Banisch, Sven. Markov Chain Aggregation for Agent-Based Models. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-24877-6.
Kendall, W. S., F. Liang, and J. S. Wang, eds. Markov chain Monte Carlo: Innovations and applications. Singapore: World Scientific, 2005.
Joseph, Anosh. Markov Chain Monte Carlo Methods in Quantum Field Theories. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46044-0.
Gamerman, Dani. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. London: Chapman & Hall, 1997.
Ramlau-Hansen, Henrik. Hattendorff's theorem: A Markov chain and counting process approach. Copenhagen: Laboratory of Actuarial Mathematics, University of Copenhagen, 1987.
Lopes, Hedibert Freitas, ed. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. 2nd ed. Boca Raton: Taylor & Francis, 2006.
Znajdź pełny tekst źródłaCzęści książek na temat "Markov chain"
Camacho Olmedo, M. T., and J. F. Mas. "Markov Chain". In Geomatic Approaches for Modeling Land Change Scenarios, 441–45. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60801-3_25.
Fürnkranz, Johannes, Philip K. Chan, Susan Craw, Claude Sammut, William Uther, Adwait Ratnaparkhi, Xin Jin, et al. "Markov Chain". In Encyclopedia of Machine Learning, 639. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_510.
Hu, Fuyan. "Markov Chain". In Encyclopedia of Systems Biology, 1175. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_443.
Weik, Martin H. "Markov chain". In Computer Science and Communications Dictionary, 977. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_11086.
Zhang, Hao. "Markov Chain". In Models and Methods for Management Science, 383–403. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1614-4_11.
Bandyopadhyay, Susmita. "Markov Chain". In Decision Support System, 109–24. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003307655-8.
Rodrigues, Eliane Regina, and Jorge Alberto Achcar. "Markov Chain Models". In Applications of Discrete-time Markov Chains and Poisson Processes to Air Pollution Modeling and Studies, 11–23. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4645-3_2.
Møller, Jan Kloppenborg, Marcel Schweiker, Rune Korsholm Andersen, Burak Gunay, Selin Yilmaz, Verena Marie Barthelmes, and Henrik Madsen. "Markov chain models". In Statistical Modelling of Occupant Behaviour, 253–88. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003340812-9.
Stewart, William J. "Markov Chain Equations". In Encyclopedia of Operations Research and Management Science, 921–25. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4419-1153-7_578.
Chaudhari, Harshal A., Michael Mathioudakis, and Evimaria Terzi. "Markov Chain Monitoring". In Proceedings of the 2018 SIAM International Conference on Data Mining, 441–49. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2018. http://dx.doi.org/10.1137/1.9781611975321.50.
Pełny tekst źródłaStreszczenia konferencji na temat "Markov chain"
Zhang, Yu, and Mitchell Bucklew. "Max Markov Chain". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/639.
HARA, Hisayuki, Satoshi AOKI, and Akimichi TAKEMURA. "Running Markov Chain without Markov Basis". In Harmony of Gröbner Bases and the Modern Industrial Society - The Second CREST-CSBM International Conference. Singapore: World Scientific Publishing Co. Pte. Ltd., 2012. http://dx.doi.org/10.1142/9789814383462_0005.
Awiszus, Maren, and Bodo Rosenhahn. "Markov Chain Neural Networks". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00293.
Asesh, Aishwarya. "Markov Chain Sequence Modeling". In 2022 3rd International Informatics and Software Engineering Conference (IISEC). IEEE, 2022. http://dx.doi.org/10.1109/iisec56263.2022.9998227.
Luo, Jian-qiang, and Yan-ping Zhao. "Research on the Supply Chain Product Market Forecasting Based on Markov Chain". In 2010 International Conference on E-Product E-Service and E-Entertainment (ICEEE 2010). IEEE, 2010. http://dx.doi.org/10.1109/iceee.2010.5660723.
Varshosaz, Mahsa, and Ramtin Khosravi. "Discrete time Markov chain families". In the 17th International Software Product Line Conference co-located workshops. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2499777.2500725.
McClymont, Kent, and Edward C. Keedwell. "Markov chain hyper-heuristic (MCHH)". In the 13th annual conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2001576.2001845.
Waller, Ephraim Nii Kpakpo, Pamela Delali Adablah, and Quist-Aphetsi Kester. "Markov Chain: Forecasting Economic Variables". In 2019 International Conference on Computing, Computational Modelling and Applications (ICCMA). IEEE, 2019. http://dx.doi.org/10.1109/iccma.2019.00026.
Cogill, Randy, and Erik Vargo. "The Poisson equation for reversible Markov chains: Analysis and application to Markov chain samplers". In 2012 IEEE 51st Annual Conference on Decision and Control (CDC). IEEE, 2012. http://dx.doi.org/10.1109/cdc.2012.6425978.
García, Jesús E., S. L. M. Londoño, and Thainá Soares. "Optimal model for a Markov chain with Markov covariates". In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2019. AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0026429.
Pełny tekst źródłaRaporty organizacyjne na temat "Markov chain"
Gelfand, Alan E., and Sujit K. Sahu. On Markov Chain Monte Carlo Acceleration. Fort Belvoir, VA: Defense Technical Information Center, April 1994. http://dx.doi.org/10.21236/ada279393.
Krebs, William B. Markov Chain Simulations of Binary Matrices. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada249265.
Kocherlakota, Narayana. Sluggish Inflation Expectations: A Markov Chain Analysis. Cambridge, MA: National Bureau of Economic Research, February 2016. http://dx.doi.org/10.3386/w22009.
Calvin, James M. Markov Chain Moment Formulas for Regenerative Simulation. Fort Belvoir, VA: Defense Technical Information Center, June 1989. http://dx.doi.org/10.21236/ada210684.
Wereley, Norman M., and Bruce K. Walker. Approximate Evaluation of Semi-Markov Chain Reliability Models. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada194669.
Safta, Cosmin, Mohammad Khalil, and Habib N. Najm. Transitional Markov Chain Monte Carlo Sampler in UQTk. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1606084.
Dabrowski, Christopher, and Fern Hunt. Markov chain analysis for large-scale grid systems. Gaithersburg, MD: National Institute of Standards and Technology, 2009. http://dx.doi.org/10.6028/nist.ir.7566.
Resnick, Sidney I., and David Zeber. Asymptotics of Markov Kernels and the Tail Chain. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada585087.
Warnes, Gregory R. HYDRA: A Java Library for Markov Chain Monte Carlo. Fort Belvoir, VA: Defense Technical Information Center, March 2002. http://dx.doi.org/10.21236/ada459649.
Reddy, S., and A. Crisp. Deep Neural Network Informed Markov Chain Monte Carlo Methods. Office of Scientific and Technical Information (OSTI), November 2023. http://dx.doi.org/10.2172/2283285.