Academic literature on the topic 'Chaines de Markov'
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Chaines de Markov.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Chaines de Markov"
Dies, Jacques-Edouard. "Transience des chaines de Markov lineaires sur les permutations." Journal of Applied Probability 24, no. 4 (December 1987): 899–907. http://dx.doi.org/10.2307/3214214.
Córdoba y Ordóñez, Juan, and Javier Antón Burgos. "Aplicación de cadenas de Markov al análisis dinámico de la centralidad en un sistema de transporte." Estudios Geográficos 51, no. 198 (March 30, 1990): 33–64. http://dx.doi.org/10.3989/egeogr.1990.i198.33.
Dimou, Michel, Gabriel Figueiredo De Oliveira, and Alexandra Schaffar. "Entre convergence et disparité. Les SIDS face aux défis de la transition énergétique." Revue d'économie politique Vol. 134, no. 3 (June 21, 2024): 417–41. http://dx.doi.org/10.3917/redp.343.0417.
Lee, Shiowjen, and J. Lynch. "Total Positivity of Markov Chains and the Failure Rate Character of Some First Passage Times." Advances in Applied Probability 29, no. 3 (September 1997): 713–32. http://dx.doi.org/10.2307/1428083.
Lund, Robert, Ying Zhao, and Peter C. Kiessler. "A monotonicity in reversible Markov chains." Journal of Applied Probability 43, no. 2 (June 2006): 486–99. http://dx.doi.org/10.1239/jap/1152413736.
Grewal, Jasleen K., Martin Krzywinski, and Naomi Altman. "Markov models—Markov chains." Nature Methods 16, no. 8 (July 30, 2019): 663–64. http://dx.doi.org/10.1038/s41592-019-0476-x.
Lindley, D. V., and J. R. Norris. "Markov Chains." Mathematical Gazette 83, no. 496 (March 1999): 188. http://dx.doi.org/10.2307/3618756.
Full textDissertations / Theses on the topic "Chaines de Markov"
Sudyko, Elena. "Dollarisation financière en Russie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE032.
This thesis develops a portfolio model of financial dollarization (FD) and estimates it for Russia. The contribution of this work is to construct the first theoretical mean-variance-skewness-kurtosis model of financial dollarization and to validate it empirically. The work builds on previous research which found that adding higher moments, such as skewness and kurtosis, to the minimum variance portfolio (MVP) enables better modelling of portfolio choice, and develops such a model for FD. We then use Markov-switching methods on monthly data for bank deposits in Russia since the late 1990s to document the dominant influence of inflation and currency depreciation, and of their moments, as the main determinants of deposit dollarization in a mean-variance-skewness-kurtosis framework during crisis as opposed to normal periods.
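The regime-switching mechanism this abstract relies on can be illustrated with a minimal two-state simulation. This is a generic sketch with made-up parameters, not the thesis's estimated model: the regimes, transition probability, and moments below are all hypothetical.

```python
import random

def simulate_markov_switching(n, p_stay=0.9, seed=1):
    """Simulate a two-regime Markov-switching series.

    Regime 0 ("normal"): low mean, low volatility.
    Regime 1 ("crisis"): high mean, high volatility.
    p_stay is the probability of remaining in the current regime at each step.
    """
    rng = random.Random(seed)
    params = {0: (0.0, 1.0), 1: (2.0, 3.0)}  # hypothetical (mean, std) per regime
    state, series, states = 0, [], []
    for _ in range(n):
        if rng.random() > p_stay:  # switch regime with probability 1 - p_stay
            state = 1 - state
        mu, sigma = params[state]
        series.append(rng.gauss(mu, sigma))
        states.append(state)
    return series, states

series, states = simulate_markov_switching(1000)
```

Estimation then works in the opposite direction: given only `series`, a Markov-switching model infers the hidden regime path and the per-regime moments.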
Nunzi, Francois. "Autour de quelques chaines de Markov combinatoires." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC270/document.
We consider two types of combinatorial Markov chains. We start with juggling Markov chains, inspired by Warrington's model. We define multivariate generalizations of the existing models, for which we give stationary measures and normalization factors in closed form. We also investigate the case where the maximum height at which the juggler may send balls tends to infinity. We then reformulate the Markov chain in terms of integer partitions, which allows us to consider the case where the juggler interacts with infinitely many balls. Our proofs are obtained through an enriched Markov chain on set partitions. We also show that one of the models has the ultrafast convergence property: the stationary measure is reached after a finite number of steps. In the following chapter, we consider multivariate generalizations of those models: the juggler now juggles balls of different weights, and when a heavy ball collides with a lighter one, the lighter ball is bumped to a higher position, where it may collide with a still lighter one, and so on until a ball reaches the highest position. We give closed-form expressions for the stationary measures and the normalization factors. The last chapter is dedicated to the stochastic sandpile model, for which we prove a conjecture of Selig: the stationary measure does not depend on the law governing the addition of sand grains.
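Stationary measures, central to the abstract above, can be computed numerically for any small finite chain. The sketch below is a generic power-iteration illustration (not the juggling model itself), with a hypothetical two-state transition matrix:

```python
def stationary_distribution(P, iters=200):
    """Approximate the stationary distribution pi of a finite Markov chain.

    P is a row-stochastic transition matrix (list of lists); pi satisfies
    pi = pi P. Iterating the update from any starting distribution converges
    for an irreducible aperiodic chain.
    """
    n = len(P)
    pi = [1.0 / n] * n  # uniform starting distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Example chain: stay in state 0 with prob. 0.8, in state 1 with prob. 0.6.
P = [[0.8, 0.2],
     [0.4, 0.6]]
pi = stationary_distribution(P)  # converges to [2/3, 1/3]
```

For this matrix, solving pi = pi P by hand gives pi = (2/3, 1/3), which the iteration reproduces; chains with closed-form stationary measures, as in the thesis, bypass this numerical step entirely.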
Aspandiiarov, Sanjar. "Quelques propriétés des chaînes de Markov et du mouvement brownien." Paris 6, 1994. http://www.theses.fr/1994PA066304.
Giordana, Nathalie. "Segmentation non supervisée d'images multi-spectrales par chaînes de Markov cachées." Compiègne, 1996. http://www.theses.fr/1996COMP981S.
Vallet, Robert. "Applications de l'identification des chaînes de Markov cachées aux communications numériques." Paris, ENST, 1991. http://www.theses.fr/1991ENST0032.
Ladelli, Lucia. "Théorèmes limites pour les chaînes de Markov : application aux algorithmes stochastiques." Paris 6, 1989. http://www.theses.fr/1989PA066283.
Benmiloud, Btissam. "Chaînes de Markov cachées et segmentation non supervisée de séquences d'images." Paris 7, 1994. http://www.theses.fr/1994PA077120.
Lezaud, Pascal. "Étude quantitative des chaînes de Markov par perturbation de leur noyau." Toulouse 3, 1998. https://hal-enac.archives-ouvertes.fr/tel-01084797.
Meziane, Abdelouafi. "Quelques résultats nouveaux sur les mesures stationnaires de chaînes de Markov dénombrables." Toulouse 3, 1987. http://www.theses.fr/1987TOU30252.
El Maalouf, Joseph. "Méthodes de Monte Carlo stratifiées pour la simulation des chaînes de Markov." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM089.
Monte Carlo methods are probabilistic schemes that use computers to solve various scientific problems with random numbers. The main disadvantage of this approach is slow convergence, and many techniques have been devised to accelerate Monte Carlo simulations. This is the aim of certain deterministic methods called quasi-Monte Carlo, where random points are replaced with special point sets of enhanced uniformity; these methods do not, however, provide confidence intervals that would permit estimating the error made. In the present work, we are interested in random methods that reduce the variance of a Monte Carlo estimator: stratification techniques consist of splitting the sampling domain into strata in which random samples are drawn. We focus here on applications of stratified methods to approximating Markov chains, simulating diffusion in materials, and solving fragmentation equations.
In the first chapter, we present Monte Carlo methods in the framework of numerical quadrature, and we introduce the stratification strategies. We focus on two techniques: simple stratification (MCS) and Sudoku stratification (SS), where the point repartitions are similar to Sudoku grids. We also present quasi-Monte Carlo methods, whose quasi-random points share common features with stratified points.
The second chapter describes the use of stratified algorithms for the simulation of Markov chains. We consider time-homogeneous Markov chains with a one-dimensional discrete or continuous state space. In the discrete case, we establish theoretical bounds on the variance of certain estimators that indicate a variance reduction with respect to usual Monte Carlo: the variance of the MCS and SS methods decreases with order 3/2 in the number of samples, instead of order 1 for usual Monte Carlo. Numerical experiments for one-dimensional and multi-dimensional, discrete and continuous state spaces show improved variances; the order is estimated using linear regression.
In the third chapter, we investigate the interest of stratified Monte Carlo methods for simulating diffusion in various non-stationary physical processes, by discretizing time and performing a random walk at every time step. We propose algorithms for pure diffusion, convection-diffusion, and reaction-diffusion (the Kolmogorov and Nagumo equations); we finally solve the Burgers equation. In each case, numerical tests show an improvement of the variance due to the use of stratified Sudoku sampling.
The fourth chapter describes a stratified Monte Carlo scheme for simulating fragmentation phenomena. Several numerical comparisons show that stratified Sudoku sampling reduces the variance of Monte Carlo estimates. We finally test a method for solving an inverse problem: knowing the evolution of the mass distribution, it aims to recover the fragmentation kernel. In this case quasi-random points are used to solve the direct problem.
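The variance reduction that stratification provides can be sketched in the simplest setting of a one-dimensional integral (a simpler case than the Markov-chain estimators studied in the thesis; the function and sample sizes below are illustrative, and this is plain simple stratification, not the Sudoku variant):

```python
import random

def mc_estimate(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_estimate(f, n, rng):
    """Simple stratification: one uniform sample in each of n equal strata.

    Drawing point i uniformly in [i/n, (i+1)/n) removes the between-strata
    component of the variance, which is the effect exploited by stratified
    schemes such as the MCS method mentioned above.
    """
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

rng = random.Random(0)
f = lambda x: x * x  # exact integral over [0, 1] is 1/3
crude = mc_estimate(f, 1000, rng)
strat = stratified_estimate(f, 1000, rng)
```

Repeating both estimators over many seeds and regressing the log-variance on the log of the sample size is exactly the kind of numerical order estimation the abstract describes.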
Books on the topic "Chaines de Markov"
Gagniuc, Paul A. Markov Chains. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119387596.
Brémaud, Pierre. Markov Chains. New York, NY: Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4757-3124-8.
Sericola, Bruno. Markov Chains. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118731543.
Douc, Randal, Eric Moulines, Pierre Priouret, and Philippe Soulier. Markov Chains. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97704-1.
Graham, Carl. Markov Chains. Chichester, UK: John Wiley & Sons, Ltd, 2014. http://dx.doi.org/10.1002/9781118881866.
Brémaud, Pierre. Markov Chains. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45982-6.
Ching, Wai-Ki, Ximin Huang, Michael K. Ng, and Tak-Kuen Siu. Markov Chains. Boston, MA: Springer US, 2013. http://dx.doi.org/10.1007/978-1-4614-6312-2.
Hermanns, Holger. Interactive Markov Chains. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45804-2.
Privault, Nicolas. Understanding Markov Chains. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0659-4.
Hartfiel, Darald J. Markov Set-Chains. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0094586.
Full textBook chapters on the topic "Chaines de Markov"
Carlton, Matthew A., and Jay L. Devore. "Markov Chains." In Probability with Applications in Engineering, Science, and Technology, 423–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52401-6_6.
Lindsey, James K. "Markov Chains." In The Analysis of Stochastic Processes using GLIM, 21–42. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2888-2_2.
Gebali, Fayez. "Markov Chains." In Analysis of Computer and Communication Networks, 1–57. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-74437-7_3.
Lakatos, László, László Szeidl, and Miklós Telek. "Markov Chains." In Introduction to Queueing Systems with Telecommunication Applications, 93–177. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15142-3_3.
Gordon, Hugh. "Markov Chains." In Discrete Probability, 209–49. New York, NY: Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1966-8_9.
Hermanns, Holger. "Markov Chains." In Interactive Markov Chains, 35–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45804-2_3.
Serfozo, Richard. "Markov Chains." In Probability and Its Applications, 1–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89332-5_1.
Ökten, Giray. "Markov Chains." In Probability and Simulation, 81–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-56070-6_4.
Robert, Christian P., and George Casella. "Markov Chains." In Springer Texts in Statistics, 139–91. New York, NY: Springer New York, 1999. http://dx.doi.org/10.1007/978-1-4757-3071-5_4.
Harris, Carl M. "Markov chains." In Encyclopedia of Operations Research and Management Science, 481–84. New York, NY: Springer US, 2001. http://dx.doi.org/10.1007/1-4020-0611-x_579.
Full textConference papers on the topic "Chaines de Markov"
Aminof, Benjamin, Linus Cooper, Sasha Rubin, Moshe Y. Vardi, and Florian Zuleger. "Probabilistic Synthesis and Verification for LTL on Finite Traces." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR-2023), 27–37. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/3.
Hunter, Jeffrey J. "Perturbed Markov Chains." In Proceedings of the International Statistics Workshop. WORLD SCIENTIFIC, 2006. http://dx.doi.org/10.1142/9789812772466_0008.
Saglam, Cenk Oguz, and Katie Byl. "Metastable Markov chains." In 2014 IEEE 53rd Annual Conference on Decision and Control (CDC). IEEE, 2014. http://dx.doi.org/10.1109/cdc.2014.7039847.
Morris, Lloyd, Juan Luis Arias, Alonso Toro, Andrés Martínez, and Homero Murzi. "Information management for the projection of productive capacities articulated to export scenarios." In Human Interaction and Emerging Technologies (IHIET-AI 2022) Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100924.
Cogill, Randy, and Erik Vargo. "The Poisson equation for reversible Markov chains: Analysis and application to Markov chain samplers." In 2012 IEEE 51st Annual Conference on Decision and Control (CDC). IEEE, 2012. http://dx.doi.org/10.1109/cdc.2012.6425978.
Zhang, Yu, and Mitchell Bucklew. "Max Markov Chain." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/639.
Bini, D. A., B. Meini, S. Steffé, and B. Van Houdt. "Structured Markov chains solver." In Proceeding from the 2006 workshop. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1190366.1190378.
Kiefer, Stefan, and A. Prasad Sistla. "Distinguishing Hidden Markov Chains." In LICS '16: 31st Annual ACM/IEEE Symposium on Logic in Computer Science. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2933575.2933608.
García, Jesús E. "Combining multivariate Markov chains." In PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2014 (ICNAAM-2014). AIP Publishing LLC, 2015. http://dx.doi.org/10.1063/1.4912373.
Full textReports on the topic "Chaines de Markov"
Ramezani, Vahid R., and Steven I. Marcus. Risk-Sensitive Probability for Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, September 2002. http://dx.doi.org/10.21236/ada438509.
Marie, Raymond, Andrew Reibman, and Kishor Trivedi. Transient Solution of Acyclic Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, August 1985. http://dx.doi.org/10.21236/ada162314.
Doerschuk, Peter C., Robert R. Tenney, and Alan S. Willsky. Modeling Electrocardiograms Using Interacting Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada162758.
Ma, D.-J., A. M. Makowski, and A. Shwartz. Stochastic Approximations for Finite-State Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, January 1987. http://dx.doi.org/10.21236/ada452264.
Соловйов, Володимир Миколайович, V. Saptsin, and D. Chabanenko. Markov chains applications to the financial-economic time series predictions. Transport and Telecommunication Institute, 2011. http://dx.doi.org/10.31812/0564/1189.
Soloviev, V., V. Saptsin, and D. Chabanenko. Financial time series prediction with the technology of complex Markov chains. Брама-Україна, 2014. http://dx.doi.org/10.31812/0564/1305.
Krakowski, Martin. Models of Coin-Tossing for Markov Chains. Revision. Fort Belvoir, VA: Defense Technical Information Center, December 1987. http://dx.doi.org/10.21236/ada196572.
Dupuis, Paul, and Hui Wang. Adaptive Importance Sampling for Uniformly Recurrent Markov Chains. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada461913.
Tsitsiklis, John N. Markov Chains with Rare Transitions and Simulated Annealing. Fort Belvoir, VA: Defense Technical Information Center, September 1985. http://dx.doi.org/10.21236/ada161598.
Harris, Carl M. Rootfinding for Markov Chains with Quasi-Triangular Transition Matrices. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada202468.