Journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Consult the top 50 journal articles for your research on the topic "Partially Observable Markov Decision Processes (POMDPs)".
Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (October 9, 2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains". Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions". International Journal of Robotics Research 36, no. 2 (February 2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs". Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs". Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models". International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs". Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes". AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments". International Journal of Applied Mathematics and Computer Science 25, no. 3 (September 1, 2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs". Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap". Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs". Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs". Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes". Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs". Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs". Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (October 7, 2021): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.
Amato, Christopher, George Konidaris, Leslie P. Kaelbling, and Jonathan P. How. "Modeling and Planning with Macro-Actions in Decentralized POMDPs". Journal of Artificial Intelligence Research 64 (March 25, 2019): 817–59. http://dx.doi.org/10.1613/jair.1.11418.
Wang, Yi, Shiqi Zhang, and Joohyung Lee. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Sarraute, Carlos, Olivier Buffet, and Jörg Hoffmann. "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1816–24. http://dx.doi.org/10.1609/aaai.v26i1.8363.
Shi, Weihao, Shanhong Guo, Xiaoyu Cong, Weixing Sheng, Jing Yan, and Jinkun Chen. "Frequency Agile Anti-Interference Technology Based on Reinforcement Learning Using Long Short-Term Memory and Multi-Layer Historical Information Observation". Remote Sensing 15, no. 23 (November 23, 2023): 5467. http://dx.doi.org/10.3390/rs15235467.
Nababan, Maxtulus Junedy, Herman Mawengkang, Tulus Tulus, and Sutarman Sutarman. "Hidden Markov Model to Optimize Coordination Relationship for Learning Behaviour". International Journal of Religion 5, no. 9 (May 27, 2024): 459–69. http://dx.doi.org/10.61707/52exbt60.