Journal articles on the topic 'Partially Observable Markov Decision Processes (POMDPs)'
Consult the top 50 journal articles for your research on the topic 'Partially Observable Markov Decision Processes (POMDPs).'
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics." Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (October 9, 2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains." Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized Control of Multi-Robot Partially Observable Markov Decision Processes Using Belief Space Macro-Actions." International Journal of Robotics Research 36, no. 2 (February 2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs." Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs." Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models." International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs." Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The Optimal Probability of the Risk for Finite Horizon Partially Observable Markov Decision Processes." AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-Up Learning of Hierarchical Models in a Class of Deterministic POMDP Environments." International Journal of Applied Mathematics and Computer Science 25, no. 3 (September 1, 2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs." Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap." Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs." Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs." Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes." Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs." Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (October 7, 2021): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.
Amato, Christopher, George Konidaris, Leslie P. Kaelbling, and Jonathan P. How. "Modeling and Planning with Macro-Actions in Decentralized POMDPs." Journal of Artificial Intelligence Research 64 (March 25, 2019): 817–59. http://dx.doi.org/10.1613/jair.1.11418.
Wang, Yi, Shiqi Zhang, and Joohyung Lee. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language." Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Sarraute, Carlos, Olivier Buffet, and Jörg Hoffmann. "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1816–24. http://dx.doi.org/10.1609/aaai.v26i1.8363.
Shi, Weihao, Shanhong Guo, Xiaoyu Cong, Weixing Sheng, Jing Yan, and Jinkun Chen. "Frequency Agile Anti-Interference Technology Based on Reinforcement Learning Using Long Short-Term Memory and Multi-Layer Historical Information Observation." Remote Sensing 15, no. 23 (November 23, 2023): 5467. http://dx.doi.org/10.3390/rs15235467.
Nababan, Maxtulus Junedy, Herman Mawengkang, Tulus Tulus, and Sutarman Sutarman. "Hidden Markov Model to Optimize Coordination Relationship for Learning Behaviour." International Journal of Religion 5, no. 9 (May 27, 2024): 459–69. http://dx.doi.org/10.61707/52exbt60.