Journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Below are the top 50 journal articles for research on the topic "Partially Observable Markov Decision Processes (POMDPs)".
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (October 9, 2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains". Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions". International Journal of Robotics Research 36, no. 2 (February 2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs". Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs". Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models". International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs". Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes". AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments". International Journal of Applied Mathematics and Computer Science 25, no. 3 (September 1, 2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs". Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap". Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs". Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs". Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes". Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs". Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs". Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (October 7, 2021): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.
Amato, Christopher, George Konidaris, Leslie P. Kaelbling, and Jonathan P. How. "Modeling and Planning with Macro-Actions in Decentralized POMDPs". Journal of Artificial Intelligence Research 64 (March 25, 2019): 817–59. http://dx.doi.org/10.1613/jair.1.11418.
Wang, Yi, Shiqi Zhang, and Joohyung Lee. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Sarraute, Carlos, Olivier Buffet, and Jörg Hoffmann. "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1816–24. http://dx.doi.org/10.1609/aaai.v26i1.8363.
Shi, Weihao, Shanhong Guo, Xiaoyu Cong, Weixing Sheng, Jing Yan, and Jinkun Chen. "Frequency Agile Anti-Interference Technology Based on Reinforcement Learning Using Long Short-Term Memory and Multi-Layer Historical Information Observation". Remote Sensing 15, no. 23 (November 23, 2023): 5467. http://dx.doi.org/10.3390/rs15235467.
Nababan, Maxtulus Junedy, Herman Mawengkang, Tulus Tulus, and Sutarman Sutarman. "Hidden Markov Model to Optimize Coordination Relationship for Learning Behaviour". International Journal of Religion 5, no. 9 (May 27, 2024): 459–69. http://dx.doi.org/10.61707/52exbt60.