Journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in ".pdf" format and read the abstract of the work online, if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (October 9, 2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains". Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions". International Journal of Robotics Research 36, no. 2 (February 2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs". Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs". Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models". International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs". Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes". AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments". International Journal of Applied Mathematics and Computer Science 25, no. 3 (September 1, 2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs". Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap". Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs". Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs". Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes". Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carto Online Planning in POMDPs". Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs". Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (October 7, 2021): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.
Amato, Christopher, George Konidaris, Leslie P. Kaelbling, and Jonathan P. How. "Modeling and Planning with Macro-Actions in Decentralized POMDPs". Journal of Artificial Intelligence Research 64 (March 25, 2019): 817–59. http://dx.doi.org/10.1613/jair.1.11418.
Wang, Yi, Shiqi Zhang, and Joohyung Lee. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Sarraute, Carlos, Olivier Buffet, and Jörg Hoffmann. "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1816–24. http://dx.doi.org/10.1609/aaai.v26i1.8363.
Shi, Weihao, Shanhong Guo, Xiaoyu Cong, Weixing Sheng, Jing Yan, and Jinkun Chen. "Frequency Agile Anti-Interference Technology Based on Reinforcement Learning Using Long Short-Term Memory and Multi-Layer Historical Information Observation". Remote Sensing 15, no. 23 (November 23, 2023): 5467. http://dx.doi.org/10.3390/rs15235467.
Nababan, Maxtulus Junedy, Herman Mawengkang, Tulus Tulus, and Sutarman Sutarman. "Hidden Markov Model to Optimize Coordination Relationship for Learning Behaviour". International Journal of Religion 5, no. 9 (May 27, 2024): 459–69. http://dx.doi.org/10.61707/52exbt60.