Journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Below are the 50 best journal articles for studies on the topic "Partially Observable Markov Decision Processes (POMDPs)".
NI, YAODONG, and ZHI-QIANG LIU. "BOUNDED-PARAMETER PARTIALLY OBSERVABLE MARKOV DECISION PROCESSES: FRAMEWORK AND ALGORITHM". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics". Proceedings of the International Conference on Automated Planning and Scheduling 29 (May 25, 2021): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes". International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (October 9, 2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains". Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions". International Journal of Robotics Research 36, no. 2 (February 2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs". Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes". Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs". Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
XIANG, YANG, and FRANK HANSHAR. "MULTIAGENT EXPEDITION WITH GRAPHICAL MODELS". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models". International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs". Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes". AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments". International Journal of Applied Mathematics and Computer Science 25, no. 3 (September 1, 2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives". Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs". Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap". Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs". Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs". Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes". Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs". Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs". Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs". Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (October 7, 2021): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems". Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.
Amato, Christopher, George Konidaris, Leslie P. Kaelbling, and Jonathan P. How. "Modeling and Planning with Macro-Actions in Decentralized POMDPs". Journal of Artificial Intelligence Research 64 (March 25, 2019): 817–59. http://dx.doi.org/10.1613/jair.1.11418.
WANG, YI, SHIQI ZHANG, and JOOHYUNG LEE. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language". Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Sarraute, Carlos, Olivier Buffet, and Jörg Hoffmann. "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1816–24. http://dx.doi.org/10.1609/aaai.v26i1.8363.
Shi, Weihao, Shanhong Guo, Xiaoyu Cong, Weixing Sheng, Jing Yan, and Jinkun Chen. "Frequency Agile Anti-Interference Technology Based on Reinforcement Learning Using Long Short-Term Memory and Multi-Layer Historical Information Observation". Remote Sensing 15, no. 23 (November 23, 2023): 5467. http://dx.doi.org/10.3390/rs15235467.
Nababan, Maxtulus Junedy, Herman Mawengkang, Tulus Tulus, and Sutarman Sutarman. "Hidden Markov Model to Optimize Coordination Relationship for Learning Behaviour". International Journal of Religion 5, no. 9 (May 27, 2024): 459–69. http://dx.doi.org/10.61707/52exbt60.