Academic literature on the topic 'POMDP planning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'POMDP planning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "POMDP planning"
Rafferty, Anna N., Emma Brunskill, Thomas L. Griffiths, and Patrick Shafto. "Faster Teaching via POMDP Planning." Cognitive Science 40, no. 6 (September 24, 2015): 1290–332. http://dx.doi.org/10.1111/cogs.12290.
Luo, Yuanfu, Haoyu Bai, David Hsu, and Wee Sun Lee. "Importance sampling for online planning under uncertainty." International Journal of Robotics Research 38, no. 2-3 (June 19, 2018): 162–81. http://dx.doi.org/10.1177/0278364918780322.
Khalvati, Koosha, and Alan Mackworth. "A Fast Pairwise Heuristic for Planning under Uncertainty." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 503–9. http://dx.doi.org/10.1609/aaai.v27i1.8672.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs." Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ye, Nan, Adhiraj Somani, David Hsu, and Wee Sun Lee. "DESPOT: Online POMDP Planning with Regularization." Journal of Artificial Intelligence Research 58 (January 26, 2017): 231–66. http://dx.doi.org/10.1613/jair.5328.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains." Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Brafman, Ronen, Guy Shani, and Shlomo Zilberstein. "Qualitative Planning under Partial Observability in Multi-Agent Domains." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 130–37. http://dx.doi.org/10.1609/aaai.v27i1.8643.
He, Ruijie, Emma Brunskill, and Nicholas Roy. "PUMA: Planning Under Uncertainty with Macro-Actions." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 4, 2010): 1089–95. http://dx.doi.org/10.1609/aaai.v24i1.7749.
Yang, Qiming, Jiancheng Xu, Haibao Tian, and Yong Wu. "Decision Modeling of UAV On-Line Path Planning Based on IMM." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 2 (April 2018): 323–31. http://dx.doi.org/10.1051/jnwpu/20183620323.
Wang, Yi, Shiqi Zhang, and Joohyung Lee. "Bridging Commonsense Reasoning and Probabilistic Planning via a Probabilistic Action Language." Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 1090–106. http://dx.doi.org/10.1017/s1471068419000371.
Dissertations / Theses on the topic "POMDP planning"
Saldaña Gadea, Santiago Jesús. "The effectiveness of social plan sharing in online planning in POMDP-type domains." Winston-Salem, NC: Wake Forest University, 2009. http://dspace.zsr.wfu.edu/jspui/handle/10339/44699.
Title from electronic thesis title page. Thesis advisor: William H. Turkett Jr. Vita. Includes bibliographical references (p. 47-48).
Pinheiro, Paulo Gurgel 1983. "Planning for mobile robot localization using architectural design features on a hierarchical POMDP approach = Planejamento para localização de robôs móveis utilizando padrões arquitetônicos em um modelo hierárquico de POMDP." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275601.
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Mobile robot localization is one of the most explored areas in robotics due to its importance for solving problems such as navigation, mapping and SLAM. In this work, we are interested in solving global localization problems, where the initial pose of the robot is completely unknown. Several works have proposed solutions for localization focusing on robot cooperation, communication or environment exploration, where the robot's pose is often found by a certain amount of random actions or state-belief-oriented actions. In order to decrease the total steps performed, we introduce a model of planning for localization using POMDPs and Markov Localization that indicates the optimal action to be taken by the robot at each decision time. Our focus is on i) hard localization problems, where there are no special landmarks or extra features over the environment to help the robot, ii) critical performance situations, where the robot is required to avoid random actions and the waste of energy roaming over the environment, and iii) multiple-mission situations. Since the robot is designed to perform missions, we propose a model that runs missions and the localization process simultaneously. Also, since the robot can have different missions, the model computes the planning for localization as an offline process, but loads the missions at runtime. Planning for multiple environments is a challenge due to the amount of states we must consider. Thus, we also propose a solution to compress the original map, creating a smaller topological representation that makes plans easier and cheaper to compute. The map compression takes advantage of the similarity of rooms found especially in office and residential environments. Similar rooms have similar architectural design features that can be shared.
To deal with the compressed map, we propose a hierarchical approach that uses light POMDP plans and the compressed map on the higher layer to find the gross pose, and on the lower layer, decomposed maps to find the precise pose. We demonstrated the hierarchical approach with the map compression using both the V-REP simulator and a Pioneer 3-DX robot. Compared to other active localization models, the results show that our approach allowed the robot to perform both localization and the mission in a multiple-room environment with a significant reduction in the number of steps while keeping the pose accuracy.
Doctorate in Computer Science
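The Markov-localization belief update the abstract above relies on can be sketched in a few lines. This is an illustrative reconstruction with made-up numbers (corridor length, door positions, sensor rates), not the thesis' code:

```python
import numpy as np

# Illustrative Markov-localization sketch on a 1-D corridor of n cells.
n = 10
belief = np.full(n, 1.0 / n)  # global localization: pose fully unknown

def predict(belief, p_move=0.8):
    """Motion update: the robot tried to move one cell forward; it
    succeeds with probability p_move, otherwise stays put."""
    return p_move * np.roll(belief, 1) + (1.0 - p_move) * belief

def correct(belief, likelihood):
    """Sensor update: multiply by P(z | x) and renormalize (Bayes rule)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# A door detector fires; doors are at cells 2 and 7, and the detector
# has an assumed 0.9 hit rate and 0.1 false-positive rate.
likelihood = np.full(n, 0.1)
likelihood[[2, 7]] = 0.9

belief = correct(predict(belief), likelihood)
# Belief now concentrates on the two door cells; further motion and
# sensing steps disambiguate between them.
```

A POMDP planner for localization chooses the next action to sharpen this belief rather than moving randomly, which is the step-count reduction the abstract reports.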
Corona, Gabriel. "Utilisation de croyances heuristiques pour la planification multi-agent dans le cadre des Dec-POMDP." PhD thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00598689.
Vanegas Alvarez, Fernando. "Uncertainty based online planning for UAV missions in GPS-denied and cluttered environments." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103846/1/Fernando_Vanegas%20Alvarez_Thesis.pdf.
Ponzoni Carvalho Chanel, Caroline. "Planification de perception et de mission en environnement incertain : Application à la détection et à la reconnaissance de cibles par un hélicoptère autonome." Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0011/document.
Mobile and aerial robots face the need to plan actions with incomplete information about the state of the world. In this context, this thesis proposes a modeling and resolution framework for perception and mission planning problems where an autonomous helicopter must detect and recognize targets in an uncertain and partially observable environment. We founded our work on Partially Observable Markov Decision Processes (POMDPs), because they provide a general optimization framework for perception and decision tasks under a long-term horizon. Special attention is given to the outputs of the image processing algorithm in order to model its uncertain behavior as a probabilistic observation function. A critical study of the POMDP model and its optimization criterion is also conducted. In order to respect the safety constraints of aerial robots, we then propose an approach to properly handle action feasibility constraints in partially observable domains: the AC-POMDP model, which distinguishes between the verification of environmental properties and the information about targets' nature. Furthermore, we propose a framework to optimize and execute POMDP policies in parallel under time constraints. This framework is based on anticipated and probabilistic optimization of future execution states of the system. Finally, we embedded this algorithmic framework on board Onera's autonomous helicopters, and performed real flight experiments for multi-target detection and recognition missions.
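The abstract above models the image-processing chain's outputs as a probabilistic observation function. As a loose, made-up illustration (the confusion counts and class names are hypothetical, not the thesis' model), such a function can be estimated from a classifier's confusion statistics and then plugged into a Bayesian belief update over target hypotheses:

```python
import numpy as np

# Hypothetical confusion counts for a target classifier:
# rows = true class (car, truck, nothing), cols = reported class.
confusion = np.array([
    [80.0, 15.0,  5.0],   # true: car
    [10.0, 85.0,  5.0],   # true: truck
    [ 5.0,  5.0, 90.0],   # true: nothing
])

# Row-normalize the counts into a conditional observation model P(o | s).
obs_model = confusion / confusion.sum(axis=1, keepdims=True)

def belief_update(belief, obs_idx):
    """Bayes update of the belief over true classes given one report."""
    posterior = belief * obs_model[:, obs_idx]
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])  # uninformative prior
belief = belief_update(belief, 0)         # classifier reports "car"
```

In a POMDP this observation model feeds both online belief tracking and offline policy optimization, which is why the abstract stresses characterizing the classifier's uncertain behavior.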
Drougard, Nicolas. "Exploiting imprecise information sources in sequential decision making problems under uncertainty." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0037/document.
Partially Observable Markov Decision Processes (POMDPs) define a useful formalism to express probabilistic sequential decision problems under uncertainty. When this model is used for a robotic mission, the system is defined as the features of the robot and its environment needed to express the mission. The system state is not directly seen by the agent (the robot). Solving a POMDP thus consists in computing a strategy which, on average, achieves the mission best, i.e. a function mapping the information known by the agent to an action. Some practical issues of the POMDP model are first highlighted in the robotic context: they concern the modeling of the agent's ignorance, the imprecision of the observation model, and the complexity of solving real-world problems. A counterpart of the POMDP model, called pi-POMDP, simplifies uncertainty representation with a qualitative evaluation of event plausibilities. It comes from Qualitative Possibility Theory, which provides the means to model imprecision and ignorance. After a formal presentation of the POMDP and pi-POMDP models, an update of the possibilistic model is proposed. Next, the study of factored pi-POMDPs allows us to set up an algorithm named PPUDD which uses Algebraic Decision Diagrams to solve large structured planning problems. Strategies computed by PPUDD, which were tested in the context of the IPPC 2014 competition, can be more efficient than those produced by probabilistic solvers when the model is imprecise or for high-dimensional problems. This thesis proposes some ways of using Qualitative Possibility Theory to improve computation time and uncertainty modeling in practice.
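In the Qualitative Possibility Theory the abstract above builds on, plausibilities are combined with min/max rather than product/sum, which lets total ignorance be represented exactly. A minimal sketch of a possibilistic belief (conditioning) update under these assumptions, not PPUDD itself and with invented numbers:

```python
def pi_update(pi_states, pi_obs_given_state):
    """Qualitative conditioning of a possibility distribution.

    joint(s) = min(pi(s), pi(o | s)); the most plausible states are then
    renormalized to possibility 1, while the others keep their joint value.
    """
    joint = [min(p, l) for p, l in zip(pi_states, pi_obs_given_state)]
    top = max(joint)
    return [1.0 if j == top else j for j in joint]

# Total ignorance: every state fully possible (contrast with a uniform
# probability, which already commits to "equally likely").
pi_prior = [1.0, 1.0, 1.0]
# Assumed plausibility of the received observation in each state.
pi_obs = [1.0, 0.4, 0.1]
pi_post = pi_update(pi_prior, pi_obs)  # -> [1.0, 0.4, 0.1]
```

Because only min and max are needed, such updates compose well with Algebraic Decision Diagrams over finite plausibility scales, which is the representational trick PPUDD exploits for large factored problems.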
Allen, Martin William. "Agent interactions in decentralized environments." Amherst, Mass. : University of Massachusetts Amherst, 2009. http://scholarworks.umass.edu/open_access_dissertations/1.
Nguyen, Hoa Van. "Methods for Online UAV Path Planning for Tracking Multiple Objects." Thesis, 2020. http://hdl.handle.net/2440/126537.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
Saborío Morales, Juan Carlos. "Relevance-based Online Planning in Complex POMDPs." Doctoral thesis, 2020. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-202007173302.
Rens, Gavin B. "A belief-desire-intention architecture with a logic-based planner for agents in stochastic domains." Diss., 2010. http://hdl.handle.net/10500/3517.
Computing
M. Sc. (Computer Science)
Book chapters on the topic "POMDP planning"
Rafferty, Anna N., Emma Brunskill, Thomas L. Griffiths, and Patrick Shafto. "Faster Teaching by POMDP Planning." In Lecture Notes in Computer Science, 280–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21869-9_37.
Pyeatt, Larry D., and Adele E. Howe. "A Parallel Algorithm for POMDP Solution." In Recent Advances in AI Planning, 73–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10720246_6.
Washington, Richard. "BI-POMDP: Bounded, incremental partially-observable Markov-model planning." In Recent Advances in AI Planning, 440–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63912-8_105.
Bauters, Kim, Kevin McAreavey, Jun Hong, Yingke Chen, Weiru Liu, Lluís Godo, and Carles Sierra. "Probabilistic Planning in AgentSpeak Using the POMDP Framework." In Combinations of Intelligent Methods and Applications, 19–37. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-26860-6_2.
Jean-Baptiste, Emilie M. D., Pia Rotshtein, and Martin Russell. "POMDP Based Action Planning and Human Error Detection." In IFIP Advances in Information and Communication Technology, 250–65. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23868-5_18.
Schöbi, Roland, and Eleni Chatzi. "Maintenance Planning Under Uncertainties Using a Continuous-State POMDP Framework." In Model Validation and Uncertainty Quantification, Volume 3, 135–43. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04552-8_13.
Kurniawati, Hanna, and Vinay Yadav. "An Online POMDP Solver for Uncertainty Planning in Dynamic Environment." In Springer Tracts in Advanced Robotics, 611–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28872-7_35.
Rens, Gavin, and Thomas Meyer. "A Hybrid POMDP-BDI Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels." In Lecture Notes in Computer Science, 3–19. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27947-3_1.
Ognibene, Dimitri, Lorenzo Mirante, and Letizia Marchegiani. "Proactive Intention Recognition for Joint Human-Robot Search and Rescue Missions Through Monte-Carlo Planning in POMDP Environments." In Social Robotics, 332–43. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35888-4_31.
Zaninotti, Marion, Charles Lesire, Yoko Watanabe, and Caroline P. C. Chanel. "Learning Path Constraints for UAV Autonomous Navigation Under Uncertain GNSS Availability." In PAIS 2022. IOS Press, 2022. http://dx.doi.org/10.3233/faia220065.
Conference papers on the topic "POMDP planning"
Khonji, Majid, Ashkan Jasour, and Brian Williams. "Approximability of Constant-horizon Constrained POMDP." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/775.
Bey, Henrik, Moritz Sackmann, Alexander Lange, and Jorn Thielecke. "POMDP Planning at Roundabouts." In 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops). IEEE, 2021. http://dx.doi.org/10.1109/ivworkshops54471.2021.9669232.
Sztyglic, Ori, and Vadim Indelman. "Speeding up POMDP Planning via Simplification." In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2022. http://dx.doi.org/10.1109/iros47612.2022.9981442.
Wang, Yunbo, Bo Liu, Jiajun Wu, Yuke Zhu, Simon S. Du, Li Fei-Fei, and Joshua B. Tenenbaum. "DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/579.
Baisero, Andrea, and Christopher Amato. "Reconciling Rewards with Predictive State Representations." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/299.
Dong, Wenjie, Xiaozhi Qi, Zhixian Chen, Chao Song, and Xiaojun Yang. "An indoor path planning and motion planning method based on POMDP." In 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2017. http://dx.doi.org/10.1109/robio.2017.8324640.
Li, Jinning, Liting Sun, Wei Zhan, and Masayoshi Tomizuka. "Interaction-Aware Behavior Planning for Autonomous Vehicles Validated With Real Traffic Data." In ASME 2020 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/dscc2020-3328.
Chen, Min, Emilio Frazzoli, David Hsu, and Wee Sun Lee. "POMDP-lite for robust robot planning under uncertainty." In 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016. http://dx.doi.org/10.1109/icra.2016.7487754.
Lee, Yiyuan, Panpan Cai, and David Hsu. "MAGIC: Learning Macro-Actions for Online POMDP Planning." In Robotics: Science and Systems 2021. Robotics: Science and Systems Foundation, 2021. http://dx.doi.org/10.15607/rss.2021.xvii.041.
Burks, Luke, and Nisar Ahmed. "Optimal continuous state POMDP planning with semantic observations." In 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017. http://dx.doi.org/10.1109/cdc.2017.8263866.