Journal articles on the topic "SMT, planning, POMDP, POMCP"

Consult the top 21 journal articles for your research on the topic "SMT, planning, POMDP, POMCP".

1. Meli, Daniele, Alberto Castellini, and Alessandro Farinelli. "Learning Logic Specifications for Policy Guidance in POMDPs: An Inductive Logic Programming Approach." Journal of Artificial Intelligence Research 79 (February 28, 2024): 725–76. http://dx.doi.org/10.1613/jair.1.15826.

Abstract: Partially Observable Markov Decision Processes (POMDPs) are a powerful framework for planning under uncertainty. They allow state uncertainty to be modeled as a belief probability distribution. Approximate solvers based on Monte Carlo sampling have shown great success in relaxing the computational demand and performing online planning. However, scaling to complex realistic domains with many actions and long planning horizons is still a major challenge, and a key point for achieving good performance is guiding the action-selection process with domain-dependent policy heuristics tailored for the specific…

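Several abstracts in this list, starting with the one above, refer to the belief distribution a POMDP maintains over hidden states. As a reference point, the standard Bayesian belief update after taking action a in belief b and receiving observation o is (textbook material, not specific to the article above):

```latex
b'(s') \;=\; \eta\, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s),
\qquad
\eta \;=\; \Big( \sum_{s' \in S} O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s) \Big)^{-1}
```
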
2. Meli, Daniele, Alberto Castellini, and Alessandro Farinelli. "Learning Logic Specifications for Policy Guidance in POMDPs: An Inductive Logic Programming Approach." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 27 (2025): 28743. https://doi.org/10.1609/aaai.v39i27.35134.

Abstract: Partially Observable Markov Decision Processes (POMDPs) are a powerful framework for planning under uncertainty. They allow state uncertainty to be modeled as a belief probability distribution. Approximate solvers based on Monte Carlo sampling have shown great success in relaxing the computational demand and performing online planning. However, scaling to complex realistic domains with many actions and long planning horizons is still a major challenge, and a key point for achieving good performance is guiding the action-selection process with domain-dependent policy heuristics tailored for the specific…

3. Mazzi, Giulio, Alberto Castellini, and Alessandro Farinelli. "Rule-based Shielding for Partially Observable Monte-Carlo Planning." Proceedings of the International Conference on Automated Planning and Scheduling 31 (May 17, 2021): 243–51. http://dx.doi.org/10.1609/icaps.v31i1.15968.

Abstract: Partially Observable Monte-Carlo Planning (POMCP) is a powerful online algorithm able to generate approximate policies for large Partially Observable Markov Decision Processes. The online nature of this method supports scalability by avoiding complete policy representation. The lack of an explicit representation, however, hinders policy interpretability and makes policy verification very complex. In this work, we propose two contributions. The first is a method for identifying unexpected actions selected by POMCP with respect to expert prior knowledge of the task. The second is a shielding appro…

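The shielding approach described in this abstract can be illustrated schematically. This is not the authors' implementation; the rule format and the names below (`shield`, `safety_rule`) are hypothetical, assuming a planner that proposes actions in preference order:

```python
# Hypothetical sketch of rule-based shielding around an online planner.
# An expert rule maps a belief summary to the set of actions it permits;
# the shield discards planner suggestions that violate any rule.

from typing import Callable, Iterable, List

Rule = Callable[[dict], Iterable[str]]  # belief summary -> permitted actions

def shield(ranked_actions: List[str], belief_summary: dict, rules: List[Rule]) -> str:
    """Return the highest-ranked action allowed by every expert rule."""
    for action in ranked_actions:  # planner's preference order
        if all(action in rule(belief_summary) for rule in rules):
            return action
    return ranked_actions[0]  # fall back to the planner if no action passes

# Example rule: if collision probability is high, only allow 'stop' or 'slow'.
def safety_rule(summary: dict) -> Iterable[str]:
    if summary.get("p_collision", 0.0) > 0.2:
        return {"stop", "slow"}
    return {"stop", "slow", "go", "turn"}
```
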
4. Li, Xinchen, Levent Guvenc, and Bilin Aksun-Guvenc. "Autonomous Vehicle Decision-Making with Policy Prediction for Handling a Round Intersection." Electronics 12, no. 22 (2023): 4670. http://dx.doi.org/10.3390/electronics12224670.

Abstract: Autonomous shuttles have been used as end-mile solutions for smart mobility in smart cities. The urban driving conditions of smart cities, with many other actors sharing the road and the presence of intersections, have posed challenges to the use of autonomous shuttles. Round intersections are more challenging because it is more difficult to perceive the other vehicles in and near the intersection. Thus, this paper focuses on the decision-making of autonomous vehicles for handling round intersections. The round intersection is introduced first, followed by introductions of the Markov Decision Pr…

5. Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.

Abstract: Finding a meaningful way of characterizing the difficulty of partially observable Markov decision processes (POMDPs) is a core theoretical problem in POMDP research. State-space size is often used as a proxy for POMDP difficulty, but it is a weak metric at best. Existing work has shown that the covering number for the reachable belief space, which is the set of belief points reachable from the initial belief point, has interesting theoretical links with the complexity of POMDP planning. In this paper, we present empirical evidence that the covering number for the reachable belief spa…

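As background for the entry above: the covering number of a set at scale δ is the size of the smallest collection of δ-balls whose union contains the set. Applied to the reachable belief space B within the belief simplex Δ(S), one common formulation (a paraphrase, not necessarily the paper's exact definition) is:

```latex
C(\delta) \;=\; \min\Big\{\, |M| \;:\; B \subseteq \bigcup_{m \in M} \{\, b \in \Delta(S) : \|b - m\|_1 \le \delta \,\}\Big\}
```
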
6. Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions." International Journal of Robotics Research 36, no. 2 (2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.

Abstract: This work focuses on solving general multi-robot planning problems in continuous spaces with partial observability, given a high-level domain description. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems. This work extends the Dec-POMDP model to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP) to take advantage of the high-level representations that are natural for multi-robot problems and to facil…

7. Ye, Nan, Adhiraj Somani, David Hsu, and Wee Sun Lee. "DESPOT: Online POMDP Planning with Regularization." Journal of Artificial Intelligence Research 58 (January 26, 2017): 231–66. http://dx.doi.org/10.1613/jair.5328.

Abstract: The partially observable Markov decision process (POMDP) provides a principled general framework for planning under uncertainty, but solving POMDPs optimally is computationally intractable, due to the "curse of dimensionality" and the "curse of history". To overcome these challenges, we introduce the Determinized Sparse Partially Observable Tree (DESPOT), a sparse approximation of the standard belief tree, for online planning under uncertainty. A DESPOT focuses online planning on a set of randomly sampled scenarios and compactly captures the "execution" of all policies under these scenarios. W…

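The determinization at the heart of DESPOT, as described above, fixes K random scenarios so that every policy's execution under a scenario is deterministic and policies become directly comparable. A sketch of that idea follows; it is an illustrative reconstruction, not the authors' code, and `step(state, action, rng)` is an assumed user-supplied simulator:

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    start_state: object
    seed: int  # seeds a fixed random stream, so every replay is identical

def sample_scenarios(sample_start: Callable[[], object], k: int, master_seed: int = 0) -> List[Scenario]:
    master = random.Random(master_seed)
    return [Scenario(sample_start(), master.randrange(2**31)) for _ in range(k)]

def evaluate_policy(policy, scenarios: List[Scenario], step, horizon: int, gamma: float = 0.95) -> float:
    """Average discounted return over K fixed scenarios; determinism makes
    two policies directly comparable on the same scenario set."""
    total = 0.0
    for sc in scenarios:
        rng = random.Random(sc.seed)          # replayable randomness
        s, ret, disc, history = sc.start_state, 0.0, 1.0, []
        for _ in range(horizon):
            a = policy(history)
            s, o, r = step(s, a, rng)         # step is deterministic given rng's stream
            ret += disc * r
            disc *= gamma
            history.append((a, o))
        total += ret
    return total / len(scenarios)
```
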
8. Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.

Abstract: Partially observable Markov decision processes (POMDPs) are widely used in probabilistic planning problems in which an agent interacts with an environment using noisy and imprecise sensors. We study a setting in which the sensors are only partially defined and the goal is to synthesize "weakest" additional sensors, such that in the resulting POMDP, there is a small-memory policy for the agent that almost-surely (with probability 1) satisfies a reachability objective. We show that the problem is NP-complete, and present a symbolic algorithm by encoding the problem into SAT instances. We illustr…

9. Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 6 (2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.

Abstract: Partially observable Markov decision processes (POMDPs) are powerful for planning under uncertainty. However, it is usually impractical to employ a POMDP with exact parameters to model a real-life situation precisely, for various reasons such as limited data for learning the model and the inability of exact POMDPs to model dynamic situations. In this paper, assuming that the parameters of POMDPs are imprecise but bounded, we formulate the framework of bounded-parameter partially observable Markov decision processes (BPOMDPs). A modified value iteration is proposed as a basic strategy for ta…

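The bounded-parameter model in the entry above replaces each exact probability with an interval. In the usual formulation, every admissible POMDP must have its transition and observation probabilities fall within the given bounds:

```latex
\underline{T}(s' \mid s, a) \;\le\; T(s' \mid s, a) \;\le\; \overline{T}(s' \mid s, a),
\qquad
\underline{O}(o \mid s', a) \;\le\; O(o \mid s', a) \;\le\; \overline{O}(o \mid s', a)
```
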
10. Spaan, Matthijs T. J., and Nikos Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs." Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.

Abstract: Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Co…

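The key observation quoted above, that a single backup may improve the value of many belief points, is what a Perseus stage exploits. Below is a condensed sketch assuming a discrete POMDP with an alpha-vector value function; it is standard point-based machinery reconstructed from the paper's description, and the array layouts are assumptions, not the paper's notation:

```python
import random
import numpy as np

def backup(b, V, T, O, R, gamma):
    """Point-based backup at belief b, returning the best alpha-vector.
    T[a] is an S x S matrix with T[a][s, s2] = P(s2 | s, a); O[a] is an
    S x Z matrix with O[a][s2, z] = P(z | s2, a); R[a] is a length-S
    reward vector; V is a non-empty list of alpha-vectors (np arrays)."""
    best, best_val = None, -np.inf
    for a in range(len(T)):
        g_sum = np.zeros_like(R[a], dtype=float)
        for z in range(O[a].shape[1]):
            # back-project every alpha-vector through (a, z); keep the best at b
            candidates = [T[a] @ (O[a][:, z] * alpha) for alpha in V]
            g_sum += max(candidates, key=lambda g: b @ g)
        alpha_a = R[a] + gamma * g_sum
        if b @ alpha_a > best_val:
            best, best_val = alpha_a, b @ alpha_a
    return best

def perseus_stage(B, V, T, O, R, gamma, rng=random):
    """One Perseus stage: back up at randomly chosen beliefs until the
    value of every point in B has improved (or stayed equal)."""
    old = lambda b: max(b @ v for v in V)
    V_new, todo = [], list(B)
    while todo:
        b = todo[rng.randrange(len(todo))]
        alpha = backup(b, V, T, O, R, gamma)
        if b @ alpha >= old(b):
            V_new.append(alpha)                        # backup improved b
        else:
            V_new.append(max(V, key=lambda v: b @ v))  # keep b's old best vector
        new = lambda b2: max(b2 @ v for v in V_new)
        todo = [b2 for b2 in todo if new(b2) < old(b2)]  # drop improved points
    return V_new
```
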
11. Amato, Christopher, George Konidaris, Ariel Anders, Gabriel Cruz, Jonathan P. How, and Leslie P. Kaelbling. "Policy search for multi-robot coordination under uncertainty." International Journal of Robotics Research 35, no. 14 (2016): 1760–78. http://dx.doi.org/10.1177/0278364916679611.

Abstract: We introduce a principled method for multi-robot coordination based on a general model (termed a MacDec-POMDP) of multi-robot cooperative planning in the presence of stochasticity, uncertain sensing, and communication limitations. A new MacDec-POMDP planning algorithm is presented that searches over policies represented as finite-state controllers, rather than the previous policy tree representation. Finite-state controllers can be much more concise than trees, are much easier to interpret, and can operate over an infinite horizon. The resulting policy search algorithm requires a substantially…

12. Pineau, Joelle, Geoffrey Gordon, and Sebastian Thrun. "Anytime Point-Based Approximations for Large POMDPs." Journal of Artificial Intelligence Research 27 (November 26, 2006): 335–80. http://dx.doi.org/10.1613/jair.2078.

Abstract: The Partially Observable Markov Decision Process has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However, exact solutions in this framework are typically computationally intractable for all but the smallest problems. A well-known technique for speeding up POMDP solving involves performing value backups at specific belief points, rather than over the entire belief simplex. The efficiency of this approach, however, depends greatly on the selection of points. This paper presents a set of novel techniques for selecting informative b…

13. Wu, Chenyang, Rui Kong, Guoyu Yang, et al. "LB-DESPOT: Efficient Online POMDP Planning Considering Lower Bound in Action Selection (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (2021): 15927–28. http://dx.doi.org/10.1609/aaai.v35i18.17960.

Abstract: The partially observable Markov decision process (POMDP) is an extension of the MDP. It handles state uncertainty by specifying the probability of receiving a particular observation given the current state. DESPOT is one of the most popular scalable online planning algorithms for POMDPs; it manages to significantly reduce the size of the decision tree while deriving a near-optimal policy by considering only K scenarios. Nevertheless, there is a gap in action selection criteria between planning and execution in DESPOT. During the planning stage, it keeps choosing the action with the highest uppe…

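The planning/execution gap described above is between two selection criteria over interval value estimates: optimism (upper bound) drives exploration during search, while the executed action is the one best in the worst case (highest lower bound). A hedged sketch, where the node fields `upper` and `lower` are illustrative rather than LB-DESPOT's actual data structures:

```python
# Illustrative action-selection criteria for a DESPOT-style search tree.

def select_for_exploration(children: dict) -> str:
    """children maps action -> node with .upper / .lower bound estimates."""
    return max(children, key=lambda a: children[a].upper)

def select_for_execution(children: dict) -> str:
    """At the root, commit to the action with the best worst-case value."""
    return max(children, key=lambda a: children[a].lower)
```
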
14. Nguyen, Hoa Van, Hamid Rezatofighi, Ba-Ngu Vo, and Damith C. Ranasinghe. "Multi-Objective Multi-Agent Planning for Jointly Discovering and Tracking Mobile Objects." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 5 (2020): 7227–35. http://dx.doi.org/10.1609/aaai.v34i05.6213.

Abstract: We consider the challenging problem of online planning for a team of agents to autonomously search for and track a time-varying number of mobile objects under the practical constraint of detection-range-limited onboard sensors. A standard POMDP with a value function that either encourages discovery or accurate tracking of mobile objects is inadequate to simultaneously meet the conflicting goals of searching for undiscovered mobile objects whilst keeping track of discovered objects. The planning problem is further complicated by misdetections or false detections of objects caused by range-limited s…

15. Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.

Abstract: We consider partially observable Markov decision processes (POMDPs) modeling an agent that needs a supply of a certain resource (e.g., electricity stored in batteries) to operate correctly. The resource is consumed by the agent's actions and can be replenished only in certain states. The agent aims to minimize the expected cost of reaching some goal while preventing resource exhaustion, a problem we call resource-constrained goal optimization (RSGO). We take a two-step approach to the RSGO problem. First, using formal methods techniques, we design an algorithm computing a shield for a given sc…

16. Beynier, Aurélie. "A Multiagent Planning Approach for Cooperative Patrolling with Non-Stationary Adversaries." International Journal on Artificial Intelligence Tools 26, no. 5 (2017): 1760018. http://dx.doi.org/10.1142/s0218213017600181.

Abstract: Multiagent patrolling is the problem faced by a set of agents that have to visit a set of sites to prevent or detect threats or illegal actions. Although it is commonly assumed that patrollers share a common objective, the issue of cooperation between the patrollers has received little attention. Over the last years, the focus has been put on patrolling strategies to prevent a one-shot attack from an adversary. This adversary is usually assumed to be fully rational and to have full observability of the system. Most approaches are then based on game theory and consist in computing a best…

17. Ivashko, Yulia, Andrii Dmytrenko, Małgorzata Hryniewicz, Tetiana Petrunok, and Tetiana Yevdokimova. ""Official" and "private" parks of the XVIII–XIX centuries through the prism of general landscape trends of the time." Landscape Architecture and Art, no. 20 (November 10, 2022): 24–36. http://dx.doi.org/10.22616/j.landarchart.2022.20.03.

Abstract: The article analyzes the basic principles of landscape design of the imperial and aristocratic parks of the Russian Empire in the XVIII–XIX centuries. There were "official" parks designed to be visited by high-ranking guests, and "private" parks, which were not covered by the canons of the "official" park. In the Tsarskoye Selo imperial residence, Catherine's Park performed the "official" function, with the appropriate pomp, while Alexander's Park, located next to it, performed the function of a "private" imperial park. Catherine's Park became a model to follow for one of the most famou…

18. Lauri, Mikko, Joni Pajarinen, and Jan Peters. "Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement." Autonomous Agents and Multi-Agent Systems 34, no. 2 (2020). http://dx.doi.org/10.1007/s10458-020-09467-6.

Abstract: Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest when constant communication cannot be assumed. This is common in tasks involving information gathering with multiple independently operating sensor devices that may operate over large physical distances, such as unmanned aerial vehicles, or in communication-limited environments such as in the case of autonomous underwater vehicles. In this paper, we frame the information gathering task as a general decentralized partially observable…

19. Vien, Ngo Anh, and Marc Toussaint. "Hierarchical Monte-Carlo Planning." Proceedings of the AAAI Conference on Artificial Intelligence 29, no. 1 (2015). http://dx.doi.org/10.1609/aaai.v29i1.9687.

Abstract: Monte-Carlo Tree Search, especially UCT and its POMDP version POMCP, has demonstrated excellent performance on many problems. However, to efficiently scale to large domains one should also exploit hierarchical structure if present. In such hierarchical domains, finding rewarded states typically requires searching deeply; covering enough such informative states very far from the root becomes computationally expensive in flat, non-hierarchical search approaches. We propose novel, scalable MCTS methods which integrate a task hierarchy into the MCTS framework, specifically leading to hierarchical v…

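For context on the entry above: UCT, and POMCP as its partially observable version, select actions inside the search tree with the UCB1 rule, where N(h) counts visits to history h and N(h, a) counts selections of action a at h:

```latex
a^{*} \;=\; \arg\max_{a} \Big( Q(h, a) \;+\; c \sqrt{ \tfrac{\log N(h)}{N(h, a)} } \Big)
```
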
20. Sheng, Shili, Erfan Pakdamanian, Kyungtae Han, et al. "Planning for Automated Vehicles with Human Trust." ACM Transactions on Cyber-Physical Systems, September 2, 2022. http://dx.doi.org/10.1145/3561059.

Abstract: Recent work has considered personalized route planning based on user profiles, but none of it accounts for human trust. We argue that human trust is an important factor to consider when planning routes for automated vehicles. This paper presents a trust-based route planning approach for automated vehicles. We formalize the human-vehicle interaction as a partially observable Markov decision process (POMDP) and model trust as a partially observable state variable of the POMDP, representing the human's hidden mental state. We build data-driven models of human trust dynamics and takeover decisions…

21. Blumenthal, Oded, and Guy Shani. "Domain independent heuristics for online stochastic contingent planning." Annals of Mathematics and Artificial Intelligence, July 8, 2024. http://dx.doi.org/10.1007/s10472-024-09947-5.

Abstract: Partially observable Markov decision processes (POMDPs) are a useful model for decision-making under partial observability and stochastic actions. Partially Observable Monte-Carlo Planning (POMCP) is an online algorithm for deciding on the next action to perform, using a Monte-Carlo tree search approach based on the UCT algorithm for fully observable Markov decision processes. POMCP develops an action-observation tree and, at the leaves, uses a rollout policy to provide a value estimate for the leaf. As such, POMCP is highly dependent on the rollout policy to compute good estimates, an…

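Since the entry above turns on POMCP's rollout policy, a minimal sketch of the default (uniform-random) rollout may help. It assumes a generative model G(s, a) that returns a sampled (next state, observation, reward), as in the POMCP literature; the function name and the eps cutoff are illustrative:

```python
import random

def rollout(state, depth, G, legal_actions, gamma=0.95, eps=0.005, rng=random):
    """Estimate the value of a search-tree leaf by simulating a random
    policy from `state` until the discount gamma**depth falls below eps."""
    if gamma ** depth < eps:
        return 0.0
    a = rng.choice(legal_actions(state))
    next_state, _obs, reward = G(state, a)   # generative model: one sampled step
    return reward + gamma * rollout(next_state, depth + 1, G, legal_actions, gamma, eps, rng)
```
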