Academic literature on the topic 'POMDPs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'POMDPs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "POMDPs"

1

Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains." Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) are a natural model for planning problems where effects of actions are nondeterministic and the state of the world is not completely observable. It is difficult to solve POMDPs exactly. This paper proposes a new approximation scheme. The basic idea is to transform a POMDP into another one where additional information is provided by an oracle. The oracle informs the planning agent that the current state of the world is in a certain region. The transformed POMDP is consequently said to be region observable. It is easier to solve than the original POMDP. We propose to solve the transformed POMDP and use its optimal policy to construct an approximate policy for the original POMDP. By controlling the amount of additional information that the oracle provides, it is possible to find a proper tradeoff between computational time and approximation quality. In terms of algorithmic contributions, we study in detail how to exploit region observability in solving the transformed POMDP. To facilitate the study, we also propose a new exact algorithm for general POMDPs. The algorithm is conceptually simple and yet is significantly more efficient than all previous exact algorithms.
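For orientation, the belief update that the exact and approximate methods above all build on can be sketched in a few lines of Python; the transition and observation arrays below are illustrative placeholders, not the authors' model.

```python
import numpy as np

# Minimal POMDP belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s).
# T[a, s, s'] and O[a, s', o] are illustrative placeholder arrays.

def belief_update(b, a, o, T, O):
    """Return the posterior belief after taking action a and observing o."""
    predicted = T[a].T @ b              # sum_s T(s'|s,a) b(s) for every s'
    unnormalized = O[a][:, o] * predicted
    return unnormalized / unnormalized.sum()

# Tiny two-state, one-action, two-observation example
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])   # T[a, s, s']
O = np.array([[[0.8, 0.2], [0.3, 0.7]]])   # O[a, s', o]
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))
```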
APA, Harvard, Vancouver, ISO, and other styles
2

Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs." Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.

Full text
Abstract:
Decentralized planning in uncertain environments is a complex task generally dealt with by using a decision-theoretic approach, mainly through the framework of Decentralized Partially Observable Markov Decision Processes (DEC-POMDPs). Although DEC-POMDPs are a general and powerful modeling tool, solving them is a task with an overwhelming complexity that can be doubly exponential. In this paper, we study an alternate formulation of DEC-POMDPs relying on a sequence-form representation of policies. From this formulation, we show how to derive Mixed Integer Linear Programming (MILP) problems that, once solved, give exact optimal solutions to the DEC-POMDPs. We show that these MILPs can be derived either by using some combinatorial characteristics of the optimal solutions of the DEC-POMDPs or by using concepts borrowed from game theory. Through an experimental validation on classical test problems from the DEC-POMDP literature, we compare our approach to existing algorithms. Results show that mathematical programming outperforms dynamic programming but is less efficient than forward search, except for some particular problems. The main contributions of this work are the use of mathematical programming for DEC-POMDPs and a better understanding of DEC-POMDPs and of their solutions. Besides, we argue that our alternate representation of DEC-POMDPs could be helpful for designing novel algorithms looking for approximate solutions to DEC-POMDPs.
APA, Harvard, Vancouver, ISO, and other styles
3

Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.

Full text
Abstract:
This work studies the problem of batch off-policy evaluation for Reinforcement Learning in partially observable environments. Off-policy evaluation under partial observability is inherently prone to bias, with risk of arbitrarily large errors. We define the problem of off-policy evaluation for Partially Observable Markov Decision Processes (POMDPs) and establish what we believe is the first off-policy evaluation result for POMDPs. In addition, we formulate a model in which observed and unobserved variables are decoupled into two dynamic processes, called a Decoupled POMDP. We show how off-policy evaluation can be performed under this new model, mitigating estimation errors inherent to general POMDPs. We demonstrate the pitfalls of off-policy evaluation in POMDPs using a well-known off-policy method, Importance Sampling, and compare it with our result on synthetic medical data.
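As a point of reference, the trajectory-wise importance sampling estimator that the paper uses to illustrate the pitfalls of off-policy evaluation under partial observability can be sketched as follows; the policies and logged trajectories below are illustrative placeholders.

```python
import numpy as np

# Trajectory-wise importance sampling estimate of an evaluation policy's value
# from trajectories collected with a behaviour policy. Both policies condition
# on the observation only, which is exactly the setting where the estimator
# can be badly biased if an unobserved state also influenced data collection.

def is_estimate(trajectories, pi_e, pi_b, gamma=1.0):
    """trajectories: list of [(obs, action, reward), ...] lists."""
    values = []
    for traj in trajectories:
        weight, ret, discount = 1.0, 0.0, 1.0
        for obs, action, reward in traj:
            weight *= pi_e[obs][action] / pi_b[obs][action]
            ret += discount * reward
            discount *= gamma
        values.append(weight * ret)
    return np.mean(values)

# Illustrative policies over 2 observations and 2 actions (placeholders)
pi_b = {0: [0.5, 0.5], 1: [0.5, 0.5]}
pi_e = {0: [0.9, 0.1], 1: [0.1, 0.9]}
data = [[(0, 0, 1.0), (1, 1, 0.5)], [(0, 1, 0.0), (1, 0, 1.0)]]
print(is_estimate(data, pi_e, pi_b))
```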
APA, Harvard, Vancouver, ISO, and other styles
4

Walraven, Erwin, and Matthijs T. J. Spaan. "Column Generation Algorithms for Constrained POMDPs." Journal of Artificial Intelligence Research 62 (July 17, 2018): 489–533. http://dx.doi.org/10.1613/jair.1.11216.

Full text
Abstract:
In several real-world domains it is required to plan ahead while there are finite resources available for executing the plan. The limited availability of resources imposes constraints on the plans that can be executed, which need to be taken into account while computing a plan. A Constrained Partially Observable Markov Decision Process (Constrained POMDP) can be used to model resource-constrained planning problems which include uncertainty and partial observability. Constrained POMDPs provide a framework for computing policies which maximize expected reward, while respecting constraints on a secondary objective such as cost or resource consumption. Column generation for linear programming can be used to obtain Constrained POMDP solutions. This method incrementally adds columns to a linear program, in which each column corresponds to a POMDP policy obtained by solving an unconstrained subproblem. Column generation requires solving a potentially large number of POMDPs, as well as exact evaluation of the resulting policies, which is computationally difficult. We propose a method to solve subproblems in a two-stage fashion using approximation algorithms. First, we use a tailored point-based POMDP algorithm to obtain an approximate subproblem solution. Next, we convert this approximate solution into a policy graph, which we can evaluate efficiently. The resulting algorithm is a new approximate method for Constrained POMDPs in single-agent settings, but also in settings in which multiple independent agents share a global constraint. Experiments based on several domains show that our method outperforms the current state of the art.
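To make the column-generation idea concrete, a minimal sketch of the restricted master LP is given below, assuming a small pool of candidate policies with known expected reward and cost (placeholder numbers). In the actual method each new column would come from an unconstrained POMDP subproblem priced with the dual of the cost constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Restricted master LP of column generation for a Constrained POMDP:
# choose a probability distribution x over candidate policies (the "columns")
# that maximizes expected reward subject to an expected-cost budget.
values = np.array([10.0, 7.0, 4.0])   # expected reward of each candidate policy (placeholder)
costs = np.array([9.0, 5.0, 2.0])     # expected resource consumption (placeholder)
budget = 6.0

res = linprog(
    c=-values,                                 # linprog minimizes, so negate rewards
    A_ub=[costs], b_ub=[budget],               # expected cost must respect the budget
    A_eq=[np.ones_like(values)], b_eq=[1.0],   # x is a probability distribution
    bounds=[(0, None)] * len(values),
    method="highs",
)
print("mixture over policies:", res.x, "expected reward:", -res.fun)
# The dual price of the cost constraint would be used to price the next column,
# i.e. to define the reward of the next unconstrained POMDP subproblem.
```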
APA, Harvard, Vancouver, ISO, and other styles
5

Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs." Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) provide a principled framework for sequential planning in uncertain single agent settings. An extension of POMDPs to multiagent settings, called interactive POMDPs (I-POMDPs), replaces POMDP belief spaces with interactive hierarchical belief systems which represent an agent’s belief about the physical world, about beliefs of other agents, and about their beliefs about others’ beliefs. This modification makes the difficulties of obtaining solutions due to complexity of the belief and policy spaces even more acute. We describe a general method for obtaining approximate solutions of I-POMDPs based on particle filtering (PF). We introduce the interactive PF, which descends the levels of the interactive belief hierarchies and samples and propagates beliefs at each level. The interactive PF is able to mitigate the belief space complexity, but it does not address the policy space complexity. To mitigate the policy space complexity – sometimes also called the curse of history – we utilize a complementary method based on sampling likely observations while building the look ahead reachability tree. While this approach does not completely address the curse of history, it beats back the curse’s impact substantially. We provide experimental results and chart future work.
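For readers unfamiliar with particle filtering, a plain (non-interactive) particle filter update for a flat POMDP belief looks roughly as follows; the interactive PF of the paper nests such filters across the levels of the belief hierarchy. The model arrays are illustrative placeholders.

```python
import numpy as np

# Particle filter belief update: propagate each particle through a sampled
# transition, weight by the observation likelihood, then resample.
rng = np.random.default_rng(0)
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])   # T[a, s, s'] (placeholder)
O = np.array([[[0.8, 0.2], [0.3, 0.7]]])   # O[a, s', o] (placeholder)

def pf_update(particles, a, o):
    # Propagate: sample s' ~ T(. | s, a) for every particle
    propagated = np.array([rng.choice(2, p=T[a, s]) for s in particles])
    # Weight by observation likelihood and resample
    weights = O[a, propagated, o]
    weights = weights / weights.sum()
    return rng.choice(propagated, size=len(particles), p=weights)

particles = rng.choice(2, size=500, p=[0.5, 0.5])
particles = pf_update(particles, a=0, o=1)
print("estimated P(s=1):", (particles == 1).mean())
```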
APA, Harvard, Vancouver, ISO, and other styles
6

Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs." Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.

Full text
Abstract:
Partially Observable Markov Decision Processes (POMDPs) are a popular formalism for sequential decision making in partially observable environments. Since solving POMDPs to optimality is a difficult task, point-based value iteration methods are widely used. These methods compute an approximate POMDP solution, and in some cases they even provide guarantees on the solution quality, but these algorithms have been designed for problems with an infinite planning horizon. In this paper we discuss why state-of-the-art point-based algorithms cannot be easily applied to finite-horizon problems that do not include discounting. Subsequently, we present a general point-based value iteration algorithm for finite-horizon problems which provides solutions with guarantees on solution quality. Furthermore, we introduce two heuristics to reduce the number of belief points considered during execution, which lowers the computational requirements. In experiments we demonstrate that the algorithm is an effective method for solving finite-horizon POMDPs.
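The point-based backup at the core of such algorithms can be sketched as below, here in an undiscounted form consistent with the finite-horizon setting; the model arrays and the initial alpha-vector set are illustrative placeholders rather than the authors' benchmarks.

```python
import numpy as np

# One point-based backup at a single belief point b over a set of alpha-vectors.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],      # T[a, s, s'] (placeholder)
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],      # O[a, s', o] (placeholder)
              [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],                    # R[a, s] (placeholder)
              [0.2, 0.2]])
gamma = 1.0                                  # undiscounted, as in the finite-horizon setting

def backup(b, alphas):
    best_value, best_alpha = -np.inf, None
    for a in range(T.shape[0]):
        g_a = R[a].copy()
        for o in range(O.shape[2]):
            # g_{a,o}(s) = sum_{s'} T(s'|s,a) O(o|s',a) alpha(s'), maximised over alpha
            candidates = [T[a] @ (O[a][:, o] * alpha) for alpha in alphas]
            g_a = g_a + gamma * max(candidates, key=lambda g: g @ b)
        if g_a @ b > best_value:
            best_value, best_alpha = g_a @ b, g_a
    return best_alpha

alphas = [np.zeros(2)]                       # start from the zero value function
print(backup(np.array([0.6, 0.4]), alphas))
```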
APA, Harvard, Vancouver, ISO, and other styles
7

Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs." Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.

Full text
Abstract:
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently.
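A stripped-down version of the lookahead search that online methods are built around is sketched below: a depth-limited expansion over actions and observations from the current belief. Real online planners add sampling, pruning, and bounds; the model arrays here are illustrative placeholders.

```python
import numpy as np

T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])     # T[a, s, s'] (placeholder)
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.5, 0.5]]])     # O[a, s', o] (placeholder)
R = np.array([[1.0, 0.0], [0.2, 0.2]])       # R[a, s] (placeholder)
gamma = 0.95

def update(b, a, o):
    unnorm = O[a][:, o] * (T[a].T @ b)
    total = unnorm.sum()
    return (unnorm / total if total > 0 else b), total   # posterior and P(o | b, a)

def lookahead(b, depth):
    if depth == 0:
        return 0.0, None
    best_value, best_action = -np.inf, None
    for a in range(T.shape[0]):
        value = R[a] @ b
        for o in range(O.shape[2]):
            b_next, p_o = update(b, a, o)
            value += gamma * p_o * lookahead(b_next, depth - 1)[0]
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

print(lookahead(np.array([0.5, 0.5]), depth=3))
```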
APA, Harvard, Vancouver, ISO, and other styles
8

Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (December 2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) are powerful for planning under uncertainty. However, it is usually impractical to employ a POMDP with exact parameters to model the real-life situation precisely, due to various reasons such as limited data for learning the model, inability of exact POMDPs to model dynamic situations, etc. In this paper, assuming that the parameters of POMDPs are imprecise but bounded, we formulate the framework of bounded-parameter partially observable Markov decision processes (BPOMDPs). A modified value iteration is proposed as a basic strategy for tackling parameter imprecision in BPOMDPs. In addition, we design the UL-based value iteration algorithm, in which each value backup is based on two sets of vectors called U-set and L-set. We propose four strategies for computing U-set and L-set. We analyze theoretically the computational complexity and the reward loss of the algorithm. The effectiveness and robustness of the algorithm are shown empirically.
APA, Harvard, Vancouver, ISO, and other styles
9

Oliehoek, F. A., M. T. J. Spaan, and N. Vlassis. "Optimal and Approximate Q-value Functions for Decentralized POMDPs." Journal of Artificial Intelligence Research 32 (May 28, 2008): 289–353. http://dx.doi.org/10.1613/jair.2447.

Full text
Abstract:
Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent frameworks like MDPs and POMDPs, planning can be carried out by resorting to Q-value functions: an optimal Q-value function Q* is computed in a recursive manner by dynamic programming, and then an optimal policy is extracted from Q*. In this paper we study whether similar Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs), and how policies can be extracted from such value functions. We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. This computation, however, is infeasible for all but the smallest problems. Therefore, we analyze various approximate Q-value functions that allow for efficient computation. We describe how they relate, and we prove that they all provide an upper bound to the optimal Q-value function Q*. Finally, unifying some previous approaches for solving Dec-POMDPs, we describe a family of algorithms for extracting policies from such Q-value functions, and perform an experimental evaluation on existing test problems, including a new firefighting benchmark problem.
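One classical member of this family of approximate Q-value functions is the QMDP-style upper bound, shown here in its single-agent form with placeholder model arrays (the paper analyses the analogous constructions for Dec-POMDPs).

```python
import numpy as np

# Q_MDP-style upper bound: solve the underlying fully observable MDP by value
# iteration, then score actions at a belief b by sum_s b(s) Q(s, a).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])     # T[a, s, s'] (placeholder)
R = np.array([[1.0, 0.0], [0.2, 0.2]])       # R[a, s] (placeholder)
gamma = 0.95

Q = np.zeros((2, 2))                         # Q[a, s]
for _ in range(200):                         # value iteration on the underlying MDP
    V = Q.max(axis=0)
    Q = R + gamma * np.einsum("ast,t->as", T, V)

b = np.array([0.7, 0.3])
q_b = Q @ b                                  # Q_MDP(b, a) = sum_s b(s) Q(s, a)
print("upper-bound Q-values:", q_b, "greedy action:", int(q_b.argmax()))
```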
APA, Harvard, Vancouver, ISO, and other styles
10

Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs." Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) form an attractive and principled framework for agent planning under uncertainty. Point-based approximate techniques for POMDPs compute a policy based on a finite set of points collected in advance from the agent's belief space. We present a randomized point-based value iteration algorithm called Perseus. The algorithm performs approximate value backup stages, ensuring that in each backup stage the value of each point in the belief set is improved; the key observation is that a single backup may improve the value of many belief points. Contrary to other point-based methods, Perseus backs up only a (randomly selected) subset of points in the belief set, sufficient for improving the value of each belief point in the set. We show how the same idea can be extended to dealing with continuous action spaces. Experimental results show the potential of Perseus in large scale POMDP problems.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "POMDPs"

1

Aras, Raghav, François Charpillet, and Alain Dutech. "Mathematical programming methods for decentralized POMDPs." S. l. : Nancy 1, 2008. http://www.scd.uhp-nancy.fr/docnum/SCD_T_2008_0092_ARAS.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Aras, Raghav. "Mathematical programming methods for decentralized POMDPs." Thesis, Nancy 1, 2008. http://www.theses.fr/2008NAN10092/document.

Full text
Abstract:
In this thesis, we study the problem of the optimal decentralized control of a partially observed Markov process over a finite horizon. The mathematical model corresponding to the problem is a decentralized POMDP (DEC-POMDP). Many problems in practice from the domains of artificial intelligence and operations research can be modeled as DEC-POMDPs. However, solving a DEC-POMDP exactly is intractable (NEXP-hard). The development of exact algorithms is necessary in order to guide the development of approximate algorithms that can scale to practically sized problems. Existing algorithms are mainly inspired from POMDP research (dynamic programming and forward search) and require an inordinate amount of time for even very small DEC-POMDPs. In this thesis, we develop a new mathematical programming based approach for exactly solving a finite horizon DEC-POMDP. We use the sequence form of a control policy in this approach. Using the sequence form, we show how the problem can be formulated as a mathematical program with a nonlinear objective and linear constraints. We then show how this nonlinear program can be linearized to a 0-1 mixed integer linear program (MIP). We present two different 0-1 MIPs based on two different properties of a DEC-POMDP. Computational experience with the mathematical programs presented in the thesis on four benchmark problems (MABC, MA-Tiger, Grid Meeting, Fire Fighting) shows that the time taken to find an optimal joint policy is one or two orders of magnitude less than with existing exact algorithms; in the problems tested, the time taken drops from several hours to a few seconds or minutes.
APA, Harvard, Vancouver, ISO, and other styles
3

Ferrari, Fabio Valerio. "Cooperative POMDPs for human-Robot joint activities." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC257/document.

Full text
Abstract:
This thesis presents a novel method for ensuring cooperation between humans and robots in public spaces, under the constraint of human behavior uncertainty. The thesis introduces a hierarchical and flexible framework based on POMDPs. The framework partitions the overall joint activity into independent planning modules, each dealing with a specific aspect of the joint activity: either ensuring the human-robot cooperation, or proceeding with the task to be achieved. The cooperation part can be solved independently from the task and executed as a finite state machine in order to contain the online planning effort. To do so, we introduce a belief shift function and describe how to use it to transform a POMDP policy into an executable finite state machine. The developed framework has been implemented in a real application scenario as part of the COACHES project. The thesis describes the Escort mission used as the testbed application and the details of the implementation on the real robots. This scenario has also been used to carry out several experiments and to evaluate our contributions.
APA, Harvard, Vancouver, ISO, and other styles
4

Brooks, Alex. "Parametric POMDPs for planning in continuous state spaces." University of Sydney, 2007. http://hdl.handle.net/2123/1861.

Full text
Abstract:
This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.
APA, Harvard, Vancouver, ISO, and other styles
5

Brooks, Alex M. "Parametric POMDPs for planning in continuous state spaces." Connect to full text, 2007. http://hdl.handle.net/2123/1861.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2007.
Title from title screen (viewed 15 January 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the Australian Centre for Field Robotics, School of Aerospace, Mechanical and Mechatronic Engineering. Includes bibliographical references. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
6

Atrash, Amin. "A Bayesian Framework for Online Parameter Learning in POMDPs." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104587.

Full text
Abstract:
Decision-making under uncertainty has become critical as autonomous and semi-autonomous agents become more ubiquitous in our society. These agents must deal with uncertainty and ambiguity from the environment and still perform desired tasks robustly. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for modelling agents operating in such an environment. These models are able to capture the uncertainty from noisy sensors and inaccurate actuators, and perform decision-making in light of the agent's incomplete knowledge of the world. POMDPs have been applied successfully in domains ranging from robotics to dialogue management to medical systems. Extensive research has been conducted on methods for optimizing policies for POMDPs. However, these methods typically assume a model of the environment is known. This thesis presents a Bayesian reinforcement learning framework for learning POMDP parameters during execution. This framework takes advantage of agents which work alongside an operator who can provide optimal policy information to help direct the learning. By using Bayesian reinforcement learning, the agent can perform learning concurrently with execution, incorporate incoming data immediately, and take advantage of prior knowledge of the world. By using such a framework, an agent is able to adapt its policy to that of the operator. This framework is validated on data collected from the interaction manager of an autonomous wheelchair. The interaction manager acts as an intelligent interface between the user and the robot, allowing the user to issue high-level commands through natural interfaces such as speech. This interaction manager is controlled using a POMDP and acts as a rich scenario for learning in which the agent must adjust to the needs of the user over time.
APA, Harvard, Vancouver, ISO, and other styles
7

Skoglund, Caroline. "Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300909.

Full text
Abstract:
Autonomous vehicles promise to play an important role in increasing the efficiency and safety of road transportation. Although we have seen several examples of autonomous vehicles out on the road over the past years, how to ensure the safety of autonomous vehicles in uncertain and dynamic environments is still a challenging problem. This thesis studies the problem by developing a risk-aware decision-making framework. The system that integrates the dynamics of an autonomous vehicle and the uncertain environment is modelled as a Partially Observable Markov Decision Process (POMDP). A risk measure is proposed based on the Responsibility-Sensitive Safety (RSS) distance, which quantifies the minimum distance to other vehicles required for safety. This risk measure is incorporated into the reward function of the POMDP to achieve risk-aware decision making. The proposed risk-aware POMDP framework is evaluated in two case studies. In a single-lane car-following scenario, the ego vehicle is able to avoid a collision in an emergency event where the vehicle in front of it makes a full stop. In a merge scenario, the ego vehicle successfully enters the main road from a ramp with a satisfactory distance to other vehicles. In conclusion, the risk-aware POMDP framework realizes a trade-off between safety and usability by keeping a reasonable distance and adapting to other vehicles' behaviours.
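For reference, the RSS longitudinal minimum safe distance on which such a risk measure is based can be written down directly; the formula below is the published RSS expression, and the numeric parameters are illustrative only.

```python
# Longitudinal minimum safe distance from the Responsibility-Sensitive Safety
# (RSS) model. The formula is the published RSS expression; numbers are illustrative.

def rss_min_distance(v_rear, v_front, rho, a_max_accel, a_min_brake, a_max_brake):
    """Minimum safe following distance (metres), clipped at zero."""
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_rear + rho * a_max_accel) ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Ego at 20 m/s behind a lead vehicle at 15 m/s, with a 1 s response time
print(rss_min_distance(v_rear=20.0, v_front=15.0, rho=1.0,
                       a_max_accel=2.0, a_min_brake=4.0, a_max_brake=8.0))
```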
APA, Harvard, Vancouver, ISO, and other styles
8

Cohen, Jonathan. "Formation dynamique d'équipes dans les DEC-POMDPS ouverts à base de méthodes Monte-Carlo." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC225/document.

Full text
Abstract:
This thesis addresses the problem where a team of cooperative and autonomous agents, working in a stochastic and partially observable environment towards solving a complex task, needs to dynamically modify its composition during execution so as to adapt to the evolution of the task. It is a problem that has seldom been studied in the field of multi-agent planning, yet there are many situations where the team of agents is likely to change over time. We are particularly interested in the case where the agents can decide for themselves to leave or join the operational team. Sometimes, using fewer agents can be beneficial when the costs induced by deploying agents are prohibitive; conversely, it can be useful to call on more agents if the situation worsens and the skills of certain agents turn out to be valuable assets. In order to propose a decision model that can represent these situations, we build on decentralized partially observable Markov decision processes, the standard model for planning under uncertainty in decentralized multi-agent settings. We extend this model to allow agents to enter and exit the system; this is what is called agent openness. We then present two planning algorithms based on the popular Monte-Carlo tree search methods. The first algorithm builds separable joint policies by computing series of best-response individual policies, while the second builds non-separable joint policies by ranking the teams in each situation via an Elo rating system. We evaluate our methods on new benchmarks that highlight some interesting features of open systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Pokharel, Gaurab. "Increasing the Value of Information During Planning in Uncertain Environments." Oberlin College Honors Theses / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1624976272271825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tschantz, Michael Carl. "Formalizing and Enforcing Purpose Restrictions." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/128.

Full text
Abstract:
Privacy policies often place restrictions on the purposes for which a governed entity may use personal information. For example, regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), require that hospital employees use medical information for only certain purposes, such as treatment, but not for others, such as gossip. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose restrictions to determine whether an action is for a purpose. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes (MDPs), which exclude redundant actions for a formal definition of redundant. We argue that an action is for a purpose if and only if the action is part of a plan for optimizing the satisfaction of that purpose under the MDP model. We use this formalization to define when a sequence of actions is only for or not for a purpose. This semantics enables us to create and implement an algorithm for automating auditing, and to describe formally and compare rigorously previous enforcement methods. We extend this formalization to Partially Observable Markov Decision Processes (POMDPs) to answer when information is used for a purpose. To validate our semantics, we provide an example application and conduct a survey to compare our semantics to how people commonly understand the word “purpose”.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "POMDPs"

1

Chinaei, Hamidreza, and Brahim Chaib-draa. Building Dialogue POMDPs from Expert Dialogues. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-26200-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oliehoek, Frans A., and Christopher Amato. A Concise Introduction to Decentralized POMDPs. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28929-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Brodowicz, Kazimierz. Pompy ciepła. Warszawa: Państwowe Wydawnictwo Naukowe, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pompes. Athēnai: Indiktos, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Michels, Tilde. Ausgerechnet Pommes. Zürich: Nagel & Kimche, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Genet, Jean. Pompes funèbres. [Paris]: Gallimard, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

S, J. Llop. Josep Pomés. [Barcelona]: Gal Art, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Woodier, Olwen. Le temps des pommes: 150 délicieuses recettes. [Montréal]: Éditions de l'Homme, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Maill, Musique de Thibault. Drôles d'oiseaux : 17 poèmes à chanter, 19 poèmes à lire. S.l.: Didier Jeunesse, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Braziunas, Darius. Stochastic local search for POMDP controllers. Ottawa: National Library of Canada, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "POMDPs"

1

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "POMDPs." In Encyclopedia of Machine Learning, 776. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oliehoek, Frans A. "Decentralized POMDPs." In Adaptation, Learning, and Optimization, 471–503. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Oliehoek, Frans A., and Christopher Amato. "Finite-Horizon Dec-POMDPs." In SpringerBriefs in Intelligent Systems, 33–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28929-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Oliehoek, Frans A., and Christopher Amato. "Infinite-Horizon Dec-POMDPs." In SpringerBriefs in Intelligent Systems, 69–77. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28929-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bork, Alexander, Sebastian Junges, Joost-Pieter Katoen, and Tim Quatmann. "Verification of Indefinite-Horizon POMDPs." In Automated Technology for Verification and Analysis, 288–304. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59152-6_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Winterer, Leonore, Ralf Wimmer, Nils Jansen, and Bernd Becker. "Strengthening Deterministic Policies for POMDPs." In Lecture Notes in Computer Science, 115–32. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55754-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Junges, Sebastian, Nils Jansen, and Sanjit A. Seshia. "Enforcing Almost-Sure Reachability in POMDPs." In Computer Aided Verification, 602–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_28.

Full text
Abstract:
Partially-Observable Markov Decision Processes (POMDPs) are a well-known stochastic model for sequential decision making under limited information. We consider the EXPTIME-hard problem of synthesising policies that almost-surely reach some goal state without ever visiting a bad state. In particular, we are interested in computing the winning region, that is, the set of system configurations from which a policy exists that satisfies the reachability specification. A direct application of such a winning region is the safe exploration of POMDPs by, for instance, restricting the behavior of a reinforcement learning agent to the region. We present two algorithms: a novel SAT-based iterative approach and a decision-diagram based alternative. The empirical evaluation demonstrates the feasibility and efficacy of the approaches.
APA, Harvard, Vancouver, ISO, and other styles
8

Chinaei, Hamid R., Brahim Chaib-draa, and Luc Lamontagne. "Learning Observation Models for Dialogue POMDPs." In Advances in Artificial Intelligence, 280–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30353-1_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bui, Trung H., Job Zwiers, Mannes Poel, and Anton Nijholt. "Affective Dialogue Management Using Factored POMDPs." In Interactive Collaborative Information Systems, 207–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11688-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shani, Guy, Ronen I. Brafman, and Solomon E. Shimony. "Model-Based Online Learning of POMDPs." In Machine Learning: ECML 2005, 353–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564096_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "POMDPs"

1

Wang, Yunbo, Bo Liu, Jiajun Wu, Yuke Zhu, Simon S. Du, Li Fei-Fei, and Joshua B. Tenenbaum. "DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/579.

Full text
Abstract:
A major difficulty of solving continuous POMDPs is to infer the multi-modal distribution of the unobserved true states and to make the planning algorithm dependent on the perceived uncertainty. We cast POMDP filtering and planning problems as two closely related Sequential Monte Carlo (SMC) processes, one over the real states and the other over the future optimal trajectories, and combine the merits of these two parts in a new model named the DualSMC network. In particular, we first introduce an adversarial particle filter that leverages the adversarial relationship between its internal components. Based on the filtering results, we then propose a planning algorithm that extends the previous SMC planning approach [Piche et al., 2018] to continuous POMDPs with an uncertainty-dependent policy. Crucially, not only can DualSMC handle complex observations such as image input but also it remains highly interpretable. It is shown to be effective in three continuous POMDP domains: the floor positioning domain, the 3D light-dark navigation domain, and a modified Reacher domain.
APA, Harvard, Vancouver, ISO, and other styles
2

Hsiao, Kaijen, Leslie Pack Kaelbling, and Tomas Lozano-Perez. "Grasping POMDPs." In 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/robot.2007.364201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Carr, Steven, Nils Jansen, Ralf Wimmer, Alexandru Serban, Bernd Becker, and Ufuk Topcu. "Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/768.

Full text
Abstract:
We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable and theoretically hard. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies. The RNN accounts for memory-based decisions without the need to expand the full belief space of a POMDP. Secondly, we restrict the RNN-based strategy to represent a finite-memory strategy and implement it on a specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information. We use this information to improve the strategy by iteratively training the RNN. Numerical experiments show that the proposed method elevates the state of the art in POMDP solving by up to three orders of magnitude in terms of solving times and model sizes.
APA, Harvard, Vancouver, ISO, and other styles
4

Williams, J. D., and S. Young. "Scaling up POMDPs for Dialog Management: The 'Summary POMDP' Method." In IEEE Workshop on Automatic Speech Recognition and Understanding, 2005. IEEE, 2005. http://dx.doi.org/10.1109/asru.2005.1566498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cohen, Jonathan, Jilles-Steeve Dibangoye, and Abdel-Illah Mouaddib. "Open Decentralized POMDPs." In 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2017. http://dx.doi.org/10.1109/ictai.2017.00150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hsiao, Chuck, and Richard Malak. "Modeling Information Gathering Decisions in Systems Engineering Projects." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34854.

Full text
Abstract:
Decisions in systems engineering projects commonly are made under significant amounts of uncertainty. This uncertainty can exist in many areas such as the performance of subsystems, interactions between subsystems, or project resource requirements such as budget or personnel. System engineers often can choose to gather information that reduces uncertainty, which allows for potentially better decisions, but at the cost of resources expended in acquiring the information. However, our understanding of how to analyze situations involving gathering information is limited, and thus heuristics, intuition, or deadlines are often used to judge the amount of information gathering needed in a decision. System engineers would benefit from a better understanding of how to determine the amount of information gathering needed to support a decision. This paper introduces Partially Observable Markov Decision Processes (POMDPs) as a formalism for modeling information-gathering decisions in systems engineering. A POMDP can model different states, alternatives, outcomes, and probabilities of outcomes to represent a decision maker’s beliefs about his situation. It also can represent sequential decisions in a compact format, avoiding the combinatorial explosion of decision trees and similar representations. The solution of a POMDP, in the form of value functions, prescribes the best course of action based on a decision maker’s beliefs about his situation. The value functions also determine if more information gathering is needed. Sophisticated computational solvers for POMDPs have been developed in recent years, allowing for a straightforward analysis of different alternatives, and determining the optimal course of action in a given situation. This paper demonstrates using a POMDP to model a systems engineering problem, and compares this approach with other approaches that account for information gathering in decision making.
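A toy value-of-information calculation of the kind such POMDP models formalize is sketched below: the decision maker compares acting immediately with paying for a noisy test first. All numbers are illustrative placeholders.

```python
import numpy as np

# One-step value-of-information: is it worth paying for a noisy test before
# committing to a design decision?
prior = np.array([0.6, 0.4])                 # P(subsystem works, fails) (placeholder)
payoff = np.array([[100.0, -80.0],           # choose design A: payoff per state
                   [20.0, 20.0]])            # choose safe design B
test_cost = 5.0
likelihood = np.array([[0.9, 0.1],           # P(test result | true state) (placeholder)
                       [0.2, 0.8]])

def best_expected_payoff(belief):
    return (payoff @ belief).max()

value_without_test = best_expected_payoff(prior)

value_with_test = 0.0
for result in range(2):
    p_result = likelihood[:, result] @ prior           # P(result)
    posterior = likelihood[:, result] * prior / p_result
    value_with_test += p_result * best_expected_payoff(posterior)
value_with_test -= test_cost

print("act now:", value_without_test, "gather information first:", value_with_test)
```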
APA, Harvard, Vancouver, ISO, and other styles
7

Imaizumi, Masaaki, and Ryohei Fujimaki. "Factorized Asymptotic Bayesian Policy Search for POMDPs." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/607.

Full text
Abstract:
This paper proposes a novel direct policy search (DPS) method with model selection for partially observed Markov decision processes (POMDPs). DPSs have been standard for learning POMDPs due to their computational efficiency and natural ability to maximize total rewards. An important open challenge for the best use of DPS methods is model selection, i.e., determination of the proper dimensionality of hidden states and complexity of policy functions, to mitigate overfitting in highly-flexible model representations of POMDPs. This paper bridges Bayesian inference and reward maximization and derives marginalized weighted log-likelihood (MWL) for POMDPs which takes both advantages of Bayesian model selection and DPS. Then we propose factorized asymptotic Bayesian policy search (FABPS) to explore the model and the policy which maximizes MWL by expanding recently-developed factorized asymptotic Bayesian inference. Experimental results show that FABPS outperforms state-of-the-art model selection methods for POMDPs, with respect both to model selection and to expected total rewards.
APA, Harvard, Vancouver, ISO, and other styles
8

Horák, Karel, Branislav Bošanský, and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) are the standard models for planning under uncertainty with both finite and infinite horizon. Besides the well-known discounted-sum objective, indefinite-horizon objective (aka Goal-POMDPs) is another classical objective for POMDPs. In this case, given a set of target states and a positive cost for each transition, the optimization objective is to minimize the expected total cost until a target state is reached. In the literature, RTDP-Bel or heuristic search value iteration (HSVI) have been used for solving Goal-POMDPs. Neither of these algorithms has theoretical convergence guarantees, and HSVI may even fail to terminate its trials. We give the following contributions: (1) We discuss the challenges introduced in Goal-POMDPs and illustrate how they prevent the original HSVI from converging. (2) We present a novel algorithm inspired by HSVI, termed Goal-HSVI, and show that our algorithm has convergence guarantees. (3) We show that Goal-HSVI outperforms RTDP-Bel on a set of well-known examples.
APA, Harvard, Vancouver, ISO, and other styles
9

Baisero, Andrea, and Christopher Amato. "Reconciling Rewards with Predictive State Representations." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/299.

Full text
Abstract:
Predictive state representations (PSRs) are models of controlled non-Markov observation sequences which exhibit the same generative process governing POMDP observations without relying on an underlying latent state. In that respect, a PSR is indistinguishable from the corresponding POMDP. However, PSRs notoriously ignore the notion of rewards, which undermines the general utility of PSR models for control, planning, or reinforcement learning. Therefore, we describe a sufficient and necessary accuracy condition which determines whether a PSR is able to accurately model POMDP rewards, we show that rewards can be approximated even when the accuracy condition is not satisfied, and we find that a non-trivial number of POMDPs taken from a well-known third-party repository do not satisfy the accuracy condition. We propose reward-predictive state representations (R-PSRs), a generalization of PSRs which accurately models both observations and rewards, and develop value iteration for R-PSRs. We show that there is a mismatch between optimal POMDP policies and the optimal PSR policies derived from approximate rewards. On the other hand, optimal R-PSR policies perfectly match optimal POMDP policies, reconfirming R-PSRs as accurate state-less generative models of observations and rewards.
APA, Harvard, Vancouver, ISO, and other styles
10

"POMDPs for sustainable fishery management." In 23rd International Congress on Modelling and Simulation (MODSIM2019). Modelling and Simulation Society of Australia and New Zealand, 2019. http://dx.doi.org/10.36334/modsim.2019.g2.filar.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "POMDPs"

1

Srivastava, Siddharth, Xiang Cheng, Stuart J. Russell, and Avi Pfeffer. First-Order Open-Universe POMDPs: Formulation and Algorithms. Fort Belvoir, VA: Defense Technical Information Center, December 2013. http://dx.doi.org/10.21236/ada603645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Theocharous, Georgios, Sridhar Mahadevan, and Leslie P. Kaelbling. Spatial and Temporal Abstractions in POMDPs Applied to Robot Navigation. Fort Belvoir, VA: Defense Technical Information Center, September 2005. http://dx.doi.org/10.21236/ada466737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Banerjee, Bikramjit, and Landon Kraemer. Distributed Reinforcement Learning for Policy Synchronization in Infinite-Horizon Dec-POMDPs. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada585093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yost, Kirk A., and Alan R. Washburn. The LP/POMDP Marriage: Optimization with Imperfect Information. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada486565.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bolinger, Mark, Ryan Wiser, and William Golove. Centrales au gaz et Energies renouvelables: comparer des pommes avec des pommes. Office of Scientific and Technical Information (OSTI), October 2003. http://dx.doi.org/10.2172/842891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chaiken, Joseph. Improved Materials for Photochromic Optical Memory Subsystem (POMS). Fort Belvoir, VA: Defense Technical Information Center, May 2000. http://dx.doi.org/10.21236/ada378152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tebbutt, John M. The NIST LEIDIR prototype - inserting hypertext links into the POMS using information retrieval:. Gaithersburg, MD: National Institute of Standards and Technology, 1999. http://dx.doi.org/10.6028/nist.ir.6321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Guidati, Gianfranco, and Domenico Giardini. Synthèse conjointe «Géothermie» du PNR «Energie». Swiss National Science Foundation (SNSF), February 2020. http://dx.doi.org/10.46446/publication_pnr70_pnr71.2020.4.fr.

Full text
Abstract:
Shallow geothermal energy with heat pumps corresponds to the current state of the art and is already widespread in Switzerland. Within the future energy system, medium-depth to deep geothermal energy (1 to 6 km) should also play an important role, in particular for supplying heat to buildings and industrial processes. This form of geothermal heat use requires a sufficiently permeable subsurface that allows a fluid – generally water – to take up the heat naturally present in the rock and transport it to the surface. In sedimentary rocks this condition is usually met thanks to the natural structure, whereas in granites and gneisses the permeability has to be created artificially by injecting water. The heat recovered in this way increases with drilling depth: the underground temperature reaches around 40°C at a depth of 1 km and around 100°C at 3 km. Driving a steam turbine to produce electricity requires temperatures above 100°C; since this implies drilling to depths of 3 to 6 km, the risk of induced seismicity rises accordingly. The subsurface can also be used to store heat or gases, for example hydrogen or methane, or to permanently sequester CO2. For this, the same requirements as for heat extraction must be met, and the reservoir must additionally be capped by an impermeable layer that prevents the gas from escaping. The joint project «Énergie hydroélectrique et géothermique» of the PNR «Energie» was devoted above all to the question of where in Switzerland suitable subsurface layers can be found that best meet the requirements of the different uses. A second major research focus concerned measures to reduce the seismicity induced by deep drilling and the resulting damage to structures. In addition, models and simulations were developed to better understand the subsurface processes involved in developing and operating geothermal resources. In summary, the research results show that Switzerland has good conditions for the use of medium-depth geothermal energy (1-3 km), both for the building stock and for industrial processes. There is also cause for optimism regarding the seasonal storage of heat and gases. By contrast, the potential for permanently storing CO2 in relevant quantities appears rather limited. As for generating electricity from deep geothermal heat (> 3 km), there is as yet no definitive certainty about the size of the economically exploitable potential of the subsurface. Industrially operated demonstration plants are essential in this respect, in order to strengthen acceptance among the public and investors.
APA, Harvard, Vancouver, ISO, and other styles
9

Ventilateurs et pompes. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1987. http://dx.doi.org/10.4095/313762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Refroidissement et pompes à chaleur. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1987. http://dx.doi.org/10.4095/313703.

Full text
APA, Harvard, Vancouver, ISO, and other styles