
Dissertations / Theses on the topic 'Markov Decision Process Planning'


Consult the top 50 dissertations / theses for your research on the topic 'Markov Decision Process Planning.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Geng, Na. "Combinatorial optimization and Markov decision process for planning MRI examinations." Phd thesis, Saint-Etienne, EMSE, 2010. http://tel.archives-ouvertes.fr/tel-00566257.

Full text
Abstract:
This research is motivated by our collaboration with a large French university teaching hospital, aimed at reducing the Length of Stay (LoS) of stroke patients treated in the neurovascular department. Quick diagnosis is critical for stroke patients but relies on expensive and heavily used imaging facilities such as MRI (Magnetic Resonance Imaging) scanners. It is therefore very important for the neurovascular department to reduce patient LoS by reducing waiting times for imaging examinations. From the neurovascular department's perspective, this thesis proposes a new MRI examinations …
2

Dai, Peng. "FASTER DYNAMIC PROGRAMMING FOR MARKOV DECISION PROCESSES." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/428.

Full text
Abstract:
Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision-theoretic planning problems. Solving real-world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches to solving MDPs. The first group combines the strategies of heuristic search and dynamic programming to expedite convergence. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by …
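Both groups of approaches build on the classic Bellman backup at the heart of dynamic programming for MDPs. As a point of reference, here is a generic textbook sketch of tabular value iteration, not the thesis's algorithms; the toy transition table is invented:

```python
# Minimal tabular value iteration for a toy MDP (illustrative only).
# P[s][a] is a list of (probability, next_state, reward) triples.

GAMMA = 0.9   # discount factor
EPS = 1e-6    # convergence threshold

P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}

def value_iteration(P, gamma=GAMMA, eps=EPS):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman backup: best expected one-step reward plus discounted value.
            q = {a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                 for a in P[s]}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            # Extract the greedy policy w.r.t. the converged values.
            pi = {s: max(P[s], key=lambda a: sum(
                p * (r + gamma * V[s2]) for p, s2, r in P[s][a])) for s in P}
            return V, pi

V, pi = value_iteration(P)
```

Heuristic-search variants (the first group above) restrict these backups to states reachable under promising policies rather than sweeping the whole state set.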
3

Alizadeh, Pegah. "Elicitation and planning in Markov decision processes with unknown rewards." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCD011/document.

Full text
Abstract:
Markov decision processes (MDPs) model sequential decision problems in which a user interacts with the environment and adapts their behaviour by taking into account the numerical reward signals received. Solving an MDP amounts to formulating the user's behaviour in the environment by means of a policy function that specifies which action to choose in each situation. In many real-world decision problems, users have different preferences, so the gains of their actions on the states differ, and …
4

Ernsberger, Timothy S. "Integrating Deterministic Planning and Reinforcement Learning for Complex Sequential Decision Making." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1354813154.

Full text
5

Al, Sabban Wesam H. "Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/82297/1/Wesam%20H_Al%20Sabban_Thesis.pdf.

Full text
Abstract:
One of the main challenges facing online and offline path planners is uncertainty in the magnitude and direction of environmental energy, which is dynamic, changes over time, and is hard to forecast. This thesis develops an artificial-intelligence method that enables a mobile robot to learn from historical or forecast data on the environmental energy available in the area of interest, supporting persistent monitoring under uncertainty with the developed algorithm.
6

Poulin, Nolan. "Proactive Planning through Active Policy Inference in Stochastic Environments." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1267.

Full text
Abstract:
In multi-agent Markov decision processes, a controllable agent must perform optimal planning in a dynamic and uncertain environment that includes another unknown and uncontrollable agent. Given a task specification for the controllable agent, its ability to complete the task can be impeded by an inaccurate model of the intent and behaviors of other agents. In this work, we introduce an active policy inference algorithm that allows a controllable agent to infer a policy of the environmental agent through interaction. Active policy inference is data-efficient and is particularly useful when data …
7

Pokharel, Gaurab. "Increasing the Value of Information During Planning in Uncertain Environments." Oberlin College Honors Theses / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1624976272271825.

Full text
8

Stárek, Ivo. "Plánování cesty robota pomocí dynamického programování." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228655.

Full text
Abstract:
This work addresses robot path planning using the principles of dynamic programming in a discrete state space. The theoretical part surveys the current state of the field and the principle of applying a Markov decision process to path planning. The practical part covers the implementation of two algorithms based on MDP principles.
9

Junyent, Barbany Miquel. "Width-Based Planning and Learning." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672779.

Full text
Abstract:
Optimal sequential decision making is a fundamental problem in many diverse fields. In recent years, Reinforcement Learning (RL) methods have experienced unprecedented success, largely enabled by the use of deep learning models, reaching human-level performance in several domains, such as the Atari video games or the ancient game of Go. In contrast to the RL approach, in which the agent learns a policy from environment interaction samples, ignoring the structure of the problem, the planning approach for decision making assumes known models for the agent's goals and domain dynamics, and …
10

Pinheiro, Paulo Gurgel 1983. "Localização multirrobo cooperativa com planejamento." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276155.

Full text
Abstract:
Advisor: Jacques Wainer. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. In a cooperative multi-robot localization problem, a group of robots is placed in an environment in which the exact location of each robot is unknown. In this scenario, a probability distribution indicates the chances of a robot being in a particular …
11

Lacerda, Dênis Antonio. "Aprendizado por reforço em lote: um estudo de caso para o problema de tomada de decisão em processos de venda." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-03072014-101251/.

Full text
Abstract:
Probabilistic planning studies sequential decision-making problems for an agent in which actions have probabilistic effects, modeled as a Markov decision process (MDP). Given the probabilistic state-transition function and the reward values of the actions, it is possible to determine an action policy (i.e., a mapping from environment states to agent actions) that maximizes the expected cumulative reward (or minimizes the expected cumulative cost) of executing a sequence of actions. In cases where the MDP model is not completely …
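The setting this entry describes — extracting a policy from a fixed log of transitions when the MDP model is not fully known — is batch reinforcement learning. A minimal tabular sketch, with a made-up sales-process state space and transition log (none of it from the thesis):

```python
# Batch (offline) Q-learning from a fixed set of logged transitions.
# States, actions, rewards and the log below are invented for illustration.
from collections import defaultdict

GAMMA = 0.95
ALPHA = 0.1
ACTIONS = ["hold", "offer_discount"]
TERMINAL = {"sold", "lost"}

# (state, action, reward, next_state) transitions logged from past episodes.
batch = [
    ("interested", "offer_discount", 1.0, "sold"),
    ("interested", "hold", 0.0, "lost"),
    ("browsing", "hold", 0.0, "interested"),
    ("browsing", "offer_discount", -0.2, "interested"),
]

Q = defaultdict(float)
for _ in range(500):            # replay the fixed batch until Q stabilizes
    for s, a, r, s2 in batch:
        target = r if s2 in TERMINAL else r + GAMMA * max(
            Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
          for s in {"browsing", "interested"}}
```

The key difference from online RL is that no new transitions are collected: the same finite batch is replayed until the value estimates converge.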
12

Borges, Igor Oliveira. "Estratégias para otimização do algoritmo de Iteração de Valor Sensível a Risco." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-09012019-103826/.

Full text
Abstract:
Risk-sensitive Markov decision processes (RS-MDPs) make it possible to model risk-averse and risk-prone attitudes in decision making by using a risk factor to represent the attitude toward risk. For this model, there are operators based on piecewise-linear transformation functions that include the risk factor and the discount factor. This dissertation formulates two risk-sensitive value iteration algorithms based on one of these operators; these algorithms are called Synchronous Risk-Sensitive Value Iteration …
13

Holguin, Mijail Gamarra. "Planejamento probabilístico usando programação dinâmica assíncrona e fatorada." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-14042013-131306/.

Full text
Abstract:
Markov decision processes (MDPs) model sequential decision-making problems in which an agent's possible actions have probabilistic effects on the successor states (which can be defined by state-transition matrices). Real-time dynamic programming (RTDP) is a technique for solving MDPs when information about the initial state is available. Traditional approaches perform better on problems with sparse state-transition matrices because they can efficiently reach convergence …
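RTDP, mentioned above, interleaves greedy trials from the initial state with Bellman backups on just the states those trials visit. A minimal sketch on an invented stochastic-shortest-path chain (costs and probabilities are illustrative, not from the thesis):

```python
# Trial-based RTDP on a toy 0 -> 1 -> 2 -> 3 chain (goal = 3, cost MDP).
import random

random.seed(0)
GOAL = 3
ACTIONS = ("safe", "risky")

def step_outcomes(s, a):
    """(probability, next_state, cost) triples for each action."""
    if a == "safe":
        return [(1.0, min(s + 1, GOAL), 2.0)]          # always advances
    return [(0.6, min(s + 2, GOAL), 2.5), (0.4, s, 2.5)]  # may slip back

def rtdp(n_trials=200):
    V = {s: 0.0 for s in range(GOAL + 1)}   # admissible (all-zero) init
    for _ in range(n_trials):
        s = 0
        while s != GOAL:
            q = {a: sum(p * (c + V[s2]) for p, s2, c in step_outcomes(s, a))
                 for a in ACTIONS}
            a = min(q, key=q.get)
            V[s] = q[a]                      # backup only the visited state
            # Sample the successor according to the chosen action.
            r, acc = random.random(), 0.0
            for p, s2, _ in step_outcomes(s, a):
                acc += p
                if r <= acc:
                    s = s2
                    break
    return V

V = rtdp()
```

Unlike synchronous value iteration, states off the greedy trajectories never get backed up, which is why sparse transition structure matters so much for these methods.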
14

Sandino, Mora Juan David. "Autonomous decision-making for UAVs operating under environmental and object detection uncertainty." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/232513/1/Juan%20David_Sandino%20Mora_Thesis.pdf.

Full text
Abstract:
This study established a framework that increases the cognitive level of small UAVs (or drones), enabling autonomous navigation in partially observable environments. The UAV system was validated in search-and-rescue scenarios by locating victims last seen inside cluttered buildings and in bushland. The framework improved the decision-making skills of the drone so that it collects more accurate statistics of detected victims. This study assists the validation of detected objects in real time when data is too complex for UAV pilots to interpret, and reduces human bias in scouting strategies.
15

Freitas, Elthon Manhas de. "Planejamento probabilístico sensível a risco com ILAO* e função utilidade exponencial." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-17012019-092638/.

Full text
Abstract:
Markov decision processes (MDPs) have been used to solve sequential decision-making problems. There are problems in which dealing with the risks of the environment to obtain a reliable result is more important than maximizing the expected average return. MDPs that deal with this kind of problem are called risk-sensitive Markov decision processes (RSMDPs). Among the many variations of RSMDPs are works based on exponential utility that use a risk factor, which models the attitude …
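Exponential-utility RSMDPs replace the additive Bellman backup with a multiplicative one: U(s) = min_a E[exp(λ·cost) · U(s')] with U(goal) = 1, and log(U(s))/λ is the certainty-equivalent cost. A toy sketch (the chain, costs and λ are invented, not from the thesis) in which a risk-averse agent prefers the low-variance action even though both actions have equal expected cost-to-go:

```python
# Value iteration under an exponential (risk-averse) utility on a toy chain.
import math

LAM = 0.2          # risk factor (> 0: risk-averse)
GOAL = 2

def outcomes(s, a):
    """(probability, next_state, cost); both actions cost 2.0 in expectation
    per step of progress, but "risky" has higher variance."""
    if a == "safe":
        return [(1.0, s + 1, 2.0)]
    return [(0.5, s + 1, 1.0), (0.5, s, 1.0)]

def risk_sensitive_vi(n_iter=200):
    U = {s: 1.0 for s in range(GOAL + 1)}   # U(goal) stays fixed at 1
    for _ in range(n_iter):
        for s in range(GOAL):
            U[s] = min(
                sum(p * math.exp(LAM * c) * U[s2]
                    for p, s2, c in outcomes(s, a))
                for a in ("safe", "risky"))
    return U

U = risk_sensitive_vi()
ce = {s: math.log(U[s]) / LAM for s in U}   # certainty-equivalent cost
```

Here the certainty-equivalent cost from the start state is exactly the deterministic "safe" cost (2 per step), because with λ > 0 the stochastic action is penalized for its variance.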
16

Hireche, Chabha. "Etude et implémentation sur SoC-FPGA d'une méthode probabiliste pour le contrôle de mission de véhicule autonome Embedded context aware diagnosis for a UAV SoC platform, in Microprocessors and Microsystems 51, June 2017 Context/Resource-Aware Mission Planning Based on BNs and Concurrent MDPs for Autonomous UAVs, in MDPI-Sensors Journal, December 2018." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0067.

Full text
Abstract:
Autonomous systems embed different types of sensors, applications and powerful computers. They are therefore used in various application domains and carry out a variety of simple or complex missions. These missions often take place in non-deterministic environments in which random events can disrupt the progress of the mission. It is therefore necessary to regularly assess the state of health of the system and of its hardware and software components in order to detect failures, using Bayesian networks. Subsequently, a decision …
17

Hamadouche, Mohand. "Distributed decision-making in multi-UAV systems : exploring methods, rewards tuning, and operating mode adaptation." Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0014.

Full text
Abstract:
Unmanned aerial vehicles (UAVs) thrive in challenging environments, improving mission quality, productivity and safety. Operating in unpredictable contexts requires independent real-time decision making for effective mission management. This document focuses on collaborative multi-UAV missions, covering: 1) the selection of a mission-planning method based on specific criteria, 2) the self-adaptation of policies through reward tuning, and 3) the adaptation of the operating mode based on a Bayesian …
18

Hoock, Jean-Baptiste. "Contributions to Simulation-based High-dimensional Sequential Decision Making." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00912338.

Full text
Abstract:
My thesis is entitled "Contributions to Simulation-based High-dimensional Sequential Decision Making". The context of the thesis is games, planning and Markov decision processes. An agent interacts with its environment by successively making decisions. The agent starts from an initial state and proceeds until a final state in which it can no longer make decisions. At each timestep, the agent receives an observation of the state of the environment. From this observation and its knowledge, the agent makes a decision that modifies the state of the environment. The agent then receives a reward …
19

Regatti, Jayanth Reddy. "Dynamic Routing for Fuel Optimization in Autonomous Vehicles." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524145002064074.

Full text
20

Santos, Felipe Martins dos. "Soluções eficientes para processos de decisão markovianos baseadas em alcançabilidade e bissimulações estocásticas." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-12022014-140538/.

Full text
Abstract:
Planning in artificial intelligence is the task of determining actions that satisfy a given goal. In planning problems under uncertainty, actions can have probabilistic effects. These problems are modeled as Markov decision processes (MDPs), models that allow the computation of optimal solutions by considering the expected value of each action in each state. However, solving large probabilistic planning problems, i.e., problems with a large number of states and actions, is an enormous challenge. Large MDPs can be reduced through the computation …
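The reachability idea is straightforward: states that cannot be reached from the initial state can never influence the optimal policy from it, so the MDP can be pruned before solving. A minimal sketch, with an assumed `state -> action -> {next_state: probability}` encoding and an invented transition table:

```python
# Restrict an MDP to the states reachable from s0 before solving it.
from collections import deque

T = {
    "s0":   {"a": {"s1": 0.7, "s2": 0.3}},
    "s1":   {"a": {"s1": 1.0}},
    "s2":   {"a": {"s0": 1.0}},
    "dead": {"a": {"dead": 1.0}},     # never reachable from s0
}

def reachable(T, s0):
    """Breadth-first search over transitions with nonzero probability."""
    seen, frontier = {s0}, deque([s0])
    while frontier:
        s = frontier.popleft()
        for dist in T[s].values():
            for s2, p in dist.items():
                if p > 0 and s2 not in seen:
                    seen.add(s2)
                    frontier.append(s2)
    return seen

pruned = {s: T[s] for s in reachable(T, "s0")}
```

Stochastic bisimulation goes further by merging reachable states that are behaviourally equivalent, but the pruning step above is the cheaper first reduction.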
21

Skoglund, Caroline. "Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300909.

Full text
Abstract:
Autonomous vehicles promise to play an important role in increasing the efficiency and safety of road transportation. Although we have seen several examples of autonomous vehicles on the road over the past years, ensuring the safety of an autonomous vehicle in an uncertain and dynamic environment is still a challenging problem. This thesis studies the problem by developing a risk-aware decision-making framework. The system that integrates the dynamics of an autonomous vehicle and the uncertain environment is modeled as a Partially Observable Markov Decision Process (POMDP). A risk …
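The core of any POMDP controller like the one above is the belief update: after acting and observing, the distribution over hidden states is re-weighted by Bayes' rule, b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s). A minimal sketch with invented pedestrian-intent states and probabilities (not the thesis's model):

```python
# Bayes-filter belief update for a two-state toy POMDP.
STATES = ("crossing", "waiting")

# T[a][s][s'] transition model and O[s'][o] observation model for one
# action "proceed"; all numbers are invented for illustration.
T = {"proceed": {"crossing": {"crossing": 0.9, "waiting": 0.1},
                 "waiting":  {"crossing": 0.3, "waiting": 0.7}}}
O = {"crossing": {"moving": 0.8, "still": 0.2},
     "waiting":  {"moving": 0.1, "still": 0.9}}

def belief_update(b, a, o):
    b_new = {}
    for s2 in STATES:
        pred = sum(T[a][s][s2] * b[s] for s in STATES)   # prediction step
        b_new[s2] = O[s2][o] * pred                      # correction step
    z = sum(b_new.values())                              # normalization
    return {s: v / z for s, v in b_new.items()}

b = belief_update({"crossing": 0.5, "waiting": 0.5}, "proceed", "moving")
```

A risk-aware planner then evaluates candidate actions against this belief rather than against any single assumed state.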
22

Desquesnes, Guillaume Louis Florent. "Distribution de Processus Décisionnels Markoviens pour une gestion prédictive d’une ressource partagée : application aux voies navigables des Hauts-de-France dans le contexte incertain du changement climatique." Thesis, Ecole nationale supérieure Mines-Télécom Lille Douai, 2018. http://www.theses.fr/2018MTLD0001/document.

Full text
Abstract:
This thesis aims to establish predictive management, under uncertainty, of the water resource in inland-waterway networks. The objective is to propose a water-management plan that optimizes the navigation conditions across the whole supervised network over a specified horizon. The expected solution must make the network resilient to the probable effects of climate change and to changes in river traffic. First, a generic model of a resource distributed over a network is proposed. This model, based on Markov decision processes …
23

Sun-Hosoya, Lisheng. "Meta-Learning as a Markov Decision Process." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS588/document.

Full text
Abstract:
Machine learning (ML) has enjoyed enormous success in recent years and underpins an ever-growing number of real-world applications. However, designing promising algorithms for a specific problem still requires considerable human effort. Automated machine learning (AutoML) aims to take the human out of the loop. AutoML is usually treated as an algorithm/hyperparameter selection problem. Existing approaches include Bayesian optimization, evolutionary algorithms and reinforcement learning. Among them, auto- …
24

Berggren, Andreas, Martin Gunnarsson, and Johannes Wallin. "Artificial intelligence as a decision support system in property development and facility management." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-25535.

Full text
Abstract:
The construction industry has long been hesitant to adopt new technologies. In property development, the industry relies heavily on employees carrying experience from one project to the next. These employees learn to manage the risks connected with land acquisition, but when they retire, that knowledge disappears. An AI-based decision-support system that takes the risks and the market into account when acquiring land can learn from each project and bring this knowledge into future projects. In facility management, artificial intelligence could increase the efficiency of …
25

Liu, Yaxin. "Decision-Theoretic Planning under Risk-Sensitive Planning Objectives." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6959.

Full text
Abstract:
Risk attitudes are important for human decision making, especially in scenarios where huge wins or losses are possible, as exemplified by planetary rover navigation, oil-spill response, and business applications. Decision-theoretic planners therefore need to take risk aspects into account to serve their users better. However, most existing decision-theoretic planners use simplistic planning objectives that are risk-neutral. This thesis research is the first comprehensive study of how to incorporate risk attitudes into decision-theoretic planners and solve large-scale planning problems represented …
26

So, Mee Chi. "Optimizing credit limit policy by Markov decision process models." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/68761/.

Full text
Abstract:
Credit cards have become an essential product for most consumers. Lenders have recognized the profit that can be achieved from the credit card market and have therefore introduced different credit cards to attract consumers. As a result, the credit card market has seen keen competition in recent years. Lenders realize that their operating decisions are crucial in determining how much profit is achieved from a card. This thesis focuses on the most well-known operating policy: the management of the credit limit. Lenders have traditionally applied static decision models to manage the credit limit of credit card …
27

Zang, Peng. "Scaling solutions to Markov Decision Problems." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42906.

Full text
Abstract:
The Markov Decision Problem (MDP) is a widely applied mathematical model useful for describing a wide array of real-world decision problems, ranging from navigation to scheduling to robotics. Existing methods for solving MDPs scale poorly when applied to large domains with many components and factors to consider. In this dissertation, I study the use of non-tabular representations and human input as scaling techniques. I show that the joint approach has desirable optimality and convergence guarantees, and demonstrates several orders of magnitude of speedup over conventional tabular …
28

Hudson, Joshua. "A Partially Observable Markov Decision Process for Breast Cancer Screening." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154437.

Full text
Abstract:
In the US, breast cancer is one of the most common forms of cancer and the most lethal. There are many decisions that must be made by the doctor and/or the patient when dealing with potential breast cancer. Many of these decisions are made under uncertainty, whether it is uncertainty about the progression of the patient's health or about the accuracy of the doctor's tests. Each possible action under consideration can have positive effects, such as a surgery successfully removing a tumour, and negative effects, such as a post-surgery infection. The human mind simply cannot …
29

Tabaeh, Izadi Masoumeh. "On knowledge representation and decision making under uncertainty." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103012.

Full text
Abstract:
Designing systems with the ability to make optimal decisions under uncertainty is one of the goals of artificial intelligence. However, in many applications the design of optimal planners is complicated by imprecise inputs and uncertain outputs resulting from stochastic dynamics. Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical framework for modeling these kinds of problems. However, the high computational demand of POMDP solution methods is a drawback for applying them in practice. In this thesis, we present a two-fold approach for improving the tractability …
30

Lommel, Peter Hans. "An extended Kalman filter extension of the augmented Markov decision process." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32453.

Full text
Abstract:
Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005. Includes bibliographical references (p. 99-102). As the field of robotics continues to mature, individual robots are increasingly capable of performing multiple complex tasks. As a result, the ability of robots to move autonomously through their environments is a fundamental necessity. If perfect knowledge of the robot's position is available, the robot motion planning problem can be solved efficiently using any of a number of existing algorithms. Frequently, though, the robot's position cannot …
31

Hamroun, Youcef F. "The decision-making process in metropolitan planning organizations." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 0.19 Mb., p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:1435819.

Full text
32

Balali, Samaneh. "Incorporating expert judgement into condition based maintenance decision support using a coupled hidden markov model and a partially observable markov decision process." Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=19510.

Full text
Abstract:
Preventive maintenance consists of activities performed to maintain a system in a satisfactory functional condition. Condition Based Maintenance (CBM) aims to reduce the cost of preventive maintenance by supporting decisions on performing maintenance actions based on information reflecting a system's health condition. In practice, condition-related information can be obtained in various ways, including continuous condition monitoring performed by sensors or subjective assessment performed by humans. An experienced engineer might provide such a subjective assessment by visually inspecting a …
33

Chang, Yanling. "A leader-follower partially observed Markov game." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54407.

Full text
Abstract:
The intent of this dissertation is to generate a set of non-dominated finite-memory policies from which one of two agents (the leader) can select a most preferred policy to control a dynamic system that is also affected by the control decisions of the other agent (the follower). The problem is described by an infinite-horizon, total discounted reward, partially observed Markov game (POMG). Each agent's policy assumes that the agent knows its current and recent state values, its recent actions, and the current and recent, possibly inaccurate, observations of the other agent's state. For each candidate …
34

Woodward, Mark P. "Framing Human-Robot Task Communication as a Partially Observable Markov Decision Process." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10188.

Full text
Abstract:
As general-purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; I call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals, and it must choose actions that facilitate this inference. In this dissertation I propose the use of a partially observable Markov decision process (POMDP) for representing the task communication …
35

Morad, Olivia. "Prefetching control for on-demand contents distribution : a Markov decision process study." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20111/document.

Full text
Abstract:
The context of this thesis is the control of on-demand content distribution networks. The performance of interactive distributed systems depends essentially on predicting user behaviour and on bandwidth as a critical network resource. Prefetching is a well-known predictive approach in the World Wide Web that avoids response delays by exploiting idle time to anticipate the user's future requests and take advantage of available network resources. Prefetching control is a vital operation …
36

HUANG, HEXIANG. "APPLICATION OF VISUALIZATION IN URBAN PLANNING DECISION-MAKING PROCESS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1100979099.

Full text
37

Selvi, Ersin Suleyman. "Cognitive Radar Applied To Target Tracking Using Markov Decision Processes." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/81968.

Full text
Abstract:
The radio-frequency spectrum is a precious resource with many applications and users, especially after the recent spectrum auction in the United States. Future platforms and devices, such as radars and radios, need to adapt to their spectral environment in order to continue serving the needs of their users. This thesis considers an environment with one tracking radar, a single target, and a communications system. The radar-communications coexistence problem is modeled as a Markov decision process (MDP), and reinforcement learning is applied to drive the radar to optimal behavior.
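The recipe the abstract names — model coexistence as an MDP, then apply reinforcement learning — can be sketched with tabular Q-learning on a toy two-band spectrum problem. All states, dynamics and rewards here are invented for illustration, not the thesis's model:

```python
# Tabular Q-learning for a toy radar/communications band-selection MDP.
# State: the band the comms system used last step; action: the radar's band.
import random

random.seed(1)
BANDS = (0, 1)
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

def comms_next(band):
    # Assumed comms model: stays on its band 90% of the time.
    return band if random.random() < 0.9 else 1 - band

Q = {(s, a): 0.0 for s in BANDS for a in BANDS}
s = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = (random.choice(BANDS) if random.random() < EPS
         else max(BANDS, key=lambda b: Q[(s, b)]))
    s2 = comms_next(s)
    r = 1.0 if a != s2 else 0.0          # reward: no collision with comms
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in BANDS)
                          - Q[(s, a)])
    s = s2

policy = {s: max(BANDS, key=lambda b: Q[(s, b)]) for s in BANDS}
```

The learned policy simply avoids the band the comms system is most likely to occupy next, which is the qualitative behavior a cognitive radar is after.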
38

Castro, Rivadeneira Pablo Samuel. "On planning, prediction and knowledge transfer in fully and partially observable Markov decision processes." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104525.

Full text
Abstract:
This dissertation addresses the problem of sequential decision making under uncertainty in large systems. The formalisms used to study this problem are fully and partially observable Markov decision processes (MDPs and POMDPs, respectively). The first contribution of this dissertation is a theoretical analysis of the behavior of POMDPs when only subsets of the observation set are used. One of these subsets is used to update the agent's state estimate, while the other subset contains observations the agent is interested in predicting and/or optimizing. The behaviors are formalized as three types …
39

Chen, Yu Fan Ph D. Massachusetts Institute of Technology. "Hierarchical decomposition of multi-agent Markov decision processes with application to health aware planning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93795.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014. Includes bibliographical references (pages 99-104). Multi-agent robotic systems have attracted the interest of both researchers and practitioners because they provide more capabilities and afford greater flexibility than single-agent systems. Coordination of individual agents within large teams is often challenging because of the combinatorial nature of such problems. In particular, the number of possible joint configurations is the product of …
40

West, Aaron P. "A decision support system for fabrication process planning in stereolithography." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/16896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Rezende, Marisa Barcia Guaraldo Marcondes. "Planning and citizenship : decision-making in the planning process in Campo Grande, MS, Brazil." Thesis, University of Sheffield, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wongsuphasawat, Luxmon. "Extended metropolitanisation and the process of industrial location decision-making in Thailand." Thesis, University of Hull, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Erdogdu, Utku. "Efficient Partially Observable Markov Decision Process Based Formulation Of Gene Regulatory Network Control Problem." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614317/index.pdf.

Full text
Abstract:
The need to analyze and closely study gene-related mechanisms motivated research on the modeling and control of gene regulatory networks (GRN). Different approaches exist to model GRNs; they are mostly simulated as mathematical models that represent relationships between genes. Though it turns the task into a more challenging problem, we argue that partial observability is a more natural and realistic method for handling the control of GRNs. Partial observability is a fundamental aspect of the problem; it is mostly ignored and substituted by the assumption that states of the GRN are known p…
APA, Harvard, Vancouver, ISO, and other styles
44

Ni, Wenlong. "Optimal call admission control policies in wireless cellular networks using Semi Markov Decision Process /." Connect to full text in OhioLINK ETD Center, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=toledo1227028023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chippa, Mukesh K. "Goal-seeking Decision Support System to Empower Personal Wellness Management." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Allen, Martha Paralee. "A constructivist study of the decision-making process in permanency planning." CSUSB ScholarWorks, 1993. https://scholarworks.lib.csusb.edu/etd-project/688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Gokhale, Mihir. "Use of analytical hierarchy process in university strategy planning." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/thesis_mihir_09007dcc804ef452.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007. Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed April 29, 2008). Includes bibliographical references (p. 96-100).
APA, Harvard, Vancouver, ISO, and other styles
48

Huh, Keun S. M. Massachusetts Institute of Technology. "Asian real estate investment : data utilization for the decision making process." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42020.

Full text
Abstract:
Thesis (S.M. in Real Estate Development)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (leaf 38). Many investors in developed countries believe the Asian emerging market to be highly risky due to numerous uncertainties, including limited market information on which to base sound investment decisions. However, successful investment deals were still completed by many investors who …
APA, Harvard, Vancouver, ISO, and other styles
49

Alasmari, Khalid R. "Novel methods for Reduced Energy and Time Consumption for Mobile Devices using Markov Decision Process." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1587906971444328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ngai, Ka-kui. "Web-based intelligent decision support system for optimization of polishing process planning." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B39558472.

Full text
APA, Harvard, Vancouver, ISO, and other styles