Academic literature on the topic 'Multi-Objective Markov Decision Processes'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Objective Markov Decision Processes.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Multi-Objective Markov Decision Processes"
Wakuta, K., and K. Togawa. "Solution procedures for multi-objective Markov decision processes." Optimization 43, no. 1 (January 1998): 29–46. http://dx.doi.org/10.1080/02331939808844372.
Ahn, Hyun-Soo, and Rhonda Righter. "Multi-Actor Markov Decision Processes." Journal of Applied Probability 42, no. 1 (March 2005): 15–26. http://dx.doi.org/10.1239/jap/1110381367.
Liu, Qiu-Sheng, Katsuhisa Ohno, and Hirotaka Nakayama. "Multi-objective discounted Markov decision processes with expectation and variance criteria." International Journal of Systems Science 23, no. 6 (June 1992): 903–14. http://dx.doi.org/10.1080/00207729208949257.
Fujita, Toshiharu, and Akifumi Kira. "Mutually Dependent Markov Decision Processes." Journal of Advanced Computational Intelligence and Intelligent Informatics 18, no. 6 (November 20, 2014): 992–98. http://dx.doi.org/10.20965/jaciii.2014.p0992.
Mandow, L., J. L. Perez-de-la-Cruz, and N. Pozas. "Multi-objective dynamic programming with limited precision." Journal of Global Optimization 82, no. 3 (November 2, 2021): 595–614. http://dx.doi.org/10.1007/s10898-021-01096-x.
Wernz, Christian. "Multi-time-scale Markov decision processes for organizational decision-making." EURO Journal on Decision Processes 1, no. 3-4 (November 2013): 299–324. http://dx.doi.org/10.1007/s40070-013-0020-7.
Bouyer, Patricia, Mauricio González, Nicolas Markey, and Mickael Randour. "Multi-weighted Markov Decision Processes with Reachability Objectives." Electronic Proceedings in Theoretical Computer Science 277 (September 7, 2018): 250–64. http://dx.doi.org/10.4204/eptcs.277.18.
White, D. J. "An Heuristic for Multi-Dimensional Markov Decision Processes." Journal of Information and Optimization Sciences 14, no. 2 (May 1993): 203–19. http://dx.doi.org/10.1080/02522667.1993.10699150.
Randour, Mickael, Jean-François Raskin, and Ocan Sankur. "Percentile queries in multi-dimensional Markov decision processes." Formal Methods in System Design 50, no. 2-3 (January 5, 2017): 207–48. http://dx.doi.org/10.1007/s10703-016-0262-7.
Dissertations / Theses on the topic "Multi-Objective Markov Decision Processes"
Pratikakis, Nikolaos. "Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/31654.
Committee Chair: Jay H. Lee; Committee Member: Martha Grover; Committee Member: Matthew J. Realff; Committee Member: Shabbir Ahmed; Committee Member: Stylianos Kavadias. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Dorff, Rebecca. "Modelling Infertility with Markov Chains." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4070.
Chen, Yu Fan. "Hierarchical decomposition of multi-agent Markov decision processes with application to health aware planning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93795.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-104).
Multi-agent robotic systems have attracted the interest of both researchers and practitioners because they provide more capabilities and afford greater flexibility than single-agent systems. Coordination of individual agents within large teams is often challenging because of the combinatorial nature of such problems. In particular, the number of possible joint configurations is the product of the number of configurations of every agent. Further, real-world applications often contain various sources of uncertainty. This thesis investigates techniques to address the scalability issue of multi-agent planning under uncertainty. It develops a novel hierarchical decomposition approach (HD-MMDP) for solving Multi-agent Markov Decision Processes (MMDPs), a natural framework for formulating stochastic sequential decision-making problems. In particular, the HD-MMDP algorithm builds a decomposition structure by exploiting coupling relationships in the reward function. A number of smaller subproblems are formed and solved individually. The planning space of each subproblem is much smaller than that of the original problem, which improves computational efficiency, and the solutions to the subproblems can be combined to form a solution (policy) to the original problem. The HD-MMDP algorithm is applied to a ten-agent persistent search and track (PST) mission and shows more than 35% improvement over an existing algorithm developed specifically for this domain. This thesis also contributes to the development of the software infrastructure that enables hardware experiments involving multiple robots. In particular, it presents a novel optimization-based multi-agent path planning algorithm, which was tested in simulation and in a hardware (quadrotor) experiment. The HD-MMDP algorithm is also used to solve a multi-agent intruder monitoring mission implemented on real robots.
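The decomposition strategy described in this abstract ultimately reduces to solving many small tabular MDP subproblems exactly and combining their policies. As a point of reference only (a minimal sketch, not the thesis's HD-MMDP implementation), a per-subproblem solver via value iteration might look like this:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a small tabular MDP.

    P[a, s, s'] -- transition probabilities for each action
    R[s, a]     -- immediate rewards
    Returns the optimal value function and a greedy policy;
    in a decomposition scheme this would be run once per subproblem.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

In a hierarchical scheme of the kind the abstract sketches, each weakly coupled reward component would define one such small `(P, R)` pair, and the per-subproblem policies would then be merged into a joint policy.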
by Yu Fan Chen.
S.M.
Omidshafiei, Shayegan. "Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101447.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 129-139).
Planning, control, perception, and learning for multi-robot systems present significant challenges. Transition dynamics of the robots may be stochastic, making it difficult to select the best action each robot should take at a given time. The observation model, a function of the robots' sensors, may be noisy or partial, meaning that deterministic knowledge of the team's state is often impossible to attain. Robots designed for real-world applications require careful consideration of such sources of uncertainty. This thesis contributes a framework for multi-robot planning in continuous spaces with partial observability. Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) are general models for multi-robot coordination problems. However, representing and solving Dec-POMDPs is often intractable for large problems. This thesis extends the Dec-POMDP framework to the Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP), taking advantage of high-level representations that are natural for multi-robot problems. Dec-POSMDPs allow asynchronous decision-making, which is crucial in multi-robot domains. This thesis also presents algorithms for solving Dec-POSMDPs, which are more scalable than previous methods due to the use of closed-loop macro-actions in planning. The proposed framework's performance is evaluated in a constrained multi-robot package delivery domain, showing its ability to provide high-quality solutions for large problems. Due to the probabilistic nature of state transitions and observations, robots operate in belief space, the space of probability distributions over all of their possible states. This thesis also contributes a hardware platform called Measurable Augmented Reality for Prototyping Cyber-Physical Systems (MAR-CPS). MAR-CPS allows real-time visualization of the belief space in laboratory settings.
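The notion of "operating in belief space" in this abstract reduces, at each step, to a Bayes-filter update of the distribution over states after an action and observation. A minimal illustrative sketch (the array shapes for the transition model `T` and observation model `Z` are assumptions for this example, not the thesis's code):

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One Bayes-filter step in belief space.

    b[s]        -- prior belief over states
    a, o        -- action taken and observation received
    T[a, s, s'] -- transition model
    Z[a, s', o] -- observation model
    Returns the normalized posterior belief over s'.
    """
    predicted = b @ T[a]                # sum_s b[s] * T[a, s, s']
    posterior = Z[a][:, o] * predicted  # weight by observation likelihood
    return posterior / posterior.sum()  # normalize to a distribution
```

A POMDP or Dec-POSMDP policy is then a mapping from such belief vectors (rather than raw states) to actions or macro-actions.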
by Shayegan Omidshafiei.
S.M.
Dorini, Gianluca. "The neighbour search approach for solving multi-objective Markov Decision Processes, and the application in reservoirs operation planning." Thesis, University of Exeter, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445450.
Fowler, Michael C. "Intelligent Knowledge Distribution for Multi-Agent Communication, Planning, and Learning." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97996.
Doctor of Philosophy
This dissertation addresses a fundamental question that arises when multiple autonomous systems, like drone swarms, in the field need to coordinate and share data: what information should be sent to whom, and when, given the limited resources available to each agent? Intelligent Knowledge Distribution is a framework that answers these questions. Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this dissertation introduces new concepts to enable Intelligent Knowledge Distribution (IKD), including constrained-action POMDPs and concurrent decentralized (CoDec) POMDPs for an agnostic plug-and-play capability for fully autonomous systems. The IKD model demonstrated its validity as a "plug-and-play" library that manages communication between agents, ensuring the right information is transmitted at the right time to the right agent for mission success.
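One simple way to make the "what to send, to whom, and when" question concrete is a communication gate that transmits a state update only when a teammate's estimated view has drifted far enough from the sender's own belief. The following is purely an illustrative sketch of that idea, with a KL-divergence trigger chosen for this example; it is not the dissertation's IKD model:

```python
import numpy as np

def should_transmit(my_belief, teammate_estimate, threshold=0.1):
    """Decide whether a belief update is worth sending.

    my_belief         -- this agent's current belief over world states
    teammate_estimate -- what this agent thinks the teammate believes
    Transmits only when the KL divergence between the two exceeds a
    threshold, trading bandwidth against teammate accuracy.
    """
    p = np.asarray(my_belief, dtype=float)
    q = np.asarray(teammate_estimate, dtype=float)
    kl = np.sum(np.where(p > 0, p * np.log(p / q), 0.0))
    return bool(kl > threshold)
```

The threshold plays the role of the resource constraint: raising it saves bandwidth at the cost of letting teammates' world models go stale.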
Leung, Hiu-lan (梁曉蘭). "Wandering ideal point models for single or multi-attribute ranking data: a Bayesian approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29552357.
Murugesan, Sugumar. "Opportunistic Scheduling Using Channel Memory in Markov-modeled Wireless Networks." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282065836.
Raffensperger, Peter Abraham. "Measuring and Influencing Sequential Joint Agent Behaviours." Thesis, University of Canterbury. Electrical and Computer Engineering, 2013. http://hdl.handle.net/10092/7472.
Lafleur, Jarret Marshall. "A Markovian state-space framework for integrating flexibility into space system design decisions." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43749.
Books on the topic "Multi-Objective Markov Decision Processes"
White, D. J. Dynamic programming, Markov decision processes and value, efficiency and multiple objective methods. Birmingham: University of Birmingham, 1986.
Kovalyov, Anatoliy, ed. Scientific problems of management at the macro-, meso- and microeconomic levels: Proceedings of the XIX International scientific-practical conference dedicated to the 100th anniversary of Odessa National Economic University, May 17-18, 2021. Odessa National Economic University, 2021. http://dx.doi.org/10.32680/978-966-992-589-3.
Kovalyov, Anatoliy, ed. Scientific problems of management at the macro-, meso- and microeconomic levels: Proceedings of the 20th International Scientific and Practice Conference, April 14, 2022. Odessa: Odessa National Economic University, 2022. http://dx.doi.org/10.32680/npg.conf.oneu.2022.
Keiser, Sandra, Deborah Vandermar, and Myrna B. Garner. Beyond Design. 5th ed. Fairchild Books, 2022. http://dx.doi.org/10.5040/9781501366581.
Grare, Frédéric. India’s and China’s Economic Standing in Asia. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190859336.003.0008.
Trepulė, Elena, Airina Volungevičienė, Margarita Teresevičienė, Estela Daukšienė, Rasa Greenspon, Giedrė Tamoliūnė, Marius Šadauskas, and Gintarė Vaitonytė. Guidelines for open and online learning assessment and recognition with reference to the National and European qualification framework: micro-credentials as a proposal for tuning and transparency. Vytauto Didžiojo universitetas, 2021. http://dx.doi.org/10.7220/9786094674792.
Full textSobczyk, Eugeniusz Jacek. Uciążliwość eksploatacji złóż węgla kamiennego wynikająca z warunków geologicznych i górniczych. Instytut Gospodarki Surowcami Mineralnymi i Energią PAN, 2022. http://dx.doi.org/10.33223/onermin/0222.
Book chapters on the topic "Multi-Objective Markov Decision Processes"
Sedova, Ekaterina, Lawrence Mandow, and José-Luis Pérez-de-la-Cruz. "Asynchronous Vector Iteration in Multi-objective Markov Decision Processes." In Advances in Artificial Intelligence, 129–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85713-4_13.
Hahn, Ernst Moritz, Vahid Hashemi, Holger Hermanns, Morteza Lahijanian, and Andrea Turrini. "Multi-objective Robust Strategy Synthesis for Interval Markov Decision Processes." In Quantitative Evaluation of Systems, 207–23. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66335-7_13.
Randour, Mickael, Jean-François Raskin, and Ocan Sankur. "Percentile Queries in Multi-dimensional Markov Decision Processes." In Computer Aided Verification, 123–39. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21690-4_8.
Quatmann, Tim, and Joost-Pieter Katoen. "Multi-objective Optimization of Long-run Average and Total Rewards." In Tools and Algorithms for the Construction and Analysis of Systems, 230–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_13.
Even-Dar, Eyal, Shie Mannor, and Yishay Mansour. "PAC Bounds for Multi-armed Bandit and Markov Decision Processes." In Lecture Notes in Computer Science, 255–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45435-7_18.
Dutta, Ayan, O. Patrick Kreidl, and Jason M. O’Kane. "Opportunistic Multi-robot Environmental Sampling via Decentralized Markov Decision Processes." In Distributed Autonomous Robotic Systems, 163–75. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92790-5_13.
Ponomarenko, Leonid, Che Soong Kim, and Agassi Melikov. "Markov Decision Processes (MDP) Approach to Optimization Problems for Multi-Rate Systems." In Performance Analysis and Optimization of Multi-Traffic on Communication Networks, 167–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15458-4_7.
Radoszycki, Julia, Nathalie Peyrard, and Régis Sabbadin. "Solving F³MDPs: Collaborative Multiagent Markov Decision Processes with Factored Transitions, Rewards and Stochastic Policies." In PRIMA 2015: Principles and Practice of Multi-Agent Systems, 3–19. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25524-8_1.
Dimuro, Graçaliz P., and Antônio C. R. Costa. "Interval-Based Markov Decision Processes for Regulating Interactions Between Two Agents in Multi-agent Systems." In Applied Parallel Computing. State of the Art in Scientific Computing, 102–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11558958_12.
Yi, Sha, Changjoo Nam, and Katia Sycara. "Indoor Pursuit-Evasion with Hybrid Hierarchical Partially Observable Markov Decision Processes for Multi-robot Systems." In Distributed Autonomous Robotic Systems, 251–64. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-05816-6_18.
Conference papers on the topic "Multi-Objective Markov Decision Processes"
Eddy, Duncan, and Mykel Kochenderfer. "Markov Decision Processes For Multi-Objective Satellite Task Planning." In 2020 IEEE Aerospace Conference. IEEE, 2020. http://dx.doi.org/10.1109/aero47225.2020.9172258.
Wiering, Marco A., and Edwin D. de Jong. "Computing Optimal Stationary Policies for Multi-Objective Markov Decision Processes." In 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning. IEEE, 2007. http://dx.doi.org/10.1109/adprl.2007.368183.
Scheftelowitsch, Dimitri, Peter Buchholz, Vahid Hashemi, and Holger Hermanns. "Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters." In VALUETOOLS 2017: 11th EAI International Conference on Performance Evaluation Methodologies and Tools. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3150928.3150945.
Gimbert, Hugo, and Wieslaw Zielonka. "Limits of Multi-Discounted Markov Decision Processes." In 22nd Annual IEEE Symposium on Logic in Computer Science (LICS 2007). IEEE, 2007. http://dx.doi.org/10.1109/lics.2007.28.
Sui, Qi, and Hai-yang Wang. "A Dynamic Generation Algorithm for Meta Process Using Markov Decision Processes." In 2006 International Multi-Symposiums on Computer and Computational Sciences (IMSCCS). IEEE, 2006. http://dx.doi.org/10.1109/imsccs.2006.5.
Abtahi, Farnaz, and Mohammad Reza Meybodi. "Solving Multi-Agent Markov Decision Processes using learning automata." In 2008 6th International Symposium on Intelligent Systems and Informatics (SISY 2008). IEEE, 2008. http://dx.doi.org/10.1109/sisy.2008.4664909.
Bertuccelli, Luca F., Brett Bethke, and Jonathan P. How. "Robust adaptive Markov Decision Processes in multi-vehicle applications." In 2009 American Control Conference. IEEE, 2009. http://dx.doi.org/10.1109/acc.2009.5160511.
Tahir, M., and R. Farrell. "Optimal Resource Control of Multi-Processor Multi-Radio Nodes Using Semi-Markov Decision Processes." In ICC 2010 - 2010 IEEE International Conference on Communications. IEEE, 2010. http://dx.doi.org/10.1109/icc.2010.5502144.
Ren, Zhiyuan, and B. H. Krogh. "Mode-matching control policies for multi-mode Markov decision processes." In Proceedings of the American Control Conference. IEEE, 2001. http://dx.doi.org/10.1109/acc.2001.945521.
Panigrahi, J. R., and S. Bhatnagar. "Hierarchical decision making in semiconductor fabs using multi-time scale Markov decision processes." In 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601). IEEE, 2004. http://dx.doi.org/10.1109/cdc.2004.1429441.
Reports on the topic "Multi-Objective Markov Decision Processes"
Führ, Martin, Julian Schenten, and Silke Kleihauer. Integrating "Green Chemistry" into the Regulatory Framework of European Chemicals Policy. Sonderforschungsgruppe Institutionenanalyse, July 2019. http://dx.doi.org/10.46850/sofia.9783941627727.
Motel-Klingebiel, Andreas, and Gerhard Naegele. Exclusion and inequality in late working life in the political context of the EU. Linköping University Electronic Press, November 2022. http://dx.doi.org/10.3384/9789179293215.