Dissertations / Theses on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 35 dissertations (bachelor's/master's theses or doctoral dissertations) for your research on the topic "Partially Observable Markov Decision Processes (POMDPs)".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is available in the metadata.
Browse dissertations / theses from many research areas and compile an accurate bibliography.
Aberdeen, Douglas Alexander. "Policy-Gradient Algorithms for Partially Observable Markov Decision Processes." The Australian National University, Research School of Information Sciences and Engineering, 2003. http://thesis.anu.edu.au./public/adt-ANU20030410.111006.
Olafsson, Björgvin. "Partially Observable Markov Decision Processes for Faster Object Recognition." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-198632.
Lusena, Christopher. "Finite Memory Policies for Partially Observable Markov Decision Processes." UKnowledge, 2001. http://uknowledge.uky.edu/gradschool_diss/323.
Skoglund, Caroline. "Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300909.
You, Yang. "Probabilistic Decision-Making Models for Multi-Agent Systems and Human-Robot Collaboration." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0014.
Cheng, Hsien-Te. "Algorithms for partially observable Markov decision processes." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29073.
Jaulmes, Robin. "Active learning in partially observable Markov decision processes." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98733.
Aberdeen, Douglas Alexander. "Policy-gradient algorithms for partially observable Markov decision processes." Australian Digital Theses Program, 2003. http://thesis.anu.edu.au/public/adt-ANU20030410.111006/index.html.
Zawaideh, Zaid. "Eliciting preferences sequentially using partially observable Markov decision processes." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18794.
Williams, Jason Douglas. "Partially observable Markov decision processes for spoken dialogue management." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612754.
Lusena, Christopher. "Finite memory policies for partially observable Markov decision processes." Lexington, Ky.: [University of Kentucky Libraries], 2001. http://lib.uky.edu/ETD/ukycosc2001d00021/lusena01.pdf.
Yu, Huizhen. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.
Tobin, Ludovic. "A Stochastic Point-Based Algorithm for Partially Observable Markov Decision Processes." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25194/25194.pdf.
Olsen, Alan. "Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1035.
Hudson, Joshua. "A Partially Observable Markov Decision Process for Breast Cancer Screening." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154437.
Castro Rivadeneira, Pablo Samuel. "On planning, prediction and knowledge transfer in fully and partially observable Markov decision processes." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104525.
Horgan, Casey Vi. "Dealing with uncertainty: a comparison of robust optimization and partially observable Markov decision processes." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112410.
Crook, Paul A. "Learning in a state of confusion: employing active perception and reinforcement learning in partially observable worlds." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/1471.
Omidshafiei, Shayegan. "Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101447.
Folsom-Kovarik, Jeremiah. "Leveraging Help Requests in POMDP Intelligent Tutors." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5210.
Pradhan, Neil. "Deep Reinforcement Learning for Autonomous Highway Driving Scenario." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289444.
Murugesan, Sugumar. "Opportunistic Scheduling Using Channel Memory in Markov-modeled Wireless Networks." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282065836.
Ibrahim, Rita. "Utilisation des communications Device-to-Device pour améliorer l'efficacité des réseaux cellulaires." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC002/document.
Gonçalves, Luciano Vargas. "Uma arquitetura de Agentes BDI para auto-regulação de Trocas Sociais em Sistemas Multiagentes Abertos." Universidade Catolica de Pelotas, 2009. http://tede.ucpel.edu.br:8080/jspui/handle/tede/105.
Sachan, Mohit. "Learning in Partially Observable Markov Decision Processes." 2013. http://hdl.handle.net/1805/3451.
Koltunova, Veronika. "Active Sensing for Partially Observable Markov Decision Processes." Thesis, 2013. http://hdl.handle.net/10012/7222.
Aberdeen, Douglas. "Policy-Gradient Algorithms for Partially Observable Markov Decision Processes." PhD thesis, 2003. http://hdl.handle.net/1885/48180.
Kinathil, Shamin. "Closed-form Solutions to Sequential Decision Making within Markets." PhD thesis, 2018. http://hdl.handle.net/1885/186490.
Daswani, Mayank. "Generic Reinforcement Learning Beyond Small MDPs." PhD thesis, 2015. http://hdl.handle.net/1885/110545.
Poupart, Pascal. "Exploiting structure to efficiently solve large scale partially observable Markov decision processes." 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=232732&T=F.
Leung, Siu-Ki. "Exploring partially observable Markov decision processes by exploiting structure and heuristic information." Thesis, 1996. http://hdl.handle.net/2429/5772.
Poupart, Pascal. "Approximate value-directed belief state monitoring for partially observable Markov decision processes." Thesis, 2000. http://hdl.handle.net/2429/11462.
Amato, Christopher. "Increasing scalability in algorithms for centralized and decentralized partially observable Markov decision processes: Efficient decision-making and coordination in uncertain environments." 2010. https://scholarworks.umass.edu/dissertations/AAI3427492.
Goswami, Anindya. "Semi-Markov Processes In Dynamic Games And Finance." Thesis, 2008. https://etd.iisc.ac.in/handle/2005/727.
Goswami, Anindya. "Semi-Markov Processes In Dynamic Games And Finance." Thesis, 2008. http://hdl.handle.net/2005/727.