Academic literature on the topic "Reinforcement Learning"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Reinforcement Learning".
Next to every source in the reference list you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Reinforcement Learning"
Deora, Merin, and Sumit Mathur. "Reinforcement Learning". IJARCCE 6, no. 4 (April 30, 2017): 178–81. http://dx.doi.org/10.17148/ijarcce.2017.6433.
Barto, Andrew G. "Reinforcement Learning". IFAC Proceedings Volumes 31, no. 29 (October 1998): 5. http://dx.doi.org/10.1016/s1474-6670(17)38315-5.
Woergoetter, Florentin, and Bernd Porr. "Reinforcement learning". Scholarpedia 3, no. 3 (2008): 1448. http://dx.doi.org/10.4249/scholarpedia.1448.
Moore, Brett L., Anthony G. Doufas, and Larry D. Pyeatt. "Reinforcement Learning". Anesthesia & Analgesia 112, no. 2 (February 2011): 360–67. http://dx.doi.org/10.1213/ane.0b013e31820334a7.
Liaq, Mudassar, and Yungcheol Byun. "Autonomous UAV Navigation Using Reinforcement Learning". International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 756–61. http://dx.doi.org/10.18178/ijmlc.2019.9.6.869.
Alrammal, Muath, and Munir Naveed. "Monte-Carlo Based Reinforcement Learning (MCRL)". International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 227–32. http://dx.doi.org/10.18178/ijmlc.2020.10.2.924.
Nurmuhammet, Abdullayev. "DEEP REINFORCEMENT LEARNING ON STOCK DATA". Alatoo Academic Studies 23, no. 2 (June 30, 2023): 505–18. http://dx.doi.org/10.17015/aas.2023.232.49.
Likas, Aristidis. "A Reinforcement Learning Approach to Online Clustering". Neural Computation 11, no. 8 (November 1, 1999): 1915–32. http://dx.doi.org/10.1162/089976699300016025.
Mardhatillah, Elsy. "Teacher's Reinforcement in English Classroom in MTSS Darul Makmur Sungai Cubadak". Indonesian Research Journal On Education 3, no. 1 (January 2, 2022): 825–32. http://dx.doi.org/10.31004/irje.v3i1.202.
Fan, ZiSheng. "An exploration of reinforcement learning and deep reinforcement learning". Applied and Computational Engineering 73, no. 1 (July 5, 2024): 154–59. http://dx.doi.org/10.54254/2755-2721/73/20240386.
Theses on the topic "Reinforcement Learning"
Izquierdo Ayala, Pablo. "Learning comparison: Reinforcement Learning vs Inverse Reinforcement Learning : How well does inverse reinforcement learning perform in simple markov decision processes in comparison to reinforcement learning?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259371.
This study is a qualitative comparison between two different learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), using "Gridworld", a Markov Decision Process. The focus is on the latter algorithm, IRL, as it is considered relatively new and few studies have been conducted on it so far. In the study, RL proves more advantageous than IRL, producing a correct solution in every scenario presented. The behaviour of the IRL algorithm can nevertheless be improved, which is also shown and analysed in this study.
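The kind of setup such a comparison runs on can be sketched with tabular Q-learning on a small gridworld. The grid size, reward, and hyperparameters below are illustrative assumptions, not values taken from the thesis.

```python
import random

# A minimal 4x4 gridworld: the agent starts at (0, 0) and receives
# reward +1 for reaching the goal at (3, 3). All values are illustrative.
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (SIZE - 1, SIZE - 1)

def step(state, action):
    """Deterministic transition clipped to the grid; episode ends at the goal."""
    r, c = state
    dr, dc = action
    nxt = (min(max(r + dr, 0), SIZE - 1), min(max(c + dc, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=3000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    Q = {(s, a): 0.0 for s in states for a in range(len(ACTIONS))}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
            Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
            state = nxt
    return Q

Q = q_learning()
start_value = max(Q[((0, 0), a)] for a in range(len(ACTIONS)))
print(round(start_value, 3))  # should approach gamma**5 = 0.9**5 ≈ 0.59
```

The reward is received on the sixth step of a shortest path, so the optimal start-state value is the reward discounted five times, which the learned table converges to.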
Seymour, B. J. "Aversive reinforcement learning". Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/800107/.
Akrour, Riad. "Robust Preference Learning-based Reinforcement Learning". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112236/document.
The thesis contributions revolve around sequential decision making and, more precisely, Reinforcement Learning (RL). Rooted in Machine Learning in the same way as supervised and unsupervised learning, RL quickly grew in popularity over the last two decades thanks to a handful of achievements on both the theoretical and the applied front. RL assumes that the learning agent and its environment follow a stochastic Markovian decision process over a state and action space. The process is called a decision process because the agent is asked to choose an action to take at each time step. It is stochastic because selecting a given action in a given state does not systematically yield the same state but rather defines a distribution over the state space. It is Markovian because this distribution depends only on the current state-action pair. Following the choice of an action, the agent receives a reward. The RL goal is then to solve the underlying optimization problem of finding the behaviour that maximizes the sum of rewards collected throughout the agent's interaction with its environment. From an applied point of view, a large spectrum of problems can be cast as RL problems, from Backgammon (TD-Gammon, one of Machine Learning's first successes, gave rise to a player of world-class level) to decision problems in industry and medicine. However, the optimization problem solved by RL depends on the prior definition of a reward function, which requires a certain level of domain expertise as well as knowledge of the internal quirks of RL algorithms. The first contribution of the thesis is therefore a learning framework that lightens the requirements placed on the user: the user no longer needs to know the exact solution of the problem, only to be able to choose, between two behaviours exhibited by the agent, the one that matches the solution more closely.
Learning is interactive between the agent and the user and revolves around the following three main points: i) the agent demonstrates a behaviour; ii) the user compares it with the current best one; iii) the agent uses this feedback to update its preference model of the user and uses that model to find the next behaviour to demonstrate. To reduce the number of interactions required before the optimal behaviour is found, the second contribution of the thesis is a theoretically sound criterion that trades off the sometimes contradictory goals of complying with the user's preferences and demonstrating sufficiently different behaviours. The last contribution ensures the robustness of the algorithm with respect to the feedback errors that the user may make, which happen more often than not in practice, especially in the initial phase of the interaction, when all the behaviours are far from the expected solution.
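The three-step loop described above can be sketched as follows, with a simulated user standing in for the human. The one-dimensional behaviour space, the hidden target, and all parameters are hypothetical illustrations rather than the thesis's actual algorithm.

```python
import random

# Sketch of the interactive loop: i) the agent demonstrates a behaviour,
# ii) the user compares it with the current best, iii) the agent updates
# and proposes again. A "behaviour" here is just a number in [0, 1], and
# the simulated user prefers whichever is closer to a hidden target.

def user_prefers(candidate, best, target=0.7):
    """Simulated user feedback: prefer the behaviour closer to the target."""
    return abs(candidate - target) < abs(best - target)

def preference_search(iterations=200, seed=0):
    rng = random.Random(seed)
    best = rng.random()          # i) initial demonstrated behaviour
    radius = 0.5                 # proposal radius, shrunk over time
    for _ in range(iterations):
        candidate = min(max(best + rng.uniform(-radius, radius), 0.0), 1.0)
        if user_prefers(candidate, best):   # ii) pairwise comparison
            best = candidate                # iii) update the incumbent
        radius *= 0.97           # demonstrate progressively closer behaviours
    return best

print(round(preference_search(), 2))  # should end near the hidden target 0.7
```

The shrinking radius is a crude stand-in for the thesis's trade-off between following the user's preferences and demonstrating sufficiently different behaviours.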
Tabell Johnsson, Marco, and Ala Jafar. "Efficiency Comparison Between Curriculum Reinforcement Learning & Reinforcement Learning Using ML-Agents". Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20218.
Yang, Zhaoyuan Yang. "Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.
Cortesi, Daniele. "Reinforcement Learning in Rogue". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16138/.
Girgin, Sertan. "Abstraction In Reinforcement Learning". PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608257/index.pdf.
Suay, Halit Bener. "Reinforcement Learning from Demonstration". Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/173.
Gao, Yang. "Argumentation accelerated reinforcement learning". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/26603.
Alexander, John W. "Transfer in reinforcement learning". Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.
Texto completoLibros sobre el tema "Reinforcement Learning"
S, Sutton Richard, ed. Reinforcement learning. Boston: Kluwer Academic Publishers, 1992.
Buscar texto completoSutton, Richard S. Reinforcement Learning. Boston, MA: Springer US, 1992.
Buscar texto completoWiering, Marco y Martijn van Otterlo, eds. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3.
Texto completoSutton, Richard S., ed. Reinforcement Learning. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5.
Texto completoLorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2.
Texto completoNandy, Abhishek y Manisha Biswas. Reinforcement Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3285-9.
Texto completoLi, Jinna, Frank L. Lewis y Jialu Fan. Reinforcement Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28394-9.
Texto completoLorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68311-8.
Texto completoMerrick, Kathryn y Mary Lou Maher. Motivated Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89187-1.
Texto completoDong, Hao, Zihan Ding y Shanghang Zhang, eds. Deep Reinforcement Learning. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4095-0.
Book chapters on the topic "Reinforcement Learning"
Sutton, Richard S. "Introduction: The Challenge of Reinforcement Learning". In Reinforcement Learning, 1–3. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_1.
Williams, Ronald J. "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning". In Reinforcement Learning, 5–32. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_2.
Tesauro, Gerald. "Practical Issues in Temporal Difference Learning". In Reinforcement Learning, 33–53. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_3.
Watkins, Christopher J. C. H., and Peter Dayan. "Technical Note". In Reinforcement Learning, 55–68. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_4.
Lin, Long-Ji. "Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching". In Reinforcement Learning, 69–97. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_5.
Singh, Satinder Pal. "Transfer of Learning by Composing Solutions of Elemental Sequential Tasks". In Reinforcement Learning, 99–115. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_6.
Dayan, Peter. "The Convergence of TD(λ) for General λ". In Reinforcement Learning, 117–38. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_7.
Millán, José R., and Carme Torras. "A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-Like Environments". In Reinforcement Learning, 139–71. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_8.
Lorenz, Uwe. "Bestärkendes Lernen als Teilgebiet des Maschinellen Lernens". In Reinforcement Learning, 1–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_1.
Lorenz, Uwe. "Grundbegriffe des Bestärkenden Lernens". In Reinforcement Learning, 13–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_2.
Conference papers on the topic "Reinforcement Learning"
Yang, Kun, Chengshuai Shi, and Cong Shen. "Teaching Reinforcement Learning Agents via Reinforcement Learning". In 2023 57th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2023. http://dx.doi.org/10.1109/ciss56502.2023.10089695.
Doshi, Finale, Joelle Pineau, and Nicholas Roy. "Reinforcement learning with limited reinforcement". In the 25th international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390189.
Li, Zhiyi. "Reinforcement Learning". In SIGCSE '19: The 50th ACM Technical Symposium on Computer Science Education. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287324.3293703.
Shen, Shitian, and Min Chi. "Reinforcement Learning". In UMAP '16: User Modeling, Adaptation and Personalization Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2930238.2930247.
Kuroe, Yasuaki, and Kenya Takeuchi. "Sophisticated Swarm Reinforcement Learning by Incorporating Inverse Reinforcement Learning". In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2023. http://dx.doi.org/10.1109/smc53992.2023.10394525.
Lyu, Le, Yang Shen, and Sicheng Zhang. "The Advance of Reinforcement Learning and Deep Reinforcement Learning". In 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA). IEEE, 2022. http://dx.doi.org/10.1109/eebda53927.2022.9744760.
Epshteyn, Arkady, Adam Vogel, and Gerald DeJong. "Active reinforcement learning". In the 25th international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390194.
Epshteyn, Arkady, and Gerald DeJong. "Qualitative reinforcement learning". In the 23rd international conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1143844.1143883.
Vargas, Danilo Vasconcellos. "Evolutionary reinforcement learning". In GECCO '18: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3205651.3207865.
Langford, John. "Contextual reinforcement learning". In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8257902.
Reports on the topic "Reinforcement Learning"
Singh, Satinder, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada440280.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Multiagent Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada440418.
Harmon, Mance E., and Stephanie S. Harmon. Reinforcement Learning: A Tutorial. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada323194.
Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada574717.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Average Reward Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, June 2003. http://dx.doi.org/10.21236/ada445728.
Johnson, Daniel W. Drive-Reinforcement Learning System Applications. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada264514.
Cleland, Andrew. Bounding Box Improvement With Reinforcement Learning. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6322.
Li, Jiajie. Learning Financial Investment Strategies using Reinforcement Learning and 'Chan theory'. Ames (Iowa): Iowa State University, August 2022. http://dx.doi.org/10.31274/cc-20240624-946.
Baird, Leemon C., III, and A. H. Klopf. Reinforcement Learning With High-Dimensional, Continuous Actions. Fort Belvoir, VA: Defense Technical Information Center, November 1993. http://dx.doi.org/10.21236/ada280844.
Obert, James, and Angie Shia. Optimizing Dynamic Timing Analysis with Reinforcement Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1573933.