Academic literature on the topic 'Reinforcement Learning'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reinforcement Learning.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Reinforcement Learning"
Deora, Merin, and Sumit Mathur. "Reinforcement Learning." IJARCCE 6, no. 4 (April 30, 2017): 178–81. http://dx.doi.org/10.17148/ijarcce.2017.6433.
Barto, Andrew G. "Reinforcement Learning." IFAC Proceedings Volumes 31, no. 29 (October 1998): 5. http://dx.doi.org/10.1016/s1474-6670(17)38315-5.
Woergoetter, Florentin, and Bernd Porr. "Reinforcement learning." Scholarpedia 3, no. 3 (2008): 1448. http://dx.doi.org/10.4249/scholarpedia.1448.
Moore, Brett L., Anthony G. Doufas, and Larry D. Pyeatt. "Reinforcement Learning." Anesthesia & Analgesia 112, no. 2 (February 2011): 360–67. http://dx.doi.org/10.1213/ane.0b013e31820334a7.
Likas, Aristidis. "A Reinforcement Learning Approach to Online Clustering." Neural Computation 11, no. 8 (November 1, 1999): 1915–32. http://dx.doi.org/10.1162/089976699300016025.
Liaq, Mudassar, and Yungcheol Byun. "Autonomous UAV Navigation Using Reinforcement Learning." International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 756–61. http://dx.doi.org/10.18178/ijmlc.2019.9.6.869.
Alrammal, Muath, and Munir Naveed. "Monte-Carlo Based Reinforcement Learning (MCRL)." International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 227–32. http://dx.doi.org/10.18178/ijmlc.2020.10.2.924.
Nurmuhammet, Abdullayev. "Deep Reinforcement Learning on Stock Data." Alatoo Academic Studies 23, no. 2 (June 30, 2023): 505–18. http://dx.doi.org/10.17015/aas.2023.232.49.
Mardhatillah, Elsy. "Teacher’s Reinforcement in English Classroom in MTSS Darul Makmur Sungai Cubadak." Indonesian Research Journal On Education 3, no. 1 (January 2, 2022): 825–32. http://dx.doi.org/10.31004/irje.v3i1.202.
Fan, ZiSheng. "An exploration of reinforcement learning and deep reinforcement learning." Applied and Computational Engineering 73, no. 1 (July 5, 2024): 154–59. http://dx.doi.org/10.54254/2755-2721/73/20240386.
Dissertations / Theses on the topic "Reinforcement Learning"
Izquierdo, Ayala Pablo. "Learning comparison: Reinforcement Learning vs Inverse Reinforcement Learning: How well does inverse reinforcement learning perform in simple Markov decision processes in comparison to reinforcement learning?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259371.
This study is a qualitative comparison between two different learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), using "Gridworld", a Markov Decision Process. The focus is on the latter algorithm, IRL, since it is considered relatively new and few studies have been conducted on it so far. In the study, RL proves more advantageous than IRL, producing a correct solution in all the scenarios presented. The behaviour of the IRL algorithm can, however, be improved, which is also shown and analysed in this study.
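The RL side of the gridworld comparison described in this abstract can be sketched with a few lines of tabular Q-learning. The 1-D gridworld, reward scheme, and all parameter values below are illustrative assumptions, not details taken from the thesis:

```python
import random

def q_learning(n_states=5, episodes=400, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 1-D gridworld.

    States 0..n_states-1; actions 0 (left) and 1 (right); reaching the
    rightmost state ends the episode with reward 1, all other steps give 0.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states - 1)  # exploring starts: random non-terminal state
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else int(q[s][1] > q[s][0])
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: bootstrap on the best next-state value
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = q_learning()
# The learned greedy policy should move right in every non-terminal state.
print([int(q[s][1] > q[s][0]) for s in range(4)])
```

On this toy problem the greedy policy recovers the optimal behaviour (always move right), matching the abstract's observation that plain RL reliably finds a correct solution in simple gridworld scenarios.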
Seymour, B. J. "Aversive reinforcement learning." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/800107/.
Full textAkrour, Riad. "Robust Preference Learning-based Reinforcement Learning." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112236/document.
The thesis contributions revolve around sequential decision making, and more precisely Reinforcement Learning (RL). Rooted in Machine Learning alongside supervised and unsupervised learning, RL quickly grew in popularity over the last two decades thanks to a string of achievements on both the theoretical and applied fronts. RL assumes that the learning agent and its environment follow a stochastic Markovian decision process over a state and action space. The process is a decision process because the agent must choose an action at each time step. It is stochastic because selecting a given action in a given state does not always yield the same next state but instead defines a distribution over the state space. It is Markovian because this distribution depends only on the current state-action pair. After choosing an action, the agent receives a reward. The goal of RL is then to solve the underlying optimization problem of finding the behaviour that maximizes the sum of rewards over the agent's interaction with its environment. From an applied point of view, a large spectrum of problems can be cast as RL problems, from Backgammon (TD-Gammon, one of Machine Learning's first successes, gave rise to a world-class player) to decision problems in industry and medicine. However, the optimization problem solved by RL depends on the prior definition of a reward function, which requires a certain level of domain expertise as well as knowledge of the internal quirks of RL algorithms. The first contribution of the thesis was therefore to propose a learning framework that lightens the requirements placed on the user: the user no longer needs to know the exact solution of the problem, only to be able to choose, between two behaviours exhibited by the agent, the one that more closely matches the solution.
Learning is interactive between the agent and the user and revolves around the following three points: i) the agent demonstrates a behaviour; ii) the user compares it with the current best one; iii) the agent uses this feedback to update its preference model of the user and uses that model to find the next behaviour to demonstrate. To reduce the number of interactions required before finding the optimal behaviour, the second contribution of the thesis was to define a theoretically sound criterion that trades off the sometimes contradictory desires of complying with the user's preferences and of demonstrating sufficiently different behaviours. The last contribution was to ensure the robustness of the algorithm with respect to the feedback errors that the user might make, which happen more often than not in practice, especially in the initial phase of the interaction, when all the behaviours are far from the expected solution.
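The three-step interactive loop described in this abstract (demonstrate, compare, update) can be illustrated with a deliberately simple sketch: a one-parameter behaviour space, a simulated user with a hidden ideal behaviour, and an interval-shrinking update driven purely by pairwise preferences. All names and values are hypothetical, and this is a ternary search, not the criterion proposed in the thesis:

```python
def preference_search(user_prefers, lo=0.0, hi=1.0, rounds=40):
    """Optimize a one-parameter 'behaviour' from pairwise preferences only."""
    for _ in range(rounds):
        a = lo + (hi - lo) / 3   # i) the agent demonstrates two behaviours
        b = hi - (hi - lo) / 3
        if user_prefers(a, b):   # ii) the user picks the preferred one
            hi = b               # iii) keep the region around the preferred behaviour
        else:
            lo = a
    return (lo + hi) / 2

# Simulated user whose hidden ideal behaviour sits at 0.7: between two
# demonstrated behaviours, they prefer the one closer to that ideal.
target = 0.7
user = lambda a, b: abs(a - target) < abs(b - target)
print(round(preference_search(user), 3))  # prints 0.7
```

Each round shrinks the search interval by a factor of 2/3, so the agent converges to the user's ideal behaviour using only comparative feedback, never a numeric reward. Making such a loop robust to occasional wrong answers from the user is exactly the harder problem the thesis addresses.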
Tabell, Johnsson Marco, and Ala Jafar. "Efficiency Comparison Between Curriculum Reinforcement Learning & Reinforcement Learning Using ML-Agents." Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20218.
Yang, Zhaoyuan. "Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.
Cortesi, Daniele. "Reinforcement Learning in Rogue." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16138/.
Girgin, Sertan. "Abstraction In Reinforcement Learning." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608257/index.pdf.
Suay, Halit Bener. "Reinforcement Learning from Demonstration." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/173.
Gao, Yang. "Argumentation accelerated reinforcement learning." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/26603.
Alexander, John W. "Transfer in reinforcement learning." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.
Full textBooks on the topic "Reinforcement Learning"
Sutton, Richard S. Reinforcement Learning. Boston, MA: Springer US, 1992.
Wiering, Marco, and Martijn van Otterlo, eds. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3.
Sutton, Richard S., ed. Reinforcement Learning. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5.
Lorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2.
Nandy, Abhishek, and Manisha Biswas. Reinforcement Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3285-9.
Sutton, Richard S., ed. Reinforcement learning. Boston: Kluwer Academic Publishers, 1992.
Lorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68311-8.
Li, Jinna, Frank L. Lewis, and Jialu Fan. Reinforcement Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28394-9.
Xiao, Zhiqing. Reinforcement Learning. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-19-4933-3.
Merrick, Kathryn, and Mary Lou Maher. Motivated Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89187-1.
Book chapters on the topic "Reinforcement Learning"
Sutton, Richard S. "Introduction: The Challenge of Reinforcement Learning." In Reinforcement Learning, 1–3. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_1.
Williams, Ronald J. "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning." In Reinforcement Learning, 5–32. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_2.
Tesauro, Gerald. "Practical Issues in Temporal Difference Learning." In Reinforcement Learning, 33–53. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_3.
Watkins, Christopher J. C. H., and Peter Dayan. "Technical Note: Q-Learning." In Reinforcement Learning, 55–68. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_4.
Lin, Long-Ji. "Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching." In Reinforcement Learning, 69–97. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_5.
Singh, Satinder Pal. "Transfer of Learning by Composing Solutions of Elemental Sequential Tasks." In Reinforcement Learning, 99–115. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_6.
Dayan, Peter. "The Convergence of TD(λ) for General λ." In Reinforcement Learning, 117–38. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_7.
Millán, José R., and Carme Torras. "A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-Like Environments." In Reinforcement Learning, 139–71. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_8.
Lorenz, Uwe. "Bestärkendes Lernen als Teilgebiet des Maschinellen Lernens." In Reinforcement Learning, 1–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_1.
Lorenz, Uwe. "Grundbegriffe des Bestärkenden Lernens." In Reinforcement Learning, 13–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_2.
Full textConference papers on the topic "Reinforcement Learning"
Yang, Kun, Chengshuai Shi, and Cong Shen. "Teaching Reinforcement Learning Agents via Reinforcement Learning." In 2023 57th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2023. http://dx.doi.org/10.1109/ciss56502.2023.10089695.
Doshi, Finale, Joelle Pineau, and Nicholas Roy. "Reinforcement learning with limited reinforcement." In Proceedings of the 25th International Conference on Machine Learning (ICML). New York, NY, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390189.
Li, Zhiyi. "Reinforcement Learning." In SIGCSE '19: The 50th ACM Technical Symposium on Computer Science Education. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287324.3293703.
Shen, Shitian, and Min Chi. "Reinforcement Learning." In UMAP '16: User Modeling, Adaptation and Personalization Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2930238.2930247.
Kuroe, Yasuaki, and Kenya Takeuchi. "Sophisticated Swarm Reinforcement Learning by Incorporating Inverse Reinforcement Learning." In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2023. http://dx.doi.org/10.1109/smc53992.2023.10394525.
Lyu, Le, Yang Shen, and Sicheng Zhang. "The Advance of Reinforcement Learning and Deep Reinforcement Learning." In 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA). IEEE, 2022. http://dx.doi.org/10.1109/eebda53927.2022.9744760.
Epshteyn, Arkady, Adam Vogel, and Gerald DeJong. "Active reinforcement learning." In Proceedings of the 25th International Conference on Machine Learning (ICML). New York, NY, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390194.
Epshteyn, Arkady, and Gerald DeJong. "Qualitative reinforcement learning." In Proceedings of the 23rd International Conference on Machine Learning (ICML). New York, NY, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1143844.1143883.
Vargas, Danilo Vasconcellos. "Evolutionary reinforcement learning." In GECCO '18: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3205651.3207865.
Langford, John. "Contextual reinforcement learning." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8257902.
Full textReports on the topic "Reinforcement Learning"
Singh, Satinder, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada440280.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Multiagent Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada440418.
Harmon, Mance E., and Stephanie S. Harmon. Reinforcement Learning: A Tutorial. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada323194.
Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada574717.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Average Reward Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, June 2003. http://dx.doi.org/10.21236/ada445728.
Johnson, Daniel W. Drive-Reinforcement Learning System Applications. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada264514.
Cleland, Andrew. Bounding Box Improvement With Reinforcement Learning. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6322.
Li, Jiajie. Learning Financial Investment Strategies using Reinforcement Learning and 'Chan theory'. Ames (Iowa): Iowa State University, August 2022. http://dx.doi.org/10.31274/cc-20240624-946.
Baird, Leemon C., III, and A. H. Klopf. Reinforcement Learning With High-Dimensional, Continuous Actions. Fort Belvoir, VA: Defense Technical Information Center, November 1993. http://dx.doi.org/10.21236/ada280844.
Obert, James, and Angie Shia. Optimizing Dynamic Timing Analysis with Reinforcement Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1573933.