Selected scientific literature on the topic "Reinforcement Learning"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Reinforcement Learning".
Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.
Journal articles on the topic "Reinforcement Learning"
Deora, Merin, and Sumit Mathur. "Reinforcement Learning". IJARCCE 6, no. 4 (April 30, 2017): 178–81. http://dx.doi.org/10.17148/ijarcce.2017.6433.
Barto, Andrew G. "Reinforcement Learning". IFAC Proceedings Volumes 31, no. 29 (October 1998): 5. http://dx.doi.org/10.1016/s1474-6670(17)38315-5.
Woergoetter, Florentin, and Bernd Porr. "Reinforcement learning". Scholarpedia 3, no. 3 (2008): 1448. http://dx.doi.org/10.4249/scholarpedia.1448.
Moore, Brett L., Anthony G. Doufas, and Larry D. Pyeatt. "Reinforcement Learning". Anesthesia & Analgesia 112, no. 2 (February 2011): 360–67. http://dx.doi.org/10.1213/ane.0b013e31820334a7.
Likas, Aristidis. "A Reinforcement Learning Approach to Online Clustering". Neural Computation 11, no. 8 (November 1, 1999): 1915–32. http://dx.doi.org/10.1162/089976699300016025.
Liaq, Mudassar, and Yungcheol Byun. "Autonomous UAV Navigation Using Reinforcement Learning". International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 756–61. http://dx.doi.org/10.18178/ijmlc.2019.9.6.869.
Alrammal, Muath, and Munir Naveed. "Monte-Carlo Based Reinforcement Learning (MCRL)". International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 227–32. http://dx.doi.org/10.18178/ijmlc.2020.10.2.924.
Nurmuhammet, Abdullayev. "Deep Reinforcement Learning on Stock Data". Alatoo Academic Studies 23, no. 2 (June 30, 2023): 505–18. http://dx.doi.org/10.17015/aas.2023.232.49.
Mardhatillah, Elsy. "Teacher's Reinforcement in English Classroom in MTSS Darul Makmur Sungai Cubadak". Indonesian Research Journal On Education 3, no. 1 (January 2, 2022): 825–32. http://dx.doi.org/10.31004/irje.v3i1.202.
Fan, ZiSheng. "An exploration of reinforcement learning and deep reinforcement learning". Applied and Computational Engineering 73, no. 1 (July 5, 2024): 154–59. http://dx.doi.org/10.54254/2755-2721/73/20240386.
Theses on the topic "Reinforcement Learning"
Izquierdo, Ayala Pablo. "Learning comparison: Reinforcement Learning vs Inverse Reinforcement Learning : How well does inverse reinforcement learning perform in simple markov decision processes in comparison to reinforcement learning?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259371.
This study is a qualitative comparison of two learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), using "Gridworld", a Markov Decision Process. The focus is on the latter algorithm, IRL, since it is considered relatively new and few studies have been conducted on it so far. In the study, RL proves more advantageous than IRL, producing a correct solution in every scenario presented. The behaviour of the IRL algorithm can be improved, however, which is also shown and analysed in this study.
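The Gridworld setting used in the comparison above can be illustrated with a minimal sketch. The grid size, reward placement, and tabular Q-learning update below are illustrative assumptions for a generic Gridworld MDP, not the thesis's actual experimental setup.

```python
import random

# Illustrative 4x4 Gridworld: the agent starts at (0, 0) and receives
# reward 1 only upon reaching the goal at (3, 3); moves off the grid
# leave the state unchanged.
SIZE, GOAL, N_ACTIONS = 4, (3, 3), 4
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    r = min(max(state[0] + MOVES[action][0], 0), SIZE - 1)
    c = min(max(state[1] + MOVES[action][1], 0), SIZE - 1)
    nxt = (r, c)
    return nxt, float(nxt == GOAL), nxt == GOAL

def q_learning(episodes=3000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
         for a in range(N_ACTIONS)}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            if rng.random() < eps:                       # explore
                a = rng.randrange(N_ACTIONS)
            else:                                        # exploit
                a = max(range(N_ACTIONS), key=lambda i: q[(state, i)])
            nxt, reward, done = step(state, a)
            target = reward + (0.0 if done else gamma * max(
                q[(nxt, i)] for i in range(N_ACTIONS)))
            q[(state, a)] += alpha * (target - q[(state, a)])
            state = nxt
    return q

q = q_learning()
start_value = max(q[((0, 0), a)] for a in range(N_ACTIONS))
print(round(start_value, 3))  # approaches gamma**5 ≈ 0.59 for the 6-step optimal path
```

An IRL method, by contrast, would be handed trajectories produced by such a learned policy and asked to recover a reward function that explains them.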
Seymour, B. J. "Aversive reinforcement learning". Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/800107/.
Akrour, Riad. "Robust Preference Learning-based Reinforcement Learning". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112236/document.
The thesis contributions revolve around sequential decision making, and more precisely Reinforcement Learning (RL). Rooted in Machine Learning in the same way as supervised and unsupervised learning, RL quickly grew in popularity over the last two decades thanks to a string of achievements on both the theoretical and the applicative front. RL assumes that the learning agent and its environment follow a stochastic Markovian decision process over a state and action space. The process is called a decision process because the agent must choose an action at each time step. It is stochastic because selecting a given action in a given state does not always yield the same next state but instead defines a distribution over the state space. It is Markovian because this distribution depends only on the current state-action pair. After choosing an action, the agent receives a reward. The goal of RL is then to solve the underlying optimization problem of finding the behaviour that maximizes the sum of rewards collected along the agent's interaction with its environment. From an applicative point of view, a large spectrum of problems can be cast as RL problems, from Backgammon (TD-Gammon, one of Machine Learning's first successes, gave rise to a world-class player) to decision problems in industry and medicine. However, the optimization problem solved by RL depends on the prior definition of a reward function, which requires a certain level of domain expertise as well as knowledge of the internal quirks of RL algorithms. The first contribution of the thesis was therefore to propose a learning framework that lightens the requirements placed on the user: the user no longer needs to know the exact solution of the problem, but only to be able to choose, between two behaviours exhibited by the agent, the one that more closely matches the solution.
Learning is interactive between the agent and the user and revolves around the following three main points: i) the agent demonstrates a behaviour; ii) the user compares it with the current best one; iii) the agent uses this feedback to update its preference model of the user, and uses that model to find the next behaviour to demonstrate. To reduce the number of interactions required before finding the optimal behaviour, the second contribution of the thesis was to define a theoretically sound criterion that trades off the sometimes contradictory goals of complying with the user's preferences and demonstrating sufficiently different behaviours. The last contribution was to ensure the robustness of the algorithm w.r.t. the feedback errors that the user may make, which happen more often than not in practice, especially in the initial phase of the interaction, when all the behaviours are far from the expected solution.
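The three-step interaction loop above can be sketched as follows. The one-dimensional "behaviour" space, the function names, and the simulated noisy user are illustrative assumptions, not the algorithm proposed in the thesis; the error rate mimics the feedback noise the thesis guards against.

```python
import random

# i) demonstrate a behaviour, ii) the user compares it with the current
# best, iii) keep the winner and demonstrate anew. A behaviour is reduced
# to a single number in [0, 1]; the simulated user prefers behaviours
# closer to a hidden target and errs with probability `error_rate`.
def user_prefers(a, b, target=0.7, error_rate=0.1, rng=random):
    truthful = abs(a - target) < abs(b - target)
    return (not truthful) if rng.random() < error_rate else truthful

def preference_learning(rounds=300, step_sd=0.15, seed=1):
    rng = random.Random(seed)
    best = rng.random()                                # i) initial demonstration
    for _ in range(rounds):
        candidate = min(max(best + rng.gauss(0.0, step_sd), 0.0), 1.0)
        if user_prefers(candidate, best, rng=rng):     # ii) pairwise feedback
            best = candidate                           # iii) keep the winner
    return best

print(preference_learning())  # typically ends near the hidden target 0.7
```

Because single comparisons can be wrong, the loop only drifts toward the target on average, which is exactly why a robustness criterion over noisy pairwise feedback matters.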
Tabell, Johnsson Marco, and Ala Jafar. "Efficiency Comparison Between Curriculum Reinforcement Learning & Reinforcement Learning Using ML-Agents". Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20218.
Yang, Zhaoyuan Yang. "Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.
Cortesi, Daniele. "Reinforcement Learning in Rogue". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16138/.
Girgin, Sertan. "Abstraction In Reinforcement Learning". PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608257/index.pdf.
Suay, Halit Bener. "Reinforcement Learning from Demonstration". Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/173.
Gao, Yang. "Argumentation accelerated reinforcement learning". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/26603.
Alexander, John W. "Transfer in reinforcement learning". Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.
Books on the topic "Reinforcement Learning"
Sutton, Richard S. Reinforcement Learning. Boston, MA: Springer US, 1992.
Wiering, Marco, and Martijn van Otterlo, eds. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3.
Sutton, Richard S., ed. Reinforcement Learning. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5.
Lorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2.
Nandy, Abhishek, and Manisha Biswas. Reinforcement Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3285-9.
Sutton, Richard S., ed. Reinforcement Learning. Boston: Kluwer Academic Publishers, 1992.
Lorenz, Uwe. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68311-8.
Li, Jinna, Frank L. Lewis, and Jialu Fan. Reinforcement Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28394-9.
Xiao, Zhiqing. Reinforcement Learning. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-19-4933-3.
Merrick, Kathryn, and Mary Lou Maher. Motivated Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89187-1.
Book chapters on the topic "Reinforcement Learning"
Sutton, Richard S. "Introduction: The Challenge of Reinforcement Learning". In Reinforcement Learning, 1–3. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_1.
Williams, Ronald J. "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning". In Reinforcement Learning, 5–32. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_2.
Tesauro, Gerald. "Practical Issues in Temporal Difference Learning". In Reinforcement Learning, 33–53. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_3.
Watkins, Christopher J. C. H., and Peter Dayan. "Technical Note: Q-Learning". In Reinforcement Learning, 55–68. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_4.
Lin, Long-Ji. "Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching". In Reinforcement Learning, 69–97. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_5.
Singh, Satinder Pal. "Transfer of Learning by Composing Solutions of Elemental Sequential Tasks". In Reinforcement Learning, 99–115. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_6.
Dayan, Peter. "The Convergence of TD(λ) for General λ". In Reinforcement Learning, 117–38. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_7.
Millán, José R., and Carme Torras. "A Reinforcement Connectionist Approach to Robot Path Finding in Non-Maze-Like Environments". In Reinforcement Learning, 139–71. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5_8.
Lorenz, Uwe. "Bestärkendes Lernen als Teilgebiet des Maschinellen Lernens". In Reinforcement Learning, 1–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_1.
Lorenz, Uwe. "Grundbegriffe des Bestärkenden Lernens". In Reinforcement Learning, 13–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2_2.
Conference papers on the topic "Reinforcement Learning"
Yang, Kun, Chengshuai Shi, and Cong Shen. "Teaching Reinforcement Learning Agents via Reinforcement Learning". In 2023 57th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2023. http://dx.doi.org/10.1109/ciss56502.2023.10089695.
Doshi, Finale, Joelle Pineau, and Nicholas Roy. "Reinforcement learning with limited reinforcement". In Proceedings of the 25th International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390189.
Li, Zhiyi. "Reinforcement Learning". In SIGCSE '19: The 50th ACM Technical Symposium on Computer Science Education. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3287324.3293703.
Shen, Shitian, and Min Chi. "Reinforcement Learning". In UMAP '16: User Modeling, Adaptation and Personalization Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2930238.2930247.
Kuroe, Yasuaki, and Kenya Takeuchi. "Sophisticated Swarm Reinforcement Learning by Incorporating Inverse Reinforcement Learning". In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2023. http://dx.doi.org/10.1109/smc53992.2023.10394525.
Lyu, Le, Yang Shen, and Sicheng Zhang. "The Advance of Reinforcement Learning and Deep Reinforcement Learning". In 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA). IEEE, 2022. http://dx.doi.org/10.1109/eebda53927.2022.9744760.
Epshteyn, Arkady, Adam Vogel, and Gerald DeJong. "Active reinforcement learning". In Proceedings of the 25th International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390194.
Epshteyn, Arkady, and Gerald DeJong. "Qualitative reinforcement learning". In Proceedings of the 23rd International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1143844.1143883.
Vargas, Danilo Vasconcellos. "Evolutionary reinforcement learning". In GECCO '18: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3205651.3207865.
Langford, John. "Contextual reinforcement learning". In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8257902.
Reports by organizations on the topic "Reinforcement Learning"
Singh, Satinder, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada440280.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Multiagent Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada440418.
Harmon, Mance E., and Stephanie S. Harmon. Reinforcement Learning: A Tutorial. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada323194.
Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada574717.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Average Reward Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, June 2003. http://dx.doi.org/10.21236/ada445728.
Johnson, Daniel W. Drive-Reinforcement Learning System Applications. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada264514.
Cleland, Andrew. Bounding Box Improvement With Reinforcement Learning. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6322.
Li, Jiajie. Learning Financial Investment Strategies using Reinforcement Learning and 'Chan theory'. Ames, Iowa: Iowa State University, August 2022. http://dx.doi.org/10.31274/cc-20240624-946.
Baird, Leemon C., III, and A. H. Klopf. Reinforcement Learning With High-Dimensional, Continuous Actions. Fort Belvoir, VA: Defense Technical Information Center, November 1993. http://dx.doi.org/10.21236/ada280844.
Obert, James, and Angie Shia. Optimizing Dynamic Timing Analysis with Reinforcement Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1573933.