Selected scholarly literature on the topic "Reinforcement learning (Machine learning)"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Reinforcement learning (Machine learning)".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when it is available in the metadata.
Journal articles on the topic "Reinforcement learning (Machine learning)"
Ishii, Shin, and Wako Yoshida. "Part 4: Reinforcement learning: Machine learning and natural learning". New Generation Computing 24, no. 3 (September 2006): 325–50. http://dx.doi.org/10.1007/bf03037338.
Wang, Zizhuang. "Temporal-Related Convolutional-Restricted-Boltzmann-Machine Capable of Learning Relational Order via Reinforcement Learning Procedure". International Journal of Machine Learning and Computing 7, no. 1 (February 2017): 1–8. http://dx.doi.org/10.18178/ijmlc.2017.7.1.610.
Butlin, Patrick. "Machine Learning, Functions and Goals". Croatian Journal of Philosophy 22, no. 66 (27 December 2022): 351–70. http://dx.doi.org/10.52685/cjp.22.66.5.
Martín-Guerrero, José D., and Lucas Lamata. "Reinforcement Learning and Physics". Applied Sciences 11, no. 18 (16 September 2021): 8589. http://dx.doi.org/10.3390/app11188589.
Liu, Yicen, Yu Lu, Xi Li, Wenxin Qiao, Zhiwei Li, and Donghao Zhao. "SFC Embedding Meets Machine Learning: Deep Reinforcement Learning Approaches". IEEE Communications Letters 25, no. 6 (June 2021): 1926–30. http://dx.doi.org/10.1109/lcomm.2021.3061991.
Popkov, Yuri S., Yuri A. Dubnov, and Alexey Yu Popkov. "Reinforcement Procedure for Randomized Machine Learning". Mathematics 11, no. 17 (23 August 2023): 3651. http://dx.doi.org/10.3390/math11173651.
Crawford, Daniel, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi, and Pooya Ronagh. "Reinforcement learning using quantum Boltzmann machines". Quantum Information and Computation 18, no. 1&2 (February 2018): 51–74. http://dx.doi.org/10.26421/qic18.1-2-3.
Lamata, Lucas. "Quantum Reinforcement Learning with Quantum Photonics". Photonics 8, no. 2 (28 January 2021): 33. http://dx.doi.org/10.3390/photonics8020033.
Sahu, Santosh Kumar, Anil Mokhade, and Neeraj Dhanraj Bokde. "An Overview of Machine Learning, Deep Learning, and Reinforcement Learning-Based Techniques in Quantitative Finance: Recent Progress and Challenges". Applied Sciences 13, no. 3 (2 February 2023): 1956. http://dx.doi.org/10.3390/app13031956.
Fang, Qiang, Wenzhuo Zhang, and Xitong Wang. "Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine". Electronics 10, no. 16 (18 August 2021): 1997. http://dx.doi.org/10.3390/electronics10161997.
Testo completoTesi sul tema "Reinforcement learning (Machine learning)"
Hengst, Bernhard (Computer Science & Engineering, Faculty of Engineering, UNSW). "Discovering hierarchy in reinforcement learning". Awarded by: University of New South Wales, Computer Science and Engineering, 2003. http://handle.unsw.edu.au/1959.4/20497.
Tabell Johnsson, Marco, and Ala Jafar. "Efficiency Comparison Between Curriculum Reinforcement Learning & Reinforcement Learning Using ML-Agents". Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20218.
Akrour, Riad. "Robust Preference Learning-based Reinforcement Learning". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112236/document.
Testo completoThe thesis contributions resolves around sequential decision taking and more precisely Reinforcement Learning (RL). Taking its root in Machine Learning in the same way as supervised and unsupervised learning, RL quickly grow in popularity within the last two decades due to a handful of achievements on both the theoretical and applicative front. RL supposes that the learning agent and its environment follow a stochastic Markovian decision process over a state and action space. The process is said of decision as the agent is asked to choose at each time step an action to take. It is said stochastic as the effect of selecting a given action in a given state does not systematically yield the same state but rather defines a distribution over the state space. It is said to be Markovian as this distribution only depends on the current state-action pair. Consequently to the choice of an action, the agent receives a reward. The RL goal is then to solve the underlying optimization problem of finding the behaviour that maximizes the sum of rewards all along the interaction of the agent with its environment. From an applicative point of view, a large spectrum of problems can be cast onto an RL one, from Backgammon (TD-Gammon, was one of Machine Learning first success giving rise to a world class player of advanced level) to decision problems in the industrial and medical world. However, the optimization problem solved by RL depends on the prevous definition of a reward function that requires a certain level of domain expertise and also knowledge of the internal quirks of RL algorithms. As such, the first contribution of the thesis was to propose a learning framework that lightens the requirements made to the user. The latter does not need anymore to know the exact solution of the problem but to only be able to choose between two behaviours exhibited by the agent, the one that matches more closely the solution. Learning is interactive between the agent and the user and resolves around the three main following points: i) The agent demonstrates a behaviour ii) The user compares it w.r.t. to the current best one iii) The agent uses this feedback to update its preference model of the user and uses it to find the next behaviour to demonstrate. To reduce the number of required interactions before finding the optimal behaviour, the second contribution of the thesis was to define a theoretically sound criterion making the trade-off between the sometimes contradicting desires of complying with the user's preferences and demonstrating sufficiently different behaviours. The last contribution was to ensure the robustness of the algorithm w.r.t. the feedback errors that the user might make. Which happens more often than not in practice, especially at the initial phase of the interaction, when all the behaviours are far from the expected solution
Lee, Siu-keung, and 李少強. "Reinforcement learning for intelligent assembly automation". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31244397.
Tebbifakhr, Amirhossein. "Machine Translation For Machines". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/320504.
Yang, Zhaoyuan. "Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.
Scholz, Jonathan. "Physics-based reinforcement learning for autonomous manipulation". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54366.
Cleland, Andrew Lewis. "Bounding Box Improvement with Reinforcement Learning". PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4438.
Piano, Francesco. "Deep Reinforcement Learning con PyTorch". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25340/.
Suggs, Sterling. "Reinforcement Learning with Auxiliary Memory". BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/9028.
Testo completoLibri sul tema "Reinforcement learning (Machine learning)"
Sutton, Richard S., ed. Reinforcement learning. Boston: Kluwer Academic Publishers, 1992.
Sutton, Richard S. Reinforcement Learning. Boston, MA: Springer US, 1992.
Kaelbling, Leslie Pack, ed. Recent advances in reinforcement learning. Boston: Kluwer Academic, 1996.
Szepesvári, Csaba. Algorithms for reinforcement learning. San Rafael, CA: Morgan & Claypool, 2010.
Kaelbling, Leslie Pack. Recent advances in reinforcement learning. Boston: Kluwer Academic, 1996.
Sutton, Richard S. Reinforcement learning: An introduction. Cambridge, Mass.: MIT Press, 1998.
Kulkarni, Parag. Reinforcement and systemic machine learning for decision making. Hoboken, NJ: John Wiley & Sons, 2012.
Kulkarni, Parag. Reinforcement and Systemic Machine Learning for Decision Making. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118266502.
Whiteson, Shimon. Adaptive representations for reinforcement learning. Berlin: Springer Verlag, 2010.
IWLCS 2006 (2006, Seattle, Wash.). Learning classifier systems: 10th international workshop, IWLCS 2006, Seattle, MA, USA, July 8, 2006, and 11th international workshop, IWLCS 2007, London, UK, July 8, 2007: revised selected papers. Berlin: Springer, 2008.
Cerca il testo completoCapitoli di libri sul tema "Reinforcement learning (Machine learning)"
Kalita, Jugal. "Reinforcement Learning". In Machine Learning, 193–230. Boca Raton: Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003002611-5.
Zhou, Zhi-Hua. "Reinforcement Learning". In Machine Learning, 399–430. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-1967-3_16.
Geetha, T. V., and S. Sendhilkumar. "Reinforcement Learning". In Machine Learning, 271–94. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003290100-11.
Jo, Taeho. "Reinforcement Learning". In Machine Learning Foundations, 359–84. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65900-4_16.
Buhmann, M. D., Prem Melville, Vikas Sindhwani, Novi Quadrianto, Wray L. Buntine, Luís Torgo, Xinhua Zhang et al. "Reinforcement Learning". In Encyclopedia of Machine Learning, 849–51. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_714.
Kubat, Miroslav. "Reinforcement Learning". In An Introduction to Machine Learning, 277–86. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20010-1_14.
Kubat, Miroslav. "Reinforcement Learning". In An Introduction to Machine Learning, 331–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63913-0_17.
Labaca Castro, Raphael. "Reinforcement Learning". In Machine Learning under Malware Attack, 51–60. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-40442-0_6.
Coqueret, Guillaume, and Tony Guida. "Reinforcement learning". In Machine Learning for Factor Investing, 257–72. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003121596-20.
Norris, Donald J. "Reinforcement learning". In Machine Learning with the Raspberry Pi, 501–53. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5174-4_9.
Testo completoAtti di convegni sul tema "Reinforcement learning (Machine learning)"
"PREDICTION FOR CONTROL DELAY ON REINFORCEMENT LEARNING". In Special Session on Machine Learning. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003883405790586.
Fu, Cailing, Jochen Stollenwerk, and Carlo Holly. "Reinforcement learning for guiding optimization processes in optical design". In Applications of Machine Learning 2022, edited by Michael E. Zelinski, Tarek M. Taha, and Jonathan Howe. SPIE, 2022. http://dx.doi.org/10.1117/12.2632425.
Tittaferrante, Andrew, and Abdulsalam Yassine. "Benchmarking Offline Reinforcement Learning". In 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2022. http://dx.doi.org/10.1109/icmla55696.2022.00044.
Bernstein, Alexander V., and E. V. Burnaev. "Reinforcement learning in computer vision". In Tenth International Conference on Machine Vision (ICMV 2017), edited by Jianhong Zhou, Petia Radeva, Dmitry Nikolaev, and Antanas Verikas. SPIE, 2018. http://dx.doi.org/10.1117/12.2309945.
Natarajan, Sriraam, Gautam Kunapuli, Kshitij Judah, Prasad Tadepalli, Kristian Kersting, and Jude Shavlik. "Multi-Agent Inverse Reinforcement Learning". In 2010 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2010. http://dx.doi.org/10.1109/icmla.2010.65.
Xue, Jianyong, and Frédéric Alexandre. "Developmental Modular Reinforcement Learning". In ESANN 2022 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2022. http://dx.doi.org/10.14428/esann/2022.es2022-19.
Urmanov, Marat, Madina Alimanova, and Askar Nurkey. "Training Unity Machine Learning Agents using reinforcement learning method". In 2019 15th International Conference on Electronics, Computer and Computation (ICECCO). IEEE, 2019. http://dx.doi.org/10.1109/icecco48375.2019.9043194.
Jin, Zhuo-Jun, Hui Qian, and Miao-Liang Zhu. "Gaussian processes in inverse reinforcement learning". In 2010 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2010. http://dx.doi.org/10.1109/icmlc.2010.5581063.
Arques Corrales, Pilar, and Fidel Aznar Gregori. "Swarm AGV Optimization Using Deep Reinforcement Learning". In MLMI '20: 2020 The 3rd International Conference on Machine Learning and Machine Intelligence. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3426826.3426839.
Leopold, T., G. Kern-Isberner, and G. Peters. "Combining Reinforcement Learning and Belief Revision - A Learning System for Active Vision". In British Machine Vision Conference 2008. British Machine Vision Association, 2008. http://dx.doi.org/10.5244/c.22.48.
Testo completoRapporti di organizzazioni sul tema "Reinforcement learning (Machine learning)"
Singh, Satinder, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2005. http://dx.doi.org/10.21236/ada440280.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Multiagent Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada440418.
Harmon, Mance E., and Stephanie S. Harmon. Reinforcement Learning: A Tutorial. Fort Belvoir, VA: Defense Technical Information Center, January 1997. http://dx.doi.org/10.21236/ada323194.
Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, August 2012. http://dx.doi.org/10.21236/ada574717.
Vesselinov, Velimir Valentinov. Machine Learning. Office of Scientific and Technical Information (OSTI), January 2019. http://dx.doi.org/10.2172/1492563.
Valiant, L. G. Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada283386.
Chase, Melissa P. Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, April 1990. http://dx.doi.org/10.21236/ada223732.
Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Average Reward Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, June 2003. http://dx.doi.org/10.21236/ada445728.
Johnson, Daniel W. Drive-Reinforcement Learning System Applications. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada264514.
Kagie, Matthew J., and Park Hays. FORTE Machine Learning. Office of Scientific and Technical Information (OSTI), August 2016. http://dx.doi.org/10.2172/1561828.