Journal articles on the topic "Dynamic optimal learning rate"
Create a correct reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles on the topic "Dynamic optimal learning rate".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.
Chinrungrueng, C., and C. H. Sequin. "Optimal adaptive k-means algorithm with dynamic adjustment of learning rate". IEEE Transactions on Neural Networks 6, no. 1 (1995): 157–69. http://dx.doi.org/10.1109/72.363440.
Zhu, Yingqiu, Danyang Huang, Yuan Gao, Rui Wu, Yu Chen, Bo Zhang and Hansheng Wang. "Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation". Neural Networks 141 (September 2021): 11–29. http://dx.doi.org/10.1016/j.neunet.2021.03.025.
Leen, Todd K., Bernhard Schottky and David Saad. "Optimal asymptotic learning rate: Macroscopic versus microscopic dynamics". Physical Review E 59, no. 1 (1.01.1999): 985–91. http://dx.doi.org/10.1103/physreve.59.985.
Kalvit, Anand, and Assaf Zeevi. "Dynamic Learning in Large Matching Markets". ACM SIGMETRICS Performance Evaluation Review 50, no. 2 (30.08.2022): 18–20. http://dx.doi.org/10.1145/3561074.3561081.
Zheng, Jiangbo, Yanhong Gan, Ying Liang, Qingqing Jiang and Jiatai Chang. "Joint Strategy of Dynamic Ordering and Pricing for Competing Perishables with Q-Learning Algorithm". Wireless Communications and Mobile Computing 2021 (13.03.2021): 1–19. http://dx.doi.org/10.1155/2021/6643195.
Chen, Zhigang, Rongwei Xu and Yongxi Yi. "Dynamic Optimal Control of Transboundary Pollution Abatement under Learning-by-Doing Depreciation". Complexity 2020 (9.06.2020): 1–17. http://dx.doi.org/10.1155/2020/3763684.
De, Shipra, and Darryl A. Seale. "Dynamic Decision Making and Race Games". ISRN Operations Research 2013 (7.08.2013): 1–15. http://dx.doi.org/10.1155/2013/452162.
Yao, Yuhang, and Carlee Joe-Wong. "Interpretable Clustering on Dynamic Graphs with Recurrent Graph Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (18.05.2021): 4608–16. http://dx.doi.org/10.1609/aaai.v35i5.16590.
Liu, Haijun. "A Study of an IT-Assisted Higher Education Model Based on Distributed Hardware-Assisted Tracking Intervention". Occupational Therapy International 2022 (8.04.2022): 1–12. http://dx.doi.org/10.1155/2022/8862716.
Li, Ao, Zhaoman Wan and Zhong Wan. "Optimal Design of Online Sequential Buy-Price Auctions with Consumer Valuation Learning". Asia-Pacific Journal of Operational Research 37, no. 03 (29.04.2020): 2050012. http://dx.doi.org/10.1142/s0217595920500128.
Wang, Xing-Ju, Xiao-Ming Xi and Gui-Feng Gao. "Reinforcement Learning Ramp Metering without Complete Information". Journal of Control Science and Engineering 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/208456.
Shi, Yuanji, Zhiwei Yuan, Xiaorong Zhu and Hongbo Zhu. "An Adaptive Routing Algorithm for Inter-Satellite Networks Based on the Combination of Multipath Transmission and Q-Learning". Processes 11, no. 1 (5.01.2023): 167. http://dx.doi.org/10.3390/pr11010167.
Yuan, Minghai, Chenxi Zhang, Kaiwen Zhou and Fengque Pei. "Real-time Allocation of Shared Parking Spaces Based on Deep Reinforcement Learning". 網際網路技術學刊 24, no. 1 (January 2023): 035–43. http://dx.doi.org/10.53106/160792642023012401004.
Jepma, Marieke, Stephen B. R. E. Brown, Peter R. Murphy, Stephany C. Koelewijn, Boukje de Vries, Arn M. van den Maagdenberg and Sander Nieuwenhuis. "Noradrenergic and Cholinergic Modulation of Belief Updating". Journal of Cognitive Neuroscience 30, no. 12 (December 2018): 1803–20. http://dx.doi.org/10.1162/jocn_a_01317.
Xiang, Yao, Jingling Yuan, Ruiqi Luo, Xian Zhong and Tao Li. "An Energy Dynamic Control Algorithm Based on Reinforcement Learning for Data Centers". International Journal of Pattern Recognition and Artificial Intelligence 33, no. 13 (15.12.2019): 1951009. http://dx.doi.org/10.1142/s0218001419510091.
Chiu, Kai-Cheng, Chien-Chang Liu and Li-Der Chou. "Reinforcement Learning-Based Service-Oriented Dynamic Multipath Routing in SDN". Wireless Communications and Mobile Computing 2022 (31.01.2022): 1–16. http://dx.doi.org/10.1155/2022/1330993.
Ding, Fan, Yongyi Zhang, Rui Chen, Zhanwen Liu and Huachun Tan. "A Deep Learning Based Traffic State Estimation Method for Mixed Traffic Flow Environment". Journal of Advanced Transportation 2022 (7.04.2022): 1–12. http://dx.doi.org/10.1155/2022/2166345.
Chan, Felix T. S., Zhengxu Wang, Yashveer Singh, X. P. Wang, J. H. Ruan and M. K. Tiwari. "Activity scheduling and resource allocation with uncertainties and learning in activities". Industrial Management & Data Systems 119, no. 6 (8.07.2019): 1289–320. http://dx.doi.org/10.1108/imds-01-2019-0002.
Starling, Carlos, Jackson Machado-Pinto, Unaí Tupinambás, Estevão Urbano Silva and Bráulio R. G. M. Couto. "404. COVID-19 Normality Rate: Criteria for Optimal Time to Return to In-person Learning". Open Forum Infectious Diseases 8, Supplement_1 (1.11.2021): S303–S304. http://dx.doi.org/10.1093/ofid/ofab466.605.
Thanh, Pham Duy, Tran Nhut Khai Hoan, Hoang Thi Huong Giang and Insoo Koo. "Cache-Enabled Data Rate Maximization for Solar-Powered UAV Communication Systems". Electronics 9, no. 11 (20.11.2020): 1961. http://dx.doi.org/10.3390/electronics9111961.
Wei, Kefeng, Lincong Zhang, Xin Jiang and Yi Guo. "Q-Learning-Based High Credibility and Stability Routing Algorithm for Internet of Medical Things". Wireless Communications and Mobile Computing 2020 (26.12.2020): 1–10. http://dx.doi.org/10.1155/2020/8856271.
Cao, Huazhen, Chong Gao, Xuan He, Yang Li and Tao Yu. "Multi-Agent Cooperation Based Reduced-Dimension Q(λ) Learning for Optimal Carbon-Energy Combined-Flow". Energies 13, no. 18 (14.09.2020): 4778. http://dx.doi.org/10.3390/en13184778.
Rodriguez, Renato, Yan Wang, Joseph Ozanne, Dogan Sumer, Dimitar Filev and Damoon Soudbakhsh. "Adaptive Takeoff Maneuver Optimization of a Sailing Boat for America's Cup". Journal of Sailing Technology 7, no. 01 (17.10.2022): 88–103. http://dx.doi.org/10.5957/jst/2022.7.4.88.
DE FRANCO, CARMINE, JOHANN NICOLLE and HUYÊN PHAM. "BAYESIAN LEARNING FOR THE MARKOWITZ PORTFOLIO SELECTION PROBLEM". International Journal of Theoretical and Applied Finance 22, no. 07 (November 2019): 1950037. http://dx.doi.org/10.1142/s0219024919500377.
Wang, Yi, and Junhai Sun. "Design and Implementation of Virtual Reality Interactive Product Software Based on Artificial Intelligence Deep Learning Algorithm". Advances in Multimedia 2022 (26.04.2022): 1–7. http://dx.doi.org/10.1155/2022/9104743.
Shi, Junqing, Fengxiang Qiao, Qing Li, Lei Yu and Yongju Hu. "Application and Evaluation of the Reinforcement Learning Approach to Eco-Driving at Intersections under Infrastructure-to-Vehicle Communications". Transportation Research Record: Journal of the Transportation Research Board 2672, no. 25 (1.10.2018): 89–98. http://dx.doi.org/10.1177/0361198118796939.
Zhang, Xiyue, and Guiping Chen. "Machine Learning Model-Based English Project Learning and Functional Research". Wireless Communications and Mobile Computing 2022 (4.04.2022): 1–11. http://dx.doi.org/10.1155/2022/1940375.
Chen, Jinyu, Ziqi Zhong, Qindi Feng and Lei Liu. "The Multimodal Emotion Information Analysis of E-Commerce Online Pricing in Electronic Word of Mouth". Journal of Global Information Management 30, no. 11 (7.04.2022): 1–17. http://dx.doi.org/10.4018/jgim.315322.
Zhou, Tao, Zengchuan Dong, Xiuxiu Chen and Qihua Ran. "Decision Support Model for Ecological Operation of Reservoirs Based on Dynamic Bayesian Network". Water 13, no. 12 (14.06.2021): 1658. http://dx.doi.org/10.3390/w13121658.
Louta, M., P. Sarigiannidis, S. Misra, P. Nicopolitidis and G. Papadimitriou. "RLAM: A Dynamic and Efficient Reinforcement Learning-Based Adaptive Mapping Scheme in Mobile WiMAX Networks". Mobile Information Systems 10, no. 2 (2014): 173–96. http://dx.doi.org/10.1155/2014/213056.
Ou, Minghui, Hua Wei, Yiyi Zhang and Jiancheng Tan. "A Dynamic Adam Based Deep Neural Network for Fault Diagnosis of Oil-Immersed Power Transformers". Energies 12, no. 6 (14.03.2019): 995. http://dx.doi.org/10.3390/en12060995.
Wang, Ziwei, Xin Wang, Yijie Tang, Ying Liu and Jun Hu. "Optimal Tracking Control of a Nonlinear Multiagent System Using Q-Learning via Event-Triggered Reinforcement Learning". Entropy 25, no. 2 (5.02.2023): 299. http://dx.doi.org/10.3390/e25020299.
Wang, Huitao, Ruopeng Yang, Changsheng Yin, Xiaofei Zou and Xuefeng Wang. "Research on the Difficulty of Mobile Node Deployment's Self-Play in Wireless Ad Hoc Networks Based on Deep Reinforcement Learning". Wireless Communications and Mobile Computing 2021 (9.03.2021): 1–13. http://dx.doi.org/10.1155/2021/4361650.
Saleem, Muhammad, Yasir Saleem, H. M. Shahzad Asif and M. Saleem Mian. "Quality Enhanced Multimedia Content Delivery for Mobile Cloud with Deep Reinforcement Learning". Wireless Communications and Mobile Computing 2019 (18.07.2019): 1–15. http://dx.doi.org/10.1155/2019/5038758.
Wang, Qiulin, Baole Tao, Fulei Han and Wenting Wei. "Extraction and Recognition Method of Basketball Players' Dynamic Human Actions Based on Deep Learning". Mobile Information Systems 2021 (26.06.2021): 1–6. http://dx.doi.org/10.1155/2021/4437146.
Maldonado, Bryan P., Nan Li, Ilya Kolmanovsky and Anna G. Stefanopoulou. "Learning reference governor for cycle-to-cycle combustion control with misfire avoidance in spark-ignition engines at high exhaust gas recirculation–diluted conditions". International Journal of Engine Research 21, no. 10 (26.06.2020): 1819–34. http://dx.doi.org/10.1177/1468087420929109.
Marsetič, Rok, Darja Šemrov and Marijan Žura. "Road Artery Traffic Light Optimization with Use of the Reinforcement Learning". PROMET - Traffic&Transportation 26, no. 2 (26.04.2014): 101–8. http://dx.doi.org/10.7307/ptt.v26i2.1318.
Mayer, Polina N., Victor V. Pogorelko, Dmitry S. Voronin and Alexander E. Mayer. "Spall Fracture of Solid and Molten Copper: Molecular Dynamics, Mechanical Model and Strain Rate Dependence". Metals 12, no. 11 (3.11.2022): 1878. http://dx.doi.org/10.3390/met12111878.
Yazid, Yassine, Antonio Guerrero-González, Imad Ez-Zazi, Ahmed El Oualkadi and Mounir Arioua. "A Reinforcement Learning Based Transmission Parameter Selection and Energy Management for Long Range Internet of Things". Sensors 22, no. 15 (28.07.2022): 5662. http://dx.doi.org/10.3390/s22155662.
Chang, Chung-Ho, and Jen-Ming Chen. "Capacity Policy for an OEM under Production Ramp-Up and Demand Diffusion". Mathematical Problems in Engineering 2022 (26.05.2022): 1–22. http://dx.doi.org/10.1155/2022/9510184.
Li, Shu, Jiong Yu, Xusheng Du, Yi Lu and Rui Qiu. "Fair Outlier Detection Based on Adversarial Representation Learning". Symmetry 14, no. 2 (9.02.2022): 347. http://dx.doi.org/10.3390/sym14020347.
Zhang, Zhen, and Dongqing Wang. "EAQR: A Multiagent Q-Learning Algorithm for Coordination of Multiple Agents". Complexity 2018 (28.08.2018): 1–14. http://dx.doi.org/10.1155/2018/7172614.
Kim, Sang-Ho, Deog-Yeong Park and Ki-Hoon Lee. "Hybrid Deep Reinforcement Learning for Pairs Trading". Applied Sciences 12, no. 3 (18.01.2022): 944. http://dx.doi.org/10.3390/app12030944.
Hoppe, David, and Constantin A. Rothkopf. "Learning rational temporal eye movement strategies". Proceedings of the National Academy of Sciences 113, no. 29 (5.07.2016): 8332–37. http://dx.doi.org/10.1073/pnas.1601305113.
Abdalla, Hemn Barzan, Awder M. Ahmed, Subhi R. M. Zeebaree, Ahmed Alkhayyat and Baha Ihnaini. "Rider weed deep residual network-based incremental model for text classification using multidimensional features and MapReduce". PeerJ Computer Science 8 (31.03.2022): e937. http://dx.doi.org/10.7717/peerj-cs.937.
Khanh, Tran Trong, Tran Hoang Hai, Md Delowar Hossain and Eui-Nam Huh. "Fuzzy-Assisted Mobile Edge Orchestrator and SARSA Learning for Flexible Offloading in Heterogeneous IoT Environment". Sensors 22, no. 13 (23.06.2022): 4727. http://dx.doi.org/10.3390/s22134727.
Jegminat, Jannes, Simone Carlo Surace and Jean-Pascal Pfister. "Learning as filtering: Implications for spike-based plasticity". PLOS Computational Biology 18, no. 2 (23.02.2022): e1009721. http://dx.doi.org/10.1371/journal.pcbi.1009721.
Hrizi, Olfa, Karim Gasmi, Ibtihel Ben Ltaifa, Hamoud Alshammari, Hanen Karamti, Moez Krichen, Lassaad Ben Ammar and Mahmood A. Mahmood. "Tuberculosis Disease Diagnosis Based on an Optimized Machine Learning Model". Journal of Healthcare Engineering 2022 (21.03.2022): 1–13. http://dx.doi.org/10.1155/2022/8950243.
Zhang, Huanan, and Stefanus Jasin. "Online Learning and Optimization of (Some) Cyclic Pricing Policies in the Presence of Patient Customers". Manufacturing & Service Operations Management 24, no. 2 (March 2022): 1165–82. http://dx.doi.org/10.1287/msom.2021.0979.
Zheng, Shaoxiong, Peng Gao, Weixing Wang and Xiangjun Zou. "A Highly Accurate Forest Fire Prediction Model Based on an Improved Dynamic Convolutional Neural Network". Applied Sciences 12, no. 13 (2.07.2022): 6721. http://dx.doi.org/10.3390/app12136721.