Table of contents
Selected scholarly literature on the topic "Distributed optimization and learning"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Distributed optimization and learning".
Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract, when these are available in the metadata.
Journal articles on the topic "Distributed optimization and learning"
Kamalesh, Kamalesh, and Dr Gobi Natesan. "Machine Learning-Driven Analysis of Distributed Computing Systems: Exploring Optimization and Efficiency". International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 3979–83. http://dx.doi.org/10.55248/gengpi.5.0324.0786.
Mertikopoulos, Panayotis, E. Veronica Belmega, Romain Negrel, and Luca Sanguinetti. "Distributed Stochastic Optimization via Matrix Exponential Learning". IEEE Transactions on Signal Processing 65, no. 9 (May 1, 2017): 2277–90. http://dx.doi.org/10.1109/tsp.2017.2656847.
Gratton, Cristiano, Naveen K. D. Venkategowda, Reza Arablouei, and Stefan Werner. "Privacy-Preserved Distributed Learning With Zeroth-Order Optimization". IEEE Transactions on Information Forensics and Security 17 (2022): 265–79. http://dx.doi.org/10.1109/tifs.2021.3139267.
Blot, Michael, David Picard, Nicolas Thome, and Matthieu Cord. "Distributed optimization for deep learning with gossip exchange". Neurocomputing 330 (February 2019): 287–96. http://dx.doi.org/10.1016/j.neucom.2018.11.002.
Young, M. Todd, Jacob D. Hinkle, Ramakrishnan Kannan, and Arvind Ramanathan. "Distributed Bayesian optimization of deep reinforcement learning algorithms". Journal of Parallel and Distributed Computing 139 (May 2020): 43–52. http://dx.doi.org/10.1016/j.jpdc.2019.07.008.
Nedic, Angelia. "Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization". IEEE Signal Processing Magazine 37, no. 3 (May 2020): 92–101. http://dx.doi.org/10.1109/msp.2020.2975210.
Lin, I.-Cheng. "Learning and Optimization over Robust Networked Systems". ACM SIGMETRICS Performance Evaluation Review 52, no. 3 (January 9, 2025): 23–26. https://doi.org/10.1145/3712170.3712179.
Gao, Hongchang. "Distributed Stochastic Nested Optimization for Emerging Machine Learning Models: Algorithm and Theory". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15437. http://dx.doi.org/10.1609/aaai.v37i13.26804.
Choi, Dojin, Jiwon Wee, Sangho Song, Hyeonbyeong Lee, Jongtae Lim, Kyoungsoo Bok, and Jaesoo Yoo. "k-NN Query Optimization for High-Dimensional Index Using Machine Learning". Electronics 12, no. 11 (May 24, 2023): 2375. http://dx.doi.org/10.3390/electronics12112375.
Yang, Peng, and Ping Li. "Distributed Primal-Dual Optimization for Online Multi-Task Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6631–38. http://dx.doi.org/10.1609/aaai.v34i04.6139.
Dissertations and theses on the topic "Distributed optimization and learning"
Funkquist, Mikaela, und Minghua Lu. „Distributed Optimization Through Deep Reinforcement Learning“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293878.
Reinforcement learning methods allow self-learning agents to play video and board games autonomously. This project studies the effectiveness of the reinforcement learning methods Q-learning and deep Q-learning on dynamic problems. The goal is to train robots to move through a warehouse as efficiently as possible without colliding. A virtual environment was created in which the algorithms were tested by simulating moving agents, and their effectiveness was evaluated by how quickly the agents learned to perform predetermined tasks. The results show that Q-learning works well for simple problems with few agents: systems with two active agents were solved quickly. Deep Q-learning performs better on more complex systems with more agents, although cases of suboptimal movement occurred. Both algorithms showed good potential within their respective domains, but improvements are needed before they can be deployed in practice.
Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
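The tabular Q-learning approach evaluated in the thesis above can be illustrated with a minimal, single-agent sketch: an agent on a small grid (a stand-in for the warehouse floor) learns the shortest route to a goal cell. The grid size, reward values, and hyperparameters here are illustrative assumptions, not taken from the thesis.

```python
import random

def train_q_learning(grid_size=4, episodes=2000, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy grid: the agent starts at (0, 0) and
    must reach the goal at the opposite corner in as few steps as possible."""
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    goal = (grid_size - 1, grid_size - 1)
    q = {}  # maps (state, action_index) -> estimated action value

    def step(state, a):
        # Move, clipping at the walls; small penalty per step, +1 at the goal.
        r, c = state
        dr, dc = actions[a]
        nxt = (min(max(r + dr, 0), grid_size - 1),
               min(max(c + dc, 0), grid_size - 1))
        reward = 1.0 if nxt == goal else -0.01
        return nxt, reward, nxt == goal

    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(len(actions))
            else:
                a = max(range(len(actions)), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, a)
            # Standard Q-learning update toward the bootstrapped target.
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(actions)))
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_path_length(q, grid_size=4, max_steps=50):
    """Follow the learned greedy policy from the start; return steps to the goal."""
    state, goal = (0, 0), (grid_size - 1, grid_size - 1)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for t in range(max_steps):
        if state == goal:
            return t
        a = max(range(len(actions)), key=lambda i: q.get((state, i), 0.0))
        r, c = state
        dr, dc = actions[a]
        state = (min(max(r + dr, 0), grid_size - 1),
                 min(max(c + dc, 0), grid_size - 1))
    return max_steps  # policy failed to reach the goal
```

The thesis's multi-agent setting adds collision avoidance and a shared environment, which this single-agent sketch omits; its deep Q-learning variant would replace the dictionary `q` with a neural network approximating the same action values.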
Konečný, Jakub. „Stochastic, distributed and federated optimization for machine learning“. Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.
Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.
Patvarczki, Jozsef. "Layout Optimization for Distributed Relational Databases Using Machine Learning". Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/291.
Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.
El, Gamal Mostafa. "Distributed Statistical Learning under Communication Constraints". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/314.
Dai, Wei. "Learning with Staleness". Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1209.
Lu, Yumao. "Kernel optimization and distributed learning algorithms for support vector machines". Diss., Restricted to subscribing institutions, 2005. http://uclibs.org/PID/11984.
Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning". Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.
Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.
Der volle Inhalt der QuelleBücher zum Thema "Distributed optimization and learning"
Jiang, Jiawei, Bin Cui und Ce Zhang. Distributed Machine Learning and Gradient Optimization. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-3420-8.
Der volle Inhalt der QuelleWang, Huiwei, Huaqing Li und Bo Zhou. Distributed Optimization, Game and Learning Algorithms. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7.
Der volle Inhalt der QuelleJoshi, Gauri. Optimization Algorithms for Distributed Machine Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-19067-4.
Der volle Inhalt der QuelleTatarenko, Tatiana. Game-Theoretic Learning and Distributed Optimization in Memoryless Multi-Agent Systems. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65479-9.
Der volle Inhalt der QuelleOblinger, Diana G. Distributed learning. Boulder, Colo: CAUSE, 1996.
Den vollen Inhalt der Quelle findenMajhi, Sudhan, Rocío Pérez de Prado und Chandrappa Dasanapura Nanjundaiah, Hrsg. Distributed Computing and Optimization Techniques. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2281-7.
Der volle Inhalt der QuelleGiselsson, Pontus, und Anders Rantzer, Hrsg. Large-Scale and Distributed Optimization. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97478-1.
Der volle Inhalt der QuelleLü, Qingguo, Xiaofeng Liao, Huaqing Li, Shaojiang Deng und Shanfu Gao. Distributed Optimization in Networked Systems. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8559-1.
Der volle Inhalt der QuelleAbdulrahman Younis Ali Younis Kalbat. Distributed and Large-Scale Optimization. [New York, N.Y.?]: [publisher not identified], 2016.
Den vollen Inhalt der Quelle findenOtto, Daniel, Gianna Scharnberg, Michael Kerres und Olaf Zawacki-Richter, Hrsg. Distributed Learning Ecosystems. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-38703-7.
Der volle Inhalt der QuelleBuchteile zum Thema "Distributed optimization and learning"
Joshi, Gauri, and Shiqiang Wang. "Communication-Efficient Distributed Optimization Algorithms". In Federated Learning, 125–43. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0_6.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Distributed Gradient Optimization Algorithms". In Distributed Machine Learning and Gradient Optimization, 57–114. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_3.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Distributed Machine Learning Systems". In Distributed Machine Learning and Gradient Optimization, 115–66. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_4.
Joshi, Gauri. "Distributed Optimization in Machine Learning". In Synthesis Lectures on Learning, Networks, and Algorithms, 1–12. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-19067-4_1.
Lin, Zhouchen, Huan Li, and Cong Fang. "ADMM for Distributed Optimization". In Alternating Direction Method of Multipliers for Machine Learning, 207–40. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9840-8_6.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Basics of Distributed Machine Learning". In Distributed Machine Learning and Gradient Optimization, 15–55. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_2.
Scheidegger, Carre, Arpit Shah, and Dan Simon. "Distributed Learning with Biogeography-Based Optimization". In Lecture Notes in Computer Science, 203–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21827-9_21.
González-Mendoza, Miguel, Neil Hernández-Gress, and André Titli. "Quadratic Optimization Fine Tuning for the Learning Phase of SVM". In Advanced Distributed Systems, 347–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11533962_31.
Wang, Huiwei, Huaqing Li, and Bo Zhou. "Cooperative Distributed Optimization in Multiagent Networks with Delays". In Distributed Optimization, Game and Learning Algorithms, 1–17. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7_1.
Wang, Huiwei, Huaqing Li, and Bo Zhou. "Constrained Consensus of Multi-agent Systems with Time-Varying Topology". In Distributed Optimization, Game and Learning Algorithms, 19–37. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7_2.
Conference papers on the topic "Distributed optimization and learning"
Patil, Aditya, Sanket Lodha, Sonal Deshmukh, Rupali S. Joshi, Vaishali Patil, and Sudhir Chitnis. "Battery Optimization Using Machine Learning". In 2024 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS), 1–5. IEEE, 2024. https://doi.org/10.1109/icbds61829.2024.10837428.
Khan, Malak Abid Ali, Luo Senlin, Hongbin Ma, Abdul Khalique Shaikh, Ahlam Almusharraf, and Imran Khan Mirani. "Optimization of LoRa for Distributed Environments Based on Machine Learning". In 2024 IEEE Asia Pacific Conference on Wireless and Mobile (APWiMob), 137–42. IEEE, 2024. https://doi.org/10.1109/apwimob64015.2024.10792952.
Chao, Liangchen, Bo Zhang, Hengpeng Guo, Fangheng Ji, and Junfeng Li. "UAV Swarm Collaborative Transmission Optimization for Machine Learning Tasks". In 2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS), 504–11. IEEE, 2024. http://dx.doi.org/10.1109/icpads63350.2024.00072.
Shamir, Ohad, and Nathan Srebro. "Distributed stochastic optimization and learning". In 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2014. http://dx.doi.org/10.1109/allerton.2014.7028543.
Hulse, Daniel, Brandon Gigous, Kagan Tumer, Christopher Hoyle, and Irem Y. Tumer. "Towards a Distributed Multiagent Learning-Based Design Optimization Method". In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-68042.
Li, Naihao, Jiaqi Wang, Xu Liu, Lanfeng Wang, and Long Zhang. "Contrastive Learning-based Meta-Learning Sequential Recommendation". In 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT). IEEE, 2024. http://dx.doi.org/10.1109/icdcot61034.2024.10515699.
Vaidya, Nitin H. "Security and Privacy for Distributed Optimization & Distributed Machine Learning". In PODC '21: ACM Symposium on Principles of Distributed Computing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3465084.3467485.
Liao, Leonardo, and Yongqiang Wu. "Distributed Polytope ARTMAP: A Vigilance-Free ART Network for Distributed Supervised Learning". In 2009 International Joint Conference on Computational Sciences and Optimization, CSO. IEEE, 2009. http://dx.doi.org/10.1109/cso.2009.63.
Wang, Shoujin, Fan Wang, and Yu Zhang. "Learning Rate Decay Algorithm Based on Mutual Information in Deep Learning". In 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT). IEEE, 2024. http://dx.doi.org/10.1109/icdcot61034.2024.10515368.
Anand, Aditya, Lakshay Rastogi, Ansh Agarwaal, and Shashank Bhardwaj. "Refraction-Learning Based Whale Optimization Algorithm with Opposition-Learning and Adaptive Parameter Optimization". In 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2024. http://dx.doi.org/10.1109/icdcece60827.2024.10548420.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Distributed optimization and learning"
Stuckey, Peter, und Toby Walsh. Learning within Optimization. Fort Belvoir, VA: Defense Technical Information Center, April 2013. http://dx.doi.org/10.21236/ada575367.
Der volle Inhalt der QuelleNygard, Kendall E. Distributed Optimization in Aircraft Mission Scheduling. Fort Belvoir, VA: Defense Technical Information Center, Mai 1995. http://dx.doi.org/10.21236/ada300064.
Der volle Inhalt der QuelleMeyer, Robert R. Large-Scale Optimization Via Distributed Systems. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada215136.
Der volle Inhalt der QuelleShead, Timothy, Jonathan Berry, Cynthia Phillips und Jared Saia. Information-Theoretically Secure Distributed Machine Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1763277.
Der volle Inhalt der QuelleGraesser, Arthur C., und Robert A. Wisher. Question Generation as a Learning Multiplier in Distributed Learning Environments. Fort Belvoir, VA: Defense Technical Information Center, Oktober 2001. http://dx.doi.org/10.21236/ada399456.
Der volle Inhalt der QuelleVoon, B. K., und M. A. Austin. Structural Optimization in a Distributed Computing Environment. Fort Belvoir, VA: Defense Technical Information Center, Januar 1991. http://dx.doi.org/10.21236/ada454846.
Der volle Inhalt der QuelleHays, Robert T. Theoretical Foundation for Advanced Distributed Learning Research. Fort Belvoir, VA: Defense Technical Information Center, Mai 2001. http://dx.doi.org/10.21236/ada385457.
Der volle Inhalt der QuelleChen, J. S. J. Distributed-query optimization in fragmented data-base systems. Office of Scientific and Technical Information (OSTI), August 1987. http://dx.doi.org/10.2172/7183881.
Der volle Inhalt der QuelleNocedal, Jorge. Nonlinear Optimization Methods for Large-Scale Learning. Office of Scientific and Technical Information (OSTI), Oktober 2019. http://dx.doi.org/10.2172/1571768.
Der volle Inhalt der QuelleLumsdaine, Andrew. Scalable Second Order Optimization for Machine Learning. Office of Scientific and Technical Information (OSTI), Mai 2022. http://dx.doi.org/10.2172/1984057.