Contents
Selection of scientific literature on the topic "Federated averaging"
Cite a source in APA, MLA, Chicago, Harvard, and various other citation styles
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Federated averaging".
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are provided in the metadata.
Journal articles on the topic "Federated averaging"
Dhada, Maharshi, Amit Kumar Jain and Ajith Kumar Parlikad. "Empirical Convergence Analysis of Federated Averaging for Failure Prognosis". IFAC-PapersOnLine 53, no. 3 (2020): 360–65. http://dx.doi.org/10.1016/j.ifacol.2020.11.058.
Balakrishnan, Ravikumar, Mustafa Akdeniz, Sagar Dhakal, Arjun Anand, Ariela Zeira and Nageen Himayat. "Resource Management and Model Personalization for Federated Learning over Wireless Edge Networks". Journal of Sensor and Actuator Networks 10, no. 1 (23.02.2021): 17. http://dx.doi.org/10.3390/jsan10010017.
Xiao, Peng, Samuel Cheng, Vladimir Stankovic and Dejan Vukobratovic. "Averaging Is Probably Not the Optimum Way of Aggregating Parameters in Federated Learning". Entropy 22, no. 3 (11.03.2020): 314. http://dx.doi.org/10.3390/e22030314.
Wu, Xing, Zhaowang Liang and Jianjia Wang. "FedMed: A Federated Learning Framework for Language Modeling". Sensors 20, no. 14 (21.07.2020): 4048. http://dx.doi.org/10.3390/s20144048.
Shao, Rulin, Hongyu He, Ziwei Chen, Hui Liu and Dianbo Liu. "Stochastic Channel-Based Federated Learning With Neural Network Pruning for Medical Data Privacy Preservation: Model Development and Experimental Validation". JMIR Formative Research 4, no. 12 (22.12.2020): e17265. http://dx.doi.org/10.2196/17265.
Asad, Muhammad, Ahmed Moustafa and Takayuki Ito. "FedOpt: Towards Communication Efficiency and Privacy Preservation in Federated Learning". Applied Sciences 10, no. 8 (21.04.2020): 2864. http://dx.doi.org/10.3390/app10082864.
Munoz-Martin, Joan Francesc, David Llaveria, Christoph Herbert, Miriam Pablos, Hyuk Park and Adriano Camps. "Soil Moisture Estimation Synergy Using GNSS-R and L-Band Microwave Radiometry Data from FSSCat/FMPL-2". Remote Sensing 13, no. 5 (05.03.2021): 994. http://dx.doi.org/10.3390/rs13050994.
Casado, Fernando E., Dylan Lema, Marcos F. Criado, Roberto Iglesias, Carlos V. Regueiro and Senén Barro. "Concept drift detection and adaptation for federated and continual learning". Multimedia Tools and Applications, 17.07.2021. http://dx.doi.org/10.1007/s11042-021-11219-x.
Imteaj, Ahmed, and M. Hadi Amini. "FedPARL: Client Activity and Resource-Oriented Lightweight Federated Learning Model for Resource-Constrained Heterogeneous IoT Environment". Frontiers in Communications and Networks 2 (29.04.2021). http://dx.doi.org/10.3389/frcmn.2021.657653.
Dissertations on the topic "Federated averaging"
Backstad, Sebastian. "Federated Averaging Deep Q-Network: A Distributed Deep Reinforcement Learning Algorithm". Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149637.
Langelaar, Johannes, and Mattsson Adam Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems". Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.
Reddy, Sashlin. "A comparative analysis of dynamic averaging techniques in federated learning". Thesis, 2020. https://hdl.handle.net/10539/31059.
Der volle Inhalt der QuelleDue to the advancements in mobile technology and user privacy concerns, federated learning has emerged as a popular machine learning (ML) method to push training of statistical models to the edge. Federated learning involves training a shared model under the coordination of a centralized server from a federation of participating clients. In practice federated learning methods have to overcome large network delays and bandwidth limits. To overcome the communication bottlenecks, recent works propose methods to reduce the communication frequency that have negligible impact on model accuracy also defined as model performance. Naive methods reduce the number of communication rounds in order to reduce the communication frequency. However, it is possible to invest communication more efficiently through dynamic communication protocols. This is deemed as dynamic averaging. Few have addressed such protocols. More so, few works base this dynamic averaging protocol on the diversity of the data and the loss. In this work, we introduce dynamic averaging frameworks based on the diversity of the data as well as the loss encountered by each client. This overcomes the assumption that each client participates equally and addresses the properties of federated learning. Results show that the overall communication overhead is reduced with negligible decrease in accuracy
Book chapters on the topic "Federated averaging"
Remedios, Samuel W., John A. Butman, Bennett A. Landman and Dzung L. Pham. "Federated Gradient Averaging for Multi-Site Training with Momentum-Based Optimizers". In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, 170–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60548-3_17.
Zhao, Fengpan, Yan Huang, Saide Zhu, Venkata Malladi and Yubao Wu. "A Weighted Federated Averaging Framework to Reduce the Negative Influence from the Dishonest Users". In Security, Privacy, and Anonymity in Computation, Communication, and Storage, 241–50. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68851-6_17.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Federated averaging"
Wang, Zheng, Xiaoliang Fan, Jianzhong Qi, Chenglu Wen, Cheng Wang and Rongshan Yu. "Federated Learning with Fair Averaging". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/223.
Ro, Jae, Mingqing Chen, Rajiv Mathews, Mehryar Mohri and Ananda Theertha Suresh. "Communication-Efficient Agnostic Federated Averaging". In Interspeech 2021. ISCA: ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-153.
Li, Yiwei, Tsung-Hui Chang and Chong-Yung Chi. "Secure Federated Averaging Algorithm with Differential Privacy". In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020. http://dx.doi.org/10.1109/mlsp49062.2020.9231531.
Desai, Nirmit, and Dinesh Verma. "Properties of federated averaging on highly distributed data". In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, edited by Tien Pham. SPIE, 2019. http://dx.doi.org/10.1117/12.2518941.
Wang, Shuai, Richard Cornelius Suwandi and Tsung-Hui Chang. "Demystifying Model Averaging for Communication-Efficient Federated Matrix Factorization". In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413927.