A selection of scientific literature on the topic "Federated averaging"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Federated averaging."

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a scholarly publication in PDF format and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Federated averaging"

1. Dhada, Maharshi, Amit Kumar Jain, and Ajith Kumar Parlikad. "Empirical Convergence Analysis of Federated Averaging for Failure Prognosis." IFAC-PapersOnLine 53, no. 3 (2020): 360–65. http://dx.doi.org/10.1016/j.ifacol.2020.11.058.

2. Balakrishnan, Ravikumar, Mustafa Akdeniz, Sagar Dhakal, Arjun Anand, Ariela Zeira, and Nageen Himayat. "Resource Management and Model Personalization for Federated Learning over Wireless Edge Networks." Journal of Sensor and Actuator Networks 10, no. 1 (February 23, 2021): 17. http://dx.doi.org/10.3390/jsan10010017.

Abstract:
Client and Internet of Things devices are increasingly equipped with the ability to sense, process, and communicate data with high efficiency. This is resulting in a major shift in machine learning (ML) computation at the network edge. Distributed learning approaches such as federated learning that move ML training to end devices have emerged, promising lower latency and bandwidth costs and enhanced privacy of end users’ data. However, new challenges that arise from the heterogeneous nature of the devices’ communication rates, compute capabilities, and the limited observability of the training data at each device must be addressed. All these factors can significantly affect the training performance in terms of overall accuracy, model fairness, and convergence time. We present compute-communication and data importance-aware resource management schemes optimizing these metrics and evaluate the training performance on benchmark datasets. We also develop a federated meta-learning solution, based on task similarity, that serves as a sample efficient initialization for federated learning, as well as improves model personalization and generalization across non-IID (independent, identically distributed) data. We present experimental results on benchmark federated learning datasets to highlight the performance gains of the proposed methods in comparison to the well-known federated averaging algorithm and its variants.
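Several entries in this list, including this one, benchmark against the well-known federated averaging (FedAvg) baseline. For orientation, here is a minimal sketch of the FedAvg server step — a sample-count-weighted average of client parameters; the function and variable names are illustrative, not taken from any of the cited papers.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg server step).

    client_weights: list (one entry per client) of lists of np.ndarray.
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Example: two clients, one weight matrix each.
w_a = [np.ones((2, 2))]
w_b = [np.zeros((2, 2))]
global_w = fedavg_aggregate([w_a, w_b], client_sizes=[300, 100])
# -> every entry is 0.75: the client with more data dominates the average
```
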
3. Xiao, Peng, Samuel Cheng, Vladimir Stankovic, and Dejan Vukobratovic. "Averaging Is Probably Not the Optimum Way of Aggregating Parameters in Federated Learning." Entropy 22, no. 3 (March 11, 2020): 314. http://dx.doi.org/10.3390/e22030314.

Abstract:
Federated learning is a decentralized form of deep learning that trains a shared model on data distributed across clients (such as mobile phones and wearable devices), preserving data privacy by never exposing raw data to the data center (server). After each client computes a new model parameter by stochastic gradient descent (SGD) on its own local data, these locally computed parameters are aggregated to generate an updated global model. Many current state-of-the-art studies aggregate the client-computed parameters by averaging them, but none explains theoretically why averaging parameters is a good approach. In this paper, we treat each client-computed parameter as a random vector, owing to the stochastic properties of SGD, and estimate the mutual information between two client-computed parameters at different training phases, using two methods in two learning tasks. The results confirm the correlation between different clients and show an increasing trend of mutual information over training iterations. However, when we further compute the distance between client-computed parameters, we find that the parameters become more correlated without getting closer. This phenomenon suggests that averaging parameters may not be the optimum way of aggregating trained parameters.
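The abstract does not reproduce the paper's mutual-information estimators. The sketch below illustrates its central observation with two cheap proxies — cosine similarity as a crude stand-in for correlation, and Euclidean distance for closeness — over flattened client parameter vectors; all names here are illustrative assumptions.

```python
import numpy as np

def correlation_and_distance(theta_a, theta_b):
    """Compare two clients' flattened parameter vectors.

    Cosine similarity is used here as a simple stand-in for the
    mutual-information estimate computed in the paper.
    """
    a, b = theta_a.ravel(), theta_b.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    dist = np.linalg.norm(a - b)
    return cos, dist

rng = np.random.default_rng(0)
base = rng.normal(size=1000)
# Two "clients" drift from a shared model: correlated, yet not close.
cos, dist = correlation_and_distance(base + rng.normal(size=1000),
                                     base + rng.normal(size=1000))
print(f"cosine={cos:.3f}, distance={dist:.1f}")
```
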
4. Wu, Xing, Zhaowang Liang, and Jianjia Wang. "FedMed: A Federated Learning Framework for Language Modeling." Sensors 20, no. 14 (July 21, 2020): 4048. http://dx.doi.org/10.3390/s20144048.

Abstract:
Federated learning (FL) is a privacy-preserving technique for training on vast amounts of decentralized data and making inferences on mobile devices. As a typical language-modeling problem, mobile keyboard prediction aims at suggesting a probable next word or phrase, facilitating human-machine interaction via the virtual keyboard of a smartphone or laptop. Mobile keyboard prediction with FL aims to satisfy the growing demand that high-level data privacy be preserved in artificial intelligence applications even with distributed model training. However, there are two major problems in federated optimization for this prediction task: (1) aggregating model parameters on the server side and (2) reducing the communication costs caused by collecting model weights. Traditional FL methods simply use averaging for aggregation or ignore communication costs. To address these issues, we propose a novel Federated Mediation (FedMed) framework with adaptive aggregation, a mediation incentive scheme, and a topK strategy to tackle model aggregation and communication costs. Performance is evaluated in terms of perplexity and communication rounds. Experiments are conducted on three datasets (Penn Treebank, WikiText-2, and Yelp), and the results demonstrate that our FedMed framework achieves robust performance and outperforms baseline approaches.
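The abstract leaves the topK strategy unspecified. A common reading — transmitting only the k largest-magnitude coordinates of each weight update — is sketched below under that assumption; FedMed's actual scheme may differ.

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a flattened update.

    Returns (indices, values), which a client would transmit instead of
    the dense update. One plausible form of a topK strategy, not
    necessarily FedMed's.
    """
    flat = update.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

update = np.random.default_rng(1).normal(size=(256, 64))
idx, vals = topk_sparsify(update, k=1024)
# 1024 of 16384 entries -> roughly 6% of the original payload
```
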
5. Shao, Rulin, Hongyu He, Ziwei Chen, Hui Liu, and Dianbo Liu. "Stochastic Channel-Based Federated Learning With Neural Network Pruning for Medical Data Privacy Preservation: Model Development and Experimental Validation." JMIR Formative Research 4, no. 12 (December 22, 2020): e17265. http://dx.doi.org/10.2196/17265.

Abstract:
Background: Artificial neural networks have achieved unprecedented success in the medical domain. This success depends on the availability of massive and representative datasets. However, data collection is often prevented by privacy concerns, and people want to retain control over their sensitive information during both training and use. Objective: To address security and privacy issues, we propose a privacy-preserving method for the analysis of distributed medical data. The proposed method, termed stochastic channel-based federated learning (SCBFL), enables participants to train a high-performance model cooperatively and in a distributed manner without sharing their inputs. Methods: We designed, implemented, and evaluated a channel-based update algorithm for a central server in a distributed system. The update algorithm selects the channels corresponding to the most active features in a training loop and uploads them as the information learned from local datasets. A pruning process, which serves as a model accelerator, was further applied to the algorithm based on the validation set. Results: We constructed a distributed system consisting of 5 clients and 1 server. Our trials showed that the SCBFL method can achieve an area under the receiver operating characteristic curve (AUC-ROC) of 0.9776 and an area under the precision-recall curve (AUC-PR) of 0.9695 with only 10% of channels shared with the server. Compared with the federated averaging algorithm, the proposed SCBFL method achieved a 0.05388 higher AUC-ROC and a 0.09695 higher AUC-PR. In addition, our experiment showed that the pruning process saves 57% of the time, at a cost of only 0.0047 in AUC-ROC and 0.0068 in AUC-PR. Conclusions: In this experiment, our model demonstrated better performance and a higher saturating speed than the federated averaging method, which reveals all parameters of the local models to the server. The saturation rate of performance could be promoted by introducing a pruning process, and further improvement could be achieved by tuning the pruning rate.
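A rough sketch of the channel-selection idea described above — rank channels by activation magnitude and upload only the top fraction. The mean-absolute-activation score and all names are illustrative assumptions, not SCBFL's published algorithm.

```python
import numpy as np

def select_active_channels(activations, weight_update, share=0.10):
    """Upload updates only for the most active channels.

    activations:   (batch, channels) activations from the local loop.
    weight_update: (channels, ...) per-channel parameter update.
    share:         fraction of channels shared with the server (the
                   paper reports strong results at 10%).
    """
    activity = np.abs(activations).mean(axis=0)   # per-channel score
    k = max(1, int(share * activity.size))
    chosen = np.argsort(activity)[-k:]            # most active channels
    return chosen, weight_update[chosen]

acts = np.random.default_rng(2).normal(size=(32, 128))
upd = np.random.default_rng(3).normal(size=(128, 64))
chosen, partial = select_active_channels(acts, upd, share=0.10)
```
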
6. Asad, Muhammad, Ahmed Moustafa, and Takayuki Ito. "FedOpt: Towards Communication Efficiency and Privacy Preservation in Federated Learning." Applied Sciences 10, no. 8 (April 21, 2020): 2864. http://dx.doi.org/10.3390/app10082864.

Abstract:
Artificial Intelligence (AI) has been applied to solve various challenges of real-world problems in recent years. However, the emergence of new AI technologies has brought several problems, especially with regard to communication efficiency, security threats, and privacy violations. To this end, Federated Learning (FL) has received widespread attention due to its ability to facilitate the collaborative training of local learning models without compromising the privacy of data. However, recent studies have shown that FL still consumes considerable amounts of communication resources, which are vital for updating the learning models. In addition, the privacy of data can still be compromised once the parameters of the local learning models are shared in order to update the global model. We therefore propose a new approach, Federated Optimisation (FedOpt), to promote communication efficiency and privacy preservation in FL. To implement FedOpt, we design a novel compression algorithm, the Sparse Compression Algorithm (SCA), for efficient communication, and then integrate additively homomorphic encryption with differential privacy to prevent data leakage. The proposed FedOpt thus smoothly trades off communication efficiency against privacy preservation for the learning task. The experimental results demonstrate that FedOpt outperforms the state-of-the-art FL approaches. In particular, we consider three evaluation criteria: model accuracy, communication efficiency, and computation overhead. We compare the proposed FedOpt with baseline configurations and state-of-the-art approaches, i.e., Federated Averaging (FedAvg) and Paillier-encryption-based privacy-preserving deep learning (PPDL), on all three criteria. The experimental results show that FedOpt converges within fewer training epochs and a smaller privacy budget.
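The additively homomorphic aggregation FedOpt builds on can be illustrated with the third-party `phe` Paillier library (`pip install phe`). This is a sketch of the general mechanism only — not the authors' implementation — and it omits the sparse compression and differential privacy steps.

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (already compressed) parameter value.
client_params = [0.21, -0.07, 0.45]
ciphertexts = [public_key.encrypt(p) for p in client_params]

# The server sums ciphertexts without ever seeing the plaintexts:
# Paillier is additively homomorphic, so ct_a + ct_b encrypts a + b.
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])

# Only the key holder can recover the aggregate (here, the average).
average = private_key.decrypt(encrypted_sum) / len(client_params)
print(f"aggregated parameter: {average:.4f}")  # -> 0.1967
```
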
7. Munoz-Martin, Joan Francesc, David Llaveria, Christoph Herbert, Miriam Pablos, Hyuk Park, and Adriano Camps. "Soil Moisture Estimation Synergy Using GNSS-R and L-Band Microwave Radiometry Data from FSSCat/FMPL-2." Remote Sensing 13, no. 5 (March 5, 2021): 994. http://dx.doi.org/10.3390/rs13050994.

Abstract:
The Federated Satellite System mission (FSSCat) was the winner of the 2017 Copernicus Masters Competition and the first Copernicus third-party mission based on CubeSats. One of FSSCat's objectives is to provide coarse Soil Moisture (SM) estimations by means of passive microwave measurements collected by Flexible Microwave Payload-2 (FMPL-2). This payload is a novel CubeSat-based instrument combining an L1/E1 Global Navigation Satellite Systems-Reflectometer (GNSS-R) and an L-band Microwave Radiometer (MWR) using software-defined radio. This work presents the first results over land from the first two months of operations after the commissioning phase, from 1 October to 4 December 2020. Four neural network algorithms are implemented and analyzed in terms of different sets of input features to yield maps of SM content over the Northern Hemisphere (latitudes above 45° N). The first algorithm uses the surface skin temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) in conjunction with the 16-day averaged Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) to estimate SM, which serves as a comparison dataset for evaluating the additional models. A second approach is implemented to retrieve SM, which complements the first model with FMPL-2 L-band MWR antenna temperature measurements, showing better performance than in the first case. The error standard deviation of this model with respect to the Soil Moisture and Ocean Salinity (SMOS) SM product gridded at 36 km is 0.074 m³/m³. The third algorithm proposes a new approach to retrieve SM using FMPL-2 GNSS-R data. The mean and standard deviation of the GNSS-R reflectivity are obtained by averaging consecutive observations over a sliding window and are included as additional input features to the network. The model output shows an accurate SM estimation compared to the 9 km SMOS SM product, with an error of 0.087 m³/m³. Finally, a fourth model combines MWR and GNSS-R data and outperforms the previous approaches, with an error of just 0.063 m³/m³. These results demonstrate the capabilities of FMPL-2 to provide SM estimates over land in good agreement with SMOS SM.
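The GNSS-R input features mentioned above are sliding-window statistics over consecutive reflectivity observations. A minimal version of that windowing — with an arbitrary window length, not the value used in the paper — might look like:

```python
import numpy as np

def sliding_stats(reflectivity, window=8):
    """Mean and std of consecutive GNSS-R reflectivity observations.

    The window length here is an illustrative choice.
    """
    views = np.lib.stride_tricks.sliding_window_view(reflectivity, window)
    return views.mean(axis=1), views.std(axis=1)

obs = np.random.default_rng(4).normal(loc=-15.0, scale=2.0, size=100)
mean_feat, std_feat = sliding_stats(obs)  # extra NN input features
```
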
8. Casado, Fernando E., Dylan Lema, Marcos F. Criado, Roberto Iglesias, Carlos V. Regueiro, and Senén Barro. "Concept drift detection and adaptation for federated and continual learning." Multimedia Tools and Applications, July 17, 2021. http://dx.doi.org/10.1007/s11042-021-11219-x.

Abstract:
Smart devices, such as smartphones, wearables, robots, and others, can collect vast amounts of data from their environment. This data is suitable for training machine learning models, which can significantly improve their behavior, and therefore, the user experience. Federated learning is a young and popular framework that allows multiple distributed devices to train deep learning models collaboratively while preserving data privacy. Nevertheless, this approach may not be optimal for scenarios where data distribution is non-identical among the participants or changes over time, causing what is known as concept drift. Little research has yet been done in this field, but this kind of situation is quite frequent in real life and poses new challenges to both continual and federated learning. Therefore, in this work, we present a new method, called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our proposal is an extension of the most popular federated algorithm, Federated Averaging (FedAvg), enhancing it for continual adaptation under concept drift. We empirically demonstrate the weaknesses of regular FedAvg and prove that CDA-FedAvg outperforms it in this type of scenario.
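The abstract does not specify CDA-FedAvg's drift detector. As a stand-in, the sketch below flags drift when the recent local loss rises well above its long-run level — a simple window-comparison heuristic under stated assumptions, not the paper's method.

```python
from collections import deque

class LossDriftDetector:
    """Flag concept drift when recent loss exceeds the long-run level.

    A window-comparison heuristic; CDA-FedAvg's actual detector may differ.
    """
    def __init__(self, short=20, long=200, tolerance=1.5):
        self.short = deque(maxlen=short)
        self.long = deque(maxlen=long)
        self.tolerance = tolerance

    def update(self, loss):
        self.short.append(loss)
        self.long.append(loss)
        if len(self.long) < self.long.maxlen:
            return False  # not enough history yet
        recent = sum(self.short) / len(self.short)
        baseline = sum(self.long) / len(self.long)
        return recent > self.tolerance * baseline  # True = drift detected

# On detection, a client would e.g. trigger extra local adaptation
# before the next federated averaging round.
```
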
9. Imteaj, Ahmed, and M. Hadi Amini. "FedPARL: Client Activity and Resource-Oriented Lightweight Federated Learning Model for Resource-Constrained Heterogeneous IoT Environment." Frontiers in Communications and Networks 2 (April 29, 2021). http://dx.doi.org/10.3389/frcmn.2021.657653.

Abstract:
Federated Learning (FL) is a recently invented distributed machine learning technique that allows available network clients to perform model training at the edge, rather than sharing their data with a centralized server. Unlike conventional distributed machine learning approaches, the hallmark feature of FL is to allow local computation and model generation on the client side, ultimately protecting sensitive information. Most existing FL approaches assume that each FL client has sufficient computational resources and can accomplish a given task without facing any resource-related issues. However, if we consider FL for a heterogeneous Internet of Things (IoT) environment, a major portion of the FL clients may face low resource availability (e.g., lower computational power, limited bandwidth, and battery life). Consequently, resource-constrained FL clients may respond very slowly or may be unable to execute the expected number of local iterations. Further, any FL client can inject an inappropriate model during a training phase, which can prolong convergence time and waste the resources of all network clients. In this paper, we propose a novel tri-layer FL scheme, the Federated Proximal, Activity and Resource-Aware Lightweight model (FedPARL), that reduces model size by performing sample-based pruning, avoids misbehaving clients by examining their trust scores, and allows a partial amount of work in view of each client's resource availability. The pruning mechanism is particularly useful when dealing with resource-constrained FL-based IoT (FL-IoT) clients, since a lightweight training model consumes fewer resources to reach a target convergence. We evaluate each interested client's resource availability before assigning a task, monitor client activities, and update trust scores based on previous performance. To tackle system and statistical heterogeneity, we adapt a re-parameterization and generalization of the current state-of-the-art Federated Averaging (FedAvg) algorithm that allows clients to perform variable or partial amounts of work in line with their resource constraints. We demonstrate that simultaneously coupling pruning, resource and activity awareness, and the re-parameterization of FedAvg leads to more robust convergence of FL in IoT environments.
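The trust- and resource-aware scheduling described above can be pictured roughly as follows; the scoring fields, threshold, and epoch formula are illustrative assumptions, not FedPARL's actual rules.

```python
def assign_local_work(clients, max_epochs=10, trust_floor=0.5):
    """Give each trusted client a workload matched to its resources.

    clients: dict of client_id -> {"trust": float in [0, 1],
                                   "resources": float in [0, 1]}
    Returns client_id -> number of local epochs (0 = excluded).
    """
    plan = {}
    for cid, info in clients.items():
        if info["trust"] < trust_floor:
            plan[cid] = 0                      # misbehaving client: skip
        else:
            # Partial work: epochs scale with available resources.
            plan[cid] = max(1, round(info["resources"] * max_epochs))
    return plan

plan = assign_local_work({
    "phone-1":  {"trust": 0.9, "resources": 0.3},  # weak device: 3 epochs
    "gateway":  {"trust": 0.8, "resources": 1.0},  # strong: 10 epochs
    "sensor-7": {"trust": 0.2, "resources": 0.6},  # untrusted: excluded
})
```
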

Dissertations / Theses on the topic "Federated averaging"

1. Backstad, Sebastian. "Federated Averaging Deep Q-Network: A Distributed Deep Reinforcement Learning Algorithm." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149637.

Abstract:
In the telecom sector, a huge amount of rich data is generated every day. This trend will increase with the launch of 5G networks. Telco companies are interested in analyzing their data to shape and improve their core businesses. However, a number of limiting factors can prevent them from logging data to central data centers for analysis, including data privacy, data transfer costs, and network latency. In this work, we present a distributed Deep Reinforcement Learning (DRL) method called Federated Averaging Deep Q-Network (FADQN) that employs a distributed hierarchical reinforcement learning architecture. It utilizes gradient averaging to decrease communication cost, and privacy concerns are satisfied by training the agent locally and sending only aggregated information to the centralized server. We introduce two versions of FADQN: synchronous and asynchronous. Results on the cart-pole environment show an 80-fold reduction in communication without any significant loss in performance; the asynchronous approach additionally shows a great improvement in convergence.

2. Langelaar, Johannes, and Mattsson Adam Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.

Abstract:
In this thesis, a number of models for recommender systems are explored, all using collaborative filtering to produce their recommendations. Extra focus is put on two models: Matrix Factorization, a linear model, and the Multi-Layer Perceptron, a non-linear model. With the additional purpose of training the models without collecting any sensitive data from the users, both models were implemented with federated learning, a technique that does not require the server to know the users' data. The federated version of Matrix Factorization is already well researched and has been shown not to protect the users' data at all: the data is derivable from the information that users must communicate to the server for the model to learn. For the federated Multi-Layer Perceptron model, however, no prior research could be found, so such a model is designed and presented in this thesis. Arguments are put forth in support of the privacy preservability of the model, along with a proof that the user data is not analytically derivable by the central server. In addition, new ways to further test the protection of the users' data are discussed. All models are evaluated on two data sets. The first contains movie ratings and is called MovieLens 1M; the second consists of anonymized fund transactions, provided for this thesis by the Swedish bank SEB. Test results suggest that the federated versions of the models can achieve recommendation performance similar to their non-federated counterparts.

3. Reddy, Sashlin. "A comparative analysis of dynamic averaging techniques in federated learning." Thesis, 2020. https://hdl.handle.net/10539/31059.

Abstract:
A dissertation submitted in fulfilment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics, Faculty of Science, University of the Witwatersrand, 2020.
Due to advancements in mobile technology and user privacy concerns, federated learning has emerged as a popular machine learning (ML) method for pushing the training of statistical models to the edge. Federated learning involves training a shared model under the coordination of a centralized server from a federation of participating clients. In practice, federated learning methods have to overcome large network delays and bandwidth limits. To overcome the communication bottlenecks, recent works propose methods that reduce the communication frequency with negligible impact on model accuracy (also referred to as model performance). Naive methods simply reduce the number of communication rounds; however, communication can be invested more efficiently through dynamic communication protocols, termed dynamic averaging. Few works have addressed such protocols, and fewer still base the dynamic averaging protocol on the diversity of the data and the loss. In this work, we introduce dynamic averaging frameworks based on the diversity of the data as well as the loss encountered by each client. This overcomes the assumption that each client participates equally and addresses the properties of federated learning. Results show that the overall communication overhead is reduced with a negligible decrease in accuracy.
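A dynamic averaging protocol of the kind studied here reduces, at its simplest, to a local trigger: synchronize with the server only when the local model has drifted far enough from the last global model. The relative-L2 measure and threshold below are illustrative; the thesis additionally weighs data diversity and loss, which this sketch omits.

```python
import numpy as np

def should_communicate(local_params, last_global, threshold=0.05):
    """Dynamic averaging trigger: sync only on sufficient divergence.

    Relative L2 drift is one possible measure; the actual frameworks
    in the thesis also account for data diversity and client loss.
    """
    drift = np.linalg.norm(local_params - last_global)
    return drift > threshold * np.linalg.norm(last_global)

# Each round, a client checks the trigger instead of always uploading:
# if should_communicate(theta_local, theta_global): upload(theta_local)
```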

Book chapters on the topic "Federated averaging"

1. Remedios, Samuel W., John A. Butman, Bennett A. Landman, and Dzung L. Pham. "Federated Gradient Averaging for Multi-Site Training with Momentum-Based Optimizers." In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, 170–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60548-3_17.

2. Zhao, Fengpan, Yan Huang, Saide Zhu, Venkata Malladi, and Yubao Wu. "A Weighted Federated Averaging Framework to Reduce the Negative Influence from the Dishonest Users." In Security, Privacy, and Anonymity in Computation, Communication, and Storage, 241–50. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68851-6_17.


Conference papers on the topic "Federated averaging"

1. Wang, Zheng, Xiaoliang Fan, Jianzhong Qi, Chenglu Wen, Cheng Wang, and Rongshan Yu. "Federated Learning with Fair Averaging." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/223.

Abstract:
Fairness has emerged as a critical problem in federated learning (FL). In this work, we identify a cause of unfairness in FL: conflicting gradients with large differences in their magnitudes. To address this issue, we propose the federated fair averaging (FedFV) algorithm to mitigate potential conflicts among clients before averaging their gradients. We first use cosine similarity to detect gradient conflicts and then iteratively eliminate such conflicts by modifying both the direction and the magnitude of the gradients. We further establish the theoretical foundation of FedFV for mitigating conflicting gradients and converging to Pareto-stationary solutions. Extensive experiments on a suite of federated datasets confirm that FedFV compares favorably against state-of-the-art methods in terms of fairness, accuracy, and efficiency. The source code is available at https://github.com/WwZzz/easyFL.
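The conflict-elimination step can be pictured as a projection: when two client gradients point against each other (negative cosine similarity), remove the conflicting component before averaging. FedFV's full procedure is iterative and also adjusts magnitudes; this sketch shows only the core operation.

```python
import numpy as np

def resolve_conflict(g_i, g_j):
    """Project g_i off g_j when their cosine similarity is negative.

    The core operation behind conflict mitigation before averaging;
    FedFV applies this iteratively over ordered client gradients.
    """
    cos = g_i @ g_j / (np.linalg.norm(g_i) * np.linalg.norm(g_j))
    if cos < 0:  # conflicting directions
        g_i = g_i - (g_i @ g_j) / (g_j @ g_j) * g_j
    return g_i

g1 = np.array([1.0, 2.0])
g2 = np.array([1.0, -2.0])           # g1 @ g2 = -3: a conflict
g1_fixed = resolve_conflict(g1, g2)  # -> [1.6, 0.8], orthogonal to g2
```
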
2. Ro, Jae, Mingqing Chen, Rajiv Mathews, Mehryar Mohri, and Ananda Theertha Suresh. "Communication-Efficient Agnostic Federated Averaging." In Interspeech 2021. ISCA, 2021. http://dx.doi.org/10.21437/interspeech.2021-153.

3. Li, Yiwei, Tsung-Hui Chang, and Chong-Yung Chi. "Secure Federated Averaging Algorithm with Differential Privacy." In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2020. http://dx.doi.org/10.1109/mlsp49062.2020.9231531.

4. Desai, Nirmit, and Dinesh Verma. "Properties of federated averaging on highly distributed data." In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, edited by Tien Pham. SPIE, 2019. http://dx.doi.org/10.1117/12.2518941.

5. Wang, Shuai, Richard Cornelius Suwandi, and Tsung-Hui Chang. "Demystifying Model Averaging for Communication-Efficient Federated Matrix Factorization." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413927.
