Academic literature on the topic 'Privacy-preserving federated learning algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Privacy-preserving federated learning algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Privacy-preserving federated learning algorithms"

1

Cellamare, Matteo, Anna J. van Gestel, Hasan Alradhi, Frank Martin, and Arturo Moncada-Torres. "A Federated Generalized Linear Model for Privacy-Preserving Analysis." Algorithms 15, no. 7 (July 13, 2022): 243. http://dx.doi.org/10.3390/a15070243.

Full text
Abstract:
In the last few years, federated learning (FL) has emerged as a novel alternative for analyzing data spread across different parties without needing to centralize them. In order to increase the adoption of FL, there is a need to develop more algorithms that can be deployed under this privacy-preserving paradigm. In this paper, we present our federated generalized linear model (GLM) for horizontally partitioned data. It allows generating models of different families (linear, Poisson, logistic) without disclosing privacy-sensitive individual records. We describe its algorithm (which can be implemented in the user’s platform of choice) and compare the obtained federated models against their centralized counterparts, which are mathematically equivalent. We also validated their execution time with increasing numbers of records and involved parties. We show that our federated GLM is accurate enough to be used for the privacy-preserving analysis of horizontally partitioned data in real-life scenarios. Further development of this type of algorithm has the potential to make FL a much more common practice among researchers.
APA, Harvard, Vancouver, ISO, and other styles
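The horizontally partitioned setting in the abstract above can be pictured in a few lines: each party computes the GLM loss gradient over its own records and shares only that aggregate with the server. The sketch below is an illustrative reconstruction for the logistic family only, not the paper's algorithm, and the function names are ours:

```python
import math

def local_gradient(w, X, y):
    """One party's logistic-loss gradient over its local records only;
    raw records never leave the party -- only this aggregate does."""
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi))
        p = 1.0 / (1.0 + math.exp(-z))
        for j, xj in enumerate(xi):
            g[j] += (p - yi) * xj
    return g, len(X)

def federated_glm(parties, dim, rounds=200, lr=0.5):
    """Server loop: sum the per-party gradients and normalize by the
    total record count, i.e. a gradient step on the pooled loss."""
    w = [0.0] * dim
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in parties]
        n = sum(count for _, count in grads)
        w = [wj - lr * sum(g[j] for g, _ in grads) / n
             for j, wj in enumerate(w)]
    return w
```

Because the summed gradient equals the gradient over the pooled data, the federated fit tracks the centralized one, which is the paper's mathematical-equivalence claim in miniature.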
2

Park, Jaehyoung, and Hyuk Lim. "Privacy-Preserving Federated Learning Using Homomorphic Encryption." Applied Sciences 12, no. 2 (January 12, 2022): 734. http://dx.doi.org/10.3390/app12020734.

Full text
Abstract:
Federated learning (FL) is a machine learning technique that enables distributed devices to train a learning model collaboratively without sharing their local data. FL-based systems can achieve much stronger privacy preservation since the distributed devices deliver only local model parameters trained with local data to a centralized server. However, there exists a possibility that a centralized server or attackers infer/extract sensitive private information using the structure and parameters of local learning models. We propose employing a homomorphic encryption (HE) scheme, which can directly perform arithmetic operations on ciphertexts without decryption, to protect the model parameters. Using the HE scheme, the proposed privacy-preserving federated learning (PPFL) algorithm enables the centralized server to aggregate encrypted local model parameters without decryption. Furthermore, the proposed algorithm allows each node to use a different HE private key in the same FL-based system using a distributed cryptosystem. The performance analysis and evaluation of the proposed PPFL algorithm are conducted in various cloud computing-based FL service scenarios.
APA, Harvard, Vancouver, ISO, and other styles
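The core mechanism this abstract relies on — a server adding encrypted parameters without decrypting them — can be sketched with textbook Paillier encryption. This toy uses a single key pair with small, well-known primes; the paper's multi-key distributed cryptosystem is more involved, and parameters this small are in no way secure:

```python
from math import gcd

# Toy Paillier keypair -- small, well-known primes; illustration only, NOT secure.
p, q = 104729, 104723
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                          # valid because we fix g = n + 1

def encrypt(m, r):
    """Enc(m) = g^m * r^n mod n^2; with g = n + 1, g^m = 1 + m*n (mod n^2)."""
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# The server multiplies ciphertexts, which adds the underlying plaintexts,
# so it can aggregate encrypted local updates without decrypting any of them.
updates = [17, 25, 8]
aggregate = 1
for i, m in enumerate(updates):
    aggregate = aggregate * encrypt(m, r=12345 + i) % n2
```

Multiplying ciphertexts modulo n² adds the plaintexts, so `decrypt(aggregate)` yields the sum of the updates even though the server never sees an individual one.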
3

Thorgeirsson, Adam Thor, and Frank Gauterin. "Probabilistic Predictions with Federated Learning." Entropy 23, no. 1 (December 30, 2020): 41. http://dx.doi.org/10.3390/e23010041.

Full text
Abstract:
Probabilistic predictions with machine learning are important in many applications. These are commonly done with Bayesian learning algorithms. However, Bayesian learning methods are computationally expensive in comparison with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner but does not include predictive uncertainty. To represent predictive uncertainty in federated learning, our suggestion is to introduce uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution for the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach can achieve similar performance as the benchmark would achieve in a non-distributed setting.
APA, Harvard, Vancouver, ISO, and other styles
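One way to read the aggregation idea in this abstract — the set of local weight vectors is treated as a sample-based posterior for the global weights — is the following sketch. It is our own simplification for a linear model, and the helper names are hypothetical:

```python
import statistics

def aggregate_as_posterior(local_weights):
    """Treat the client weight vectors as samples from a posterior over the
    global weights: summarize them by per-coordinate mean and spread."""
    dim = len(local_weights[0])
    mean = [statistics.mean(w[j] for w in local_weights) for j in range(dim)]
    std = [statistics.stdev(w[j] for w in local_weights) for j in range(dim)]
    return mean, std

def predictive(x, posterior_samples):
    """Monte-Carlo predictive mean and spread for a linear model: one
    prediction per posterior sample, then summarize."""
    preds = [sum(wj * xj for wj, xj in zip(w, x)) for w in posterior_samples]
    return statistics.mean(preds), statistics.stdev(preds)
```

Spread in the local weights translates into spread in the predictions, giving the predictive uncertainty that plain weight averaging discards.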
4

Jiang, Xue, Xuebing Zhou, and Jens Grossklags. "Privacy-Preserving High-dimensional Data Collection with Federated Generative Autoencoder." Proceedings on Privacy Enhancing Technologies 2022, no. 1 (November 20, 2021): 481–500. http://dx.doi.org/10.2478/popets-2022-0024.

Full text
Abstract:
Business intelligence and AI services often involve the collection of copious amounts of multidimensional personal data. Since these data usually contain sensitive information about individuals, direct collection can lead to privacy violations. Local differential privacy (LDP) is currently considered a state-of-the-art solution for privacy-preserving data collection. However, existing LDP algorithms are not applicable to high-dimensional data, not only because of the increase in computation and communication cost, but also because of poor data utility. In this paper, we aim at addressing the curse-of-dimensionality problem in LDP-based high-dimensional data collection. Based on the idea of machine learning and data synthesis, we propose DP-Fed-Wae, an efficient privacy-preserving framework for collecting high-dimensional categorical data. With the combination of a generative autoencoder, federated learning, and differential privacy, our framework is capable of privately learning the statistical distributions of local data and generating high-utility synthetic data on the server side without revealing users’ private information. We have evaluated the framework in terms of data utility and privacy protection on a number of real-world datasets containing 68–124 classification attributes. We show that our framework outperforms the LDP-based baseline algorithms in capturing joint distributions and correlations of attributes and generating high-utility synthetic data. With a local privacy guarantee ε = 8, the machine learning models trained with the synthetic data generated by the baseline algorithm cause an accuracy loss of 10–30%, whereas the accuracy loss is significantly reduced to less than 3% (and at best even less than 1%) with our framework. Extensive experimental results demonstrate the capability and efficiency of our framework in synthesizing high-dimensional data while striking a satisfactory utility-privacy balance.
APA, Harvard, Vancouver, ISO, and other styles
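The LDP baseline this framework is compared against can be illustrated with k-ary randomized response on a single categorical attribute. This is the standard textbook mechanism with its usual unbiased estimator, not the authors' code, and the names are ours:

```python
import math
import random

def k_rr(value, domain, epsilon, rng):
    """k-ary randomized response: report the true category with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other category."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return value
    return rng.choice([v for v in domain if v != value])

def estimate_counts(reports, domain, epsilon):
    """Unbiased frequency estimates from the perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    raw = {v: sum(r == v for r in reports) for v in domain}
    return {v: (raw[v] - n * q) / (p - q) for v in domain}
```

For high-dimensional data the noise of such per-attribute mechanisms compounds, which is exactly the curse-of-dimensionality problem the paper's generative approach targets.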
5

Zhou, Zhou, Youliang Tian, and Changgen Peng. "Privacy-Preserving Federated Learning Framework with General Aggregation and Multiparty Entity Matching." Wireless Communications and Mobile Computing 2021 (June 26, 2021): 1–14. http://dx.doi.org/10.1155/2021/6692061.

Full text
Abstract:
The requirement for data sharing and privacy has brought increasing attention to federated learning. However, the existing aggregation models are too specialized and rarely address the issue of user withdrawal. Moreover, protocols for multiparty entity matching are rarely covered. Thus, there is no systematic framework to perform federated learning tasks. In this paper, we systematically propose a privacy-preserving federated learning framework (PFLF) where we first construct a general secure aggregation model in federated learning scenarios by combining Shamir secret sharing with homomorphic cryptography to ensure that the aggregated value can be decrypted correctly only when the number of participants is greater than t. Furthermore, we propose a multiparty entity matching protocol by employing secure multiparty computing to solve the entity alignment problems and a logistic regression algorithm to achieve privacy-preserving model training and support the withdrawal of users in vertical federated learning (VFL) scenarios. Finally, the security analyses prove that PFLF preserves data privacy in the honest-but-curious model, and the experimental evaluations show PFLF attains consistent accuracy with the original model and demonstrates practical feasibility.
APA, Harvard, Vancouver, ISO, and other styles
6

Gong, Xuan, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David Doermann, and Arun Innanje. "Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 11891–99. http://dx.doi.org/10.1609/aaai.v36i11.21446.

Full text
Abstract:
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model while the training data remains decentralized. Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution. However, they suffer from communication bottlenecks. More importantly, they risk privacy leakage. In this work, we develop a privacy-preserving and communication-efficient method in a FL framework with one-shot offline knowledge distillation using unlabeled, cross-domain, non-sensitive public data. We propose a quantized and noisy ensemble of local predictions from completely trained local models for stronger privacy guarantees without sacrificing accuracy. Based on extensive experiments on image classification and text classification tasks, we show that our method outperforms baseline FL algorithms with superior performance in both accuracy and data privacy preservation.
APA, Harvard, Vancouver, ISO, and other styles
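The "quantized and noisy ensemble of local predictions" can be pictured with a toy aggregator: average the local models' outputs, perturb them, then quantize. This is a rough sketch under parameter names we chose, not the authors' calibrated mechanism:

```python
import random

def private_ensemble_vote(local_logits, noise_scale, levels, rng):
    """Aggregate local model predictions as a noisy, quantized ensemble:
    average the logits, add Gaussian noise, then snap to a coarse grid."""
    dim = len(local_logits[0])
    avg = [sum(l[j] for l in local_logits) / len(local_logits)
           for j in range(dim)]
    noisy = [a + rng.gauss(0, noise_scale) for a in avg]
    return [round(v * levels) / levels for v in noisy]
```

Both the noise and the quantization limit how much any single local model's behavior can be inferred from the aggregated prediction.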
7

Loftus, Tyler J., Matthew M. Ruppert, Benjamin Shickel, Tezcan Ozrazgat-Baslanti, Jeremy A. Balch, Philip A. Efron, Gilbert R. Upchurch, et al. "Federated learning for preserving data privacy in collaborative healthcare research." DIGITAL HEALTH 8 (January 2022): 205520762211344. http://dx.doi.org/10.1177/20552076221134455.

Full text
Abstract:
Generalizability, external validity, and reproducibility are high priorities for artificial intelligence applications in healthcare. Traditional approaches to addressing these elements involve sharing patient data between institutions or practice settings, which can compromise data privacy (individuals’ right to prevent the sharing and disclosure of information about themselves) and data security (simultaneously preserving confidentiality, accuracy, fidelity, and availability of data). This article describes insights from real-world implementation of federated learning techniques that offer opportunities to maintain both data privacy and availability via collaborative machine learning that shares knowledge, not data. Local models are trained separately on local data. As they train, they send local model updates (e.g. coefficients or gradients) for consolidation into a global model. In some use cases, global models outperform local models on new, previously unseen local datasets, suggesting that collaborative learning from a greater number of examples, including a greater number of rare cases, may improve predictive performance. Even when sharing model updates rather than data, privacy leakage can occur when adversaries perform property or membership inference attacks which can be used to ascertain information about the training set. Emerging techniques mitigate risk from adversarial attacks, allowing investigators to maintain both data privacy and availability in collaborative healthcare research. When data heterogeneity between participating centers is high, personalized algorithms may offer greater generalizability by improving performance on data from centers with proportionately smaller training sample sizes. Properly applied, federated learning has the potential to optimize the reproducibility and performance of collaborative learning while preserving data security and privacy.
APA, Harvard, Vancouver, ISO, and other styles
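The update-sharing loop this article describes — local models "send local model updates (e.g. coefficients or gradients) for consolidation into a global model" — is most commonly realized as federated averaging, weighted by local sample counts. A minimal sketch (the function name is ours):

```python
def fed_avg(client_updates):
    """Weighted average of client model weights, FedAvg-style.
    client_updates: list of (weights, n_local_examples) pairs."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [sum(w[j] * n for w, n in client_updates) / total
            for j in range(dim)]
```

Weighting by sample count makes the global model reflect what a pooled-data model would see, without ever pooling the data.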
8

Fang, Haokun, and Quan Qian. "Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning." Future Internet 13, no. 4 (April 8, 2021): 94. http://dx.doi.org/10.3390/fi13040094.

Full text
Abstract:
Privacy protection has been an important concern with the great success of machine learning. In this paper, we propose a multi-party privacy-preserving machine learning framework, named PFMLP, based on partially homomorphic encryption and federated learning. The core idea is that all learning parties transmit only gradients encrypted by homomorphic encryption. In our experiments, the model trained by PFMLP achieves almost the same accuracy as its conventionally trained counterpart, with a deviation of less than 1%. Considering the computational overhead of homomorphic encryption, we use an improved Paillier algorithm which can speed up the training by 25–28%. Moreover, comparisons on encryption key length, the learning network structure, number of learning clients, etc., are also discussed in detail in the paper.
APA, Harvard, Vancouver, ISO, and other styles
9

Ali, Waqar, Rajesh Kumar, Zhiyi Deng, Yansong Wang, and Jie Shao. "A Federated Learning Approach for Privacy Protection in Context-Aware Recommender Systems." Computer Journal 64, no. 7 (April 30, 2021): 1016–27. http://dx.doi.org/10.1093/comjnl/bxab025.

Full text
Abstract:
Privacy protection is one of the key concerns of users in recommender system-based consumer markets. Popular recommendation frameworks such as collaborative filtering (CF) suffer from several privacy issues. Federated learning has emerged as an optimistic approach for collaborative and privacy-preserved learning. Users in a federated learning environment train a local model on a self-maintained item log and collaboratively train a global model by exchanging model parameters instead of personalized preferences. In this research, we proposed a federated learning-based privacy-preserving CF model for context-aware recommender systems that work with a user-defined collaboration protocol to ensure users’ privacy. Instead of crawling users’ personal information into a central server, the whole data are divided into two disjoint parts, i.e. user data and sharable item information. The inbuilt power of federated architecture ensures the users’ privacy concerns while providing considerably accurate recommendations. We evaluated the performance of the proposed algorithm with two publicly available datasets through both the prediction and ranking perspectives. Despite the federated cost and lack of open collaboration, the overall performance achieved through the proposed technique is comparable with popular recommendation models and satisfactory while providing significant privacy guarantees.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Shengsheng, Shuzhen Lu, and Bin Cao. "Medical Image Object Detection Algorithm for Privacy-Preserving Federated Learning." Journal of Computer-Aided Design & Computer Graphics 33, no. 10 (October 1, 2021): 1153–562. http://dx.doi.org/10.3724/sp.j.1089.2021.18416.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Privacy-preserving federated learning algorithms"

1

Carlsson, Robert. "Privacy-Preserved Federated Learning : A survey of applicable machine learning algorithms in a federated environment." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424383.

Full text
Abstract:
There is potential in the fields of medicine and finance for doing collaborative machine learning. These areas gather data which can be used for developing machine learning models that could predict everything from illness in patients to economic crimes like fraud. The problem is that the data collected is mostly of a confidential nature and should be handled with precaution. This makes the standard way of doing machine learning - gathering data at one centralized server - undesirable, since the safety of the data has to be taken into account. In this project we will explore the federated learning approach of ”bringing the code to the data, instead of the data to the code”. It is a decentralized way of doing machine learning where models are trained on connected devices and data is never shared, keeping the data privacy-preserved.
APA, Harvard, Vancouver, ISO, and other styles
2

Langelaar, Johannes, and Mattsson Adam Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.

Full text
Abstract:
In this thesis a number of models for recommender systems are explored, all using collaborative filtering to produce their recommendations. Extra focus is put on two models: Matrix Factorization, which is a linear model, and Multi-Layer Perceptron, which is a non-linear model. With the additional purpose of training the models without collecting any sensitive data from the users, both models were implemented with a learning technique that does not require the server's knowledge of the users' data, called federated learning. The federated version of Matrix Factorization is already well-researched, and has proven not to protect the users' data at all; the data is derivable from the information that the users communicate to the server, which is necessary for the learning of the model. However, on the federated Multi-Layer Perceptron model, no research could be found. In this thesis, such a model is therefore designed and presented. Arguments are put forth in support of the privacy preservability of the model, along with a proof of the user data not being analytically derivable by the central server. In addition, new ways to further put the protection of the users' data to the test are discussed. All models are evaluated on two different data sets. The first data set contains data on ratings of movies and is called MovieLens 1M. The second is a data set that consists of anonymized fund transactions, provided by the Swedish bank SEB for this thesis. Test results suggest that the federated versions of the models can achieve similar recommendation performance as their non-federated counterparts.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Privacy-preserving federated learning algorithms"

1

Lu, Yi, Lei Zhang, Lulu Wang, and Yuanyuan Gao. "Privacy-Preserving and Reliable Federated Learning." In Algorithms and Architectures for Parallel Processing, 346–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95391-1_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Qiu, Fengyuan, Hao Yang, Lu Zhou, Chuan Ma, and LiMing Fang. "Privacy Preserving Federated Learning Using CKKS Homomorphic Encryption." In Wireless Algorithms, Systems, and Applications, 427–40. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19208-1_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bonura, Susanna, Davide Dalle Carbonare, Roberto Díaz-Morales, Marcos Fernández-Díaz, Lucrezia Morabito, Luis Muñoz-González, Chiara Napione, Ángel Navia-Vázquez, and Mark Purcell. "Privacy-Preserving Technologies for Trusted Data Spaces." In Technologies and Applications for Big Data Value, 111–34. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78307-5_6.

Full text
Abstract:
The quality of a machine learning model depends on the volume of data used during the training process. To prevent low accuracy models, one needs to generate more training data or add external data sources of the same kind. If the first option is not feasible, the second one requires the adoption of a federated learning approach, where different devices can collaboratively learn a shared prediction model. However, access to data can be hindered by privacy restrictions. Training machine learning algorithms using data collected from different data providers while mitigating privacy concerns is a challenging problem. In this chapter, we first introduce the general approach of federated machine learning and the H2020 MUSKETEER project, which aims to create a federated, privacy-preserving machine learning Industrial Data Platform. Then, we describe the Privacy Operations Modes designed in MUSKETEER as an answer for more privacy before looking at the platform and its operation using these different Privacy Operations Modes. We eventually present an efficiency assessment of the federated approach using the MUSKETEER platform. This chapter concludes with the description of a real use case of MUSKETEER in the manufacturing domain.
APA, Harvard, Vancouver, ISO, and other styles
4

Prabhugaonkar, Gargi Gopalkrishna, Xiaoyan Sun, Xuyu Wang, and Jun Dai. "Deep IoT Monitoring: Filtering IoT Traffic Using Deep Learning." In Silicon Valley Cybersecurity Conference, 120–36. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-24049-2_8.

Full text
Abstract:
The use of IoT devices has significantly increased in recent years, but there have been growing concerns about the security and privacy issues associated with these IoT devices. A recent trend is to use deep network models to classify attack and benign traffic. A traditional approach is to train the models using centrally stored data collected from all the devices in the network. However, this framework raises concerns around data privacy and security. Attacks on the central server can compromise the data and expose sensitive information. To address the issues of data privacy and security, federated learning is now a widely studied solution in the research community. In this paper, we explore and implement federated learning techniques to detect attack traffic in the IoT network. We use Deep Neural Networks on the labeled dataset and Autoencoder on the unlabeled dataset in a federated framework. We implement different model aggregation algorithms such as FedSGD, FedAvg, and FedProx for federated learning. We compare the performance of these federated learning models with the models in a centralized framework and study which aggregation algorithm for the global model yields the best performance for detecting attack traffic in the IoT network.
APA, Harvard, Vancouver, ISO, and other styles
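Among the aggregation algorithms compared in this chapter, FedProx differs from FedAvg only in the local objective: a proximal term pulls each client's weights toward the current global model, which stabilizes training on non-IID data. A sketch of one local step under that objective, in our own simplified list-based form:

```python
def fedprox_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    """One local SGD step under the FedProx objective
    f(w) + (mu/2)*||w - w_global||^2, whose gradient adds
    mu * (w - w_global) to the task gradient."""
    return [wj - lr * (gj + mu * (wj - gwj))
            for wj, gj, gwj in zip(w, grad, w_global)]
```

With mu = 0 this reduces to a plain FedAvg-style local SGD step.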
5

Bonura, Susanna, Davide dalle Carbonare, Roberto Díaz-Morales, Ángel Navia-Vázquez, Mark Purcell, and Stephanie Rossello. "Increasing Trust for Data Spaces with Federated Learning." In Data Spaces, 89–106. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98636-0_5.

Full text
Abstract:
Despite the need for data in a time of general digitization of organizations, many challenges are still hampering its shared use. Technical, organizational, legal, and commercial issues remain to leverage data satisfactorily, especially when the data is distributed among different locations and confidentiality must be preserved. Data platforms can offer “ad hoc” solutions to tackle specific matters within a data space. MUSKETEER develops an Industrial Data Platform (IDP) including algorithms for federated and privacy-preserving machine learning techniques on a distributed setup, detection and mitigation of adversarial attacks, and a rewarding model capable of monetizing datasets according to the real data value. The platform can offer an adequate response for organizations in demand of high security standards, such as industrial companies with sensitive data or hospitals with personal data. From the architectural point of view, trust is enforced in such a way that data never has to leave its provider’s premises, thanks to federated learning. This approach can help to better comply with the European regulation, as confirmed from a legal perspective. Besides, MUSKETEER explores several rewarding models based on the availability of objective and quantitative data value estimations, which further increases the trust of the participants in the data space as a whole.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Shuaishuai, Jie Huang, Zeping Zhang, and Chunyang Qi. "Compromise Privacy in Large-Batch Federated Learning via Malicious Model Parameters." In Algorithms and Architectures for Parallel Processing, 63–80. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-22677-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chowdhury, Alexander, Hasan Kassem, Nicolas Padoy, Renato Umeton, and Alexandros Karargyris. "A Review of Medical Federated Learning: Applications in Oncology and Cancer Research." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 3–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-08999-2_1.

Full text
Abstract:
Machine learning has revolutionized every facet of human life, while also becoming more accessible and ubiquitous. Its prevalence has had a powerful impact in healthcare, with numerous applications and intelligent systems achieving clinical level expertise. However, building robust and generalizable systems relies on training algorithms in a centralized fashion using large, heterogeneous datasets. In medicine, these datasets are time-consuming to annotate and difficult to collect centrally due to privacy concerns. Recently, Federated Learning has been proposed as a distributed learning technique to alleviate many of these privacy concerns by providing a decentralized training paradigm for models using large, distributed data. This new approach has become the de facto way of building machine learning models in multiple industries (e.g. edge computing, smartphones). Due to its strong potential, Federated Learning is also becoming a popular training method in healthcare, where patient privacy is of paramount concern. In this paper we performed an extensive literature review to identify state-of-the-art Federated Learning applications for cancer research and clinical oncology analysis. Our objective is to provide readers with an overview of the evolving Federated Learning landscape, with a focus on applications and algorithms in the oncology space. Moreover, we hope that this review will help readers to identify potential needs and future directions for research and development.
APA, Harvard, Vancouver, ISO, and other styles
8

Fan, Tian, Zhixia Zhang, Yang Lan, and Zhihua Cui. "A Many-Objective Anomaly Detection Model for Vehicle Network Based on Federated Learning and Differential Privacy Protection." In Exploration of Novel Intelligent Optimization Algorithms, 52–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4109-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Runhua, Nathalie Baracaldo, Yi Zhou, Annie Abay, and Ali Anwar. "Privacy-Preserving Vertical Federated Learning." In Federated Learning, 417–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Kwangjo, and Harry Chandra Tanuwidjaja. "Privacy-Preserving Federated Learning." In Privacy-Preserving Deep Learning, 55–63. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3764-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Privacy-preserving federated learning algorithms"

1

Li, Zhenyu. "A Personalized Privacy-Preserving Scheme for Federated Learning." In 2022 IEEE International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA). IEEE, 2022. http://dx.doi.org/10.1109/eebda53927.2022.9744805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Qinbin, Bingsheng He, and Dawn Song. "Practical One-Shot Federated Learning for Cross-Silo Setting." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/205.

Full text
Abstract:
Federated learning enables multiple parties to collaboratively learn a model without exchanging their data. While most existing federated learning algorithms need many rounds to converge, one-shot federated learning (i.e., federated learning with a single communication round) is a promising approach to make federated learning applicable in cross-silo setting in practice. However, existing one-shot algorithms only support specific models and do not provide any privacy guarantees, which significantly limit the applications in practice. In this paper, we propose a practical one-shot federated learning algorithm named FedKT. By utilizing the knowledge transfer technique, FedKT can be applied to any classification models and can flexibly achieve differential privacy guarantees. Our experiments on various tasks show that FedKT can significantly outperform the other state-of-the-art federated learning algorithms with a single communication round.
APA, Harvard, Vancouver, ISO, and other styles
3

Guo, Xiaohui. "Federated Learning for Data Security and Privacy Protection." In 2021 12th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP). IEEE, 2021. http://dx.doi.org/10.1109/paap54281.2021.9720450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jin, Hongwei, and Xun Chen. "Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/294.

Full text
Abstract:
Learning the similarity between structured data, especially graphs, is one of the essential problems. Besides approaches like graph kernels, the Gromov-Wasserstein (GW) distance has recently drawn considerable attention due to its flexibility in capturing both topological and feature characteristics, as well as handling permutation invariance. However, structured data are widely distributed across different data mining and machine learning applications. With privacy concerns, access to the decentralized data is limited to either individual clients or different silos. To tackle these issues, we propose a privacy-preserving framework to analyze the GW discrepancy of node embeddings learned locally from graph neural networks in a federated flavor, and then explicitly place local differential privacy (LDP) based on a Multi-bit Encoder to protect sensitive information. Our experiments show that, with strong privacy protection guaranteed by the ε-LDP algorithm, the proposed framework not only preserves privacy in graph learning but also presents a noised structural metric under GW distance, resulting in comparable and even better performance in classification and clustering tasks. Moreover, we reason about the rationale behind the LDP-based GW distance analytically and empirically.
APA, Harvard, Vancouver, ISO, and other styles
5

Chandran, Pravin, Raghavendra Bhat, Avinash Chakravarthy, and Srikanth Chandar. "Divide-and-Conquer Federated Learning Under Data Heterogeneity." In International Conference on AI, Machine Learning and Applications (AIMLA 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111302.

Full text
Abstract:
Federated Learning allows training of data stored in distributed devices without the need for centralizing training data, thereby maintaining data privacy. Addressing the ability to handle data heterogeneity (non-identical and independent distribution, or non-IID) is a key enabler for the wider deployment of Federated Learning. In this paper, we propose a novel Divide-and-Conquer training methodology that enables the use of the popular FedAvg aggregation algorithm by overcoming the acknowledged FedAvg limitations in non-IID environments. We propose a novel use of a Cosine-distance-based Weight Divergence metric to determine the exact point where a Deep Learning network can be divided into class-agnostic initial layers and class-specific deep layers for performing Divide-and-Conquer training. We show that the methodology achieves trained-model accuracy on par with (and in certain cases exceeding) the numbers achieved by state-of-the-art algorithms like FedProx, FedMA, etc. Also, we show that this methodology leads to compute and/or bandwidth optimizations under certain documented conditions.
APA, Harvard, Vancouver, ISO, and other styles
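The cosine-distance weight-divergence idea in this abstract can be sketched directly: compare corresponding layers of two models and split the network at the first layer whose divergence crosses a threshold. The threshold value and helper names below are our assumptions, not the paper's:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def split_layer(layers_a, layers_b, threshold=0.5):
    """Index of the first layer whose weight divergence exceeds the
    threshold; earlier layers are treated as class-agnostic."""
    for i, (la, lb) in enumerate(zip(layers_a, layers_b)):
        if cosine_distance(la, lb) > threshold:
            return i
    return len(layers_a)
```

Layers before the split are trained jointly across clients; the diverging deeper layers are the class-specific part of the network.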
6

Hu, Rui, Yuanxiong Guo, Hongning Li, Qingqi Pei, and Yanmin Gong. "Privacy-Preserving Personalized Federated Learning." In ICC 2020 - 2020 IEEE International Conference on Communications (ICC). IEEE, 2020. http://dx.doi.org/10.1109/icc40277.2020.9149207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Xudong, and Hui Li. "Privacy-preserving Decentralized Federated Deep Learning." In ACM TURC 2021: ACM Turing Award Celebration Conference - China ( ACM TURC 2021). New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3472634.3472642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chattopadhyay, Nandish, Arpit Singh, and Anupam Chattopadhyay. "ROFL: RObust privacy preserving Federated Learning." In 2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW). IEEE, 2022. http://dx.doi.org/10.1109/icdcsw56584.2022.00033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gao, Dashan, Yang Liu, Anbu Huang, Ce Ju, Han Yu, and Qiang Yang. "Privacy-preserving Heterogeneous Federated Transfer Learning." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9005992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Yuanyuan, Lulu Wang, and Lei Zhang. "Privacy-Preserving Verifiable Asynchronous Federated Learning." In ICSED 2021: 2021 3rd International Conference on Software Engineering and Development. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3507473.3507478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
