Journal articles on the topic 'Privacy preserving machine learning'


Consult the top 50 journal articles for your research on the topic 'Privacy preserving machine learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Zheyuan, and Rui Zhang. "Privacy Preserving Collaborative Machine Learning." ICST Transactions on Security and Safety 8, no. 28 (September 10, 2021): 170295. http://dx.doi.org/10.4108/eai.14-7-2021.170295.

2

Kerschbaum, Florian, and Nils Lukas. "Privacy-Preserving Machine Learning [Cryptography]." IEEE Security & Privacy 21, no. 6 (November 2023): 90–94. http://dx.doi.org/10.1109/msec.2023.3315944.

3

Pan, Ziqi. "Machine learning for privacy-preserving: Approaches, challenges and discussion." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 23–27. http://dx.doi.org/10.54254/2755-2721/18/20230957.

Abstract:
Currently, advanced technologies such as big data, artificial intelligence and machine learning are undergoing rapid development. However, the emergence of cybersecurity and privacy leakage problems has resulted in serious implications. This paper discusses the current state of privacy and security issues in the field of machine learning in a comprehensive manner. During training, models often inadvertently extract and record private information from raw data, and in addition, third-party attackers may maliciously extract private information from raw data. This paper first provides a quick introduction to the validation criteria used in privacy-preserving strategies, based on which algorithms can account for and validate the privacy leakage problem during machine learning. The paper then describes different privacy-preserving strategies, based mainly on federated learning, that focus on Differentially Private Federated Averaging and the Privacy-Preserving Asynchronous Federated Learning Mechanism, and provides an analysis and discussion of their advantages and disadvantages. By improving the original machine learning methods, such as adjusting parameter values and limiting the range of features, the possibility of privacy leakage during machine learning is successfully reduced. However, the different privacy-preserving strategies are mainly limited to changing the parameters of the original model training method, which leads to limitations in the training method, such as reduced efficiency or difficulty in training under certain conditions.
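
The Differentially Private Federated Averaging strategy mentioned above can be pictured, in a much reduced form, as clipping each client's update and adding Gaussian noise before averaging. The sketch below is a toy illustration under assumed values for the clipping norm and noise multiplier, not the specific mechanism analysed in the paper.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Toy DP-FedAvg step: clip each client's update, average, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12)) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)  # noise scaled to the clipping bound
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Three simulated clients, each contributing a 4-parameter model update
updates = [np.array([0.2, -0.1, 0.4, 0.05]),
           np.array([0.1, 0.3, -0.2, 0.0]),
           np.array([-0.05, 0.2, 0.1, 0.3])]
print(dp_federated_average(updates))
```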
4

Monika Dhananjay Rokade. "Advancements in Privacy-Preserving Techniques for Federated Learning: A Machine Learning Perspective." Journal of Electrical Systems 20, no. 2s (March 31, 2024): 1075–88. http://dx.doi.org/10.52783/jes.1754.

Abstract:
Federated learning has emerged as a promising paradigm for collaborative machine learning while preserving data privacy. However, concerns about data privacy remain significant, particularly in scenarios where sensitive information is involved. This paper reviews recent advancements in privacy-preserving techniques for federated learning from a machine learning perspective. It categorizes and analyses state-of-the-art approaches within a unified framework, highlighting their strengths, limitations, and potential applications. By providing insights into the landscape of privacy-preserving federated learning, this review aims to guide researchers and practitioners in developing robust and privacy-conscious machine learning solutions for collaborative environments. The paper concludes with future research directions to address ongoing challenges and further enhance the effectiveness and scalability of privacy-preserving federated learning.
5

Zheng, Huadi, Haibo Hu, and Ziyang Han. "Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?" IEEE Intelligent Systems 35, no. 4 (July 1, 2020): 5–14. http://dx.doi.org/10.1109/mis.2020.3010335.

6

Chamikara, M. A. P., P. Bertok, I. Khalil, D. Liu, and S. Camtepe. "Privacy preserving distributed machine learning with federated learning." Computer Communications 171 (April 2021): 112–25. http://dx.doi.org/10.1016/j.comcom.2021.02.014.

7

Bonawitz, Kallista, Peter Kairouz, Brendan McMahan, and Daniel Ramage. "Federated learning and privacy." Communications of the ACM 65, no. 4 (April 2022): 90–97. http://dx.doi.org/10.1145/3500240.

8

Al-Rubaie, Mohammad, and J. Morris Chang. "Privacy-Preserving Machine Learning: Threats and Solutions." IEEE Security & Privacy 17, no. 2 (March 2019): 49–58. http://dx.doi.org/10.1109/msec.2018.2888775.

9

Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Rebecca N. Wright. "Privacy-preserving Machine Learning as a Service." Proceedings on Privacy Enhancing Technologies 2018, no. 3 (June 1, 2018): 123–42. http://dx.doi.org/10.1515/popets-2018-0024.

Abstract:
Machine learning algorithms based on deep Neural Networks (NN) have achieved remarkable results and are being extensively used in different domains. On the other hand, with increasing growth of cloud services, several Machine Learning as a Service (MLaaS) are offered where training and deploying machine learning models are performed on cloud providers’ infrastructure. However, machine learning algorithms require access to the raw data which is often privacy sensitive and can create potential security and privacy risks. To address this issue, we present CryptoDL, a framework that develops new techniques to provide solutions for applying deep neural network algorithms to encrypted data. In this paper, we provide the theoretical foundation for implementing deep neural network algorithms in encrypted domain and develop techniques to adopt neural networks within practical limitations of current homomorphic encryption schemes. We show that it is feasible and practical to train neural networks using encrypted data and to make encrypted predictions, and also return the predictions in an encrypted form. We demonstrate applicability of the proposed CryptoDL using a large number of datasets and evaluate its performance. The empirical results show that it provides accurate privacy-preserving training and classification.
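
A central obstacle CryptoDL addresses is that homomorphic encryption schemes natively evaluate only additions and multiplications, so non-polynomial activations must be replaced by low-degree polynomial approximations. The sketch below shows that general idea with NumPy; the interval, degree, and plain least-squares fit are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Approximate the sigmoid on [-4, 4] with a degree-3 polynomial; inside an HE scheme,
# only the resulting additions and multiplications would be evaluated on ciphertexts.
xs = np.linspace(-4, 4, 400)
sigmoid = 1.0 / (1.0 + np.exp(-xs))
coeffs = np.polyfit(xs, sigmoid, deg=3)   # least-squares polynomial fit
poly = np.poly1d(coeffs)

max_err = np.max(np.abs(poly(xs) - sigmoid))
print("degree-3 approximation, max error on [-4, 4]:", round(float(max_err), 4))
```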
10

Jitendra Singh Chouhan, Amit Kumar Bhatt, and Nitin Anand. "Federated Learning: Privacy Preserving Machine Learning for Decentralized Data." Tuijin Jishu/Journal of Propulsion Technology 44, no. 1 (November 24, 2023): 167–69. http://dx.doi.org/10.52783/tjjpt.v44.i1.2234.

Abstract:
Federated learning represents a compelling solution for tackling the privacy challenges inherent in decentralized and distributed environments when it comes to machine learning. This scholarly paper delves deep into the realm of federated learning, encompassing its applications and the latest privacy-preserving techniques used for training machine learning models in a decentralized manner. We explore the reasons behind the adoption of federated learning, highlight its advantages over conventional centralized approaches, and examine the diverse methods employed to safeguard privacy within this framework. Furthermore, we scrutinize the current obstacles, unresolved research queries, and the prospective directions within this rapidly developing field.
11

Peringanji, Deepika. "Unlocking the Future: Privacy-Preserving ML Experimentation." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (May 31, 2024): 350–56. http://dx.doi.org/10.22214/ijraset.2024.60969.

Abstract:
Experiments with machine learning (ML) have become a key source of new ideas in many fields. However, growing worries about data privacy have made it clear that we need ML testing methods that protect privacy. There are new technologies in this piece that let you play around with machine learning without putting your data at risk. Differential privacy, secure multiparty computation (SMPC), homomorphic encryption, federated learning, trusted execution environments (TEEs), making fake data, and using temporary and nameless IDs are some of these technologies. By using these privacy-protecting solutions, businesses can utilize the full potential of machine learning experiments while protecting individuals' privacy rights and staying in line with strict rules.
12

Li, Changhao, Niraj Kumar, Zhixin Song, Shouvanik Chakrabarti, and Marco Pistoia. "Privacy-preserving quantum federated learning via gradient hiding." Quantum Science and Technology 9, no. 3 (May 8, 2024): 035028. http://dx.doi.org/10.1088/2058-9565/ad40cc.

Abstract:
Distributed quantum computing, particularly distributed quantum machine learning, has gained substantial prominence for its capacity to harness the collective power of distributed quantum resources, transcending the limitations of individual quantum nodes. Meanwhile, the critical concern of privacy within distributed computing protocols remains a significant challenge, particularly in standard classical federated learning (FL) scenarios where data of participating clients is susceptible to leakage via gradient inversion attacks by the server. This paper presents innovative quantum protocols with quantum communication designed to address the FL problem, strengthen privacy measures, and optimize communication efficiency. In contrast to previous works that leverage expressive variational quantum circuits or differential privacy techniques, we consider gradient information concealment using quantum states and propose two distinct FL protocols, one based on private inner-product estimation and the other on incremental learning. These protocols offer substantial advancements in privacy preservation with low communication resources, forging a path toward efficient quantum communication-assisted FL protocols and contributing to the development of secure distributed quantum machine learning, thus addressing critical privacy concerns in the quantum computing era.
13

Shaik Khaleel Ahamed, Neerav Nishant, Ayyakkannu Selvaraj, Nisarg Gandhewar, Srithar A, and K.K.Baseer. "Investigating privacy-preserving machine learning for healthcare data sharing through federated learning." Scientific Temper 14, no. 04 (December 31, 2023): 1308–15. http://dx.doi.org/10.58414/scientifictemper.2023.14.4.37.

Abstract:
Privacy-Preserving Machine Learning (PPML) is a pivotal paradigm in healthcare research, offering innovative solutions to the challenges of data sharing and privacy preservation. In the context of Federated Learning, this paper investigates the implementation of PPML for healthcare data sharing, focusing on the dynamic nature of data collection, sample sizes, data modalities, patient demographics, and comorbidity indices. The results reveal substantial variations in sample sizes across substudies, underscoring the need to align data collection with research objectives and available resources. The distribution of measures demonstrates a balanced approach to healthcare data modalities, ensuring data fairness and equity. The interplay between average age and sample size highlights the significance of tailored privacy-preserving strategies. The comorbidity index distribution provides insights into the health status of the studied population and aids in personalized healthcare. Additionally, the fluctuation of sample sizes over substudies emphasizes the adaptability of privacy-preserving machine learning models in diverse healthcare research scenarios. Overall, this investigation contributes to the evolving landscape of healthcare data sharing by addressing the challenges of data heterogeneity, regulatory compliance, and collaborative model development. The findings empower researchers and healthcare professionals to strike a balance between data utility and privacy preservation, ultimately advancing the field of privacy-preserving machine learning in healthcare research.
14

Zapechnikov, Sergey V. "Models and algorithms of privacy-preserving machine learning." Bezopasnost informacionnyh tehnology 27, no. 1 (March 2020): 51–67. http://dx.doi.org/10.26583/bit.2020.1.05.

15

Baron, Benjamin, and Mirco Musolesi. "Interpretable Machine Learning for Privacy-Preserving Pervasive Systems." IEEE Pervasive Computing 19, no. 1 (January 2020): 73–82. http://dx.doi.org/10.1109/mprv.2019.2918540.

16

Kim, Hyunil, Seung-Hyun Kim, Jung Yeon Hwang, and Changho Seo. "Efficient Privacy-Preserving Machine Learning for Blockchain Network." IEEE Access 7 (2019): 136481–95. http://dx.doi.org/10.1109/access.2019.2940052.

17

Li, Ping, Tong Li, Heng Ye, Jin Li, Xiaofeng Chen, and Yang Xiang. "Privacy-preserving machine learning with multiple data providers." Future Generation Computer Systems 87 (October 2018): 341–50. http://dx.doi.org/10.1016/j.future.2018.04.076.

18

Taigel, Fabian, Anselme K. Tueno, and Richard Pibernik. "Privacy-preserving condition-based forecasting using machine learning." Journal of Business Economics 88, no. 5 (January 5, 2018): 563–92. http://dx.doi.org/10.1007/s11573-017-0889-x.

19

Froelicher, David, Juan R. Troncoso-Pastoriza, Apostolos Pyrgelis, Sinem Sav, Joao Sa Sousa, Jean-Philippe Bossuat, and Jean-Pierre Hubaux. "Scalable Privacy-Preserving Distributed Learning." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 323–47. http://dx.doi.org/10.2478/popets-2021-0030.

Abstract:
In this paper, we address the problem of privacy-preserving distributed learning and the evaluation of machine-learning models by analyzing it in the widespread MapReduce abstraction that we extend with privacy constraints. We design SPINDLE (Scalable Privacy-preservINg Distributed LEarning), the first distributed and privacy-preserving system that covers the complete ML workflow by enabling the execution of a cooperative gradient-descent and the evaluation of the obtained model and by preserving data and model confidentiality in a passive-adversary model with up to N − 1 colluding parties. SPINDLE uses multiparty homomorphic encryption to execute parallel high-depth computations on encrypted data without significant overhead. We instantiate SPINDLE for the training and evaluation of generalized linear models on distributed datasets and show that it is able to accurately (on par with non-secure centrally-trained models) and efficiently (due to a multi-level parallelization of the computations) train models that require a high number of iterations on large input data with thousands of features, distributed among hundreds of data providers. For instance, it trains a logistic-regression model on a dataset of one million samples with 32 features distributed among 160 data providers in less than three minutes.
20

Senekane, Makhamisa. "Differentially Private Image Classification Using Support Vector Machine and Differential Privacy." Machine Learning and Knowledge Extraction 1, no. 1 (February 20, 2019): 483–91. http://dx.doi.org/10.3390/make1010029.

Abstract:
The ubiquity of data, including multi-media data such as images, enables easy mining and analysis of such data. However, such an analysis might involve the use of sensitive data such as medical records (including radiological images) and financial records. Privacy-preserving machine learning is an approach that is aimed at the analysis of such data in such a way that privacy is not compromised. There are various privacy-preserving data analysis approaches such as k-anonymity, l-diversity, t-closeness and Differential Privacy (DP). Currently, DP is a golden standard of privacy-preserving data analysis due to its robustness against background knowledge attacks. In this paper, we report a scheme for privacy-preserving image classification using Support Vector Machine (SVM) and DP. SVM is chosen as a classification algorithm because unlike variants of artificial neural networks, it converges to a global optimum. SVM kernels used are linear and Radial Basis Function (RBF), while ϵ -differential privacy was the DP framework used. The proposed scheme achieved an accuracy of up to 98%. The results obtained underline the utility of using SVM and DP for privacy-preserving image classification.
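
As a rough, generic illustration of pairing an SVM with ϵ-differential privacy (an output-perturbation sketch, not the exact scheme evaluated in the paper), the example below trains a linear SVM with scikit-learn and releases its weight vector with Laplace noise; the sensitivity bound and privacy budget are placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X = X / 16.0                                            # scale pixel values to [0, 1]
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y == 0)    # binary task: digit "0" vs rest

epsilon = 1.0        # assumed privacy budget
sensitivity = 0.5    # assumed bound on one record's influence on the weights
rng = np.random.default_rng(42)
noisy_w = clf.coef_ + rng.laplace(0.0, sensitivity / epsilon, size=clf.coef_.shape)

scores = X @ noisy_w.ravel() + clf.intercept_
print("accuracy of the noisy released model:", round(float(np.mean((scores > 0) == (y == 0))), 3))
```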
21

Yin, Xuefei, Yanming Zhu, and Jiankun Hu. "A Comprehensive Survey of Privacy-preserving Federated Learning." ACM Computing Surveys 54, no. 6 (July 2021): 1–36. http://dx.doi.org/10.1145/3460427.

Abstract:
The past four years have witnessed the rapid development of federated learning (FL). However, new privacy concerns have also emerged during the aggregation of the distributed intermediate results. The emerging privacy-preserving FL (PPFL) has been heralded as a solution to generic privacy-preserving machine learning. However, the challenge of protecting data privacy while maintaining the data utility through machine learning still remains. In this article, we present a comprehensive and systematic survey on the PPFL based on our proposed 5W-scenario-based taxonomy. We analyze the privacy leakage risks in the FL from five aspects, summarize existing methods, and identify future research directions.
22

Fang, Haokun, and Quan Qian. "Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning." Future Internet 13, no. 4 (April 8, 2021): 94. http://dx.doi.org/10.3390/fi13040094.

Abstract:
Privacy protection has become an important concern with the great success of machine learning. This paper proposes a multi-party privacy-preserving machine learning framework, named PFMLP, based on partially homomorphic encryption and federated learning. The core idea is that all learning parties transmit only gradients encrypted under homomorphic encryption. In experiments, the model trained by PFMLP achieves almost the same accuracy, with a deviation of less than 1%. Considering the computational overhead of homomorphic encryption, we use an improved Paillier algorithm that speeds up training by 25–28%. Comparisons of encryption key length, learning network structure, number of learning clients, etc., are also discussed in detail in the paper.
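
The core PFMLP idea described here, exchanging only homomorphically encrypted gradients, can be illustrated with the additively homomorphic Paillier scheme. The sketch below uses the third-party `phe` (python-paillier) library purely as an assumed stand-in; it is not the paper's improved Paillier implementation.

```python
# pip install phe
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two parties compute local gradients and encrypt them before sharing.
grad_a = np.array([0.12, -0.30, 0.05])
grad_b = np.array([0.08, 0.10, -0.20])
enc_a = [public_key.encrypt(float(g)) for g in grad_a]
enc_b = [public_key.encrypt(float(g)) for g in grad_b]

# The aggregator adds ciphertexts without ever seeing an individual party's gradients.
enc_sum = [ea + eb for ea, eb in zip(enc_a, enc_b)]

avg_gradient = [private_key.decrypt(c) / 2 for c in enc_sum]
print(avg_gradient)   # element-wise mean of the two local gradients
```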
23

Park, Jaehyoung, and Hyuk Lim. "Privacy-Preserving Federated Learning Using Homomorphic Encryption." Applied Sciences 12, no. 2 (January 12, 2022): 734. http://dx.doi.org/10.3390/app12020734.

Abstract:
Federated learning (FL) is a machine learning technique that enables distributed devices to train a learning model collaboratively without sharing their local data. FL-based systems can achieve much stronger privacy preservation since the distributed devices deliver only local model parameters trained with local data to a centralized server. However, there exists a possibility that a centralized server or attackers infer/extract sensitive private information using the structure and parameters of local learning models. We propose employing homomorphic encryption (HE) scheme that can directly perform arithmetic operations on ciphertexts without decryption to protect the model parameters. Using the HE scheme, the proposed privacy-preserving federated learning (PPFL) algorithm enables the centralized server to aggregate encrypted local model parameters without decryption. Furthermore, the proposed algorithm allows each node to use a different HE private key in the same FL-based system using a distributed cryptosystem. The performance analysis and evaluation of the proposed PPFL algorithm are conducted in various cloud computing-based FL service scenarios.
24

Papadopoulos, Pavlos, Will Abramson, Adam J. Hall, Nikolaos Pitropakis, and William J. Buchanan. "Privacy and Trust Redefined in Federated Machine Learning." Machine Learning and Knowledge Extraction 3, no. 2 (March 29, 2021): 333–56. http://dx.doi.org/10.3390/make3020017.

Abstract:
A common privacy issue in traditional machine learning is that data needs to be disclosed for the training procedures. In situations with highly sensitive data such as healthcare records, accessing this information is challenging and often prohibited. Luckily, privacy-preserving technologies have been developed to overcome this hurdle by distributing the computation of the training and ensuring the data privacy to their owners. The distribution of the computation to multiple participating entities introduces new privacy complications and risks. In this paper, we present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants. Our proof-of-concept defines a trust framework instantiated using decentralised identity technologies being developed under Hyperledger projects Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communication channels authorised to participate in a federated learning workflow related to mental health data.
25

Zhu, Liehuang, Xiangyun Tang, Meng Shen, Feng Gao, Jie Zhang, and Xiaojiang Du. "Privacy-Preserving Machine Learning Training in IoT Aggregation Scenarios." IEEE Internet of Things Journal 8, no. 15 (August 1, 2021): 12106–18. http://dx.doi.org/10.1109/jiot.2021.3060764.

26

Owusu-Agyemang, Kwabena, Zhen Qin, Appiah Benjamin, Hu Xiong, and Zhiguang Qin. "Guaranteed distributed machine learning: Privacy-preserving empirical risk minimization." Mathematical Biosciences and Engineering 18, no. 4 (2021): 4772–96. http://dx.doi.org/10.3934/mbe.2021243.

27

Kawamura, Ayana, Yuma Kinoshita, Takayuki Nakachi, Sayaka Shiota, and Hitoshi Kiya. "A Privacy-Preserving Machine Learning Scheme Using EtC Images." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E103.A, no. 12 (December 1, 2020): 1571–78. http://dx.doi.org/10.1587/transfun.2020smp0022.

28

Jia, Qi, Linke Guo, Yuguang Fang, and Guirong Wang. "Efficient Privacy-Preserving Machine Learning in Hierarchical Distributed System." IEEE Transactions on Network Science and Engineering 6, no. 4 (October 1, 2019): 599–612. http://dx.doi.org/10.1109/tnse.2018.2859420.

29

Jia, Qi, Linke Guo, Zhanpeng Jin, and Yuguang Fang. "Preserving Model Privacy for Machine Learning in Distributed Systems." IEEE Transactions on Parallel and Distributed Systems 29, no. 8 (August 1, 2018): 1808–22. http://dx.doi.org/10.1109/tpds.2018.2809624.

30

Zapechnikov, Sergey. "Secure multi-party computations for privacy-preserving machine learning." Procedia Computer Science 213 (2022): 523–27. http://dx.doi.org/10.1016/j.procs.2022.11.100.

31

Samet, Saeed, and Ali Miri. "Privacy-preserving back-propagation and extreme learning machine algorithms." Data & Knowledge Engineering 79-80 (September 2012): 40–61. http://dx.doi.org/10.1016/j.datak.2012.06.001.

32

Terziyan, Vagan, Bohdan Bilokon, and Mariia Gavriushenko. "Deep Homeomorphic Data Encryption for Privacy Preserving Machine Learning." Procedia Computer Science 232 (2024): 2201–12. http://dx.doi.org/10.1016/j.procs.2024.02.039.

33

D. Nikhil Teja. "Privacy Preserving Location Data Publishing: A Machine Learning Approach." Journal of Science and Technology 8, no. 12 (December 12, 2023): 23–30. http://dx.doi.org/10.46243/jst.2023.v8.i12.pp23-30.

34

Xu, Shasha, and Xiufang Yin. "Recommendation System for Privacy-Preserving Education Technologies." Computational Intelligence and Neuroscience 2022 (April 16, 2022): 1–8. http://dx.doi.org/10.1155/2022/3502992.

Abstract:
Given the priority placed on personalized and fully customized learning systems, computationally intelligent systems for personalized educational technologies are a timely research area. Because machine learning models reflect the data over which they were trained, data carrying privacy and other sensitivities related to learners' educational abilities can be vulnerable. This work proposes a recommendation system for privacy-preserving education technologies that uses machine learning and differential privacy to overcome this issue. Specifically, each student is automatically classified on their skills into a category using a directed acyclic graph method. The model then applies differential privacy, a technology for obtaining useful information from databases containing individuals' personal information without divulging sensitive identifying details about any individual. In addition, an intelligent recommendation mechanism based on collaborative filtering offers personalized real-time recommendations while respecting users' privacy.
35

Kjamilji, Artrim. "Techniques and Challenges while Applying Machine Learning Algorithms in Privacy Preserving Fashion." Proceeding International Conference on Science and Engineering 3 (April 30, 2020): xix. http://dx.doi.org/10.14421/icse.v3.600.

Abstract:
Nowadays many different entities collect data of the same nature, but in slightly different environments. In this sense different hospitals collect data about their patients’ symptoms and corresponding disease diagnoses, different banks collect transactions of their customers’ bank accounts, multiple cyber-security companies collect data about log files and corresponding attacks, etc. It is shown that if those different entities would merge their privately collected data in a single dataset and use it to train a machine learning (ML) model, they often end up with a trained model that outperforms the human experts of the corresponding fields in terms of accurate predictions. However, there is a drawback. Due to privacy concerns, empowered by laws and ethical reasons, no entity is willing to share with others their privately collected data. The same problem appears during the classification case over an already trained ML model. On one hand, a user that has an unclassified query (record), doesn’t want to share with the server that owns the trained model neither the content of the query (which might contain private data such as credit card number, IP address, etc.), nor the final prediction (classification) of the query. On the other hand, the owner of the trained model doesn’t want to leak any parameter of the trained model to the user. In order to overcome those shortcomings, several cryptographic and probabilistic techniques have been proposed during the last few years to enable both privacy preserving training and privacy preserving classification schemes. Some of them include anonymization and k-anonymity, differential privacy, secure multiparty computation (MPC), federated learning, Private Information Retrieval (PIR), Oblivious Transfer (OT), garbled circuits and/or homomorphic encryption, to name a few. Theoretical analyses and experimental results show that the current privacy preserving schemes are suitable for real-case deployment, while the accuracy of most of them differ little or not at all with the schemes that work in non-privacy preserving fashion.
36

Upadhyay, Utsav, Alok Kumar, Satyabrata Roy, and Umashankar Rawat. "Balancing innovation and privacy: A machine learning perspective." Journal of Discrete Mathematical Sciences and Cryptography 27, no. 2-B (2024): 547–57. http://dx.doi.org/10.47974/jdmsc-1877.

Abstract:
As digital innovation advances, concerns surrounding privacy escalate. This manuscript explores the intricate relationship between machine learning and privacy preservation. Beginning with a comprehensive literature review, we delve into the current state of privacy in the digital age and examine machine learning’s role in addressing these concerns. The manuscript highlights key privacy-preserving techniques, including homomorphic encryption, differential privacy, and federated learning, providing in-depth insights into their applications and real-world implementations. Anticipating future challenges and trends, we recommend maintaining a delicate equilibrium between innovation and privacy in the dynamic landscape of machine learning.
37

Wibawa, Febrianti, Ferhat Ozgur Catak, Salih Sarp, and Murat Kuzlu. "BFV-Based Homomorphic Encryption for Privacy-Preserving CNN Models." Cryptography 6, no. 3 (July 1, 2022): 34. http://dx.doi.org/10.3390/cryptography6030034.

Abstract:
Medical data is frequently quite sensitive in terms of data privacy and security. Federated learning has been used to increase the privacy and security of medical data, which is a sort of machine learning technique. The training data is disseminated across numerous machines in federated learning, and the learning process is collaborative. There are numerous privacy attacks on deep learning (DL) models that attackers can use to obtain sensitive information. As a result, the DL model should be safeguarded from adversarial attacks, particularly in medical data applications. Homomorphic encryption-based model security from the adversarial collaborator is one of the answers to this challenge. Using homomorphic encryption, this research presents a privacy-preserving federated learning system for medical data. The proposed technique employs a secure multi-party computation protocol to safeguard the deep learning model from adversaries. The proposed approach is tested in terms of model performance using a real-world medical dataset in this paper.
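
As a minimal, library-level illustration of the BFV scheme the paper builds on (assuming the third-party TenSEAL library, which the paper does not prescribe), the sketch below encrypts two integer vectors and adds them as ciphertexts, the same primitive a server could use to aggregate encrypted model updates.

```python
# pip install tenseal
import tenseal as ts

# BFV context for integer arithmetic on ciphertexts (parameters are illustrative).
context = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)

update_a = ts.bfv_vector(context, [3, 1, 4, 1, 5])   # e.g., a quantized model update from client A
update_b = ts.bfv_vector(context, [2, 7, 1, 8, 2])   # quantized update from client B

encrypted_sum = update_a + update_b                  # computed without decryption
print(encrypted_sum.decrypt())                       # [5, 8, 5, 9, 7]
```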
38

Rezaeifar, Shideh, Slava Voloshynovskiy, Meisam Asgari Jirhandeh, and Vitaliy Kinakh. "Privacy-Preserving Image Template Sharing Using Contrastive Learning." Entropy 24, no. 5 (May 3, 2022): 643. http://dx.doi.org/10.3390/e24050643.

Abstract:
With the recent developments of Machine Learning as a Service (MLaaS), various privacy concerns have been raised. Having access to the user’s data, an adversary can design attacks with different objectives, namely, reconstruction or attribute inference attacks. In this paper, we propose two different training frameworks for an image classification task while preserving user data privacy against the two aforementioned attacks. In both frameworks, an encoder is trained with contrastive loss, providing a superior utility-privacy trade-off. In the reconstruction attack scenario, a supervised contrastive loss was employed to provide maximal discrimination for the targeted classification task. The encoded features are further perturbed using the obfuscator module to remove all redundant information. Moreover, the obfuscator module is jointly trained with a classifier to minimize the correlation between private feature representation and original data while retaining the model utility for the classification. For the attribute inference attack, we aim to provide a representation of data that is independent of the sensitive attribute. Therefore, the encoder is trained with supervised and private contrastive loss. Furthermore, an obfuscator module is trained in an adversarial manner to preserve the privacy of sensitive attributes while maintaining the classification performance on the target attribute. The reported results on the CelebA dataset validate the effectiveness of the proposed frameworks.
39

Zhao, Ruoli, Yong Xie, Hong Cheng, Xingxing Jia, and Syed Zamad Shirazi. "ePMLF: Efficient and Privacy-Preserving Machine Learning Framework Based on Fog Computing." International Journal of Intelligent Systems 2023 (February 27, 2023): 1–16. http://dx.doi.org/10.1155/2023/8292559.

Abstract:
With the continuous improvement of computation and communication capabilities, the Internet of Things (IoT) plays a vital role in many intelligent applications. Therefore, IoT devices generate a large amount of data every day, which lays a solid foundation for the success of machine learning. However, the strong privacy requirements of the IoT data make its machine learning very difficult. To protect data privacy, many privacy-preserving machine learning schemes have been proposed. At present, most schemes only aim at specific models and lack general solutions, which is not an ideal solution in engineering practice. In order to meet this challenge, we propose an efficient and privacy-preserving machine learning training framework (ePMLF) in a fog computing environment. The ePMLF framework can let the software service provider (SSP) perform privacy-preserving model training with the data on the fog nodes. The security of the data on the fog nodes can be protected and the model parameters can only be obtained by SSP. The proposed secure data normalization method in the framework further improves the accuracy of the training model. Experimental analysis shows that our framework significantly reduces the computation and communication overhead compared with the existing scheme.
40

Ma, Xindi, Jianfeng Ma, Saru Kumari, Fushan Wei, Mohammad Shojafar, and Mamoun Alazab. "Privacy-Preserving Distributed Multi-Task Learning against Inference Attack in Cloud Computing." ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–24. http://dx.doi.org/10.1145/3426969.

Abstract:
Because of the powerful computing and storage capability in cloud computing, machine learning as a service (MLaaS) has recently been valued by the organizations for machine learning training over some related representative datasets. When these datasets are collected from different organizations and have different distributions, multi-task learning (MTL) is usually used to improve the generalization performance by scheduling the related training tasks into the virtual machines in MLaaS and transferring the related knowledge between those tasks. However, because of concerns about privacy breaches (e.g., property inference attack and model inverse attack), organizations cannot directly outsource their training data to MLaaS or share their extracted knowledge in plaintext, especially the organizations in sensitive domains. In this article, we propose a novel privacy-preserving mechanism for distributed MTL, namely NOInfer, to allow several task nodes to train the model locally and transfer their shared knowledge privately. Specifically, we construct a single-server architecture to achieve the private MTL, which protects task nodes’ local data even if n-1 out of n nodes colluded. Then, a new protocol for the Alternating Direction Method of Multipliers (ADMM) is designed to perform the privacy-preserving model training, which resists the inference attack through the intermediate results and ensures that the training efficiency is independent of the number of training samples. When releasing the trained model, we also design a differentially private model releasing mechanism to resist the membership inference attack. Furthermore, we analyze the privacy preservation and efficiency of NOInfer in theory. Finally, we evaluate our NOInfer over two testing datasets and evaluation results demonstrate that NOInfer efficiently and effectively achieves the distributed MTL.
41

Xiao, Taihong, Yi-Hsuan Tsai, Kihyuk Sohn, Manmohan Chandraker, and Ming-Hsuan Yang. "Adversarial Learning of Privacy-Preserving and Task-Oriented Representations." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12434–41. http://dx.doi.org/10.1609/aaai.v34i07.6930.

Abstract:
Data privacy has emerged as an important issue as data-driven deep learning has been an essential component of modern machine learning systems. For instance, there could be a potential privacy risk of machine learning systems via the model inversion attack, whose goal is to reconstruct the input data from the latent representation of deep networks. Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. Specifically, we propose an adversarial reconstruction learning framework that prevents the latent representations decoded into original input data. By simulating the expected behavior of adversary, our framework is realized by minimizing the negative pixel reconstruction loss or the negative feature reconstruction (i.e., perceptual distance) loss. We validate the proposed method on face attribute prediction, showing that our method allows protecting visual privacy with a small decrease in utility performance. In addition, we show the utility-privacy trade-off with different choices of hyperparameter for negative perceptual distance loss at training, allowing service providers to determine the right level of privacy-protection with a certain utility performance. Moreover, we provide an extensive study with different selections of features, tasks, and the data to further analyze their influence on privacy protection.
42

Papernot, Nicolas, Abhradeep Thakurta, Shuang Song, Steve Chien, and Úlfar Erlingsson. "Tempered Sigmoid Activations for Deep Learning with Differential Privacy." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9312–21. http://dx.doi.org/10.1609/aaai.v35i10.17123.

Abstract:
Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer differential privacy for training data. In practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the model architectures that already performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. To improve these tradeoffs, prior work introduces variants of differential privacy that weaken the privacy guarantee proved to increase model utility. We show this is not necessary and instead propose that utility be improved by choosing activation functions designed explicitly for privacy-preserving training. A crucial operation in differentially private SGD is gradient clipping, which along with modifying the optimization path (at times resulting in not-optimizing a single objective function), may also introduce both significant bias and variance to the learning process. We empirically identify exploding gradients arising from ReLU may be one of the main sources of this. We demonstrate analytically and experimentally how a general family of bounded activation functions, the tempered sigmoids, consistently outperform the currently established choice: unbounded activation functions like ReLU. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the learning procedure fundamentals or differential privacy analysis. While the changes we make are simple in retrospect, the simplicity of our approach facilitates its implementation and adoption to meaningfully improve state-of-the-art machine learning while still providing strong guarantees in the original framework of differential privacy.
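
The tempered sigmoids discussed here form a family of bounded activations parameterised by a scale s, an inverse temperature T, and an offset o; with the usual parameterisation, tanh is the special case s=2, T=2, o=1. A minimal sketch (the surrounding comparison code is illustrative):

```python
import numpy as np

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """Bounded activation s * sigmoid(T * x) - o; s=2, T=2, o=1 recovers tanh."""
    return s / (1.0 + np.exp(-T * x)) - o

x = np.linspace(-3, 3, 7)
print(np.round(tempered_sigmoid(x), 4))   # matches np.tanh(x) for the default parameters
print(np.round(np.tanh(x), 4))
```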
43

Fu, Dongqi, Wenxuan Bao, Ross Maciejewski, Hanghang Tong, and Jingrui He. "Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey." ACM SIGKDD Explorations Newsletter 25, no. 1 (June 22, 2023): 54–72. http://dx.doi.org/10.1145/3606274.3606280.

Abstract:
In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information. In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing. In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects. We first review methods for generating privacy-preserving graph data. Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible. In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system.
44

Alazab, Ammar, Ansam Khraisat, Sarabjot Singh, and Tony Jan. "Enhancing Privacy-Preserving Intrusion Detection through Federated Learning." Electronics 12, no. 16 (August 8, 2023): 3382. http://dx.doi.org/10.3390/electronics12163382.

Abstract:
Detecting anomalies, intrusions, and security threats in the network (including Internet of Things) traffic necessitates the processing of large volumes of sensitive data, which raises concerns about privacy and security. Federated learning, a distributed machine learning approach, enables multiple parties to collaboratively train a shared model while preserving data decentralization and privacy. In a federated learning environment, instead of training and evaluating the model on a single machine, each client learns a local model with the same structure but is trained on different local datasets. These local models are then communicated to an aggregation server that employs federated averaging to aggregate them and produce an optimized global model. This approach offers significant benefits for developing efficient and effective intrusion detection system (IDS) solutions. In this research, we investigated the effectiveness of federated learning for IDSs and compared it with that of traditional deep learning models. Our findings demonstrate that federated learning, by utilizing random client selection, achieved higher accuracy and lower loss compared to deep learning, particularly in scenarios emphasizing data privacy and security. Our experiments highlight the capability of federated learning to create global models without sharing sensitive data, thereby mitigating the risks associated with data breaches or leakage. The results suggest that federated averaging in federated learning has the potential to revolutionize the development of IDS solutions, thus making them more secure, efficient, and effective.
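
The federated averaging with random client selection described above can be sketched generically as follows: in each round a random subset of clients is chosen, each returns locally trained parameters, and the server averages them weighted by local dataset size. This is a toy illustration, not the paper's IDS models or data.

```python
import numpy as np

def federated_averaging_round(client_params, client_sizes, num_selected=3, rng=None):
    """One FedAvg round: pick a random subset of clients and average their
    parameter vectors, weighted by the number of local training samples."""
    rng = rng or np.random.default_rng(7)
    chosen = rng.choice(len(client_params), size=num_selected, replace=False)
    weights = np.array([client_sizes[i] for i in chosen], dtype=float)
    weights /= weights.sum()
    return sum(w * client_params[i] for w, i in zip(weights, chosen))

# Five simulated clients with local parameter vectors and dataset sizes
params = [np.random.default_rng(i).normal(size=4) for i in range(5)]
sizes = [120, 300, 80, 450, 200]
print(federated_averaging_round(params, sizes))
```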
45

B. Ankayarkanni, Niroj Kumar Pani, M. Anand, V. Malathy, and Bhupati. "P2FLF: Privacy-Preserving Federated Learning Framework Based on Mobile Fog Computing." International Journal of Interactive Mobile Technologies (iJIM) 17, no. 17 (September 14, 2023): 72–81. http://dx.doi.org/10.3991/ijim.v17i17.42835.

Abstract:
Mobile IoT devices provide a lot of data every day, which provides a strong base for machine learning to succeed. However, the stringent privacy demands associated with mobile IoT data pose significant challenges for its implementation in machine learning tasks. In order to address this challenge, we propose privacy-preserving federated learning framework (P2FLF) in a mobile fog computing environment. By employing federated learning, it is possible to bring together numerous dispersed user sets and collectively train models without the need to upload datasets. Federated learning, an approach to distributed machine learning, has garnered significant attention for its ability to enable collaborative model training without the need to share sensitive data. By utilizing fog nodes deployed at the edge of the network, P2FLF ensures that sensitive mobile IoT data remains local and is not transmitted to the central server. The framework integrates privacy-preserving methods, such as differential privacy and encryption, to safeguard the data throughout the learning process. We evaluate the performance and efficacy of P2FLF through experimental simulations and compare it with existing approaches. The results demonstrate that P2FLF strikes a balance between model accuracy and privacy protection while enabling efficient federated learning in mobile IoT environments.
46

Chatel, Sylvain, Apostolos Pyrgelis, Juan Ramón Troncoso-Pastoriza, and Jean-Pierre Hubaux. "SoK: Privacy-Preserving Collaborative Tree-based Model Learning." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (April 27, 2021): 182–203. http://dx.doi.org/10.2478/popets-2021-0043.

Abstract:
Tree-based models are among the most efficient machine learning techniques for data mining nowadays due to their accuracy, interpretability, and simplicity. The recent orthogonal needs for more data and privacy protection call for collaborative privacy-preserving solutions. In this work, we survey the literature on distributed and privacy-preserving training of tree-based models and we systematize its knowledge based on four axes: the learning algorithm, the collaborative model, the protection mechanism, and the threat model. We use this to identify the strengths and limitations of these works and provide for the first time a framework analyzing the information leakage occurring in distributed tree-based model learning.
47

Rajput, Amit, and Suraksha Tiwari. "A Review on Privacy Preserving Using Machine learning and Deep Learning Techniques." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1785–90. http://dx.doi.org/10.22214/ijraset.2023.49781.

Abstract:
In order to entice data custodians to provide precise documentation so that data mining can continue with confidence, protecting the confidentiality of healthcare data is crucial. Association rule mining has been extensively used in the past to analyze healthcare data. The majority of applications ignore the drawbacks of specific diagnostic procedures in favor of positive association criteria. Negative association criteria may provide more useful information when bridging disparate diseases and medications than positive ones. In the case of doctors and social groups, this is particularly accurate. Data mining for medical purposes must be done with patient identities protected, especially when working with sensitive data. However, it might be attacked if this information becomes public. In order to perform data mining research, technology that modifies data (data sanitization) that reconstructs aggregate distributions has recently addressed the importance of healthcare data privacy. This study examines data sanitization in healthcare data mining using metaheuristics in order to safeguard patient privacy. Studies on SHM have looked at the uses of IoT &/or Machine Learning (ML) within the field, as well as the architecture, security, & privacy issues. However, no studies have looked into how AI and ubiquity computing technologies have affected SHM systems. The objective of this research is to identify and map the primary technical concepts within the SHM framework.
48

Zapechnikov, Sergey V., and Andrey Yu Shcherbakov. "Privacy-preserving machine learning based on secure two-party computations." Bezopasnost informacionnyh tehnology 28, no. 4 (December 2021): 39–51. http://dx.doi.org/10.26583/bit.2021.4.03.

49

Zapechnikov, Sergey V. "Privacy-preserving machine learning based on secure three-party computations." Bezopasnost informacionnyh tehnology 29, no. 1 (March 2022): 30–43. http://dx.doi.org/10.26583/bit.2022.1.04.

50

Byali, Megha, Harsh Chaudhari, Arpita Patra, and Ajith Suresh. "FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning." Proceedings on Privacy Enhancing Technologies 2020, no. 2 (April 1, 2020): 459–80. http://dx.doi.org/10.2478/popets-2020-0036.

Abstract:
Privacy-preserving machine learning (PPML) via Secure Multi-party Computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pair-wise private channels, we propose an efficient four-party PPML framework over rings ℤ_{2^ℓ}, FLASH, the first of its kind in the regime of PPML frameworks, that achieves the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). The state-of-the-art ML frameworks such as ABY3 by Mohassel et al. (ACM CCS'18) and SecureNN by Wagh et al. (PETS'19) operate in the setting of 3 parties with one malicious corruption but achieve the weaker security guarantee of abort. We demonstrate PPML with real-time efficiency, using the following custom-made tools that overcome the limitations of the aforementioned state of the art: (a) dot product, which is independent of the vector size, unlike the state-of-the-art ABY3, SecureNN and ASTRA by Chaudhari et al. (ACM CCSW'19), all of which have linear dependence on the vector size; (b) truncation and MSB extraction, which are constant round and free of circuits like the Parallel Prefix Adder (PPA) and Ripple Carry Adder (RCA), unlike ABY3, which uses these circuits and has round complexity of the order of the depth of these circuits. We then exhibit the application of our FLASH framework in the secure server-aided prediction of vital algorithms: Linear Regression, Logistic Regression, Deep Neural Networks, and Binarized Neural Networks. We substantiate our theoretical claims through improvement in benchmarks of the aforementioned algorithms when compared with the current best framework ABY3. All the protocols are implemented over a 64-bit ring in LAN and WAN. Our experiments demonstrate that, for the MNIST dataset, the improvement (in terms of throughput) ranges from 24× to 1390× over LAN and WAN together.
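
FLASH operates on values secret-shared over the ring ℤ_{2^ℓ}; the sketch below shows only the most basic building block of that style of MPC, additive secret sharing over ℤ_{2^64} and a dot product reconstructed from shares. It is a didactic toy, not the four-party protocol or its guaranteed-output-delivery machinery.

```python
import secrets

MOD = 2 ** 64   # the ring Z_{2^64}

def share(x, n_parties=4):
    """Split x into n additive shares that sum to x modulo 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Secret-share one vector among 4 parties; the other vector is public here for simplicity.
vec_a, vec_b = [3, 5, 7], [2, 4, 6]
shares_a = [share(x) for x in vec_a]

# Each party computes a local partial dot product on its shares; reconstruction
# yields the true dot product without revealing vec_a to any single party.
partials = [sum(b * sa[p] for b, sa in zip(vec_b, shares_a)) % MOD for p in range(4)]
print(reconstruct(partials), "==", sum(a * b for a, b in zip(vec_a, vec_b)))
```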