Academic literature on the topic 'Privacy preserving machine learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Privacy preserving machine learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Privacy preserving machine learning"

1

Liu, Zheyuan, and Rui Zhang. "Privacy Preserving Collaborative Machine Learning." ICST Transactions on Security and Safety 8, no. 28 (September 10, 2021): 170295. http://dx.doi.org/10.4108/eai.14-7-2021.170295.

2

Kerschbaum, Florian, and Nils Lukas. "Privacy-Preserving Machine Learning [Cryptography]." IEEE Security & Privacy 21, no. 6 (November 2023): 90–94. http://dx.doi.org/10.1109/msec.2023.3315944.

3

Pan, Ziqi. "Machine learning for privacy-preserving: Approaches, challenges and discussion." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 23–27. http://dx.doi.org/10.54254/2755-2721/18/20230957.

Abstract:
Currently, advanced technologies such as big data, artificial intelligence and machine learning are undergoing rapid development. However, the emergence of cybersecurity and privacy leakage problems has resulted in serious implications. This paper discusses the current state of privacy security issues in the field of machine learning in a comprehensive manner. During machine training, training models often unconsciously extract and record private information from raw data, and in addition, third-party attackers are interested in maliciously extracting private information from raw data. This paper first provides a quick introduction to the validation criterion in privacy-preserving strategies, based on which algorithms can account for and validate the privacy leakage problem during machine learning. The paper then describes different privacy-preserving strategies based mainly on federation learning that focus on Differentially Private Federated Averaging and Privacy-Preserving Asynchronous Federated Learning Mechanism and provides an analysis and discussion of their advantages and disadvantages. By improving the original machine learning methods, such as improving the parameter values and limiting the range of features, the possibility of privacy leakage during machine learning is successfully reduced. However, the different privacy-preserving strategies are mainly limited to changing the parameters of the original model training method, which leads to limitations in the training method, such as reduced efficiency or difficulty in training under certain conditions.
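Differentially Private Federated Averaging, one of the strategies this abstract surveys, is usually realized by clipping each client's update and adding Gaussian noise before the server averages them. The following is a minimal NumPy sketch of that recipe; the clipping norm, noise multiplier, and toy updates are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each client update to `clip_norm` in L2 norm, sum the clipped updates,
    add Gaussian noise calibrated to the clipping norm, and average.
    Hyperparameters here are illustrative, not taken from the paper."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only updates that are too large
        clipped.append(update * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Toy round with three clients, each holding a 4-dimensional model update.
updates = [np.array([0.5, -0.2, 0.1, 0.3]),
           np.array([2.0, 1.5, -0.7, 0.9]),   # exceeds the clipping norm and gets scaled down
           np.array([-0.1, 0.4, 0.2, -0.3])]
print(dp_federated_average(updates))
```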
4

Rokade, Monika Dhananjay. "Advancements in Privacy-Preserving Techniques for Federated Learning: A Machine Learning Perspective." Journal of Electrical Systems 20, no. 2s (March 31, 2024): 1075–88. http://dx.doi.org/10.52783/jes.1754.

Abstract:
Federated learning has emerged as a promising paradigm for collaborative machine learning while preserving data privacy. However, concerns about data privacy remain significant, particularly in scenarios where sensitive information is involved. This paper reviews recent advancements in privacy-preserving techniques for federated learning from a machine learning perspective. It categorizes and analyses state-of-the-art approaches within a unified framework, highlighting their strengths, limitations, and potential applications. By providing insights into the landscape of privacy-preserving federated learning, this review aims to guide researchers and practitioners in developing robust and privacy-conscious machine learning solutions for collaborative environments. The paper concludes with future research directions to address ongoing challenges and further enhance the effectiveness and scalability of privacy-preserving federated learning.
5

Zheng, Huadi, Haibo Hu, and Ziyang Han. "Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?" IEEE Intelligent Systems 35, no. 4 (July 1, 2020): 5–14. http://dx.doi.org/10.1109/mis.2020.3010335.

6

Chamikara, M. A. P., P. Bertok, I. Khalil, D. Liu, and S. Camtepe. "Privacy preserving distributed machine learning with federated learning." Computer Communications 171 (April 2021): 112–25. http://dx.doi.org/10.1016/j.comcom.2021.02.014.

7

Bonawitz, Kallista, Peter Kairouz, Brendan McMahan, and Daniel Ramage. "Federated learning and privacy." Communications of the ACM 65, no. 4 (April 2022): 90–97. http://dx.doi.org/10.1145/3500240.

8

Al-Rubaie, Mohammad, and J. Morris Chang. "Privacy-Preserving Machine Learning: Threats and Solutions." IEEE Security & Privacy 17, no. 2 (March 2019): 49–58. http://dx.doi.org/10.1109/msec.2018.2888775.

9

Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Rebecca N. Wright. "Privacy-preserving Machine Learning as a Service." Proceedings on Privacy Enhancing Technologies 2018, no. 3 (June 1, 2018): 123–42. http://dx.doi.org/10.1515/popets-2018-0024.

Abstract:
Machine learning algorithms based on deep Neural Networks (NN) have achieved remarkable results and are being extensively used in different domains. On the other hand, with increasing growth of cloud services, several Machine Learning as a Service (MLaaS) are offered where training and deploying machine learning models are performed on cloud providers’ infrastructure. However, machine learning algorithms require access to the raw data which is often privacy sensitive and can create potential security and privacy risks. To address this issue, we present CryptoDL, a framework that develops new techniques to provide solutions for applying deep neural network algorithms to encrypted data. In this paper, we provide the theoretical foundation for implementing deep neural network algorithms in encrypted domain and develop techniques to adopt neural networks within practical limitations of current homomorphic encryption schemes. We show that it is feasible and practical to train neural networks using encrypted data and to make encrypted predictions, and also return the predictions in an encrypted form. We demonstrate applicability of the proposed CryptoDL using a large number of datasets and evaluate its performance. The empirical results show that it provides accurate privacy-preserving training and classification.
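The core idea behind CryptoDL, as described above, is to replace non-polynomial activation functions with low-degree polynomials so that network evaluation reduces to the additions and multiplications supported by homomorphic encryption. The sketch below shows one simple way to build such stand-ins with a least-squares fit; the degree, interval, and use of np.polyfit are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def fit_activation_polynomial(activation, degree=3, interval=(-6.0, 6.0), samples=1000):
    """Least-squares fit of a low-degree polynomial to an activation function
    over a bounded interval, yielding a stand-in built only from additions
    and multiplications, the operations HE schemes handle efficiently."""
    x = np.linspace(interval[0], interval[1], samples)
    coefficients = np.polyfit(x, activation(x), degree)
    return np.poly1d(coefficients)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
relu = lambda x: np.maximum(0.0, x)

poly_sigmoid = fit_activation_polynomial(sigmoid, degree=3)
poly_relu = fit_activation_polynomial(relu, degree=2)

# Compare the polynomial stand-ins with the original activations at a few points.
for x in (-2.0, 0.5, 3.0):
    print(x, poly_sigmoid(x), sigmoid(x), poly_relu(x), relu(x))
```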
10

Chouhan, Jitendra Singh, Amit Kumar Bhatt, and Nitin Anand. "Federated Learning; Privacy Preserving Machine Learning for Decentralized Data." Tuijin Jishu/Journal of Propulsion Technology 44, no. 1 (November 24, 2023): 167–69. http://dx.doi.org/10.52783/tjjpt.v44.i1.2234.

Abstract:
Federated learning represents a compelling solution for tackling the privacy challenges inherent in decentralized and distributed environments when it comes to machine learning. This scholarly paper delves deep into the realm of federated learning, encompassing its applications and the latest privacy-preserving techniques used for training machine learning models in a decentralized manner. We explore the reasons behind the adoption of federated learning, highlight its advantages over conventional centralized approaches, and examine the diverse methods employed to safeguard privacy within this framework. Furthermore, we scrutinize the current obstacles, unresolved research queries, and the prospective directions within this rapidly developing field

Dissertations / Theses on the topic "Privacy preserving machine learning"

1

Bozdemir, Beyza. "Privacy-preserving machine learning techniques." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS323.

Abstract:
Machine Learning as a Service (MLaaS) refers to a service that enables companies to delegate their machine learning tasks to single or multiple untrusted but powerful third parties, namely cloud servers. Thanks to MLaaS, the need for computational resources and domain expertise required to execute machine learning techniques is significantly reduced. Nevertheless, companies face increasing challenges with ensuring data privacy guarantees and compliance with the data protection regulations. Executing machine learning tasks over sensitive data requires the design of privacy-preserving protocols for machine learning techniques.In this thesis, we aim to design such protocols for MLaaS and study three machine learning techniques: Neural network classification, trajectory clustering, and data aggregation under privacy protection. In our solutions, our goal is to guarantee data privacy while keeping an acceptable level of performance and accuracy/quality evaluation when executing the privacy-preserving variants of these machine learning techniques. In order to ensure data privacy, we employ several advanced cryptographic techniques: Secure two-party computation, homomorphic encryption, homomorphic proxy re-encryption, multi-key homomorphic encryption, and threshold homomorphic encryption. We have implemented our privacy-preserving protocols and studied the trade-off between privacy, efficiency, and accuracy/quality evaluation for each of them
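Secure two-party computation, one of the cryptographic techniques this thesis relies on, is frequently built on additive secret sharing, where each party holds a value that looks random on its own and only the sum of the shares reveals the secret. A toy sketch of that building block follows; the modulus and example values are illustrative assumptions.

```python
import secrets

MODULUS = 2**32  # working modulus for the shares; an illustrative choice

def share(value):
    """Split `value` into two additive shares modulo MODULUS; each share
    alone is uniformly random and reveals nothing about the value."""
    share0 = secrets.randbelow(MODULUS)
    share1 = (value - share0) % MODULUS
    return share0, share1

def reconstruct(share0, share1):
    return (share0 + share1) % MODULUS

# Each party adds its shares of two secrets locally; reconstructing the
# resulting shares yields the sum without either input being revealed.
a0, a1 = share(25)
b0, b1 = share(17)
print(reconstruct(a0 + b0, a1 + b1))  # prints 42
```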
2

Hesamifard, Ehsan. "Privacy Preserving Machine Learning as a Service." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703277/.

Abstract:
Machine learning algorithms based on neural networks have achieved remarkable results and are being extensively used in different domains. However, the machine learning algorithms requires access to raw data which is often privacy sensitive. To address this issue, we develop new techniques to provide solutions for running deep neural networks over encrypted data. In this paper, we develop new techniques to adopt deep neural networks within the practical limitation of current homomorphic encryption schemes. We focus on training and classification of the well-known neural networks and convolutional neural networks. First, we design methods for approximation of the activation functions commonly used in CNNs (i.e. ReLU, Sigmoid, and Tanh) with low degree polynomials which is essential for efficient homomorphic encryption schemes. Then, we train neural networks with the approximation polynomials instead of original activation functions and analyze the performance of the models. Finally, we implement neural networks and convolutional neural networks over encrypted data and measure performance of the models.
3

Grivet Sébert, Arnaud. "Combining differential privacy and homomorphic encryption for privacy-preserving collaborative machine learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG037.

Abstract:
The purpose of this PhD is to design protocols to collaboratively train machine learning models while keeping the training data private. To do so, we focused on two privacy tools, namely differential privacy and homomorphic encryption. While differential privacy enables to deliver a functional model immune to attacks on the training data privacy by end-users, homomorphic encryption allows to make use of a server as a totally blind intermediary between the data owners, that provides computational resource without any access to clear information. Yet, these two techniques are of totally different natures and both entail their own constraints that may interfere: differential privacy generally requires the use of continuous and unbounded noise whereas homomorphic encryption can only deal with numbers encoded with a quite limited number of bits. The presented contributions make these two privacy tools work together by coping with their interferences and even leveraging them so that the two techniques may benefit from each other.In our first work, SPEED, we built on Private Aggregation of Teacher Ensembles (PATE) framework and extend the threat model to deal with an honest but curious server by covering the server computations with a homomorphic layer. We carefully define which operations are realised homomorphically to make as less computation as possible in the costly encrypted domain while revealing little enough information in clear to be easily protected by differential privacy. This trade-off forced us to realise an argmax operation in the encrypted domain, which, even if reasonable, remained expensive. That is why we propose SHIELD in another contribution, an argmax operator made inaccurate on purpose, both to satisfy differential privacy and lighten the homomorphic computation. The last presented contribution combines differential privacy and homomorphic encryption to secure a federated learning protocol. The main challenge of this combination comes from the necessary quantisation of the noise induced by encryption, that complicates the differential privacy analysis and justifies the design and use of a novel quantisation operator that commutes with the aggregation
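The argmax the abstract refers to is the teacher-vote aggregation step of the PATE framework, which releases the class whose noise-perturbed vote count is largest. The plaintext sketch below illustrates that noisy argmax; the noise scale and vote counts are illustrative, and the homomorphic layer that SPEED and SHIELD wrap around this step is not shown.

```python
import numpy as np

def noisy_argmax(vote_counts, noise_scale=1.0, rng=None):
    """Return the class whose Laplace-noised vote count is largest,
    as in PATE's private aggregation of teacher votes."""
    rng = rng or np.random.default_rng(0)
    noisy = np.asarray(vote_counts, dtype=float) + rng.laplace(0.0, noise_scale, len(vote_counts))
    return int(np.argmax(noisy))

# Ten teachers vote over three classes; class 1 usually wins, and the
# Laplace noise is what provides the differential privacy guarantee.
votes = [2, 7, 1]
print(noisy_argmax(votes, noise_scale=1.0))
```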
4

Cyphers, Bennett James. "A system for privacy-preserving machine learning on personal data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119518.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-85).
This thesis describes the design and implementation of a system which allows users to generate machine learning models with their own data while preserving privacy. We approach the problem in two steps. First, we present a framework with which a user can collate personal data from a variety of sources in order to generate machine learning models for problems of the user's choosing. Second, we describe AnonML, a system which allows a group of users to share data privately in order to build models for classification. We analyze AnonML under differential privacy and test its performance on real-world datasets. In tandem, these two systems will help democratize machine learning, allowing people to make the most of their own data without relying on trusted third parties.
by Bennett James Cyphers.
M. Eng.
5

Esperança, Pedro M. "Privacy-preserving statistical and machine learning methods under fully homomorphic encryption." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a081311c-b25c-462e-a66b-1e4ac4de5fc2.

Abstract:
Advances in technology have now made it possible to monitor heart rate, body temperature and sleep patterns; continuously track movement; record brain activity; and sequence DNA in the jungle --- all using devices that fit in the palm of a hand. These and other recent developments have sparked interest in privacy-preserving methods: computational approaches which are able to utilise the data without leaking subjects' personal information. Classical encryption techniques have been used very successfully to protect data in transit and in storage. However, the process of encrypting data also renders it unusable in computation. Recently developed fully homomorphic encryption (FHE) techniques improve on this substantially. Unlike classical methods, which require the data to be decrypted prior to computation, homomorphic methods allow data to be simultaneously stored or transfered securely, and used in computation. However, FHE imposes serious constraints on computation, both arithmetic (e.g., no divisions can be performed) and computational (e.g., multiplications become much slower), rendering traditional statistical algorithms inadequate. In this thesis we develop statistical and machine learning methods for outsourced, privacy-preserving analysis of sensitive information under FHE. Specifically, we tackle two problems: (i) classification, using a semiparametric approach based on the naive Bayes assumption and modeling the class decision boundary directly using an approximation to univariate logistic regression; (ii) regression, using two approaches; an accelerated method for least squares estimation based on gradient descent, and a cooperative framework for Bayesian regression based on recursive Bayesian updating in a multi-party setting. Taking into account the constraints imposed by FHE, we analyse the potential of different algorithmic approaches to provide tractable solutions to these problems and give details on several computational costs and performance trade-offs.
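A concrete consequence of the FHE constraints highlighted above is that iterative estimators must avoid division. The sketch below shows least-squares estimation by plain gradient descent using only additions and multiplications; the step size, iteration count, and toy data are illustrative assumptions, and the thesis's accelerated method is more involved.

```python
import numpy as np

def least_squares_gd(X, y, step=0.005, iterations=500):
    """Gradient descent for least squares written with additions and
    multiplications only, the kind of update that stays expressible
    when divisions are unavailable."""
    beta = np.zeros(X.shape[1])
    for _ in range(iterations):
        residual = X @ beta - y
        beta = beta - step * (X.T @ residual)  # no division anywhere in the loop
    return beta

# Toy data generated from beta = [2, -1] plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=100)
print(least_squares_gd(X, y))  # close to [2, -1]
```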
6

Zhang, Kevin. "Tiresias: A peer-to-peer platform for privacy preserving machine learning." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129840.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 81-84).
Big technology firms have a monopoly over user data. To remediate this, we propose a data science platform which allows users to collect their personal data and offer computations on them in a differentially private manner. This platform provides a mechanism for contributors to offer computations on their data in a privacy-preserving way and for requesters -- i.e. anyone who can benefit from applying machine learning to the users' data -- to request computations on user data they would otherwise not be able to collect. Through carefully designed differential privacy mechanisms, we can create a platform which gives people control over their data and enables new types of applications.
by Kevin Zhang.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
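The differentially private computations such a platform offers contributors can be as simple as a counting query released with the Laplace mechanism. The sketch below illustrates that pattern; the epsilon value, predicate, and data are illustrative assumptions, not mechanisms prescribed by the thesis.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5, rng=None):
    """Release a count with Laplace noise of scale 1/epsilon.
    One record changes the count by at most 1, so the sensitivity is 1."""
    rng = rng or np.random.default_rng(0)
    true_count = sum(1 for record in records if predicate(record))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# Example query: how many contributors report more than 8000 daily steps.
daily_steps = [4200, 9100, 12050, 7600, 8800, 10100]
print(private_count(daily_steps, lambda s: s > 8000, epsilon=0.5))
```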
7

Langelaar, Johannes, and Mattsson Adam Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.

Abstract:
In this thesis a number of models for recommender systems are explored, all using collaborative filtering to produce their recommendations. Extra focus is put on two models: Matrix Factorization, which is a linear model and Multi-Layer Perceptron, which is a non-linear model. With an additional purpose of training the models without collecting any sensitive data from the users, both models were implemented with a learning technique that does not require the server's knowledge of the users' data, called federated learning. The federated version of Matrix Factorization is already well-researched, and has proven not to protect the users' data at all; the data is derivable from the information that the users communicate to the server that is necessary for the learning of the model. However, on the federated Multi-Layer Perceptron model, no research could be found. In this thesis, such a model is therefore designed and presented. Arguments are put forth in support of the privacy preservability of the model, along with a proof of the user data not being analytically derivable for the central server.    In addition, new ways to further put the protection of the users' data on the test are discussed. All models are evaluated on two different data sets. The first data set contains data on ratings of movies and is called MovieLens 1M. The second is a data set that consists of anonymized fund transactions, provided by the Swedish bank SEB for this thesis. Test results suggest that the federated versions of the models can achieve similar recommendation performance as their non-federated counterparts.
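In the federated Matrix Factorization setting discussed above, each user keeps a private user-factor vector on the device and sends the server only gradients for the item factors, and it is precisely these gradients that earlier work showed can reveal the ratings. A toy sketch of one such local update follows; the factor dimension, learning rate, and ratings are illustrative assumptions.

```python
import numpy as np

def local_mf_update(user_vector, item_factors, rated_items, ratings, lr=0.05):
    """One local round of federated matrix factorization: the user updates the
    private user vector on the device and computes item-factor gradients to send
    to the server (the gradients earlier work showed can reveal the ratings)."""
    item_gradients = np.zeros_like(item_factors)
    for item, rating in zip(rated_items, ratings):
        error = user_vector @ item_factors[item] - rating
        item_gradients[item] = error * user_vector                   # sent to the server
        user_vector = user_vector - lr * error * item_factors[item]  # stays on the device
    return user_vector, item_gradients

rng = np.random.default_rng(0)
user_vector = rng.normal(scale=0.1, size=8)
item_factors = rng.normal(scale=0.1, size=(50, 8))
updated_user, gradients = local_mf_update(user_vector, item_factors,
                                           rated_items=[3, 17], ratings=[4.0, 2.5])
print(gradients[3][:4])
```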
8

Dou, Yanzhi. "Toward Privacy-Preserving and Secure Dynamic Spectrum Access." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81882.

Abstract:
Dynamic spectrum access (DSA) technique has been widely accepted as a crucial solution to mitigate the potential spectrum scarcity problem. Spectrum sharing between the government incumbents and commercial wireless broadband operators/users is one of the key forms of DSA. Two categories of spectrum management methods for shared use between incumbent users (IUs) and secondary users (SUs) have been proposed, i.e., the server-driven method and the sensing-based method. The server-driven method employs a central server to allocate spectrum resources while considering incumbent protection. The central server has access to the detailed IU operating information, and based on some accurate radio propagation model, it is able to allocate spectrum following a particular access enforcement method. Two types of access enforcement methods -- exclusion zone and protection zone -- have been adopted for server-driven DSA systems in the current literature. The sensing-based method is based on recent advances in cognitive radio (CR) technology. A CR can dynamically identify white spaces through various incumbent detection techniques and reconfigure its radio parameters in response to changes of spectrum availability. The focus of this dissertation is to address critical privacy and security issues in the existing DSA systems that may severely hinder the progress of DSA's deployment in the real world. Firstly, we identify serious threats to users' privacy in existing server-driven DSA designs and propose a privacy-preserving design named P2-SAS to address the issue. P2-SAS realizes the complex spectrum allocation process of protection-zone-based DSA in a privacy-preserving way through Homomorphic Encryption (HE), so that none of the IU or SU operation data would be exposed to any snooping party, including the central server itself. Secondly, we develop a privacy-preserving design named IP-SAS for the exclusion-zone- based server-driven DSA system. We extend the basic design that only considers semi- honest adversaries to include malicious adversaries in order to defend the more practical and complex attack scenarios that can happen in the real world. Thirdly, we redesign our privacy-preserving SAS systems entirely to remove the somewhat- trusted third party (TTP) named Key Distributor, which in essence provides a weak proxy re-encryption online service in P2-SAS and IP-SAS. Instead, in this new system, RE-SAS, we leverage a new crypto system that supports both a strong proxy re-encryption notion and MPC to realize privacy-preserving spectrum allocation. The advantages of RE-SAS are that it can prevent single point of vulnerability due to TTP and also increase SAS's service performance dramatically. Finally, we identify the potentially crucial threat of compromised CR devices to the ambient wireless infrastructures and propose a scalable and accurate zero-day malware detection system called GuardCR to enhance CR network security at the device level. GuardCR leverages a host-based anomaly detection technique driven by machine learning, which makes it autonomous in malicious behavior recognition. We boost the performance of GuardCR in terms of accuracy and efficiency by integrating proper domain knowledge of CR software.
Ph. D.
9

García, Recuero Álvaro. "Discouraging abusive behavior in privacy-preserving decentralized online social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S010/document.

Abstract:
The main goal of this thesis is to evaluate privacy-preserving protocols to detect abuse in future decentralised online social platforms or microblogging services, where often limited amount of metadata is available to perform data analytics. Taking into account such data minimization, we obtain acceptable results compared to techniques of machine learning that use all metadata available. We draw a series of conclusion and recommendations that will aid in the design and development of a privacy-preserving decentralised social network that discourages abusive behavior
10

Ligier, Damien. "Functional encryption applied to privacy-preserving classification : practical use, performances and security." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0040/document.

Abstract:
Machine Learning (ML) algorithms have proven themselves very powerful. Especially classification, enabling to efficiently identify information in large datasets. However, it raises concerns about the privacy of this data. Therefore, it brought to the forefront the challenge of designing machine learning algorithms able to preserve confidentiality.This thesis proposes a way to combine some cryptographic systems with classification algorithms to achieve privacy preserving classifier. The cryptographic system family in question is the functional encryption one. It is a generalization of the traditional public key encryption in which decryption keys are associated with a function. We did some experimentations on that combination on realistic scenario using the MNIST dataset of handwritten digit images. Our system is able in this use case to know which digit is written in an encrypted digit image. We also study its security in this real life scenario. It raises concerns about uses of functional encryption schemes in general and not just in our use case. We then introduce a way to balance in our construction efficiency of the classification and the risks

Books on the topic "Privacy preserving machine learning"

1

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. Privacy-Preserving Machine Learning. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3.

2

Pathak, Manas A. Privacy-Preserving Machine Learning for Speech Processing. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-4639-2.

3

Pathak, Manas A. Privacy-Preserving Machine Learning for Speech Processing. New York, NY: Springer New York, 2013.

4

Oyarzun Laura, Cristina, M. Jorge Cardoso, Michal Rosen-Zvi, Georgios Kaissis, Marius George Linguraru, Raj Shekhar, Stefan Wesarg, et al., eds. Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90874-4.

5

Kim, Kwangjo, and Harry Chandra Tanuwidjaja. Privacy-Preserving Deep Learning. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3764-3.

6

Qu, Youyang, Longxiang Gao, Shui Yu, and Yong Xiang. Privacy Preservation in IoT: Machine Learning Approaches. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1797-4.

7

Zimmeck, Sebastian. Using Machine Learning to improve Internet Privacy. [New York, N.Y.?]: [publisher not identified], 2017.

8

Yu, Philip S. Machine Learning in Cyber Trust: Security, Privacy, and Reliability. Boston, MA: Springer-Verlag US, 2009.

9

Lecuyer, Mathias. Security, Privacy, and Transparency Guarantees for Machine Learning Systems. [New York, N.Y.?]: [publisher not identified], 2019.

10

Dimitrakakis, Christos, Aris Gkoulalas-Divanis, Aikaterini Mitrokotsa, Vassilios S. Verykios, and Yücel Saygin, eds. Privacy and Security Issues in Data Mining and Machine Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19896-0.


Book chapters on the topic "Privacy preserving machine learning"

1

Chow, Sherman S. M. "Privacy-Preserving Machine Learning." In Communications in Computer and Information Science, 3–6. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3095-7_1.

2

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Distributed Learning." In Privacy-Preserving Machine Learning, 47–56. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_4.

3

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Learning with Differential Privacy." In Privacy-Preserving Machine Learning, 57–64. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_5.

4

Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Privacy-Preserving Data Mining." In Encyclopedia of Machine Learning, 795. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_667.

5

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Outsourced Computation for Learning." In Privacy-Preserving Machine Learning, 31–45. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_3.

6

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Threats in Open Environment." In Privacy-Preserving Machine Learning, 75–86. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_7.

7

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Applications—Privacy-Preserving Image Processing." In Privacy-Preserving Machine Learning, 65–74. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_6.

8

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Conclusion." In Privacy-Preserving Machine Learning, 87–88. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_8.

9

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Cooperative Learning in Early Years." In Privacy-Preserving Machine Learning, 15–30. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_2.

10

Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Introduction." In Privacy-Preserving Machine Learning, 1–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_1.


Conference papers on the topic "Privacy preserving machine learning"

1

El Mestari, Soumia Zohra. "Privacy Preserving Machine Learning Systems." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539530.

2

Carey, Alycia, and Nicholas Pattengale. "Privacy-Preserving AutoML." Proposed for presentation at the Sandia Machine Learning and Deep Learning Workshop, July 19-22, 2021. US DOE, 2021. http://dx.doi.org/10.2172/1877808.

3

Senekane, Makhamisa, Mhlambululi Mafu, and Benedict Molibeli Taele. "Privacy-preserving quantum machine learning using differential privacy." In 2017 IEEE AFRICON. IEEE, 2017. http://dx.doi.org/10.1109/afrcon.2017.8095692.

4

"Session details: Privacy-preserving Machine Learning." In the 12th ACM Workshop, Chair Sadia Afroz. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3338501.3371912.

5

Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Catherine Jones. "Privacy-preserving Machine Learning in Cloud." In CCS '17: 2017 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3140649.3140655.

6

Wang, Xin, Hideaki Ishii, Linkang Du, Peng Cheng, and Jiming Chen. "Differential Privacy-preserving Distributed Machine Learning." In 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019. http://dx.doi.org/10.1109/cdc40024.2019.9029938.

7

Schneider, Thomas. "Engineering Privacy-Preserving Machine Learning Protocols." In CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411501.3418607.

8

Miyaji, Atsuko, Tatsuhiro Yamatsuki, Bingchang He, Shintaro Yamashita, and Tomoaki Mimoto. "Re-visited Privacy-Preserving Machine Learning." In 2023 20th Annual International Conference on Privacy, Security and Trust (PST). IEEE, 2023. http://dx.doi.org/10.1109/pst58708.2023.10320156.

9

Afroz, Sadia. "Session details: Privacy-preserving Machine Learning." In CCS '19: 2019 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3371912.

10

Prabhu, Akshay, Niranjana Balasubramanian, Chinmay Tiwari, and Rugved Deolekar. "Privacy preserving and secure machine learning." In 2021 IEEE 18th India Council International Conference (INDICON). IEEE, 2021. http://dx.doi.org/10.1109/indicon52576.2021.9691706.


Reports on the topic "Privacy preserving machine learning"

1

Martindale, Nathan, Scott Stewart, Mark Adams, and Greg Westphal. Considerations for using Privacy Preserving Machine Learning Techniques for Safeguards. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1737477.

2

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Abstract:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on spatial epidemiology of cholera, malaria and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps, i) observational, ii) index-based, iii) associative modelling and iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term for early warning systems) or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research: - Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting health and socioeconomic outcomes of infection. -Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths. -Develop and apply collaborative approaches with public health officials, development organizations and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner. 
-Enhance identification of vulnerable populations by further disaggregating risk estimates and accounting for demographic and behavioural variables and using novel data sources such as big data and citizen science. This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report’s main target audience is modellers, public health authorities and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).