Selected scientific literature on the topic "Apprentissage automatique préservant la confidentialité" (privacy-preserving machine learning)
Below is a list of current articles, theses, and other scientific sources relevant to the topic "Apprentissage automatique préservant la confidentialité".
Journal articles on the topic "Apprentissage automatique préservant la confidentialité"
Önen, Melek, Francesco Cremonesi, and Marco Lorenzi. "Apprentissage automatique fédéré pour l'IA collaborative dans le secteur de la santé". Revue internationale de droit économique XXXVI, no. 3 (April 21, 2023): 95–113. http://dx.doi.org/10.3917/ride.363.0095.
Theses on the topic "Apprentissage automatique préservant la confidentialité"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques: differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections. The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups. The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes. The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates. In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
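As a rough illustration of the DP-SGD mechanism discussed in this abstract (per-example gradient clipping followed by Gaussian noise), here is a minimal NumPy sketch for binary logistic regression. It is not the thesis's code; the clipping norm, noise multiplier, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD step for binary logistic regression.

    Each per-example gradient is clipped to `clip_norm`, the clipped gradients
    are summed, Gaussian noise scaled by `noise_multiplier * clip_norm` is added,
    and the noisy average drives the update.
    """
    rng = rng or np.random.default_rng()
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))                  # predicted probability
        g = (p - y) * x                                    # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)    # clip to clip_norm
        grads.append(g)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * noisy_mean

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```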
Taiello, Riccardo. "Apprentissage automatique sécurisé pour l'analyse collaborative des données de santé à grande échelle". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4031.
This PhD thesis explores the integration of privacy preservation, medical imaging, and Federated Learning (FL) using advanced cryptographic methods. Within the context of medical image analysis, we develop a privacy-preserving image registration (PPIR) framework. This framework addresses the challenge of registering images confidentially, without revealing their contents. By extending classical registration paradigms, we incorporate cryptographic tools like secure multi-party computation and homomorphic encryption to perform these operations securely. These tools are vital as they prevent data leakage during processing. Given the challenges associated with the performance and scalability of cryptographic methods on high-dimensional data, we optimize our image registration operations using gradient approximations. Our focus extends to increasingly complex registration methods, such as rigid, affine, and non-linear approaches using cubic splines or diffeomorphisms, parameterized by time-varying velocity fields. We demonstrate how these sophisticated registration methods can integrate privacy-preserving mechanisms effectively across various tasks. Concurrently, the thesis addresses the challenge of stragglers in FL, emphasizing the role of Secure Aggregation (SA) in collaborative model training. We introduce "Eagle", a synchronous SA scheme designed to optimize participation by late-arriving devices, significantly enhancing computational and communication efficiency. We also present "Owl", tailored for buffered asynchronous FL settings, which consistently outperforms earlier solutions. Furthermore, in the realm of buffered asynchronous SA, we propose two novel approaches: "Buffalo" and "Buffalo+". "Buffalo" advances SA techniques for buffered asynchronous FL, while "Buffalo+" counters sophisticated attacks that traditional methods fail to detect, such as model replacement. This solution leverages the properties of incremental hash functions and exploits the sparsity in the quantization of local gradients from client models. Both Buffalo and Buffalo+ are validated theoretically and experimentally, demonstrating their effectiveness in a new cross-device FL task for medical devices. Finally, this thesis devotes particular attention to translating privacy-preserving tools into real-world applications, notably through the open-source FL framework Fed-BioMed. The contributions include one of the first practical SA implementations specifically designed for cross-silo FL among hospitals, showcasing several practical use cases.
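The Secure Aggregation schemes named above (Eagle, Owl, Buffalo) are full protocols; the following is only a minimal sketch of the underlying pairwise-masking idea, in which masks cancel in the sum so a server learns the aggregate but no individual update. The field size, shared seeds, and three-client setup are illustrative assumptions, not the thesis's constructions.

```python
import numpy as np

FIELD = 2**31 - 1  # illustrative prime modulus for masking

def masked_update(client_id, update, peer_seeds):
    """Mask a client's integer-encoded update with pairwise masks.

    `peer_seeds[j]` is a seed shared between this client and peer j
    (e.g. derived from a key agreement); masks are added for peers with a
    larger id and subtracted for smaller ids, so they cancel in the sum.
    """
    masked = update.copy() % FIELD
    for peer_id, seed in peer_seeds.items():
        mask = np.random.default_rng(seed).integers(0, FIELD, size=update.shape)
        masked = (masked + mask) % FIELD if peer_id > client_id else (masked - mask) % FIELD
    return masked

# Three clients with pairwise shared seeds (normally derived via Diffie-Hellman).
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
updates = {c: np.array([c + 1, 10 * (c + 1)]) for c in range(3)}
masked = []
for c in range(3):
    peer_seeds = {p: seeds[tuple(sorted((c, p)))] for p in range(3) if p != c}
    masked.append(masked_update(c, updates[c], peer_seeds))

aggregate = sum(masked) % FIELD  # the server only ever sees masked vectors
assert np.array_equal(aggregate, sum(updates.values()) % FIELD)
```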
Maag, Maria Coralia Laura. "Apprentissage automatique de fonctions d'anonymisation pour les graphes et les graphes dynamiques". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066050/document.
Data privacy is a major problem that has to be considered before releasing datasets to the public, or even to a partner company that would compute statistics or perform a deep analysis of the data. Privacy is ensured by performing data anonymization, as required by legislation. In this context, many different anonymization techniques have been proposed in the literature. These techniques are difficult to use in a general setting where attacks can be of different types and where the measures of interest are not known to the anonymizer, so generic methods able to adapt to different situations are desirable. We address the privacy problem for graph data that needs, for various reasons, to be made publicly available; this corresponds to the anonymized graph data publishing problem. We take the perspective of an anonymizer who does not have access to the methods used to analyze the data. A generic methodology based on machine learning is proposed to obtain an anonymization function directly from a set of training data, so as to optimize a tradeoff between privacy risk and utility loss. The method thus makes it possible to obtain a good anonymization procedure for any kind of attack and any characteristic in a given set. The methodology is instantiated for simple graphs and for complex timestamped graphs. A tool implementing the method has been developed and successfully evaluated on real datasets from Twitter, Enron, and Amazon. Results are compared with a baseline, showing that the proposed method is generic and can automatically adapt to different anonymization contexts.
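To make the privacy-risk/utility-loss tradeoff concrete, here is a toy sketch in which a simple edge-flipping anonymizer is tuned by searching for the perturbation level that minimizes a weighted combination of a toy re-identification risk and a toy utility loss. The anonymizer, risk, and utility measures are hypothetical stand-ins, far simpler than the learned anonymization functions proposed in the thesis.

```python
import itertools
import random

def flip_edges(edges, nodes, p, rng):
    """Toy anonymizer: flip each possible undirected edge with probability p."""
    out = set(edges)
    for u, v in itertools.combinations(sorted(nodes), 2):
        if rng.random() < p:
            out.symmetric_difference_update({(u, v)})
    return out

def privacy_risk(original, anonymized):
    """Toy re-identification risk: fraction of original edges still present."""
    return len(original & anonymized) / max(1, len(original))

def utility_loss(original, anonymized):
    """Toy utility loss: relative change in edge count (degree statistics)."""
    return abs(len(anonymized) - len(original)) / max(1, len(original))

rng = random.Random(0)
nodes = range(8)
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 7), (1, 5)}

# Pick the perturbation level minimizing a weighted risk/utility objective.
best = min(
    (0.0, 0.05, 0.1, 0.2, 0.4),
    key=lambda p: 0.7 * privacy_risk(edges, flip_edges(edges, nodes, p, rng))
                + 0.3 * utility_loss(edges, flip_edges(edges, nodes, p, rng)),
)
print("chosen flip probability:", best)
```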
Ligier, Damien. "Functional encryption applied to privacy-preserving classification : practical use, performances and security". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0040/document.
Machine Learning (ML) algorithms have proven very powerful; classification in particular makes it possible to efficiently identify information in large datasets. However, this raises concerns about the privacy of the data and brings to the forefront the challenge of designing machine learning algorithms that preserve confidentiality. This thesis proposes a way to combine cryptographic systems with classification algorithms to obtain a privacy-preserving classifier. The cryptographic family in question is functional encryption, a generalization of traditional public key encryption in which decryption keys are associated with a function. We experimented with this combination in a realistic scenario using the MNIST dataset of handwritten digit images; in this use case, our system is able to determine which digit is written in an encrypted digit image. We also study its security in this realistic scenario, which raises concerns about the use of functional encryption schemes in general, not just in our use case. We then introduce a way to balance, within our construction, the efficiency of the classification against the risks.
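Inner-product functional encryption is the standard example of the kind of scheme combined with classifiers here: a decryption key tied to a weight vector reveals only the linear score of an encrypted input, not the input itself. The sketch below follows the textbook DDH-style construction (in the spirit of Abdalla et al.) over a deliberately small group so that it runs instantly; it is insecure at this size and is not the construction used in the thesis.

```python
import random

# Toy DDH-style inner-product functional encryption.
# WARNING: the group below is far too small to be secure; it only keeps the
# arithmetic of the scheme easy to follow.
P = 998_244_353   # small prime with primitive root 3 (illustrative only)
G = 3

def keygen(n):
    msk = [random.randrange(1, P - 1) for _ in range(n)]   # master secret key
    mpk = [pow(G, s, P) for s in msk]                       # master public key
    return msk, mpk

def encrypt(mpk, x):
    r = random.randrange(1, P - 1)
    ct0 = pow(G, r, P)
    cts = [(pow(h, r, P) * pow(G, xi, P)) % P for h, xi in zip(mpk, x)]
    return ct0, cts

def derive_key(msk, y):
    # Functional key for the weight vector y: the inner product <msk, y>.
    return sum(s * yi for s, yi in zip(msk, y))

def decrypt(ct, sk_y, y, max_result=10_000):
    ct0, cts = ct
    num = 1
    for c, yi in zip(cts, y):
        num = (num * pow(c, yi, P)) % P
    g_xy = (num * pow(ct0, -sk_y, P)) % P        # equals g^{<x, y>}
    for v in range(max_result + 1):              # brute-force small discrete log
        if pow(G, v, P) == g_xy:
            return v
    raise ValueError("inner product out of range")

# A "classifier" owner gets a key for its weight vector and learns only the score.
x = [3, 1, 4, 1, 5]           # private feature vector
y = [2, 0, 1, 3, 1]           # linear classifier weights
msk, mpk = keygen(len(x))
ct = encrypt(mpk, x)
score = decrypt(ct, derive_key(msk, y), y)
assert score == sum(a * b for a, b in zip(x, y))
```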
Béthune, Louis. "Apprentissage profond avec contraintes Lipschitz". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES014.
This thesis explores the characteristics and applications of Lipschitz networks in machine learning tasks. First, the framework of "optimization as a layer" is presented, showcasing various applications, including the parametrization of Lipschitz-constrained layers. Then, the expressiveness of these networks in classification tasks is investigated, revealing an accuracy/robustness tradeoff controlled by entropic regularization of the loss, accompanied by generalization guarantees. Subsequently, the research delves into the use of signed distance functions as the solution to a regularized optimal transport problem, showcasing their efficacy in robust one-class learning and in the construction of neural implicit surfaces. The thesis then demonstrates the adaptability of the back-propagation algorithm to propagate bounds instead of vectors, enabling differentially private training of Lipschitz networks without incurring runtime or memory overhead. Finally, it goes beyond Lipschitz constraints and explores the use of convexity constraints for multivariate quantiles.
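A common way to obtain the Lipschitz-constrained layers mentioned above is to bound the spectral norm of each weight matrix; the sketch below normalizes a dense layer's weights with power iteration so the layer is (approximately) 1-Lipschitz. This is a generic illustration under assumed shapes, not the specific parametrizations studied in the thesis.

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Rescale W so its largest singular value is approximately 1.

    A linear layer x -> W @ x with spectral norm <= 1 is 1-Lipschitz
    with respect to the Euclidean norm.
    """
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):                       # power iteration on W W^T
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                             # estimate of the top singular value
    return W / sigma

rng = np.random.default_rng(1)
W = spectral_normalize(rng.normal(size=(64, 32)))
x1, x2 = rng.normal(size=32), rng.normal(size=32)
# The normalized layer does not expand distances: ||W(x1 - x2)|| <= ||x1 - x2||.
assert np.linalg.norm(W @ (x1 - x2)) <= np.linalg.norm(x1 - x2) + 1e-6
```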
Grivet Sébert, Arnaud. "Combining differential privacy and homomorphic encryption for privacy-preserving collaborative machine learning". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG037.
The purpose of this PhD is to design protocols for collaboratively training machine learning models while keeping the training data private. To do so, we focus on two privacy tools, namely differential privacy and homomorphic encryption. Differential privacy makes it possible to deliver a functional model that is immune to attacks on training data privacy by end-users, while homomorphic encryption allows a server to act as a totally blind intermediary between the data owners, providing computational resources without any access to information in the clear. Yet these two techniques are of very different natures, and each entails its own constraints, which may interfere: differential privacy generally requires continuous and unbounded noise, whereas homomorphic encryption can only handle numbers encoded with a rather limited number of bits. The presented contributions make these two privacy tools work together by coping with their interferences and even leveraging them so that the two techniques can benefit from each other. In our first work, SPEED, we build on the Private Aggregation of Teacher Ensembles (PATE) framework and extend the threat model to deal with an honest-but-curious server by covering the server computations with a homomorphic layer. We carefully define which operations are performed homomorphically so as to do as little computation as possible in the costly encrypted domain, while revealing little enough information in the clear for it to be easily protected by differential privacy. This trade-off forced us to realise an argmax operation in the encrypted domain, which, although reasonable, remained expensive. This is why, in another contribution, we propose SHIELD, an argmax operator made inaccurate on purpose, both to satisfy differential privacy and to lighten the homomorphic computation. The last presented contribution combines differential privacy and homomorphic encryption to secure a federated learning protocol. The main challenge of this combination comes from the necessary quantisation of the noise induced by encryption, which complicates the differential privacy analysis and justifies the design and use of a novel quantisation operator that commutes with the aggregation.
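For reference, the argmax that SPEED performs homomorphically (and that SHIELD deliberately degrades) corresponds, in the clear, to the PATE-style noisy argmax over teacher votes. The sketch below shows that plaintext step with Laplace noise; the homomorphic layer is not reproduced, and the vote counts, noise scale, and privacy parameter are illustrative assumptions.

```python
import numpy as np

def noisy_argmax(votes, epsilon, rng):
    """PATE-style label release: add Laplace noise to each teacher-vote count
    and return the argmax, so a single teacher's vote barely affects the output."""
    noise = rng.laplace(scale=2.0 / epsilon, size=len(votes))
    return int(np.argmax(votes + noise))

rng = np.random.default_rng(0)
teacher_votes = np.array([3, 1, 41, 2, 3, 0, 0, 0, 0, 0])  # 50 teachers, 10 classes
label = noisy_argmax(teacher_votes, epsilon=1.0, rng=rng)
print("released label:", label)
```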
Chatalic, Antoine. "Efficient and privacy-preserving compressive learning". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S030.
The topic of this Ph.D. thesis lies on the borderline between signal processing, statistics, and computer science. It mainly focuses on compressive learning, a paradigm for large-scale machine learning in which the whole dataset is compressed down to a single vector of randomized generalized moments, called the sketch. An approximate solution of the learning task at hand is then estimated from this sketch, without using the initial data. This framework is by nature suited for learning from distributed collections or data streams, and has already been successfully instantiated on several unsupervised learning tasks such as k-means clustering, density fitting using Gaussian mixture models, and principal component analysis. We improve this framework in multiple directions. First, it is shown that perturbing the sketch with additive noise is sufficient to derive (differential) privacy guarantees. Sharp bounds on the noise level required to obtain a given privacy level are provided, and the proposed method is shown empirically to compare favourably with state-of-the-art techniques. Then, the compression scheme is modified to leverage structured random matrices, which reduce the computational cost of the framework and make it possible to learn on high-dimensional data. Lastly, we introduce a new algorithm based on message passing techniques to learn from the sketch for the k-means clustering problem. These contributions open the way for a broader application of the framework.
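A minimal sketch of the compressive-learning pipeline described above: the dataset is collapsed into an average of random Fourier features (randomized generalized moments), and additive Gaussian noise on that single vector is the route to differential privacy mentioned in the abstract. Dimensions and the noise level are assumptions for illustration, not the calibrated bounds derived in the thesis.

```python
import numpy as np

def compute_sketch(X, Omega):
    """Compressive-learning sketch: average of random Fourier features of the data."""
    Z = X @ Omega                                   # (n, m) random projections
    feats = np.concatenate([np.cos(Z), np.sin(Z)], axis=1)
    return feats.mean(axis=0)                       # single vector of size 2m

def privatize_sketch(sketch, n, noise_std, rng):
    """Add Gaussian noise to the sketch; each individual moves the mean by
    O(1/n), so the noise level can be calibrated for differential privacy."""
    return sketch + rng.normal(0.0, noise_std / n, size=sketch.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))                    # dataset (n examples, d features)
Omega = rng.normal(size=(5, 64))                    # random frequency matrix (d, m)
sketch = compute_sketch(X, Omega)
private_sketch = privatize_sketch(sketch, len(X), noise_std=5.0, rng=rng)
# Downstream learning (e.g. k-means) would use only `private_sketch`, never X.
```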
Saadeh, Angelo. "Applications of secure multi-party computation in Machine Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT022.
Privacy preservation in machine learning and data analysis is becoming increasingly important as the amount of sensitive personal information collected and used by organizations continues to grow. This growth poses the risk of exposing sensitive personal information to malicious third parties, which can lead to identity theft, financial fraud, or other types of cybercrime. Laws restricting the use of private data are important to protect individuals from having their information used and shared; however, in doing so, data protection laws limit the applications of machine learning models, and some of these applications could be life-saving, as in the medical field. Secure multi-party computation (MPC) allows multiple parties to jointly compute a function over their inputs without having to reveal or exchange the data itself. This tool can be used for training collaborative machine learning models when there are privacy concerns about exchanging sensitive datasets between different entities. In this thesis, we (I) use existing and develop new secure multi-party computation algorithms, (II) introduce cryptography-friendly approximations of common machine learning functions, and (III) complement secure multi-party computation with other privacy tools, with the goal of implementing privacy-preserving machine learning and data analysis algorithms. Our experimental results show that executing the algorithms with secure multi-party computation satisfies both security and correctness: no party has access to another's information, while the parties are still able to collaboratively train machine learning models with high accuracy and to collaboratively evaluate data analysis algorithms with results comparable to those obtained on non-encrypted datasets. Overall, this thesis provides a comprehensive view of secure multi-party computation for machine learning, demonstrating its potential to revolutionize the field and contributing to the deployment and acceptability of secure multi-party computation in machine learning and data analysis.
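To illustrate the basic MPC machinery such work builds on, here is a minimal sketch of additive secret sharing with a Beaver-triple multiplication, in which two parties compute a product without either seeing the other's input. The field size and the trusted dealer generating the triple are illustrative assumptions; the thesis's protocols are considerably more involved.

```python
import random

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative size)

def share(x, n=2):
    """Split x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reveal(shares):
    return sum(shares) % P

def beaver_multiply(x_shares, y_shares, triple):
    """Multiply two shared values using a Beaver triple (a, b, c = a*b)
    pre-shared by a dealer; only the masked values x-a and y-b are opened."""
    a_sh, b_sh, c_sh = triple
    d = reveal([(x - a) % P for x, a in zip(x_shares, a_sh)])   # open x - a
    e = reveal([(y - b) % P for y, b in zip(y_shares, b_sh)])   # open y - b
    z_shares = []
    for i, (a, b, c) in enumerate(zip(a_sh, b_sh, c_sh)):
        z = (c + d * b + e * a) % P
        if i == 0:                                               # add d*e once
            z = (z + d * e) % P
        z_shares.append(z)
    return z_shares

# Dealer prepares one Beaver triple.
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))

x_shares, y_shares = share(6), share(7)
z_shares = beaver_multiply(x_shares, y_shares, triple)
assert reveal(z_shares) == 42    # neither party ever saw 6 or 7 in the clear
```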
Ameur, Yulliwas. "Exploring the Scope of Machine Learning using Homomorphic Encryption in IoT/Cloud". Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAC036.
Machine Learning as a Service (MLaaS) has accelerated the adoption of machine learning techniques in various domains. However, this trend has also raised serious concerns over the security and privacy of the sensitive data used in machine learning models. To address this challenge, our approach is to use homomorphic encryption. The aim of this thesis is to examine the implementation of homomorphic encryption in different machine learning applications. The first part of the work focuses on the use of homomorphic encryption in a multi-cloud environment, where encryption is applied to simple operations such as addition and multiplication. The second part explores the application of homomorphic encryption to the k-nearest neighbors (k-NN) algorithm: it presents a practical implementation of k-NN under homomorphic encryption, demonstrates the feasibility of this approach on a variety of datasets, and shows that the performance of the encrypted k-NN algorithm is comparable to that of the unencrypted one. Third, the work investigates the application of homomorphic encryption to the k-means clustering algorithm; similar to the k-NN study, the thesis presents a practical implementation of k-means under homomorphic encryption and evaluates its performance on various datasets. Finally, the thesis explores the combination of homomorphic encryption with differential privacy (DP) techniques to further enhance the privacy of machine learning models, proposing a novel approach that combines the two to achieve better privacy guarantees. The research presented in this thesis contributes to the growing body of work on the intersection of homomorphic encryption and machine learning, providing practical implementations and evaluations of homomorphic encryption in various machine learning contexts.
According to Gartner, 5.8 billion enterprise and automotive IoT endpoints will be in use at the end of 2020, while Statista projects that IoT enabler solutions (such as cloud, analytics, and security) will reach 15 billion euros in the European Union market by 2025. However, these IoT devices do not have enough resources to process the data collected by their sensors, which makes them vulnerable and prone to attack. To avoid processing data within the IoT devices, the trend is to outsource the sensed data to the Cloud, which offers both ample data storage and data processing. Nevertheless, the externalized data may be sensitive, and users may lose privacy over the data content while allowing the cloud providers to access, and possibly use, these data for their own business. To avoid this situation and preserve data privacy in the Cloud datacenter, one possible solution is to use fully homomorphic encryption (FHE), which ensures both confidentiality and efficient processing. In many smart environments, such as smart cities, smart health, smart farming, or Industry 4.0, where massive data are generated, there is a need to apply machine learning (ML) techniques to support decision making in the smart environment. The challenging issue in this context is to adapt ML approaches so that they can be applied to encrypted data and the decisions taken on encrypted data can be carried over to the cleartext data. This PhD thesis is a cooperative research work between the ROC and MSDMA teams of the CEDRIC Lab. It aims at exploring the use of ML and FHE in smart applications where IoT devices collect sensitive data and outsource them to untrusted Cloud datacenters for computation with ML models.
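As a simplified illustration of encrypted k-NN, an additively homomorphic scheme such as Paillier already lets a server compute squared distances between its plaintext records and an encrypted query. The sketch below uses the Python `phe` library (`pip install phe`); it is not the thesis's construction (which targets fully homomorphic encryption), and in a complete protocol the distances would be compared privately rather than decrypted by the client.

```python
import numpy as np
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Client side: encrypt the query vector and its squared norm.
query = np.array([2.0, 3.0])
enc_query = [public_key.encrypt(float(v)) for v in query]
enc_query_sq = public_key.encrypt(float(query @ query))

# Server side: plaintext database; compute Enc(||x - q||^2) for each record
# using only ciphertext additions and plaintext multiplications:
#   ||x - q||^2 = ||q||^2 - 2 <q, x> + ||x||^2
database = np.array([[1.0, 1.0], [2.0, 4.0], [10.0, 0.0]])
enc_distances = []
for x in database:
    enc_dist = enc_query_sq + float(x @ x)
    for qj, xj in zip(enc_query, x):
        enc_dist = enc_dist + qj * float(-2.0 * xj)
    enc_distances.append(enc_dist)

# Client side: decrypt distances and take the k nearest (k = 1 here).
distances = [private_key.decrypt(d) for d in enc_distances]
nearest = int(np.argmin(distances))
assert np.isclose(distances[nearest], np.sum((database[nearest] - query) ** 2))
print("nearest record index:", nearest)
```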