Academic literature on the topic 'Privacy preserving machine learning'
Below are lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Privacy preserving machine learning.'
Journal articles on the topic "Privacy preserving machine learning"
Liu, Zheyuan, and Rui Zhang. "Privacy Preserving Collaborative Machine Learning." ICST Transactions on Security and Safety 8, no. 28 (September 10, 2021): 170295. http://dx.doi.org/10.4108/eai.14-7-2021.170295.
Kerschbaum, Florian, and Nils Lukas. "Privacy-Preserving Machine Learning [Cryptography]." IEEE Security & Privacy 21, no. 6 (November 2023): 90–94. http://dx.doi.org/10.1109/msec.2023.3315944.
Pan, Ziqi. "Machine learning for privacy-preserving: Approaches, challenges and discussion." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 23–27. http://dx.doi.org/10.54254/2755-2721/18/20230957.
Rokade, Monika Dhananjay. "Advancements in Privacy-Preserving Techniques for Federated Learning: A Machine Learning Perspective." Journal of Electrical Systems 20, no. 2s (March 31, 2024): 1075–88. http://dx.doi.org/10.52783/jes.1754.
Zheng, Huadi, Haibo Hu, and Ziyang Han. "Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?" IEEE Intelligent Systems 35, no. 4 (July 1, 2020): 5–14. http://dx.doi.org/10.1109/mis.2020.3010335.
Chamikara, M. A. P., P. Bertok, I. Khalil, D. Liu, and S. Camtepe. "Privacy preserving distributed machine learning with federated learning." Computer Communications 171 (April 2021): 112–25. http://dx.doi.org/10.1016/j.comcom.2021.02.014.
Bonawitz, Kallista, Peter Kairouz, Brendan McMahan, and Daniel Ramage. "Federated learning and privacy." Communications of the ACM 65, no. 4 (April 2022): 90–97. http://dx.doi.org/10.1145/3500240.
Al-Rubaie, Mohammad, and J. Morris Chang. "Privacy-Preserving Machine Learning: Threats and Solutions." IEEE Security & Privacy 17, no. 2 (March 2019): 49–58. http://dx.doi.org/10.1109/msec.2018.2888775.
Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Rebecca N. Wright. "Privacy-preserving Machine Learning as a Service." Proceedings on Privacy Enhancing Technologies 2018, no. 3 (June 1, 2018): 123–42. http://dx.doi.org/10.1515/popets-2018-0024.
Chouhan, Jitendra Singh, Amit Kumar Bhatt, and Nitin Anand. "Federated Learning: Privacy Preserving Machine Learning for Decentralized Data." Tuijin Jishu/Journal of Propulsion Technology 44, no. 1 (November 24, 2023): 167–69. http://dx.doi.org/10.52783/tjjpt.v44.i1.2234.
Dissertations / Theses on the topic "Privacy preserving machine learning"
Bozdemir, Beyza. "Privacy-preserving machine learning techniques." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS323.
Machine Learning as a Service (MLaaS) refers to a service that enables companies to delegate their machine learning tasks to one or more untrusted but powerful third parties, namely cloud servers. Thanks to MLaaS, the computational resources and domain expertise required to execute machine learning techniques are significantly reduced. Nevertheless, companies face increasing challenges in ensuring data privacy guarantees and compliance with data protection regulations. Executing machine learning tasks over sensitive data requires the design of privacy-preserving protocols for machine learning techniques. In this thesis, we aim to design such protocols for MLaaS and study three machine learning techniques: neural network classification, trajectory clustering, and data aggregation under privacy protection. In our solutions, our goal is to guarantee data privacy while keeping an acceptable level of performance and accuracy/quality when executing the privacy-preserving variants of these machine learning techniques. In order to ensure data privacy, we employ several advanced cryptographic techniques: secure two-party computation, homomorphic encryption, homomorphic proxy re-encryption, multi-key homomorphic encryption, and threshold homomorphic encryption. We have implemented our privacy-preserving protocols and studied the trade-off between privacy, efficiency, and accuracy/quality for each of them.
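The secure two-party computation mentioned in the abstract above is typically built from additive secret sharing. The following is a minimal illustrative sketch, not code from the thesis; the field modulus and all function names are assumptions made for the example:

```python
import secrets

P = 2**61 - 1  # modulus of the toy field (a Mersenne prime)

def share(x, p=P):
    """Split x into two additive shares; each share alone is uniformly random."""
    s0 = secrets.randbelow(p)
    s1 = (x - s0) % p
    return s0, s1

def reconstruct(s0, s1, p=P):
    """Combine both shares to recover the secret."""
    return (s0 + s1) % p

# Addition is local: each party adds the shares it holds, and
# reconstructing the share-sums yields the sum of the secrets.
x0, x1 = share(20)
y0, y1 = share(22)
assert reconstruct(x0, x1) == 20
assert reconstruct((x0 + y0) % P, (x1 + y1) % P) == 42
```

Multiplication requires interaction between the parties (e.g. Beaver triples), which is where the protocol-design effort in such theses goes.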
Hesamifard, Ehsan. "Privacy Preserving Machine Learning as a Service." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703277/.
Grivet Sébert, Arnaud. "Combining differential privacy and homomorphic encryption for privacy-preserving collaborative machine learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG037.
The purpose of this PhD is to design protocols for collaboratively training machine learning models while keeping the training data private. To do so, we focus on two privacy tools: differential privacy and homomorphic encryption. While differential privacy makes it possible to deliver a functional model that is immune to attacks by end-users on the privacy of the training data, homomorphic encryption allows a server to act as a totally blind intermediary between the data owners, providing computational resources without any access to information in the clear. Yet these two techniques are of quite different natures, and each entails its own constraints, which may interfere: differential privacy generally requires continuous and unbounded noise, whereas homomorphic encryption can only deal with numbers encoded in a rather limited number of bits. The presented contributions make these two privacy tools work together by coping with their interferences and even leveraging them so that each technique benefits from the other. In our first work, SPEED, we build on the Private Aggregation of Teacher Ensembles (PATE) framework and extend the threat model to deal with an honest-but-curious server by covering the server computations with a homomorphic layer. We carefully define which operations are realised homomorphically, so as to perform as little computation as possible in the costly encrypted domain while revealing little enough information in the clear that it can easily be protected by differential privacy. This trade-off forced us to realise an argmax operation in the encrypted domain, which, even if reasonable, remained expensive. That is why, in another contribution, we propose SHIELD, an argmax operator made inaccurate on purpose, both to satisfy differential privacy and to lighten the homomorphic computation. The last presented contribution combines differential privacy and homomorphic encryption to secure a federated learning protocol. The main challenge of this combination comes from the necessary quantisation of the noise induced by encryption, which complicates the differential privacy analysis and justifies the design and use of a novel quantisation operator that commutes with the aggregation.
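To give a flavour of the quantisation issue described in this abstract: homomorphic schemes operate on integers, so both the model updates and the differential-privacy noise must be encoded in fixed point before encryption. The sketch below is illustrative only, not the thesis's operator; the fixed-point scale and noise parameters are assumptions:

```python
import random

SCALE = 1 << 16  # assumed fixed-point scale: 16 fractional bits

def encode(update, sigma):
    """Quantise a clipped update and add integer-rounded Gaussian noise,
    so the value handed to the encryptor is always an integer."""
    noise = round(random.gauss(0.0, sigma) * SCALE)
    return round(update * SCALE) + noise

def decode(total, n_clients):
    """Decode the (homomorphically) aggregated integer back to a float mean."""
    return total / (SCALE * n_clients)

# With sigma = 0 the pipeline is exact up to quantisation error; with
# sigma > 0 the rounding of the noise is what complicates the DP analysis.
total = sum(encode(0.5, sigma=0.0) for _ in range(10))
assert abs(decode(total, 10) - 0.5) < 1e-4
```

The subtlety the thesis addresses is exactly that `round()` applied to the noise breaks the standard Gaussian-mechanism analysis, motivating a quantisation operator that commutes with summation.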
Cyphers, Bennett James. "A system for privacy-preserving machine learning on personal data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119518.
This thesis describes the design and implementation of a system which allows users to generate machine learning models with their own data while preserving privacy. We approach the problem in two steps. First, we present a framework with which a user can collate personal data from a variety of sources in order to generate machine learning models for problems of the user's choosing. Second, we describe AnonML, a system which allows a group of users to share data privately in order to build models for classification. We analyze AnonML under differential privacy and test its performance on real-world datasets. In tandem, these two systems will help democratize machine learning, allowing people to make the most of their own data without relying on trusted third parties.
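The differential-privacy analysis mentioned in this abstract typically rests on standard mechanisms such as the Laplace mechanism. A minimal sketch of releasing a count under ε-differential privacy follows; it is illustrative only, not AnonML's actual code:

```python
import random

def laplace_count(true_count, sensitivity=1.0, epsilon=1.0):
    """Release a count under epsilon-DP by adding Laplace(0, sensitivity/epsilon)
    noise, sampled as the difference of two i.i.d. exponential draws."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Smaller epsilon means more noise and a stronger privacy guarantee.
noisy = laplace_count(100, epsilon=0.5)
```

For a counting query the sensitivity is 1, since adding or removing one user's record changes the count by at most one.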
Esperança, Pedro M. "Privacy-preserving statistical and machine learning methods under fully homomorphic encryption." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a081311c-b25c-462e-a66b-1e4ac4de5fc2.
Zhang, Kevin. "Tiresias: A peer-to-peer platform for privacy preserving machine learning." M.Eng. thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129840.
Big technology firms have a monopoly over user data. To remedy this, we propose a data science platform which allows users to collect their personal data and offer computations on it in a differentially private manner. This platform provides a mechanism for contributors to offer computations on their data in a privacy-preserving way, and for requesters (i.e., anyone who can benefit from applying machine learning to the users' data) to request computations on user data they would otherwise not be able to collect. Through carefully designed differential privacy mechanisms, we can create a platform which gives people control over their data and enables new types of applications.
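One of the simplest differentially private mechanisms such a platform could offer for boolean contributions is randomized response. This is a hedged sketch under that assumption, not the platform's actual mechanism:

```python
import random

def respond(truth, p=0.75):
    """Report the true bit with probability p; otherwise report a fair coin.
    No single response reveals its contributor's true bit."""
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_rate(responses, p=0.75):
    """Debias the observed 'yes' rate using E[observed] = p*q + (1-p)/2."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p) / 2) / p

random.seed(7)
truths = [i < 600 for i in range(1000)]  # true 'yes' rate: 0.6
est = estimate_rate([respond(t) for t in truths])
# est is close to 0.6, while each individual answer stays deniable
```

The estimator is unbiased, and its error shrinks as more contributors participate, which is why such platforms need many users to be useful.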
Langelaar, Johannes, and Mattsson Adam Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.
Dou, Yanzhi. "Toward Privacy-Preserving and Secure Dynamic Spectrum Access." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81882.
García Recuero, Álvaro. "Discouraging abusive behavior in privacy-preserving decentralized online social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S010/document.
The main goal of this thesis is to evaluate privacy-preserving protocols for detecting abuse in future decentralised online social platforms or microblogging services, where often only a limited amount of metadata is available for data analytics. Taking such data minimisation into account, we obtain acceptable results compared to machine learning techniques that use all available metadata. We draw a series of conclusions and recommendations that will aid in the design and development of a privacy-preserving decentralised social network that discourages abusive behavior.
Ligier, Damien. "Functional encryption applied to privacy-preserving classification: practical use, performances and security." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0040/document.
Machine Learning (ML) algorithms have proven themselves very powerful, especially classification algorithms, which can efficiently identify information in large datasets. However, this raises concerns about the privacy of the data and has brought to the forefront the challenge of designing machine learning algorithms able to preserve confidentiality. This thesis proposes a way to combine cryptographic systems with classification algorithms to obtain a privacy-preserving classifier. The cryptographic family in question is functional encryption, a generalization of traditional public-key encryption in which decryption keys are associated with a function. We experimented with this combination in a realistic scenario using the MNIST dataset of handwritten digit images; in this use case, our system is able to determine which digit is written in an encrypted digit image. We also study its security in this real-life scenario, which raises concerns about uses of functional encryption schemes in general, not just in our use case. We then introduce a way to balance, in our construction, the efficiency of the classification against these risks.
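The functional-encryption idea this abstract relies on can be illustrated by its interface alone: a decryption key is bound to a single function, and decryption reveals only that function of the plaintext. The following is a non-cryptographic mock of that interface (all class and function names are hypothetical; a real scheme, such as inner-product functional encryption, would hide the plaintext cryptographically):

```python
class Ciphertext:
    def __init__(self, x):
        self._x = x  # a real scheme would hide this cryptographically

class FunctionKey:
    """A decryption key bound to one authorised function f."""
    def __init__(self, f):
        self._f = f

    def decrypt(self, ct):
        return self._f(ct._x)  # reveals only f(x), never x itself

def encrypt(x):
    return Ciphertext(x)

def keygen(f):
    return FunctionKey(f)

# The authorised function is a linear classifier score: the key holder
# learns the score of the encrypted image, not its pixel values.
weights = [0.2, -0.5, 1.0]
key = keygen(lambda x: sum(w * xi for w, xi in zip(weights, x)))
ct = encrypt([1.0, 2.0, 3.0])
score = key.decrypt(ct)  # 0.2*1 - 0.5*2 + 1.0*3 = 2.2
```

The security question the thesis studies is precisely how much the revealed scores leak about the hidden input over many queries.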
Books on the topic "Privacy preserving machine learning"
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. Privacy-Preserving Machine Learning. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3.
Pathak, Manas A. Privacy-Preserving Machine Learning for Speech Processing. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-4639-2.
Oyarzun Laura, Cristina, M. Jorge Cardoso, Michal Rosen-Zvi, Georgios Kaissis, Marius George Linguraru, Raj Shekhar, Stefan Wesarg, et al., eds. Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90874-4.
Kim, Kwangjo, and Harry Chandra Tanuwidjaja. Privacy-Preserving Deep Learning. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3764-3.
Qu, Youyang, Longxiang Gao, Shui Yu, and Yong Xiang. Privacy Preservation in IoT: Machine Learning Approaches. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1797-4.
Zimmeck, Sebastian. Using Machine Learning to improve Internet Privacy. [New York, N.Y.?]: [publisher not identified], 2017.
Yu, Philip S. Machine Learning in Cyber Trust: Security, Privacy, and Reliability. Boston, MA: Springer-Verlag US, 2009.
Lecuyer, Mathias. Security, Privacy, and Transparency Guarantees for Machine Learning Systems. [New York, N.Y.?]: [publisher not identified], 2019.
Dimitrakakis, Christos, Aris Gkoulalas-Divanis, Aikaterini Mitrokotsa, Vassilios S. Verykios, and Yücel Saygin, eds. Privacy and Security Issues in Data Mining and Machine Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19896-0.
Book chapters on the topic "Privacy preserving machine learning"
Chow, Sherman S. M. "Privacy-Preserving Machine Learning." In Communications in Computer and Information Science, 3–6. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3095-7_1.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Distributed Learning." In Privacy-Preserving Machine Learning, 47–56. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_4.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Learning with Differential Privacy." In Privacy-Preserving Machine Learning, 57–64. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_5.
Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Privacy-Preserving Data Mining." In Encyclopedia of Machine Learning, 795. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_667.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Outsourced Computation for Learning." In Privacy-Preserving Machine Learning, 31–45. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_3.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Threats in Open Environment." In Privacy-Preserving Machine Learning, 75–86. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_7.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Applications—Privacy-Preserving Image Processing." In Privacy-Preserving Machine Learning, 65–74. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_6.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Conclusion." In Privacy-Preserving Machine Learning, 87–88. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_8.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Cooperative Learning in Early Years." In Privacy-Preserving Machine Learning, 15–30. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_2.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Introduction." In Privacy-Preserving Machine Learning, 1–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_1.
Full textConference papers on the topic "Privacy preserving machine learning"
El Mestari, Soumia Zohra. "Privacy Preserving Machine Learning Systems." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539530.
Carey, Alycia, and Nicholas Pattengale. "Privacy-Preserving AutoML." Proposed for presentation at the Sandia Machine Learning and Deep Learning Workshop, held July 19–22, 2021. US DOE, 2021. http://dx.doi.org/10.2172/1877808.
Senekane, Makhamisa, Mhlambululi Mafu, and Benedict Molibeli Taele. "Privacy-preserving quantum machine learning using differential privacy." In 2017 IEEE AFRICON. IEEE, 2017. http://dx.doi.org/10.1109/afrcon.2017.8095692.
"Session details: Privacy-preserving Machine Learning." In Proceedings of the 12th ACM Workshop, chaired by Sadia Afroz. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3338501.3371912.
Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Catherine Jones. "Privacy-preserving Machine Learning in Cloud." In CCS '17: 2017 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3140649.3140655.
Wang, Xin, Hideaki Ishii, Linkang Du, Peng Cheng, and Jiming Chen. "Differential Privacy-preserving Distributed Machine Learning." In 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019. http://dx.doi.org/10.1109/cdc40024.2019.9029938.
Schneider, Thomas. "Engineering Privacy-Preserving Machine Learning Protocols." In CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411501.3418607.
Miyaji, Atsuko, Tatsuhiro Yamatsuki, Bingchang He, Shintaro Yamashita, and Tomoaki Mimoto. "Re-visited Privacy-Preserving Machine Learning." In 2023 20th Annual International Conference on Privacy, Security and Trust (PST). IEEE, 2023. http://dx.doi.org/10.1109/pst58708.2023.10320156.
Afroz, Sadia. "Session details: Privacy-preserving Machine Learning." In CCS '19: 2019 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3371912.
Prabhu, Akshay, Niranjana Balasubramanian, Chinmay Tiwari, and Rugved Deolekar. "Privacy preserving and secure machine learning." In 2021 IEEE 18th India Council International Conference (INDICON). IEEE, 2021. http://dx.doi.org/10.1109/indicon52576.2021.9691706.
Full textReports on the topic "Privacy preserving machine learning"
Martindale, Nathan, Scott Stewart, Mark Adams, and Greg Westphal. Considerations for using Privacy Preserving Machine Learning Techniques for Safeguards. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1737477.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.