A selection of scholarly literature on the topic "Privacy preserving machine learning"
Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Privacy preserving machine learning". Sources can be cited in APA, MLA, Chicago, Harvard, and other styles.
Journal articles on the topic "Privacy preserving machine learning"
Liu, Zheyuan, and Rui Zhang. "Privacy Preserving Collaborative Machine Learning." ICST Transactions on Security and Safety 8, no. 28 (September 10, 2021): 170295. http://dx.doi.org/10.4108/eai.14-7-2021.170295.
Kerschbaum, Florian, and Nils Lukas. "Privacy-Preserving Machine Learning [Cryptography]." IEEE Security & Privacy 21, no. 6 (November 2023): 90–94. http://dx.doi.org/10.1109/msec.2023.3315944.
Pan, Ziqi. "Machine learning for privacy-preserving: Approaches, challenges and discussion." Applied and Computational Engineering 18, no. 1 (October 23, 2023): 23–27. http://dx.doi.org/10.54254/2755-2721/18/20230957.
Rokade, Monika Dhananjay. "Advancements in Privacy-Preserving Techniques for Federated Learning: A Machine Learning Perspective." Journal of Electrical Systems 20, no. 2s (March 31, 2024): 1075–88. http://dx.doi.org/10.52783/jes.1754.
Zheng, Huadi, Haibo Hu, and Ziyang Han. "Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?" IEEE Intelligent Systems 35, no. 4 (July 1, 2020): 5–14. http://dx.doi.org/10.1109/mis.2020.3010335.
Chamikara, M. A. P., P. Bertok, I. Khalil, D. Liu, and S. Camtepe. "Privacy preserving distributed machine learning with federated learning." Computer Communications 171 (April 2021): 112–25. http://dx.doi.org/10.1016/j.comcom.2021.02.014.
Bonawitz, Kallista, Peter Kairouz, Brendan McMahan, and Daniel Ramage. "Federated learning and privacy." Communications of the ACM 65, no. 4 (April 2022): 90–97. http://dx.doi.org/10.1145/3500240.
Al-Rubaie, Mohammad, and J. Morris Chang. "Privacy-Preserving Machine Learning: Threats and Solutions." IEEE Security & Privacy 17, no. 2 (March 2019): 49–58. http://dx.doi.org/10.1109/msec.2018.2888775.
Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Rebecca N. Wright. "Privacy-preserving Machine Learning as a Service." Proceedings on Privacy Enhancing Technologies 2018, no. 3 (June 1, 2018): 123–42. http://dx.doi.org/10.1515/popets-2018-0024.
Chouhan, Jitendra Singh, Amit Kumar Bhatt, and Nitin Anand. "Federated Learning: Privacy Preserving Machine Learning for Decentralized Data." Tuijin Jishu/Journal of Propulsion Technology 44, no. 1 (November 24, 2023): 167–69. http://dx.doi.org/10.52783/tjjpt.v44.i1.2234.
Dissertations on the topic "Privacy preserving machine learning"
Bozdemir, Beyza. "Privacy-preserving machine learning techniques." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS323.
Machine Learning as a Service (MLaaS) refers to a service that enables companies to delegate their machine learning tasks to one or more untrusted but powerful third parties, namely cloud servers. Thanks to MLaaS, the computational resources and domain expertise required to execute machine learning techniques are significantly reduced. Nevertheless, companies face growing challenges in ensuring data privacy guarantees and compliance with data protection regulations. Executing machine learning tasks over sensitive data requires the design of privacy-preserving protocols for machine learning techniques. In this thesis, we aim to design such protocols for MLaaS and study three machine learning techniques under privacy protection: neural network classification, trajectory clustering, and data aggregation. Our goal is to guarantee data privacy while keeping an acceptable level of performance and accuracy/quality when executing the privacy-preserving variants of these techniques. To ensure data privacy, we employ several advanced cryptographic techniques: secure two-party computation, homomorphic encryption, homomorphic proxy re-encryption, multi-key homomorphic encryption, and threshold homomorphic encryption. We have implemented our privacy-preserving protocols and studied the trade-off between privacy, efficiency, and accuracy/quality for each of them.
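As a side note on the first cryptographic tool this abstract names, secure two-party computation can be illustrated with a toy additive-secret-sharing sketch. Everything below (the modulus, the vectors, the two-server setting) is illustrative and far simpler than the protocols studied in the thesis:

```python
import random

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(x):
    """Split integer x into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

# A client secret-shares its feature vector between two servers;
# neither server sees the plaintext, yet together they evaluate a
# public linear model on it.
features = [3, 1, 4]
weights = [2, 5, 7]  # public model weights

shares = [share(x) for x in features]
partial0 = sum(w * s0 for w, (s0, _) in zip(weights, shares)) % P  # server 0
partial1 = sum(w * s1 for w, (_, s1) in zip(weights, shares)) % P  # server 1

score = reconstruct(partial0, partial1)
assert score == 2 * 3 + 5 * 1 + 7 * 4  # 39, the correct linear score
```

Each server's individual view (uniformly random shares and one partial sum) is statistically independent of the client's features, which is the core of the privacy argument for this family of protocols.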
Hesamifard, Ehsan. "Privacy Preserving Machine Learning as a Service." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703277/.
Grivet Sébert, Arnaud. "Combining differential privacy and homomorphic encryption for privacy-preserving collaborative machine learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG037.
The purpose of this PhD is to design protocols for collaboratively training machine learning models while keeping the training data private. To do so, we focus on two privacy tools, namely differential privacy and homomorphic encryption. While differential privacy makes it possible to deliver a functional model that is immune to attacks by end users on training-data privacy, homomorphic encryption allows a server to be used as a totally blind intermediary between the data owners, providing computational resources without any access to cleartext information. Yet these two techniques are of very different natures, and each entails its own constraints, which may interfere: differential privacy generally requires continuous, unbounded noise, whereas homomorphic encryption can only handle numbers encoded with a rather limited number of bits. The presented contributions make these two privacy tools work together by coping with their interference and even leveraging it, so that the two techniques benefit from each other. In our first work, SPEED, we build on the Private Aggregation of Teacher Ensembles (PATE) framework and extend its threat model to an honest-but-curious server by covering the server's computations with a homomorphic layer. We carefully define which operations are realised homomorphically, performing as little computation as possible in the costly encrypted domain while revealing in the clear only information that can easily be protected by differential privacy. This trade-off forced us to realise an argmax operation in the encrypted domain, which, even if reasonable, remained expensive. That is why, in another contribution, we propose SHIELD, an argmax operator made inaccurate on purpose, both to satisfy differential privacy and to lighten the homomorphic computation. The last contribution combines differential privacy and homomorphic encryption to secure a federated learning protocol. The main challenge of this combination comes from the necessary quantisation of the noise induced by encryption, which complicates the differential privacy analysis and justifies the design and use of a novel quantisation operator that commutes with the aggregation.
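The quantisation issue described in this abstract can be sketched numerically: homomorphic plaintexts are integers modulo some modulus, so the differential-privacy noise must itself be discrete, and adding it client-side before encryption must give the same result as summing inside the encrypted aggregation. A toy sketch follows; the modulus, fixed-point scale, and noise distribution are illustrative choices, not the thesis's actual operator:

```python
import math
import random

SCALE = 1 << 16   # fixed-point scale for encoding real-valued updates
MOD = 1 << 32     # plaintext modulus of the (simulated) HE scheme

def quantize(x):
    """Encode a real number as an integer plaintext."""
    return round(x * SCALE) % MOD

def discrete_laplace(b=50.0):
    """Integer-valued noise (difference of two geometric draws):
    continuous Laplace noise cannot live in an integer plaintext space."""
    p = 1.0 - math.exp(-1.0 / b)
    geo = lambda: int(math.log(max(random.random(), 1e-12)) / math.log(1.0 - p))
    return geo() - geo()

updates = [0.25, -0.5, 0.125]                  # one coordinate per client
noises = [discrete_laplace() for _ in updates]
noisy = [(quantize(u) + n) % MOD for u, n in zip(updates, noises)]

# The server only ever adds ciphertexts; because quantisation and noise
# are both integers, modular addition commutes with the aggregation.
agg = sum(noisy) % MOD
assert agg == (sum(quantize(u) for u in updates) + sum(noises)) % MOD
```

The final assertion is exactly the commutation property: noising-then-aggregating and aggregating-the-noised-values agree modulo the plaintext space, which is what a quantisation operator compatible with encrypted aggregation has to guarantee.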
Cyphers, Bennett James. "A system for privacy-preserving machine learning on personal data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119518.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-85).
This thesis describes the design and implementation of a system which allows users to generate machine learning models with their own data while preserving privacy. We approach the problem in two steps. First, we present a framework with which a user can collate personal data from a variety of sources in order to generate machine learning models for problems of the user's choosing. Second, we describe AnonML, a system which allows a group of users to share data privately in order to build models for classification. We analyze AnonML under differential privacy and test its performance on real-world datasets. In tandem, these two systems will help democratize machine learning, allowing people to make the most of their own data without relying on trusted third parties.
by Bennett James Cyphers.
M. Eng.
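AnonML's actual mechanisms are in the thesis itself; as a hedged illustration of the kind of local differential privacy primitive that group data-sharing systems like it build on, randomized response lets each user flip their bit before sharing while the group-level statistic stays recoverable. The privacy budget and counts below are made up for the example:

```python
import math
import random

random.seed(42)
eps = math.log(3.0)                        # privacy budget epsilon (illustrative)
p = math.exp(eps) / (1.0 + math.exp(eps))  # report the true bit w.p. 3/4

def perturb(bit):
    """eps-locally-differentially-private randomized response."""
    return bit if random.random() < p else 1 - bit

true_bits = [1] * 1400 + [0] * 600         # 70% positives among 2000 users
reports = [perturb(b) for b in true_bits]

# Debias: E[report] = p*f + (1-p)*(1-f), so solve for the true rate f.
mean = sum(reports) / len(reports)
estimate = (mean - (1.0 - p)) / (2.0 * p - 1.0)
assert abs(estimate - 0.7) < 0.1           # close to the true frequency
```

No single report reveals a user's bit with confidence better than 3:1 odds, yet the aggregator's debiased estimate concentrates around the true population rate as the group grows — the trade-off a differential-privacy analysis of such a system quantifies.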
Esperança, Pedro M. "Privacy-preserving statistical and machine learning methods under fully homomorphic encryption." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a081311c-b25c-462e-a66b-1e4ac4de5fc2.
Zhang, Kevin. "Tiresias: a peer-to-peer platform for privacy preserving machine learning." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129840.
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 81-84).
Big technology firms have a monopoly over user data. To remedy this, we propose a data science platform that allows users to collect their personal data and offer computations on them in a differentially private manner. This platform provides a mechanism for contributors to offer computations on their data in a privacy-preserving way, and for requesters (i.e., anyone who can benefit from applying machine learning to the users' data) to request computations on user data they would otherwise not be able to collect. Through carefully designed differential privacy mechanisms, we can create a platform that gives people control over their data and enables new types of applications.
by Kevin Zhang.
M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Langelaar, Johannes, and Adam Mattsson Strömme. "Federated Neural Collaborative Filtering for privacy-preserving recommender systems." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446913.
Dou, Yanzhi. "Toward Privacy-Preserving and Secure Dynamic Spectrum Access." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/81882.
Ph. D.
García Recuero, Álvaro. "Discouraging abusive behavior in privacy-preserving decentralized online social networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S010/document.
The main goal of this thesis is to evaluate privacy-preserving protocols for detecting abuse in future decentralised online social platforms or microblogging services, where often only a limited amount of metadata is available for data analytics. Taking such data minimisation into account, we obtain acceptable results compared with machine learning techniques that use all available metadata. We draw a series of conclusions and recommendations that will aid the design and development of a privacy-preserving decentralised social network that discourages abusive behavior.
Ligier, Damien. "Functional encryption applied to privacy-preserving classification : practical use, performances and security." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0040/document.
Machine Learning (ML) algorithms have proven very powerful. Classification in particular makes it possible to efficiently identify information in large datasets. However, it raises concerns about the privacy of that data, bringing to the forefront the challenge of designing machine learning algorithms that preserve confidentiality. This thesis proposes a way to combine cryptographic systems with classification algorithms to obtain a privacy-preserving classifier. The cryptographic family in question is functional encryption, a generalization of traditional public-key encryption in which decryption keys are associated with a function. We experimented with this combination in a realistic scenario using the MNIST dataset of handwritten digit images; in this use case our system can determine which digit is written in an encrypted digit image. We also study its security in this real-life scenario, which raises concerns about uses of functional encryption schemes in general, not just in our use case. We then introduce a way to balance, in our construction, the efficiency of classification against these risks.
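To make the functional-encryption idea concrete, here is a deliberately insecure, one-time-pad-style toy with an inner-product flavour (real schemes, including those studied in such theses, are built quite differently): the functional key for a weight vector lets the holder learn only the linear score on the encrypted input, not the input itself.

```python
import random

Q = 2**31 - 1  # public modulus (illustrative)
DIM = 4

# Setup: the master secret key is a random pad vector.
msk = [random.randrange(Q) for _ in range(DIM)]

def encrypt(x):
    """Mask each coordinate of x with the master secret."""
    return [(xi + ri) % Q for xi, ri in zip(x, msk)]

def keygen(w):
    """Functional decryption key for the linear map <w, .>."""
    return sum(wi * ri for wi, ri in zip(w, msk)) % Q

def decrypt(ct, w, sk_w):
    """Yields <w, x> only; the coordinates of x stay hidden."""
    return (sum(wi * ci for wi, ci in zip(w, ct)) - sk_w) % Q

x = [2, 0, 3, 1]   # e.g. features extracted from a digit image
w = [1, 4, 2, 5]   # one class's linear classifier weights
score = decrypt(encrypt(x), w, keygen(w))
assert score == 1 * 2 + 4 * 0 + 2 * 3 + 5 * 1  # 13
```

A single functional key leaks exactly one linear functional of the plaintext; as the abstract notes, issuing many such keys (one per class of a classifier) is precisely where the security analysis of functional encryption becomes delicate.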
Books on the topic "Privacy preserving machine learning"
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. Privacy-Preserving Machine Learning. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3.
Pathak, Manas A. Privacy-Preserving Machine Learning for Speech Processing. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-4639-2.
Pathak, Manas A. Privacy-Preserving Machine Learning for Speech Processing. New York, NY: Springer New York, 2013.
Oyarzun Laura, Cristina, M. Jorge Cardoso, Michal Rosen-Zvi, Georgios Kaissis, Marius George Linguraru, Raj Shekhar, Stefan Wesarg, et al., eds. Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90874-4.
Kim, Kwangjo, and Harry Chandra Tanuwidjaja. Privacy-Preserving Deep Learning. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3764-3.
Qu, Youyang, Longxiang Gao, Shui Yu, and Yong Xiang. Privacy Preservation in IoT: Machine Learning Approaches. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1797-4.
Zimmeck, Sebastian. Using Machine Learning to improve Internet Privacy. [New York, N.Y.?]: [publisher not identified], 2017.
Yu, Philip S. Machine Learning in Cyber Trust: Security, Privacy, and Reliability. Boston, MA: Springer-Verlag US, 2009.
Lecuyer, Mathias. Security, Privacy, and Transparency Guarantees for Machine Learning Systems. [New York, N.Y.?]: [publisher not identified], 2019.
Dimitrakakis, Christos, Aris Gkoulalas-Divanis, Aikaterini Mitrokotsa, Vassilios S. Verykios, and Yücel Saygin, eds. Privacy and Security Issues in Data Mining and Machine Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19896-0.
Book chapters on the topic "Privacy preserving machine learning"
Chow, Sherman S. M. "Privacy-Preserving Machine Learning." In Communications in Computer and Information Science, 3–6. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-3095-7_1.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Distributed Learning." In Privacy-Preserving Machine Learning, 47–56. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_4.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Learning with Differential Privacy." In Privacy-Preserving Machine Learning, 57–64. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_5.
Zeugmann, Thomas, Pascal Poupart, James Kennedy, Xin Jin, Jiawei Han, Lorenza Saitta, Michele Sebag, et al. "Privacy-Preserving Data Mining." In Encyclopedia of Machine Learning, 795. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_667.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Outsourced Computation for Learning." In Privacy-Preserving Machine Learning, 31–45. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_3.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Threats in Open Environment." In Privacy-Preserving Machine Learning, 75–86. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_7.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Applications—Privacy-Preserving Image Processing." In Privacy-Preserving Machine Learning, 65–74. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_6.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Conclusion." In Privacy-Preserving Machine Learning, 87–88. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_8.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Secure Cooperative Learning in Early Years." In Privacy-Preserving Machine Learning, 15–30. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_2.
Li, Jin, Ping Li, Zheli Liu, Xiaofeng Chen, and Tong Li. "Introduction." In Privacy-Preserving Machine Learning, 1–13. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9139-3_1.
Conference papers on the topic "Privacy preserving machine learning"
EL MESTARI, Soumia Zohra. "Privacy Preserving Machine Learning Systems." In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539530.
Carey, Alycia, and Nicholas Pattengale. "Privacy-Preserving AutoML." In Proposed for presentation at the Sandia Machine Learning and Deep Learning Workshop held July 19-22, 2021. US DOE, 2021. http://dx.doi.org/10.2172/1877808.
Senekane, Makhamisa, Mhlambululi Mafu, and Benedict Molibeli Taele. "Privacy-preserving quantum machine learning using differential privacy." In 2017 IEEE AFRICON. IEEE, 2017. http://dx.doi.org/10.1109/afrcon.2017.8095692.
"Session details: Privacy-preserving Machine Learning." In the 12th ACM Workshop, chair Sadia Afroz. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3338501.3371912.
Hesamifard, Ehsan, Hassan Takabi, Mehdi Ghasemi, and Catherine Jones. "Privacy-preserving Machine Learning in Cloud." In CCS '17: 2017 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3140649.3140655.
Wang, Xin, Hideaki Ishii, Linkang Du, Peng Cheng, and Jiming Chen. "Differential Privacy-preserving Distributed Machine Learning." In 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019. http://dx.doi.org/10.1109/cdc40024.2019.9029938.
Schneider, Thomas. "Engineering Privacy-Preserving Machine Learning Protocols." In CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411501.3418607.
Miyaji, Atsuko, Tatsuhiro Yamatsuki, Bingchang He, Shintaro Yamashita, and Tomoaki Mimoto. "Re-visited Privacy-Preserving Machine Learning." In 2023 20th Annual International Conference on Privacy, Security and Trust (PST). IEEE, 2023. http://dx.doi.org/10.1109/pst58708.2023.10320156.
Afroz, Sadia. "Session details: Privacy-preserving Machine Learning." In CCS '19: 2019 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3371912.
Prabhu, Akshay, Niranjana Balasubramanian, Chinmay Tiwari, and Rugved Deolekar. "Privacy preserving and secure machine learning." In 2021 IEEE 18th India Council International Conference (INDICON). IEEE, 2021. http://dx.doi.org/10.1109/indicon52576.2021.9691706.
Reports of organizations on the topic "Privacy preserving machine learning"
Martindale, Nathan, Scott Stewart, Mark Adams, and Greg Westphal. Considerations for using Privacy Preserving Machine Learning Techniques for Safeguards. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1737477.
Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.