Ready-made bibliography on the topic "Empirical privacy defenses"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Empirical privacy defenses".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, if the relevant parameters are available in its metadata.
Journal articles on the topic "Empirical privacy defenses"
Kaplan, Caelin, Chuan Xu, Othmane Marfoq, Giovanni Neglia, and Anderson Santana de Oliveira. "A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses". Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 525–48. http://dx.doi.org/10.56553/popets-2024-0031.
Nakai, Tsunato, Ye Wang, Kota Yoshida, and Takeshi Fujino. "SEDMA: Self-Distillation with Model Aggregation for Membership Privacy". Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 494–508. http://dx.doi.org/10.56553/popets-2024-0029.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Wang, Tianhao, Yuheng Zhang, and Ruoxi Jia. "Improving Robustness to Model Inversion Attacks via Mutual Information Regularization". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11666–73. http://dx.doi.org/10.1609/aaai.v35i13.17387.
Primus, Eve. "The Problematic Structure of Indigent Defense Delivery". Michigan Law Review, no. 122.2 (2023): 205. http://dx.doi.org/10.36644/mlr.122.2.problematic.
Sangero, Boaz. "A New Defense for Self-Defense". Buffalo Criminal Law Review 9, no. 2 (January 1, 2006): 475–559. http://dx.doi.org/10.1525/nclr.2006.9.2.475.
Chen, Jiyu, Yiwen Guo, Qianjun Zheng, and Hao Chen. "Protect privacy of deep classification networks by exploiting their generative power". Machine Learning 110, no. 4 (April 2021): 651–74. http://dx.doi.org/10.1007/s10994-021-05951-6.
Miao, Lu, Weibo Li, Jia Zhao, Xin Zhou, and Yao Wu. "Differential Private Defense Against Backdoor Attacks in Federated Learning". Frontiers in Computing and Intelligent Systems 9, no. 2 (August 28, 2024): 31–39. http://dx.doi.org/10.54097/dyt1nn60.
Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning". Information 14, no. 11 (November 19, 2023): 620. http://dx.doi.org/10.3390/info14110620.
PERSKY, JOSEPH. "Rawls's Thin (Millean) Defense of Private Property". Utilitas 22, no. 2 (May 10, 2010): 134–47. http://dx.doi.org/10.1017/s0953820810000051.
Pełny tekst źródłaRozprawy doktorskie na temat "Empirical privacy defenses"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques—differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections. The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups. The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data---an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data.
To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes. The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates. In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
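The abstract describes WERM as a weighted trade-off between fitting the training data and fitting the reference data. The exact objective is given in the cited thesis and PoPETs paper; as a minimal sketch only, assuming the objective is a convex combination of the empirical risks on the two datasets (the weight `w` and the function name `werm_risk` are illustrative, not taken from the source), it might look like:

```python
import numpy as np

def werm_risk(train_losses, ref_losses, w):
    """Hypothetical sketch of a weighted empirical risk.

    Combines the average per-example loss on the training set and on
    the reference set. Intuitively, w close to 1 fits mostly the
    training data (exposing it more, sparing the reference data),
    while w close to 0 shifts both the fitting and the privacy risk
    toward the reference data.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight w must lie in [0, 1]")
    return w * np.mean(train_losses) + (1.0 - w) * np.mean(ref_losses)

# Sweeping w traces out the trade-off curve between the two risks.
for w in (0.0, 0.5, 1.0):
    print(w, werm_risk([1.0, 3.0], [2.0, 2.0], w))
```

Minimizing this quantity over model parameters (with the losses recomputed at each step) would interpolate between training only on the reference data (w = 0) and only on the private training data (w = 1).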
Spiekermann, Sarah, Jana Korunovska, and Christine Bauer. "Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy". 2012. http://epub.wu.ac.at/3630/1/2012_ICIS_Facebook.pdf.
Pełny tekst źródłaKsiążki na temat "Empirical privacy defenses"
Lafollette, Hugh. The Empirical Evidence. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.003.0006.
Lafollette, Hugh. In Defense of Gun Control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.001.0001.
Ganz, Aurora. Fuelling Insecurity. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529216691.001.0001.
Heinze, Eric. Toward a Legal Concept of Hatred. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190465544.003.0006.
Clifton, Judith, Daniel Díaz Fuentes, and David Howarth, eds. Regional Development Banks in the World Economy. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198861089.001.0001.
Pełny tekst źródłaCzęści książek na temat "Empirical privacy defenses"
Augsberg, Ino. "In Defence of Ambiguity". In Methodology in Private Law Theory, 137–52. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780198885306.003.0006.
Xu, Qiongkai, Trevor Cohn, and Olga Ohrimenko. "Fingerprint Attack: Client De-Anonymization in Federated Learning". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230590.
Fabre, Cécile. "Economic Espionage". In Spying Through a Glass Darkly, 72–91. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198833765.003.0005.
Marneffe, Peter de. "Self-Sovereignty, Drugs, and Prostitution". In Oxford Studies in Political Philosophy Volume 9, 241–59. Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198877639.003.0009.
Bagg, Samuel Ely. "What Is State Capture?" In The Dispersion of Power, 79–107. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192848826.003.0005.
Pełny tekst źródłaStreszczenia konferencji na temat "Empirical privacy defenses"
Costa, Miguel, and Sandro Pinto. "David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge". In 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), 524–41. IEEE, 2024. http://dx.doi.org/10.1109/eurosp60621.2024.00035.
Jankovic, Aleksandar, and Rudolf Mayer. "An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores". In CODASPY '22: Twelfth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510548.3519370.
Ferreira, Raul, Vagner Praia, Heraldo Filho, Fabrício Bonecini, Andre Vieira, and Felix Lopez. "Platform of the Brazilian CSOs: Open Government Data and Crowdsourcing for the Promotion of Citizenship". In XIII Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2017. http://dx.doi.org/10.5753/sbsi.2017.6021.