Academic literature on the topic 'Empirical privacy defenses'
Journal articles on the topic "Empirical privacy defenses"
Kaplan, Caelin, Chuan Xu, Othmane Marfoq, Giovanni Neglia, and Anderson Santana de Oliveira. "A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 525–48. http://dx.doi.org/10.56553/popets-2024-0031.
Nakai, Tsunato, Ye Wang, Kota Yoshida, and Takeshi Fujino. "SEDMA: Self-Distillation with Model Aggregation for Membership Privacy." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 494–508. http://dx.doi.org/10.56553/popets-2024-0029.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Wang, Tianhao, Yuheng Zhang, and Ruoxi Jia. "Improving Robustness to Model Inversion Attacks via Mutual Information Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11666–73. http://dx.doi.org/10.1609/aaai.v35i13.17387.
Primus, Eve. "The Problematic Structure of Indigent Defense Delivery." Michigan Law Review, no. 122.2 (2023): 205. http://dx.doi.org/10.36644/mlr.122.2.problematic.
Sangero, Boaz. "A New Defense for Self-Defense." Buffalo Criminal Law Review 9, no. 2 (January 1, 2006): 475–559. http://dx.doi.org/10.1525/nclr.2006.9.2.475.
Chen, Jiyu, Yiwen Guo, Qianjun Zheng, and Hao Chen. "Protect privacy of deep classification networks by exploiting their generative power." Machine Learning 110, no. 4 (April 2021): 651–74. http://dx.doi.org/10.1007/s10994-021-05951-6.
Miao, Lu, Weibo Li, Jia Zhao, Xin Zhou, and Yao Wu. "Differential Private Defense Against Backdoor Attacks in Federated Learning." Frontiers in Computing and Intelligent Systems 9, no. 2 (August 28, 2024): 31–39. http://dx.doi.org/10.54097/dyt1nn60.
Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning." Information 14, no. 11 (November 19, 2023): 620. http://dx.doi.org/10.3390/info14110620.
Persky, Joseph. "Rawls's Thin (Millean) Defense of Private Property." Utilitas 22, no. 2 (May 10, 2010): 134–47. http://dx.doi.org/10.1017/s0953820810000051.
Full textDissertations / Theses on the topic "Empirical privacy defenses"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques (differential privacy, empirical privacy defenses, and federated learning); (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data: an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data.
To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
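The abstract above describes WERM as a weighting of empirical risks on the training and reference datasets. The sketch below illustrates that idea only; the exact objective, parameterization, and names (`werm_objective`, the weight `w`, the squared-error loss) are assumptions for illustration, not taken from the cited thesis.

```python
import numpy as np

def empirical_risk(theta, X, y):
    # Mean squared error of a linear model; a stand-in for any model loss.
    return float(np.mean((X @ theta - y) ** 2))

def werm_objective(theta, X_train, y_train, X_ref, y_ref, w):
    # Weighted Empirical Risk Minimization (WERM), sketched as a convex
    # combination of training-set and reference-set risk. Intuitively,
    # w near 1 leans on the training data (weaker training-data privacy),
    # while w near 0 leans on the reference data (weaker reference-data
    # privacy).
    return (w * empirical_risk(theta, X_train, y_train)
            + (1.0 - w) * empirical_risk(theta, X_ref, y_ref))
```

Minimizing such an objective over `theta` for a sweep of `w` values would trace out the utility versus relative-privacy trade-off the abstract refers to.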
Spiekermann, Sarah, Jana Korunovska, and Christine Bauer. "Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy." 2012. http://epub.wu.ac.at/3630/1/2012_ICIS_Facebook.pdf.
Full textBooks on the topic "Empirical privacy defenses"
Lafollette, Hugh. The Empirical Evidence. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.003.0006.
Lafollette, Hugh. In Defense of Gun Control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.001.0001.
Ganz, Aurora. Fuelling Insecurity. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529216691.001.0001.
Heinze, Eric. Toward a Legal Concept of Hatred. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190465544.003.0006.
Clifton, Judith, Daniel Díaz Fuentes, and David Howarth, eds. Regional Development Banks in the World Economy. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198861089.001.0001.
Full textBook chapters on the topic "Empirical privacy defenses"
Augsberg, Ino. "In Defence of Ambiguity." In Methodology in Private Law Theory, 137–52. Oxford: Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780198885306.003.0006.
Xu, Qiongkai, Trevor Cohn, and Olga Ohrimenko. "Fingerprint Attack: Client De-Anonymization in Federated Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230590.
Fabre, Cécile. "Economic Espionage." In Spying Through a Glass Darkly, 72–91. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198833765.003.0005.
Marneffe, Peter de. "Self-Sovereignty, Drugs, and Prostitution." In Oxford Studies in Political Philosophy Volume 9, 241–59. Oxford: Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198877639.003.0009.
Bagg, Samuel Ely. "What Is State Capture?" In The Dispersion of Power, 79–107. Oxford: Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192848826.003.0005.
Full textConference papers on the topic "Empirical privacy defenses"
Costa, Miguel, and Sandro Pinto. "David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge." In 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), 524–41. IEEE, 2024. http://dx.doi.org/10.1109/eurosp60621.2024.00035.
Jankovic, Aleksandar, and Rudolf Mayer. "An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores." In CODASPY '22: Twelveth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510548.3519370.
Full textFerreira, Raul, Vagner Praia, Heraldo Filho, Fabrício Bonecini, Andre Vieira, and Felix Lopez. "Platform of the Brazilian CSOs: Open Government Data and Crowdsourcing for the Promotion of Citizenship." In XIII Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2017. http://dx.doi.org/10.5753/sbsi.2017.6021.