A selection of scholarly literature on the topic "Empirical privacy defenses"
Below are lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Empirical privacy defenses".
Journal articles on the topic "Empirical privacy defenses"
Kaplan, Caelin, Chuan Xu, Othmane Marfoq, Giovanni Neglia, and Anderson Santana de Oliveira. "A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 525–48. http://dx.doi.org/10.56553/popets-2024-0031.
Nakai, Tsunato, Ye Wang, Kota Yoshida, and Takeshi Fujino. "SEDMA: Self-Distillation with Model Aggregation for Membership Privacy." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 494–508. http://dx.doi.org/10.56553/popets-2024-0029.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Wang, Tianhao, Yuheng Zhang, and Ruoxi Jia. "Improving Robustness to Model Inversion Attacks via Mutual Information Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11666–73. http://dx.doi.org/10.1609/aaai.v35i13.17387.
Primus, Eve. "The Problematic Structure of Indigent Defense Delivery." Michigan Law Review, no. 122.2 (2023): 205. http://dx.doi.org/10.36644/mlr.122.2.problematic.
Sangero, Boaz. "A New Defense for Self-Defense." Buffalo Criminal Law Review 9, no. 2 (January 1, 2006): 475–559. http://dx.doi.org/10.1525/nclr.2006.9.2.475.
Chen, Jiyu, Yiwen Guo, Qianjun Zheng, and Hao Chen. "Protect privacy of deep classification networks by exploiting their generative power." Machine Learning 110, no. 4 (April 2021): 651–74. http://dx.doi.org/10.1007/s10994-021-05951-6.
Miao, Lu, Weibo Li, Jia Zhao, Xin Zhou, and Yao Wu. "Differential Private Defense Against Backdoor Attacks in Federated Learning." Frontiers in Computing and Intelligent Systems 9, no. 2 (August 28, 2024): 31–39. http://dx.doi.org/10.54097/dyt1nn60.
Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning." Information 14, no. 11 (November 19, 2023): 620. http://dx.doi.org/10.3390/info14110620.
Persky, Joseph. "Rawls's Thin (Millean) Defense of Private Property." Utilitas 22, no. 2 (May 10, 2010): 134–47. http://dx.doi.org/10.1017/s0953820810000051.
Der volle Inhalt der QuelleDissertationen zum Thema "Empirical privacy defenses"
Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic thesis or dissertation, Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is more important than ever. However, privacy-preserving ML techniques often reduce task-specific utility and may negatively impact other essential factors such as fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis addresses these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques (differential privacy, empirical privacy defenses, and federated learning); (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.

The first study investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous work suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (differentially private stochastic gradient descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics such as demographic parity, equalized odds, and predictive parity are often reduced, or remain negligible, compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.

The second study focuses on empirical privacy defenses, which aim to protect training-data privacy while minimizing utility loss. Most existing defenses assume access to reference data: an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data.
To address this, we conducted the first comprehensive analysis of reference-data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training-data privacy, and reference-data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.

The third study addresses convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to let smaller nodes in a network offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.

In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
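The core idea of weighted empirical risk minimization, as summarized in the abstract, can be illustrated with a minimal sketch. This is not the thesis's actual implementation: the function name, logistic-loss choice, and hyperparameters below are assumptions for illustration. A single weight `w` interpolates between fitting the training data and fitting the reference data, which is the knob that trades off their relative privacy exposure.

```python
import numpy as np

def weighted_erm(X_train, y_train, X_ref, y_ref, w, lr=0.1, epochs=200):
    """Illustrative weighted ERM for binary logistic regression.

    Minimizes  w * L(train) + (1 - w) * L(ref)  by gradient descent,
    where L is the average logistic loss on each dataset. Larger w
    leans on the training data; smaller w leans on the reference data.
    """
    theta = np.zeros(X_train.shape[1])

    def grad(X, y):
        # Gradient of mean logistic loss: X^T (sigmoid(X theta) - y) / n
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        return X.T @ (p - y) / len(y)

    for _ in range(epochs):
        theta -= lr * (w * grad(X_train, y_train) + (1.0 - w) * grad(X_ref, y_ref))
    return theta
```

Setting `w = 1` recovers ordinary ERM on the training data alone, while `w = 0` trains only on the reference data; intermediate values realize the utility/privacy trade-off curve the abstract refers to.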
Spiekermann, Sarah, Jana Korunovska, and Christine Bauer. "Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy." 2012. http://epub.wu.ac.at/3630/1/2012_ICIS_Facebook.pdf.
Books on the topic "Empirical privacy defenses"
Lafollette, Hugh. The Empirical Evidence. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.003.0006.
Lafollette, Hugh. In Defense of Gun Control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.001.0001.
Ganz, Aurora. Fuelling Insecurity. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529216691.001.0001.
Heinze, Eric. Toward a Legal Concept of Hatred. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190465544.003.0006.
Clifton, Judith, Daniel Díaz Fuentes, and David Howarth, eds. Regional Development Banks in the World Economy. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198861089.001.0001.
Book chapters on the topic "Empirical privacy defenses"
Augsberg, Ino. "In Defence of Ambiguity." In Methodology in Private Law Theory, 137–52. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780198885306.003.0006.
Xu, Qiongkai, Trevor Cohn, and Olga Ohrimenko. "Fingerprint Attack: Client De-Anonymization in Federated Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230590.
Fabre, Cécile. "Economic Espionage." In Spying Through a Glass Darkly, 72–91. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198833765.003.0005.
Marneffe, Peter de. "Self-Sovereignty, Drugs, and Prostitution." In Oxford Studies in Political Philosophy Volume 9, 241–59. Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198877639.003.0009.
Bagg, Samuel Ely. "What Is State Capture?" In The Dispersion of Power, 79–107. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192848826.003.0005.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Empirical privacy defenses"
Costa, Miguel, and Sandro Pinto. "David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge." In 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), 524–41. IEEE, 2024. http://dx.doi.org/10.1109/eurosp60621.2024.00035.
Jankovic, Aleksandar, and Rudolf Mayer. "An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores." In CODASPY '22: Twelfth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510548.3519370.
Ferreira, Raul, Vagner Praia, Heraldo Filho, Fabrício Bonecini, Andre Vieira, and Felix Lopez. "Platform of the Brazilian CSOs: Open Government Data and Crowdsourcing for the Promotion of Citizenship." In XIII Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2017. http://dx.doi.org/10.5753/sbsi.2017.6021.