Academic literature on the topic 'Empirical privacy defenses'


Consult the lists of relevant articles, books, theses, and conference reports on the topic 'Empirical privacy defenses.' Each source in the list of references can be exported automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc. You can also download the full text of a publication as a PDF and read its abstract online whenever the metadata makes it available.

Journal articles on the topic "Empirical privacy defenses"

1

Kaplan, Caelin, Chuan Xu, Othmane Marfoq, Giovanni Neglia, and Anderson Santana de Oliveira. "A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 525–48. http://dx.doi.org/10.56553/popets-2024-0031.

Full text
Abstract:
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset coming from the same (or a similar) underlying distribution as training data. Despite the common use of reference data, previous works are notably reticent about defining and evaluating reference data privacy. As gains in model utility and/or training data privacy may come at the expense of reference data privacy, it is essential that all three aspects are duly considered. In this paper, we conduct the first comprehensive analysis of empirical privacy defenses. First, we examine the availability of reference data and its privacy treatment in previous works and demonstrate its necessity for fairly comparing defenses. Second, we propose a baseline defense that enables the utility-privacy tradeoff with respect to both training and reference data to be easily understood. Our method is formulated as an empirical risk minimization with a constraint on the generalization error, which, in practice, can be evaluated as a weighted empirical risk minimization (WERM) over the training and reference datasets. Although we conceived of WERM as a simple baseline, our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses using reference data for nearly all relative privacy levels of reference and training data. Our investigation also reveals that these existing methods are unable to trade off reference data privacy for model utility and/or training data privacy, and thus fail to operate outside of the high reference data privacy case. Overall, our work highlights the need for a proper evaluation of the triad model utility / training data privacy / reference data privacy when comparing privacy defenses.
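The weighted objective the abstract describes can be sketched in a few lines. The following is an illustrative reading of a weighted empirical risk minimization over training and reference data, not the authors' implementation; the function name, the logistic loss, and the weight `alpha` are assumptions:

```python
import numpy as np

def werm_loss(w, X_train, y_train, X_ref, y_ref, alpha=0.7):
    """Weighted empirical risk over training and reference data.

    alpha = 1 trains on the training set alone; alpha = 0 on the
    reference set alone; intermediate values trade off the privacy
    of the two datasets (illustrative parameterization).
    """
    def logistic_risk(X, y):
        # mean logistic loss for labels in {0, 1}
        margins = (X @ w) * (2 * y - 1)
        return float(np.mean(np.log1p(np.exp(-margins))))

    return alpha * logistic_risk(X_train, y_train) \
        + (1 - alpha) * logistic_risk(X_ref, y_ref)
```

Minimizing such an objective while constraining the generalization error is how the paper frames its baseline; here `alpha` simply reweights the two empirical risks.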
APA, Harvard, Vancouver, ISO, and other styles
2

Nakai, Tsunato, Ye Wang, Kota Yoshida, and Takeshi Fujino. "SEDMA: Self-Distillation with Model Aggregation for Membership Privacy." Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 494–508. http://dx.doi.org/10.56553/popets-2024-0029.

Full text
Abstract:
Membership inference attacks (MIAs) are important measures to evaluate potential risks of privacy leakage from machine learning (ML) models. State-of-the-art MIA defenses have achieved favorable privacy-utility trade-offs using knowledge distillation on split training datasets. However, such defenses increase computational costs, as a large number of ML models must be trained on the split datasets. In this study, we propose a new MIA defense, called SEDMA, based on self-distillation using model aggregation to mitigate MIAs, inspired by the model parameter averaging used in federated learning. The key idea of SEDMA is to split the training dataset into several parts and aggregate multiple ML models trained on each split for self-distillation. The intuition is that model aggregation prevents model over-fitting by smoothing information related to the training data across the multiple ML models while preserving model utility, as in federated learning. Through our experiments on major benchmark datasets (Purchase100, Texas100, and CIFAR100), we show that SEDMA outperforms state-of-the-art MIA defenses in terms of membership privacy (MIA accuracy), model accuracy, and computational costs. Specifically, SEDMA incurs at most an approximately 3-5% model accuracy drop, while achieving the lowest MIA accuracy among state-of-the-art empirical MIA defenses. In terms of computational costs, SEDMA takes significantly less processing time than the previous defense with the best privacy-utility trade-off. SEDMA achieves both favorable privacy-utility trade-offs and low computational costs.
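The core mechanism, training one model per split, averaging their parameters, and distilling from the averaged teacher, can be sketched as follows. This is a minimal illustration, not the paper's code; the function names and the temperature softmax are assumptions:

```python
import numpy as np

def aggregate_params(models):
    """Element-wise average of per-split model parameters,
    i.e. federated-style parameter averaging."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*models)]

def soft_labels(logits, temperature=2.0):
    """Soft targets from the aggregated teacher for self-distillation."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

The final student is then trained on these soft labels, which smooths away per-example memorization accumulated in any single split's model.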
3

Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.

Full text
Abstract:
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is the backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality to the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the necessary steps to carry a successful backdoor attack in FL setting, and then, explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that either backdoor is completely eliminated, or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature. We achieve this by having minimal influence over the accuracy of the trained models. In addition, we also provide convergence rate analysis for our proposed scheme.
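The sign-based adjustment of the server's learning rate can be sketched as a single aggregation step. This is an illustrative reading of the abstract, not the authors' implementation; the threshold `theta`, step size `eta`, and names are assumptions:

```python
import numpy as np

def robust_lr_step(global_w, updates, eta=0.1, theta=4):
    """One aggregation round with a robust, per-dimension learning rate.

    updates: array of shape (n_agents, dim). For each dimension, if
    fewer than theta agents agree on the sign of the update, the
    learning rate for that dimension is negated, pushing the model
    away from the (possibly backdoored) direction.
    """
    agreement = np.abs(np.sign(updates).sum(axis=0))
    lr = np.where(agreement >= theta, eta, -eta)
    return global_w + lr * updates.mean(axis=0)
```

The appeal of this design is that it changes only the server-side aggregation rule; clients run the unmodified FL protocol.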
4

Wang, Tianhao, Yuheng Zhang, and Ruoxi Jia. "Improving Robustness to Model Inversion Attacks via Mutual Information Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (May 18, 2021): 11666–73. http://dx.doi.org/10.1609/aaai.v35i13.17387.

Full text
Abstract:
This paper studies defense mechanisms against model inversion (MI) attacks -- a type of privacy attacks aimed at inferring information about the training data distribution given the access to a target machine learning model. Existing defense mechanisms rely on model-specific heuristics or noise injection. While being able to mitigate attacks, existing methods significantly hinder model performance. There remains a question of how to design a defense mechanism that is applicable to a variety of models and achieves better utility-privacy tradeoff. In this paper, we propose the Mutual Information Regularization based Defense (MID) against MI attacks. The key idea is to limit the information about the model input contained in the prediction, thereby limiting the ability of an adversary to infer the private training attributes from the model prediction. Our defense principle is model-agnostic and we present tractable approximations to the regularizer for linear regression, decision trees, and neural networks, which have been successfully attacked by prior work if not attached with any defenses. We present a formal study of MI attacks by devising a rigorous game-based definition and quantifying the associated information leakage. Our theoretical analysis sheds light on the inefficacy of DP in defending against MI attacks, which has been empirically observed in several prior works. Our experiments demonstrate that MID leads to state-of-the-art performance for a variety of MI attacks, target models and datasets.
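For neural networks, a tractable surrogate for the mutual-information penalty resembles a variational information-bottleneck term under a Gaussian bottleneck. The sketch below shows such a KL penalty; it is an illustrative reading, with the weight `lam` and the function name assumed rather than taken from the paper:

```python
import numpy as np

def mid_penalty(mu, log_var, lam=0.01):
    """lam * KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    Added to the task loss, this bounds the information the model's
    internal representation (and hence its prediction) can carry
    about the input.
    """
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)
    return lam * kl
```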
5

Primus, Eve. "The Problematic Structure of Indigent Defense Delivery." Michigan Law Review, no. 122.2 (2023): 205. http://dx.doi.org/10.36644/mlr.122.2.problematic.

Full text
Abstract:
The national conversation about criminal justice reform largely ignores the critical need for structural reforms in the provision of indigent defense. In most parts of the country, decisions about how to structure the provision of indigent defense are made at the local level, resulting in a fragmented patchwork of different indigent defense delivery systems. In most counties, if an indigent criminal defendant gets representation at all, it comes from assigned counsel or flat-fee contract lawyers rather than public defenders. In those assigned-counsel and flat-fee contract systems, the lawyers representing indigent defendants have financial incentives to get rid of assigned criminal cases as quickly as possible. Those incentives fuel mass incarceration because the lawyers put less time into each case than their public defender counterparts and achieve poorer outcomes for their clients. Moreover, empirical research shows that assigned-counsel and flat-fee contract systems are economically more costly to the public fisc than public defender systems. This Article collects data from across the country to show how prevalent assigned-counsel and contract systems remain, explains why arguments in favor of substantial reliance on the private bar to provide for indigent defense are outdated, argues that more states need to move toward state-structured public defender models, and outlines how it is politically possible for stakeholders to get there.
6

Sangero, Boaz. "A New Defense for Self-Defense." Buffalo Criminal Law Review 9, no. 2 (January 1, 2006): 475–559. http://dx.doi.org/10.1525/nclr.2006.9.2.475.

Full text
Abstract:
Private defense, like self-defense, has been virtually undisputed both in the past and present and even taken for granted, and perhaps particularly for this reason, sufficient attention has not always been given to the rationale underlying private defense. As a result, the legal arrangements set for private defense in the different legal systems are deficient, inconsistent, and, at times, replete with internal contradictions. This article seeks to propose a sound rationale for the concept of private defense. It begins by attempting to clearly and precisely delineate the scope of the defense and weed out cases that are occasionally (and, I maintain, mistakenly) included in the framework of its scope by means of two general and imperative distinctions: between justification and excuse and between the definitive components of offenses and those of defenses. With regard to the first distinction, I consider the validity of its application and its possible implications for private defense. Since the validity of the second distinction is undisputed as an empirical fact (at least formally) in all modern penal codes, the question raised is whether there is a significant difference between the definition of offenses and the definition of defenses. The answer to this question is relevant to a number of issues, and of particular relevance to private defense are its implications for the application of the principle of legality and with regard to the mental element that should be required of the actor in such situations. Next I embark on a discussion of the various theories competing for predominance as elucidations of private defense. These theories and this discussion then serve as the background and foundation for the construction of the article's proposed rationale for private defense. The novelty of this rationale is in its integrative approach, melding a number of the proposed justifications for self-defense, rather than taking the traditional path of espousing one all-excluding rationale.
7

Chen, Jiyu, Yiwen Guo, Qianjun Zheng, and Hao Chen. "Protect privacy of deep classification networks by exploiting their generative power." Machine Learning 110, no. 4 (April 2021): 651–74. http://dx.doi.org/10.1007/s10994-021-05951-6.

Full text
Abstract:
Research showed that deep learning models are vulnerable to membership inference attacks, which aim to determine if an example is in the training set of the model. We propose a new framework to defend against this sort of attack. Our key insight is that if we retrain the original classifier with a new dataset that is independent of the original training set while their elements are sampled from the same distribution, the retrained classifier will leak no information that cannot be inferred from the distribution about the original training set. Our framework consists of three phases. First, we transferred the original classifier to a Joint Energy-based Model (JEM) to exploit the model’s implicit generative power. Then, we sampled from the JEM to create a new dataset. Finally, we used the new dataset to retrain or fine-tune the original classifier. We empirically studied different transfer learning schemes for the JEM and fine-tuning/retraining strategies for the classifier against shadow-model attacks. Our evaluation shows that our framework can suppress the attacker’s membership advantage to a negligible level while keeping the classifier’s accuracy acceptable. We compared it with other state-of-the-art defenses considering adaptive attackers and showed our defense is effective even under the worst-case scenario. Besides, we also found that combining other defenses with our framework often achieves better robustness. Our code will be made available at https://github.com/ChenJiyu/meminf-defense.git.
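The second phase, sampling from an energy-based model, is commonly done with stochastic gradient Langevin dynamics. The sketch below is a generic Langevin sampler, not the authors' code; the step size, step count, and function names are assumptions:

```python
import numpy as np

def sgld_sample(grad_log_p, x0, step=0.01, n_steps=200, rng=None):
    """Draw a sample by Langevin dynamics: repeatedly follow the
    model's score (gradient of log-density) plus Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * grad_log_p(x) \
            + np.sqrt(2.0 * step) * rng.normal(size=x.shape)
    return x
```

Samples produced this way form the synthetic dataset on which the classifier is retrained or fine-tuned.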
8

Miao, Lu, Weibo Li, Jia Zhao, Xin Zhou, and Yao Wu. "Differential Private Defense Against Backdoor Attacks in Federated Learning." Frontiers in Computing and Intelligent Systems 9, no. 2 (August 28, 2024): 31–39. http://dx.doi.org/10.54097/dyt1nn60.

Full text
Abstract:
Federated learning has been applied in a wide variety of applications, in which clients upload their local updates instead of providing their datasets to jointly train a global model. However, the training process of federated learning is vulnerable to adversarial attacks (e.g., backdoor attacks) in the presence of malicious clients. Previous works showed that differential privacy (DP) can be used to defend against backdoor attacks, at the cost of vastly losing model utility. In this work, we study two kinds of backdoor attacks and propose a method based on differential privacy, called Clip Norm Decay (CND), to defend against them while maintaining utility. CND decreases the clipping threshold of model updates throughout the training process to reduce the injected noise. In particular, CND bounds the norm of malicious updates by adaptively setting appropriate thresholds according to the current model updates. Empirical results show that CND can substantially enhance the accuracy of the main task when defending against backdoor attacks. Moreover, extensive experiments demonstrate that our method provides better defense than the original DP, further reducing the attack success rate, even under a strong threat model. Additional experiments on property inference attacks indicate that CND also maintains utility when defending against privacy attacks and does not weaken the privacy preservation of DP.
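The idea of shrinking the DP clipping threshold as training progresses, so that the injected noise (which scales with the threshold) also shrinks, can be sketched as follows. The geometric schedule and the names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def cnd_threshold(round_idx, c0=1.0, decay=0.99):
    """Clipping threshold that decays over training rounds."""
    return c0 * decay ** round_idx

def clip_and_noise(update, c, sigma, rng):
    """Clip an update to norm c, then add Gaussian noise scaled to c,
    as in DP-style aggregation."""
    norm = np.linalg.norm(update)
    if norm > c:
        update = update * (c / norm)
    return update + rng.normal(0.0, sigma * c, size=update.shape)
```

Because the noise standard deviation is proportional to the threshold `c`, decaying `c` simultaneously tightens the bound on malicious updates and lowers the utility cost of the noise.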
9

Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning." Information 14, no. 11 (November 19, 2023): 620. http://dx.doi.org/10.3390/info14110620.

Full text
Abstract:
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings.
10

PERSKY, JOSEPH. "Rawls's Thin (Millean) Defense of Private Property." Utilitas 22, no. 2 (May 10, 2010): 134–47. http://dx.doi.org/10.1017/s0953820810000051.

Full text
Abstract:
This article suggests that Rawls's break with early utilitarians is not so much over the greatest happiness principle as it is over the relation of the institution of private property to justice. In this respect Rawls is very close to John Stuart Mill, arguing for a cleansed or tamed version of the institution. That said, Rawls's defense of private property remains very thin and highly idealized, again following Mill. If Hume and Bentham fail to demonstrate their claims, Rawls and Mill do little better. Rawls, like Mill, has constructed a challenging standard, admits to severe limitations on our empirical knowledge, and remains deeply ambivalent over the role of private property.

Dissertations / Theses on the topic "Empirical privacy defenses"

1

Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.

Full text
Abstract:
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques: differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections. The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups. The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. 
To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes. The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates. In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
2

Spiekermann, Sarah, Jana Korunovska, and Christine Bauer. "Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy." 2012. http://epub.wu.ac.at/3630/1/2012_ICIS_Facebook.pdf.

Full text
Abstract:
Analysts, investors and entrepreneurs have long recognized the value of comprehensive user profiles. While there is a market for trading such personal information among companies, the users, who are actually the providers of this information, are not invited to the negotiating table. To date, there is little information on how users value their personal information. In an online survey-based experiment, 1,059 Facebook users revealed how much they would be willing to pay to keep their personal information. Our study reveals that as soon as people learn that some third party is interested in their personal information (an asset-consciousness prime), they value their information to a much higher degree than without this prime and start to defend their asset. Furthermore, we found that people develop a psychology of ownership toward their personal information. In fact, this construct is a significant contributor to information valuation, much more so than privacy concerns. (author's abstract)

Books on the topic "Empirical privacy defenses"

1

Lafollette, Hugh. The Empirical Evidence. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.003.0006.

Full text
Abstract:
I summarize the proffered evidence of the benefits and the costs of private gun ownership. I focus on the common argument that privately owning firearms is a vital means of self-defense. I isolate the two pillars of this argument: one, that there are 2.5 million defensive gun uses (DGUs) each year; two, that requiring states to issue gun carry permits to any adult who is not expressly disqualified (former felons or mentally ill) saves countless lives. I then summarize the empirical arguments offered by pro-control advocates: high gun prevalence increases homicides, suicides, and gun accidents. Finally, I explain the agnostic findings of the National Academies of Science study group.
2

Lafollette, Hugh. In Defense of Gun Control. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190873363.001.0001.

Full text
Abstract:
The gun control debate is more complex than most disputants acknowledge. We are not tasked with answering a single question: Should we have gun control? There are three distinct policy questions confronting us: Who should we permit to have which guns, and how should we regulate the acquisition, storage, and carrying of guns people may legitimately own? To answer these questions we must decide whether (and which) people have a right to bear arms, what kind of right they have, and how stringent it is. We must also evaluate divergent empirical claims about (a) the role of guns in causing harm, and (b) the degree to which private ownership of guns can protect innocent civilians from attacks by criminals, either in their homes or in public. This book sorts through the conceptual, moral, and empirical claims to fairly assess arguments for and against serious gun control. I argue that the United States needs far more gun control than we currently have in most jurisdictions.
3

Ganz, Aurora. Fuelling Insecurity. Policy Press, 2021. http://dx.doi.org/10.1332/policypress/9781529216691.001.0001.

Full text
Abstract:
This book explores energy securitization in Azerbaijan through a sociological approach that combines discourse with a practice-oriented analysis. The study focuses on the national, international and private actors involved in the labour of energy security and their diverse sets of practices. Its empirical findings indicate that in Azerbaijan, energy securitization lacks the unitary and homogeneous character of its ideal type. Its heterogeneity interlaces internal security with external security, military with civil, defence with enforcement, coercion with control. It relies on surveillance and policing technologies as much as on maritime defence and counterterrorism; it intertwines the national and the international, as well as the public and the private domains of politics; it builds ties amidst manifold security actors and institutions that belong in different social universes; it merges security logic and neoliberal rationales and techniques. Energy securitisation encircles local dynamics and structures into patterns of international cooperation and corporate strategy. The rhetoric emphasis and the routinized character of energy security practices have trivialized any possible alternative and made invisible its costs. In particular, this book reflects on the multiple forms of abuse and violence and the poor energy choices tied to the processes of energy securitisation.
APA, Harvard, Vancouver, ISO, and other styles
4

Heinze, Eric. Toward a Legal Concept of Hatred. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190465544.003.0006.

Full text
Abstract:
Antidiscrimination law focuses on material conduct. A legal concept of hatred, by contrast, focuses on attitudes, as manifest notably through hate speech bans. Democracies by definition assign higher-law status to expression within public discourse. Such expression can, in principle, be legally curtailed only through a showing that it would likely cause some legally cognizable harm. Defenders of bans, struggling with standard empirical claims, have overtly or tacitly applied “anti-Cartesian” phenomenological and sociolinguistic theories to challenge dominant norms that largely limit such harm to demonstrable material causation. Such notions of harm cannot, however, be reconciled with higher-law norms barring viewpoint-selective penalties on expression. Still, a democracy retains alternative means of combating hateful attitudes, including formal and public educational policy, and codes of professional practice in the public and private sectors.
APA, Harvard, Vancouver, ISO, and other styles
5

Clifton, Judith, Daniel Díaz Fuentes, and David Howarth, eds. Regional Development Banks in the World Economy. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198861089.001.0001.

Full text
Abstract:
Regional development banks (RDB) have become increasingly important in the world economy, but have also been relatively under-researched to date. This timely volume addresses this lack of attention by providing a comprehensive, comparative, and empirically informed analysis of their origins, evolution, and contemporary role in the world economy through to the second decade of the twenty-first century. The editors provide an analytical framework that includes a revised categorization of RDB by geographic operation and function. In part one, the chapter authors offer detailed analyses of the origins, evolution, and contemporary role of the major RDB, including the Inter-American Development Bank, the African Development Bank, the Asian Development Bank, the European Investment Bank, the Central American Bank, the Andean Development Corporation, the European Bank for Reconstruction and Development, and the Asian Infrastructure Investment Bank. In part two, the authors engage in comparative analyses of key topics on RDB, examining their initial design and their changing business models, their shifting role in promoting policies supported by the United States as hegemon and the private sector. The volume ends with a critical reflection on the role played by RDB to date and a strong defence of the need for these banks in an increasingly complex world economy.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Empirical privacy defenses"

1

Augsberg, Ino. "In Defence of Ambiguity." In Methodology in Private Law Theory, 137–52. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780198885306.003.0006.

Full text
Abstract:
Abstract The aim of classical legal methodology is to obtain unambiguous answers to clearly defined legal questions. However, a closer look shows that this goal is not only missed de facto, but also de iure. The law itself contains concepts that undermine its own disambiguation. This perspective could also provide a different view of the contrast between so-called ‘realist’, i.e. empirical understandings of law and more formalistic or dogmatic approaches. Traditionally, formalism is supposed to enable the coherence of the law towards the outside world, thus ensuring internal consistency. However, the formalist claim may also serve another function. It could also be used as an instrument to preserve the internal ambiguity of the law by protecting it from imported false certainties. Conceived in this way, formalism itself appears as an ambiguous figure.
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Qiongkai, Trevor Cohn, and Olga Ohrimenko. "Fingerprint Attack: Client De-Anonymization in Federated Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230590.

Full text
Abstract:
Federated Learning allows collaborative training without data sharing in settings where participants do not trust the central server and one another. Privacy can be further improved by ensuring that communication between the participants and the server is anonymized through a shuffle; decoupling the participant identity from their data. This paper seeks to examine whether such a defense is adequate to guarantee anonymity, by proposing a novel fingerprinting attack over gradients sent by the participants to the server. We show that clustering of gradients can easily break the anonymization in an empirical study of learning federated language models on two language corpora. We then show that training with differential privacy can provide a practical defense against our fingerprint attack.
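The attack described in this abstract can be illustrated with a toy sketch (not the authors' actual method or data): if each client's gradients cluster around a client-specific direction determined by its local data, a server can link anonymized updates across rounds by nearest-neighbour matching on cosine similarity. All names and parameters below (`signatures`, `noise`, the number of clients) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 100

# Hypothetical stand-in for per-client gradient structure: each client's
# updates scatter around a client-specific direction (its "fingerprint"),
# perturbed by mini-batch noise.
signatures = rng.normal(size=(n_clients, dim))

def client_update(c, noise=0.3):
    return signatures[c] + noise * rng.normal(size=dim)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Round 1: the shuffler delivers updates in a random (anonymous) order.
perm1 = rng.permutation(n_clients)
round1 = [client_update(c) for c in perm1]

# Round 2: a fresh, independent shuffle of the same clients.
perm2 = rng.permutation(n_clients)
round2 = [client_update(c) for c in perm2]

# Fingerprint attack: link each round-2 update to the most similar
# round-1 update, thereby tracking clients across rounds despite shuffling.
links = [int(np.argmax([cosine(g2, g1) for g1 in round1])) for g2 in round2]

# How often does the link recover the true client identity?
correct = sum(perm1[l] == c for l, c in zip(links, perm2))
print(f"re-identified {correct}/{n_clients} clients")
```

With high-dimensional gradients and modest noise, linking succeeds almost always, which mirrors the abstract's finding; adding differential-privacy noise to each update would shrink the cosine gap between same-client and cross-client pairs and degrade the attack.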
APA, Harvard, Vancouver, ISO, and other styles
3

Fabre, Cécile. "Economic Espionage." In Spying Through a Glass Darkly, 72–91. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198833765.003.0005.

Full text
Abstract:
Economic espionage is a tried and tested tool of statecraft. Rulers have long resorted to it so as to help their own firms gain a competitive commercial advantage; strengthen national security; promote their citizens’ vital interests; and advance their geopolitical and strategic aims on the world stage. There is little scholarly work in that area. The stupefyingly extensive empirical literature on espionage tends to concentrate on state-on-state intelligence activities. This chapter provides a qualified defence of state-sponsored economic espionage against private businesses. It starts with a defence of the right to economic secrecy. It then mounts a defence of economic espionage as the acquisition of economic secrets. The final section responds to four objections.
APA, Harvard, Vancouver, ISO, and other styles
4

Marneffe, Peter de. "Self-Sovereignty, Drugs, and Prostitution." In Oxford Studies in Political Philosophy Volume 9, 241–59. Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198877639.003.0009.

Full text
Abstract:
Abstract Portugal and the state of Oregon have decriminalized drugs, but they have not legalized them. There are no criminal penalties for using drugs or possessing small quantities, but there are criminal penalties for the commercial manufacture and sale of drugs. Sweden, Norway, and Denmark have decriminalized prostitution, but they have not legalized it. There are no criminal penalties for the sale of sexual services by private individuals, but there are criminal penalties for operating a sex business such as a brothel or escort agency. This chapter defends one possible rationale for these policies: that laws that prohibit the use of drugs or the sale of sex violate our right of self-sovereignty—the right we have to control our own minds and bodies—but laws that prohibit us from engaging in related commercial enterprises do not. The chapter presents a theory of self-sovereignty and explains why, given this theory and certain normative and empirical assumptions, it makes sense to hold that whereas criminalization violates our right of self-sovereignty, nonlegalization does not. For this reason, one cannot validly infer from the premise that criminalization violates our rights that nonlegalization does too.
APA, Harvard, Vancouver, ISO, and other styles
5

Bagg, Samuel Ely. "What Is State Capture?" In The Dispersion of Power, 79–107. Oxford University Press, 2024. http://dx.doi.org/10.1093/oso/9780192848826.003.0005.

Full text
Abstract:
Abstract This chapter begins to articulate the core ideal defended in the book: democracy as resisting state capture. This ideal conceives democracy as a set of practices that help to promote the public interest by protecting public power from capture at the hands of any group. The aim of this chapter is to elaborate the core concept of “state capture,” and it begins by examining its relationship to other key terms such as democracy and the public interest, before exploring the very diverse range of forms state capture can take. Defined as the use of public power to pursue private interests at the expense of the public, the concept of state capture is an umbrella term encompassing problems as diverse as regulatory capture, corruption, clientelism, authoritarianism, oligarchy, and racial caste systems, and the chapter draws from historical and social scientific research on all of these phenomena. It then situates these particular literatures within the broader framework provided by two recent comprehensive theories of political economy, both of which demonstrate how pervasive state capture by a narrow elite characterized nearly all state-based societies in human history. Where these theories emphasize the progress achieved by liberal democratic societies in this regard, however, this chapter also stresses the significance of certain forms of capture that persist and even intensify in those societies. Engaging extensively with empirical research, it devotes special attention to two forms of capture that are especially severe and pervasive across all modern democracies: those benefiting categorically advantaged groups and wealthy elites.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Empirical privacy defenses"

1

Costa, Miguel, and Sandro Pinto. "David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge." In 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), 524–41. IEEE, 2024. http://dx.doi.org/10.1109/eurosp60621.2024.00035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jankovic, Aleksandar, and Rudolf Mayer. "An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores." In CODASPY '22: Twelfth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3510548.3519370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ferreira, Raul, Vagner Praia, Heraldo Filho, Fabrício Bonecini, Andre Vieira, and Felix Lopez. "Platform of the Brazilian CSOs: Open Government Data and Crowdsourcing for the Promotion of Citizenship." In XIII Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2017. http://dx.doi.org/10.5753/sbsi.2017.6021.

Full text
Abstract:
In Brazil and around the world, Civil Society Organizations (CSOs) provide valuable public services for society. Through CSOs, people have organized and defended their rights, communities and interests, and can fully exercise their collective potential, often acting in partnership with governments to carry out public policies and/or develop their own projects, financed by private funding or being self-sufficient. Public transparency and availability of quality data are requirements for analyzing the strength and capacity of these organizations. Understanding the distribution of non-governmental organizations across the world and at the national scale, their areas of activity, projects in progress, and their execution capacity, is critical to promote the financing conditions of CSOs, to make the sector visible and to make it more effective, transparent, and strong. With these goals in mind, we developed the Civil Society Organizations Platform, an open, free and public on-line portal that provides a wide variety of information on the profile and performance of the population of CSOs in Brazil. Its core mission is to provide data, knowledge, and information on the role played by the almost 400,000 CSOs in activity in Brazil and their cooperation with the public administration in delivering public policies and services. We show how we developed this platform, the integration with several different databases, the challenges of working with open government data, and how we integrated many recent open source technologies in all spheres of system development. The first empirical results are shown and some new features regarding public data are presented.
APA, Harvard, Vancouver, ISO, and other styles