Academic literature on the topic "Computational Differential Privacy"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the topical lists of articles, books, theses, conference proceedings, and other scholarly sources on the topic "Computational Differential Privacy".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Computational Differential Privacy"

1

Telaprolu, Bhavani Sankar. "Privacy-Preserving Federated Learning in Healthcare - A Secure AI Framework". International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (July 16, 2024): 703–7. https://doi.org/10.32628/cseit2410347.

Abstract
Federated Learning (FL) has transformed AI applications in healthcare by enabling collaborative model training across multiple institutions while preserving patient data privacy. Despite its advantages, FL remains susceptible to security vulnerabilities, including model inversion attacks, adversarial data poisoning, and communication inefficiencies, necessitating enhanced privacy-preserving mechanisms. In response, this study introduces Privacy-Preserving Federated Learning (PPFL), an advanced FL framework integrating Secure Multi-Party Computation (SMPC), Differential Privacy (DP), and Homomorphic Encryption (HE) to ensure data confidentiality while maintaining computational efficiency. I rigorously evaluate PPFL using Federated Averaging (FedAvg), Secure Aggregation (SecAgg), and Differentially Private Stochastic Gradient Descent (DP-SGD) across real-world healthcare datasets. The results indicate that this approach achieves up to an 85% reduction in model inversion attack success rates, enhances privacy efficiency by 30%, and maintains accuracy retention between 95.2% and 98.3%, significantly improving security without compromising model performance. Furthermore, comparative visual analyses illustrate trade-offs between privacy and accuracy, scalability trends, and computational overhead. This study also explores scalability challenges, computational trade-offs, and real-world deployment considerations in multi-institutional hospital networks, paving the way for secure, scalable, and privacy-preserving AI adoption in healthcare environments.
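Since the evaluation above rests on DP-SGD, a minimal NumPy sketch of its core step (per-example gradient clipping followed by Gaussian noise) may help orient readers; the function name and hyperparameter values are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of one DP-SGD step, assuming per-example gradients are given
# as NumPy arrays. Hyperparameters are illustrative, not the paper's settings.
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.01):
    """Clip each example's gradient, average, add Gaussian noise, then step."""
    clipped = []
    for g in per_example_grads:
        # Rescale so each example contributes a gradient of norm at most clip_norm.
        factor = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * factor)
    batch_size = len(clipped)
    avg_grad = np.mean(clipped, axis=0)
    # Noise scale follows the usual sigma = noise_multiplier * clip_norm / batch_size.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch_size, size=avg_grad.shape)
    return params - lr * (avg_grad + noise)
```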
2

Jain, Priyank, et al. "Differentially Private Data Release: Bias Weight Perturbation Method - A Novel Approach". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 10 (April 28, 2021): 7165–73. http://dx.doi.org/10.17762/turcomat.v12i10.5607.

Abstract
Differential privacy plays an important role in preserving individual data. In this research work, we discuss a novel approach to releasing private data to the public in a differentially private way, called the Bias Weight Perturbation Method. The approach followed here aligns with the principles of differential privacy; it also uses the concepts of statistical distance and statistical sample similarity to quantify the synthetic data generation loss, which is then used to validate our results. Our proposed approach makes use of deep generative models to provide privacy, and it further produces a synthetic dataset which can be released to the public for further use.
3

Kii, Masanobu, Atsunori Ichikawa, and Takayuki Miura. "Lightweight Two-Party Secure Sampling Protocol for Differential Privacy". Proceedings on Privacy Enhancing Technologies 2025, no. 1 (January 2025): 23–36. http://dx.doi.org/10.56553/popets-2025-0003.

Abstract
Secure sampling is a secure multiparty computation protocol that allows a receiver to sample random numbers from a specified non-uniform distribution. It is a fundamental tool for privacy-preserving analysis since adding controlled noise is the most basic and frequently used method to achieve differential privacy. The well-known approaches to constructing a two-party secure sampling protocol are transforming uniform random values into non-uniform ones by computations (e.g., logarithm or binary circuits) or table lookup. However, they require a large computational or communication cost to achieve a strong differential privacy guarantee. This work addresses this problem with our novel lightweight two-party secure sampling protocol. Our protocol consists of a random lookup into a small table using 1-out-of-n oblivious transfer and only additions. Furthermore, we provide algorithms for constructing such a table to achieve differential privacy. Our method can reduce the communication cost for (1.0, 2^(-40))-differential privacy from 183 GB (naive construction) to 7.4 MB.
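Stripped of the cryptography, the table-lookup idea is that a uniformly random index into a precomputed table whose entry frequencies follow the target distribution yields a non-uniform sample. The local, non-secure Python sketch below illustrates only that idea; the paper's protocol hides the lookup behind 1-out-of-n oblivious transfer, and the discrete-Laplace weights and table size here are illustrative assumptions.

```python
# Local (non-secure) sketch of noise sampling by table lookup. The real
# protocol performs the lookup under 1-out-of-n oblivious transfer.
import math
import random

def build_table(eps=1.0, support=range(-10, 11), size=1024):
    """Precompute a table whose entry frequencies approximate a discrete
    Laplace distribution with parameter eps over the given support."""
    weights = [math.exp(-eps * abs(k)) for k in support]
    total = sum(weights)
    table = []
    for k, w in zip(support, weights):
        table.extend([k] * round(size * w / total))
    return table

def sample_noise(table):
    # A uniform random index into the table produces a non-uniform sample.
    return table[random.randrange(len(table))]

table = build_table()
print(sample_noise(table))
```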
4

Meisingseth, Fredrik, and Christian Rechberger. "SoK: Computational and Distributed Differential Privacy for MPC". Proceedings on Privacy Enhancing Technologies 2025, no. 1 (January 2025): 420–39. http://dx.doi.org/10.56553/popets-2025-0023.

Abstract
In the last fifteen years, there has been a steady stream of works combining differential privacy with various other cryptographic disciplines, particularly that of multi-party computation, yielding both practical and theoretical unification. As a part of that unification, due to the rich definitional nature of both fields, there have been many proposed definitions of differential privacy adapted to the given use cases and cryptographic tools at hand, resulting in computational and/or distributed versions of differential privacy. In this work, we offer a systemisation of such definitions, with a focus on definitions that are both computational and tailored for a multi-party setting. We order the definitions according to the distribution model and computational perspective and propose a viewpoint on when given definitions should be seen as instantiations of the same generalised notion. The ordering highlights a clear, and sometimes strict, hierarchy between the definitions, where utility (accuracy) can be traded for stronger privacy guarantees or lesser trust assumptions. Further, we survey theoretical results relating the definitions and extend some of them. We also discuss the state of well-known open questions and suggest new open problems to study. Finally, we consider aspects of the practical use of the different notions, hopefully giving guidance also to future applied work.
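For orientation, one baseline notion that such systematizations build on is the indistinguishability-based computational DP of Mironov et al. (CRYPTO 2009), which relaxes statistical DP to polynomial-time distinguishers; a standard statement reads:

```latex
% A mechanism ensemble M_kappa is epsilon-IND-CDP if there is a negligible
% function negl such that for every probabilistic polynomial-time adversary A
% and all neighboring databases D, D':
\Pr\bigl[A\bigl(M_\kappa(D)\bigr) = 1\bigr]
  \;\le\; e^{\varepsilon} \cdot \Pr\bigl[A\bigl(M_\kappa(D')\bigr) = 1\bigr] + \mathrm{negl}(\kappa)
```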
5

Kim, Jongwook. "DistOD: A Hybrid Privacy-Preserving and Distributed Framework for Origin–Destination Matrix Computation". Electronics 13, no. 22 (November 19, 2024): 4545. http://dx.doi.org/10.3390/electronics13224545.

Abstract
The origin–destination (OD) matrix is a critical tool in understanding human mobility, with diverse applications. However, constructing OD matrices can pose significant privacy challenges, as sensitive information about individual mobility patterns may be exposed. In this paper, we propose DistOD, a hybrid privacy-preserving and distributed framework for the aggregation and computation of OD matrices without relying on a trusted central server. The proposed framework makes several key contributions. First, we propose a distributed method that enables multiple participating parties to collaboratively identify hotspot areas, which are regions frequently traveled between by individuals across these parties. To optimize the data utility and minimize the computational overhead, we introduce a hybrid privacy-preserving mechanism. This mechanism applies distributed differential privacy in hotspot areas to ensure high data utility, while using localized differential privacy in non-hotspot regions to reduce the computational costs. By combining these approaches, our method achieves an effective balance between computational efficiency and the accuracy of the OD matrix. Extensive experiments on real-world datasets show that DistOD consistently provides higher data utility than methods based solely on localized differential privacy, as well as greater efficiency than approaches based solely on distributed differential privacy.
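The localized side of such a hybrid can be as simple as each party perturbing its own values before they leave the device. A minimal randomized-response sketch for a single bit follows; the parameter names are illustrative, and the paper's actual local mechanism may differ.

```python
# Minimal local-DP sketch: randomized response on one bit (e.g., "did a trip
# between regions A and B occur?"). Satisfies eps-local differential privacy.
import math
import random

def randomized_response(bit, eps=1.0):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit
```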
6

Fang, Juanru, and Ke Yi. "Privacy Amplification by Sampling under User-level Differential Privacy". Proceedings of the ACM on Management of Data 2, no. 1 (March 12, 2024): 1–26. http://dx.doi.org/10.1145/3639289.

Abstract
Random sampling is an effective tool for reducing the computational costs of query processing in large databases. It has also been used frequently for private data analysis, in particular, under differential privacy (DP). An interesting phenomenon that the literature has identified is that sampling can amplify the privacy guarantee of a mechanism, which in turn leads to reduced noise scales that have to be injected. All existing privacy amplification results only hold in the standard, record-level DP model. Recently, user-level differential privacy (user-DP) has gained a lot of attention as it protects all data records contributed by any particular user, thus offering stronger privacy protection. Sampling-based mechanisms under user-DP have not been explored so far, except for naively running the mechanism on a sample without privacy amplification, which results in large DP noises. In fact, sampling is in even more demand under user-DP, since all state-of-the-art user-DP mechanisms have high computational costs due to the complex relationships between users and records. In this paper, we take the first step towards the study of privacy amplification by sampling under user-DP, and give the amplification results for two common user-DP sampling strategies: simple sampling and sample-and-explore. The experimental results show that these sampling-based mechanisms can be a useful tool to obtain some quick and reasonably accurate estimates on large private datasets.
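The record-level baseline that this paper generalizes is the classic subsampling bound: if a mechanism is (ε, δ)-DP and each record is included in the sample independently with probability q, the subsampled mechanism satisfies

```latex
(\varepsilon', \delta')\text{-DP, where}\quad
\varepsilon' = \ln\!\bigl(1 + q\,(e^{\varepsilon} - 1)\bigr), \qquad \delta' = q\,\delta .
```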
7

Alborch Escobar, Ferran, Sébastien Canard, Fabien Laguillaumie, and Duong Hieu Phan. "Computational Differential Privacy for Encrypted Databases Supporting Linear Queries". Proceedings on Privacy Enhancing Technologies 2024, no. 4 (October 2024): 583–604. http://dx.doi.org/10.56553/popets-2024-0131.

Abstract
Differential privacy is a fundamental concept for protecting individual privacy in databases while enabling data analysis. Conceptually, it is assumed that the adversary has no direct access to the database, and therefore, encryption is not necessary. However, with the emergence of cloud computing and the "on-cloud" storage of vast databases potentially contributed by multiple parties, it is becoming increasingly necessary to consider the possibility of the adversary having (at least partial) access to sensitive databases. A consequence is that, to protect the online database, it is now necessary to employ encryption. At PoPETs'19, the notion of differential privacy was considered for encrypted databases for the first time, but only for a limited type of query, namely histograms. Subsequently, a new type of query, summation, was considered at CODASPY'22. These works achieve statistical differential privacy, by still assuming that the adversary has no access to the encrypted database. In this paper, we take an essential step further by assuming that the adversary can eventually access the encrypted data, making it impossible to achieve statistical differential privacy because the security of encryption (beyond the one-time pad) relies on computational assumptions. Therefore, the appropriate privacy notion for encrypted databases that we target is computational differential privacy, which was introduced by Beimel et al. at CRYPTO '08. In our work, we focus on the case of functional encryption, which is an extensively studied primitive permitting some authorized computation over encrypted data. Technically, we show that any randomized functional encryption scheme that satisfies simulation-based security and differential privacy of the output can achieve computational differential privacy for multiple queries to one database. Our work also extends the summation query to a much broader range of queries, specifically linear queries, by utilizing inner-product functional encryption. Hence, we provide an instantiation for inner-product functionalities by proving its simulation soundness and present a concrete randomized inner-product functional encryption scheme with computational differential privacy against multiple queries. In terms of efficiency, our protocol is almost as practical as the underlying inner-product functional encryption scheme. As evidence, we provide a full benchmark, based on our concrete implementation for databases with up to 1,000,000 entries. Our work can be considered a step towards achieving privacy-preserving encrypted databases for a wide range of query types and considering the involvement of multiple database owners.
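Concretely, a linear query is an inner product ⟨q, x⟩ against the database vector x. The sketch below shows only the differentially private answer computed in the clear, assuming neighboring databases differ by at most 1 in one entry; the paper's contribution is evaluating such queries inside a randomized inner-product functional encryption scheme, which the sketch omits.

```python
# Plaintext sketch of a differentially private linear (inner-product) query.
# The paper evaluates this inside randomized inner-product functional
# encryption; that layer is deliberately not modeled here.
import numpy as np

def private_linear_query(db, query, eps=1.0):
    """Answer <query, db> with Laplace noise. If one entry of db can change
    by at most 1 between neighbors, the sensitivity is max_i |query_i|."""
    answer = float(np.dot(query, db))
    sensitivity = float(np.max(np.abs(query)))
    return answer + np.random.laplace(0.0, sensitivity / eps)
```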
8

Liu, Hai, Zhenqiang Wu, Yihui Zhou, Changgen Peng, Feng Tian, and Laifeng Lu. "Privacy-Preserving Monotonicity of Differential Privacy Mechanisms". Applied Sciences 8, no. 11 (October 28, 2018): 2081. http://dx.doi.org/10.3390/app8112081.

Abstract
Differential privacy mechanisms can offer a trade-off between privacy and utility by using privacy metrics and utility metrics. The trade-off of differential privacy shows that one thing increases and another decreases in terms of privacy metrics and utility metrics. However, there is no unified trade-off measurement of differential privacy mechanisms. To this end, we proposed the definition of privacy-preserving monotonicity of differential privacy, which measured the trade-off between privacy and utility. First, to formulate the trade-off, we presented the definition of privacy-preserving monotonicity based on computational indistinguishability. Second, building on privacy metrics of the expected estimation error and entropy, we theoretically and numerically showed privacy-preserving monotonicity of Laplace mechanism, Gaussian mechanism, exponential mechanism, and randomized response mechanism. In addition, we also theoretically and numerically analyzed the utility monotonicity of these several differential privacy mechanisms based on utility metrics of modulus of characteristic function and variant of normalized entropy. Third, according to the privacy-preserving monotonicity of differential privacy, we presented a method to seek trade-off under a semi-honest model and analyzed a unilateral trade-off under a rational model. Therefore, privacy-preserving monotonicity can be used as a criterion to evaluate the trade-off between privacy and utility in differential privacy mechanisms under the semi-honest model. However, privacy-preserving monotonicity results in a unilateral trade-off of the rational model, which can lead to severe consequences.
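The privacy/utility trade-off the paper formalizes is easy to observe numerically: for a sensitivity-1 counting query, the Laplace mechanism's expected absolute error is 1/ε, so error falls monotonically as the privacy budget grows. A quick illustrative check (the query and values are assumptions, not the paper's experiments):

```python
# Numerical illustration: Laplace-mechanism error shrinks monotonically as
# epsilon grows, i.e., as the privacy guarantee weakens. Sensitivity 1 assumed.
import numpy as np

true_count = 100.0
for eps in (0.1, 0.5, 1.0, 2.0):
    noisy = true_count + np.random.laplace(0.0, 1.0 / eps, size=100_000)
    print(f"eps={eps:4.1f}  mean |error| = {np.mean(np.abs(noisy - true_count)):.2f}")
```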
9

Vadrevu, Pavan Kumar. "Scalable Approaches for Enhancing Privacy in Blockchain Networks: A Comprehensive Review of Differential Privacy Techniques". Journal of Information Systems Engineering and Management 10, no. 8s (January 31, 2025): 635–48. https://doi.org/10.52783/jisem.v10i8s.1119.

Abstract
The rapid adoption of blockchain technology in a number of industries, such as supply chain management, healthcare, and finance, has intensified concerns surrounding data privacy. As sensitive information is stored and shared on decentralized networks, the inherent cryptographic mechanisms of blockchain provide robust security. However, the transparency of public ledgers can unintentionally expose sensitive data, resulting in potential privacy risks and regulatory challenges. Differential privacy has emerged as a promising approach to protect individual data while preserving the usability of shared datasets. By enabling data analysis without revealing individual data points, differential privacy is well-suited for anonymizing transactions, smart contract interactions, and other blockchain activities. However, integrating differential privacy into blockchain systems presents several challenges, including ensuring scalability, balancing privacy with data utility, and managing computational overhead. This review, "Scalable Approaches for Enhancing Privacy in Blockchain Networks: A Comprehensive Review of Differential Privacy Techniques," examines 50 recent studies published between 2023 and 2024 that investigate differential privacy techniques in blockchain networks. It highlights various scalable approaches and their effectiveness in enhancing privacy. The findings indicate that these methods can significantly improve privacy protection, provide flexibility for both public and private blockchains, and assist in complying with regulatory requirements. This establishes differential privacy as a vital tool for secure blockchain implementation.
10

Hong, Yiyang, Xingwen Zhao, Hui Zhu, and Hui Li. "A Blockchain-Integrated Divided-Block Sparse Matrix Transformation Differential Privacy Data Publishing Model". Security and Communication Networks 2021 (December 7, 2021): 1–15. http://dx.doi.org/10.1155/2021/2418539.

Abstract
With the rapid development of information technology, people benefit more and more from big data. At the same time, how to obtain optimal outputs from big data publishing and sharing management while protecting privacy has become a great concern. Many researchers seek to realize differential privacy protection in massive high-dimensional datasets using the method of principal component analysis. However, these algorithms are inefficient in processing and do not take into account the different privacy protection needs of each attribute in high-dimensional datasets. To address the above problem, we design a Divided-block Sparse Matrix Transformation Differential Privacy Data Publishing Algorithm (DSMT-DP). In this algorithm, different levels of privacy budget parameters are assigned to different attributes according to the required privacy protection level of each attribute, taking into account the privacy protection needs of different levels of attributes. Meanwhile, the use of the divided-block scheme and the sparse matrix transformation scheme can improve the computational efficiency of the principal component analysis method for handling large amounts of high-dimensional sensitive data, and we demonstrate that the proposed algorithm satisfies differential privacy. Our experimental results show that the mean square error of the proposed algorithm is smaller than that of the traditional differential privacy algorithm with the same privacy parameters, and the computational efficiency can be improved. Further, we combine this algorithm with blockchain and propose an Efficient Privacy Data Publishing and Sharing Model based on the blockchain. Publishing and sharing private data on this model not only resists strong background knowledge attacks from adversaries outside the system but also prevents stealing and tampering of data by not-completely-honest participants inside the system.
More sources

Theses on the topic "Computational Differential Privacy"

1

Alborch Escobar, Ferran. "Private Data Analysis over Encrypted Databases: Mixing Functional Encryption with Computational Differential Privacy". Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAT003.

Abstract
In our current digitalized society, data rules the world. But as it is most of the time related to individuals, its exploitation must respect their privacy. This concern gave rise to the differential privacy paradigm, which protects individuals when databases containing data about them are queried. With the emergence of cloud computing, however, it is becoming increasingly necessary to also consider the confidentiality of the "on-cloud" storage of such vast databases, using encryption techniques. This thesis studies how to provide both privacy and confidentiality for such outsourced databases by mixing two primitives: computational differential privacy and functional encryption. First, we study the relationship between computational differential privacy and functional encryption for randomized functions in a generic way. We analyze the privacy of a setting where a malicious analyst may access the encrypted data stored on a server, either by corrupting or breaching it, and prove that a secure randomized functional encryption scheme supporting the appropriate family of functions guarantees the computational differential privacy of the system. Second, we construct efficient randomized functional encryption schemes for certain useful families of functions, and we prove them secure in the standard model under well-studied assumptions. The families of functions considered are linear functions, used for example in counting queries, histograms, and linear regressions, and quadratic functions, used for example in quadratic regressions and hypothesis testing. The schemes built are then used together with the first result to construct encrypted databases for their corresponding families of queries. Finally, we implement both randomized functional encryption schemes to analyze their efficiency. This shows that our constructions are practical for databases with up to 1,000,000 entries in the case of linear queries and up to 10,000 entries in the case of quadratic queries.
2

Leukam Lako, Franklin. "Protection des données à caractère personnel pour les services énergétiques" [Protection of personal data for energy services]. Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS004.

Abstract
Smart grids are important building blocks in the fight against climate change. They allow the massive introduction of renewable energies, which are intermittent, while guaranteeing grid stability, i.e., ensuring a real-time balance between demand and production in the power grid. Managing grid stability is possible thanks to smart meters installed in households, which allow the distribution system operator to collect consumption and production data from consumers and producers at a time step of up to 10 minutes in France. This real-time consumption data enables new energy services, such as customer consumption forecasts or demand response. Demand response services help avoid consumption peaks in a neighborhood by ensuring that, at all times, users' consumption does not exceed the maximum power of the local grid. However, the collection of users' consumption data is a key privacy concern. Indeed, individual consumption data reflect the use of all electric appliances by the inhabitants of a household over time, and make it possible to deduce the inhabitants' behaviors, activities, age, or preferences. This thesis aims to propose new energy services while protecting the privacy of consumers. We propose five contributions that relate to two themes: 1- The transformation of a demand response algorithm to make it privacy-friendly. This transformation uses secure multiparty computation, allowing an aggregate, such as a sum of users' consumption, to be computed without disclosing any individual consumption. 2- The publication of sums of users' consumption while preserving privacy and good utility. This publication uses differential privacy, ensuring that the published sum does not indirectly reveal individual users' consumption. Among other energy services, these sums of consumption enable consumption forecasts.
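A toy sketch of the secure-multiparty-computation building block behind the first theme, additive secret sharing, follows; the three-party setup, modulus, and readings are illustrative assumptions, not the thesis's protocol.

```python
# Toy additive-secret-sharing sum: the aggregate of household readings is
# reconstructed without any single reading being revealed to one party.
import random

Q = 2**61 - 1  # public modulus (illustrative)

def share(value, n_parties=3):
    """Split value into n_parties additive shares modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

readings = [312, 577, 104]  # one consumption reading per household (Wh)
# Household i sends its j-th share to party j; each party sums what it holds.
per_party = [sum(col) % Q for col in zip(*(share(r) for r in readings))]
print(sum(per_party) % Q)  # 993: the aggregate sum, individual readings hidden
```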
3

Rao, Fang-Yu. "Privacy-Enhancing Techniques for Data Analytics". Thesis, 2019.

Abstract

Organizations today collect and aggregate huge amounts of data from individuals under various scenarios and for different purposes. Such aggregation of individuals’ data, when combined with techniques of data analytics, allows organizations to make informed decisions and predictions. But in many situations, different portions of the data associated with individuals are collected and curated by different organizations. To derive more accurate conclusions and predictions, those organizations may want to conduct the analysis based on their joint data, which cannot be simply accomplished by each organization exchanging its own data with other organizations due to the sensitive nature of the data. Developing approaches for collaborative privacy-preserving data analytics, however, is a nontrivial task. At least two major challenges have to be addressed. The first challenge is that the security of the data possessed by each organization should always be properly protected during and after the collaborative analysis process, whereas the second challenge is the high computational complexity usually accompanied by cryptographic primitives used to build such privacy-preserving protocols.


In this dissertation, based on widely adopted primitives in cryptography, we address the aforementioned challenges by developing techniques for data analytics that not only allow multiple mutually distrustful parties to perform data analysis on their joint data in a privacy-preserving manner, but also reduce the time required to complete the analysis. More specifically, using three common data analytics tasks as concrete examples, we show how to construct the respective privacy-preserving protocols under two different scenarios: (1) the protocols are executed by a collaborative process only involving the participating parties; (2) the protocols are outsourced to some service providers in the cloud. Two types of optimization for improving the efficiency of those protocols are also investigated. The first type allows each participating party access to a statistically controlled leakage so as to reduce the amount of required computation, while the second type utilizes the parallelism that could be incorporated into the task and pushes some computation to the offline phase to reduce the time needed for each participating party without any additional leakage. Extensive experiments are also conducted on real-world datasets to demonstrate the effectiveness of our proposed techniques.


Book chapters on the topic "Computational Differential Privacy"

1

Mironov, Ilya, Omkant Pandey, Omer Reingold, and Salil Vadhan. "Computational Differential Privacy". In Advances in Cryptology - CRYPTO 2009, 126–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03356-8_8.

2

Pejó, Balázs, and Damien Desfontaines. "Computational Power (C)". In Guide to Differential Privacy Modifications, 55–57. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96398-9_9.

3

Ben Hamida, Sana, Hichem Mrabet, and Abderrazak Jemai. "How Differential Privacy Reinforces Privacy of Machine Learning Models?" In Advances in Computational Collective Intelligence, 661–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16210-7_54.

4

Valovich, Filipp, and Francesco Aldà. "Computational Differential Privacy from Lattice-Based Cryptography". In Number-Theoretic Methods in Cryptology, 121–41. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76620-1_8.

5

Kang, Yilin, Jian Li, Yong Liu, and Weiping Wang. "Data Heterogeneity Differential Privacy: From Theory to Algorithm". In Computational Science – ICCS 2023, 119–33. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35995-8_9.

6

Tao, Fuqiang, Zhe Sun, Rui Liang, Rundong Shao, Yuhan Chai, and Yangyang Wang. "FEDSET: Federated Random Forest Based on Differential Privacy". In Computational and Experimental Simulations in Engineering, 791–806. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-42987-3_55.

7

Podsevalov, Ivan, Alexei Podsevalov, and Vladimir Korkhov. "Differential Privacy for Statistical Data of Educational Institutions". In Computational Science and Its Applications – ICCSA 2022 Workshops, 603–15. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10542-5_41.

8

Groce, Adam, Jonathan Katz, and Arkady Yerukhimovich. "Limits of Computational Differential Privacy in the Client/Server Setting". In Theory of Cryptography, 417–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19571-6_25.

9

Qian, Jiaqing, Yanan Chen, and Sanxiu Jiao. "DPFL-AES: Differential Privacy Federated Learning Based on Adam Early Stopping". In Computational and Experimental Simulations in Engineering, 905–19. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-42515-8_64.

10

Bun, Mark, Yi-Hsiu Chen, and Salil Vadhan. "Separating Computational and Statistical Differential Privacy in the Client-Server Model". In Theory of Cryptography, 607–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-53641-4_23.


Conference proceedings on the topic "Computational Differential Privacy"

1

Li, Xianzhi, Ran Zmigrod, Zhiqiang Ma, Xiaomo Liu, and Xiaodan Zhu. "Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation". In Findings of the Association for Computational Linguistics: EMNLP 2024, 8368–75. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-emnlp.491.

2

Gupta, Yash, Jeswin M. S, Aniruddh Mantrala, Davin Henry Monteiro, Adhithi M, and M. N. Thippeswamy. "Enhancing Differential Privacy in Federated Learning via Quantum Computation and Algorithms". In 2024 8th International Conference on Computational System and Information Technology for Sustainable Solutions (CSITSS), 1–6. IEEE, 2024. https://doi.org/10.1109/csitss64042.2024.10816807.

3

Vu, Doan Nam Long, Timour Igamberdiev, and Ivan Habernal. "Granularity is crucial when applying differential privacy to text: An investigation for neural machine translation". In Findings of the Association for Computational Linguistics: EMNLP 2024, 507–27. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-emnlp.29.

4

Tajima, Arisa, Wei Jiang, Virendra Marathe, and Hamid Mozaffari. "Enhanced Private Decision Trees using Secure Multiparty Computation and Differential Privacy". In 2024 IEEE International Conference on Knowledge Graph (ICKG), 352–59. IEEE, 2024. https://doi.org/10.1109/ickg63256.2024.00051.

5

Flemings, James, and Murali Annavaram. "Differentially Private Knowledge Distillation via Synthetic Text Generation". In Findings of the Association for Computational Linguistics ACL 2024, 12957–68. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.769.

6

Chen, Bo, Baike She, Calvin Hawkins, Alex Benvenuti, Brandon Fallin, Philip E. Paré, and Matthew Hale. "Differentially Private Computation of Basic Reproduction Numbers in Networked Epidemic Models". In 2024 American Control Conference (ACC), 4422–27. IEEE, 2024. http://dx.doi.org/10.23919/acc60939.2024.10644264.

7

Ramesh, Krithika, Nupoor Gandhi, Pulkit Madaan, Lisa Bauer, Charith Peris, and Anjalie Field. "Evaluating Differentially Private Synthetic Data Generation in High-Stakes Domains". In Findings of the Association for Computational Linguistics: EMNLP 2024, 15254–69. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-emnlp.894.

8

Meisenbacher, Stephen, Maulik Chevli, Juraj Vladika, and Florian Matthes. "DP-MLM: Differentially Private Text Rewriting Using Masked Language Models". In Findings of the Association for Computational Linguistics ACL 2024, 9314–28. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.554.

9

Ghazi, Badih, Rahul Ilango, Pritish Kamath, Ravi Kumar, and Pasin Manurangsi. "Towards Separating Computational and Statistical Differential Privacy". In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2023. http://dx.doi.org/10.1109/focs57990.2023.00042.

10

Kotevska, Olivera, Folami Alamudun, and Christopher Stanley. "Optimal Balance of Privacy and Utility with Differential Privacy Deep Learning Frameworks". In 2021 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2021. http://dx.doi.org/10.1109/csci54926.2021.00141.


Reports on the topic "Computational Differential Privacy"

1

Rannenberg, Kai, Sebastian Pape, Frédéric Tronnier, and Sascha Löbner. Study on the Technical Evaluation of De-Identification Procedures for Personal Data in the Automotive Sector. Universitätsbibliothek Johann Christian Senckenberg, October 2021. http://dx.doi.org/10.21248/gups.63413.

Abstract
The aim of this study was to identify and evaluate different de-identification techniques that may be used in several mobility-related use cases. To do so, four use cases have been defined in accordance with a project partner that focused on the legal aspects of this project, as well as with the VDA/FAT working group. Each use case aims to create different legal and technical issues with regards to the data and information that are to be gathered, used and transferred in the specific scenario. Use cases should therefore differ in the type and frequency of data that is gathered as well as the level of privacy and the speed of computation that is needed for the data. Upon identifying use cases, a systematic literature review has been performed to identify suitable de-identification techniques to provide data privacy. Additionally, external databases have been considered as data that is expected to be anonymous might be reidentified through the combination of existing data with such external data. For each case, requirements and possible attack scenarios were created to illustrate where exactly privacy-related issues could occur and how exactly such issues could impact data subjects, data processors or data controllers. Suitable de-identification techniques should be able to withstand these attack scenarios. Based on a series of additional criteria, de-identification techniques are then analyzed for each use case. Possible solutions are then discussed individually in chapters 6.1 - 6.2. It is evident that no one-size-fits-all approach to protect privacy in the mobility domain exists. While all techniques that are analyzed in detail in this report, e.g., homomorphic encryption, differential privacy, secure multiparty computation and federated learning, are able to successfully protect user privacy in certain instances, their overall effectiveness differs depending on the specifics of each use case.