Academic literature on the topic "Utility-privacy trade-off"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Utility-privacy trade-off."

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Journal articles on the topic "Utility-privacy trade-off"

1

Liu, Hai, Zhenqiang Wu, Yihui Zhou, Changgen Peng, Feng Tian, and Laifeng Lu. "Privacy-Preserving Monotonicity of Differential Privacy Mechanisms." Applied Sciences 8, no. 11 (October 28, 2018): 2081. http://dx.doi.org/10.3390/app8112081.

Abstract:
Differential privacy mechanisms can offer a trade-off between privacy and utility, quantified by privacy metrics and utility metrics: as one increases, the other decreases. However, there is no unified trade-off measurement for differential privacy mechanisms. To this end, we propose the notion of privacy-preserving monotonicity of differential privacy, which measures the trade-off between privacy and utility. First, to formulate the trade-off, we present the definition of privacy-preserving monotonicity based on computational indistinguishability. Second, building on the privacy metrics of expected estimation error and entropy, we theoretically and numerically show the privacy-preserving monotonicity of the Laplace, Gaussian, exponential, and randomized response mechanisms. We also theoretically and numerically analyze the utility monotonicity of these mechanisms using the utility metrics of the modulus of the characteristic function and a variant of normalized entropy. Third, based on privacy-preserving monotonicity, we present a method for seeking a trade-off under a semi-honest model and analyze the unilateral trade-off under a rational model. Privacy-preserving monotonicity can therefore serve as a criterion for evaluating the trade-off between privacy and utility in differential privacy mechanisms under the semi-honest model. Under the rational model, however, it results in a unilateral trade-off, which can lead to severe consequences.
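To make the monotone behaviour concrete, here is a minimal Python sketch of the Laplace mechanism, one of the four mechanisms the paper analyses. It only illustrates the textbook trade-off (smaller ε means more privacy and a larger expected estimation error); the counting query, sensitivity, and ε values are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: Laplace mechanism with scale = sensitivity / epsilon.
# Expected absolute error E|noise| equals the scale, so utility degrades
# monotonically as epsilon shrinks (privacy grows), mirroring the
# privacy-preserving monotonicity the paper formalises.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value perturbed with Laplace noise calibrated to epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count, sensitivity = 1000.0, 1.0   # toy counting query (assumed)
for epsilon in [0.1, 0.5, 1.0, 2.0]:
    errors = [abs(laplace_mechanism(true_count, sensitivity, epsilon) - true_count)
              for _ in range(10_000)]
    print(f"epsilon={epsilon:4.1f}  mean |error| ~ {np.mean(errors):7.2f}"
          f"  (theory: {sensitivity / epsilon:.2f})")
```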
2

Avent, Brendan, Javier González, Tom Diethe, Andrei Paleyes, and Borja Balle. "Automatic Discovery of Privacy–Utility Pareto Fronts." Proceedings on Privacy Enhancing Technologies 2020, no. 4 (October 1, 2020): 5–23. http://dx.doi.org/10.2478/popets-2020-0060.

Abstract:
Differential privacy is a mathematical framework for privacy-preserving data analysis. Changing the hyperparameters of a differentially private algorithm allows one to trade off privacy and utility in a principled way. Quantifying this trade-off in advance is essential to decision-makers tasked with deciding how much privacy can be provided in a particular application while maintaining acceptable utility. Analytical utility guarantees offer a rigorous tool to reason about this trade-off, but are generally only available for relatively simple problems. For more complex tasks, such as training neural networks under differential privacy, the utility achieved by a given algorithm can only be measured empirically. This paper presents a Bayesian optimization methodology for efficiently characterizing the privacy–utility trade-off of any differentially private algorithm using only empirical measurements of its utility. The versatility of our method is illustrated on a number of machine learning tasks involving multiple models, optimizers, and datasets.
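The paper's core move, tracing a Pareto front from purely empirical measurements, can be sketched with a much simpler stand-in. The random search and the `evaluate` function below (with its made-up privacy accountant and utility curve) are hypothetical placeholders for the paper's Bayesian optimization and real training runs.

```python
# Sketch: empirically collect (epsilon, error) pairs over a hyperparameter,
# then keep the non-dominated points, i.e. the empirical Pareto front.
import numpy as np

rng = np.random.default_rng(1)

def evaluate(noise_multiplier: float) -> tuple[float, float]:
    """Hypothetical run: returns (privacy cost epsilon, utility loss)."""
    epsilon = 1.0 / noise_multiplier                             # stand-in accountant
    error = 0.05 + 0.1 * noise_multiplier + rng.normal(0, 0.01)  # stand-in metric
    return epsilon, error

candidates = [evaluate(nm) for nm in rng.uniform(0.5, 5.0, size=200)]

# A point is Pareto-optimal if no other point is at least as good in both
# coordinates (lower epsilon and lower error).
pareto = [p for p in candidates
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in candidates)]
print(sorted(pareto))
```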
3

Gobinathan, B., M. A. Mukunthan, S. Surendran, K. Somasundaram, Syed Abdul Moeed, P. Niranjan, V. Gouthami, et al. "A Novel Method to Solve Real Time Security Issues in Software Industry Using Advanced Cryptographic Techniques." Scientific Programming 2021 (December 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/3611182.

Abstract:
Utility and privacy are trade-off factors: improving one tends to sacrifice the other, yet a dataset cannot be published without privacy protection. It is therefore crucial to maintain an equilibrium between the utility and privacy of data. In this paper, a novel technique for trading off utility and privacy is developed, in which utility is provided by a metaheuristic clustering algorithm and privacy by a cryptographic model that encrypts and decrypts the data. The input datasets are first clustered, and the privacy of the clustered data is then enforced. Simulations are conducted on manufacturing datasets against various existing models. The results show that the proposed model achieves better clustering accuracy and data privacy than existing models, demonstrating a workable trade-off between privacy preservation and utility-oriented clustering in smart manufacturing datasets.
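A rough sketch of the two-stage pattern described above (clustering for utility, encryption for privacy) could look as follows. KMeans and Fernet are generic stand-ins chosen for this illustration; the abstract does not name the paper's specific metaheuristic or cipher.

```python
# Sketch: cluster records for utility, then encrypt the result for privacy.
import json
import numpy as np
from cryptography.fernet import Fernet
from sklearn.cluster import KMeans

data = np.random.default_rng(2).normal(size=(100, 4))   # toy "manufacturing" records

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)  # utility step

key = Fernet.generate_key()                              # privacy step
cipher = Fernet(key)
payload = json.dumps({"labels": labels.tolist(), "records": data.tolist()})
token = cipher.encrypt(payload.encode())                 # publishable ciphertext

restored = json.loads(cipher.decrypt(token).decode())    # authorized decryption
assert restored["labels"] == labels.tolist()
```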
4

Zeng, Xia, Chuanchuan Yang, and Bin Dai. "Utility–Privacy Trade-Off in Distributed Machine Learning Systems." Entropy 24, no. 9 (September 14, 2022): 1299. http://dx.doi.org/10.3390/e24091299.

Abstract:
In distributed machine learning (DML), although clients' data are not directly transmitted to the server for model training, attackers can still obtain sensitive information about clients by analyzing the local gradient parameters the clients upload. To address this, we use a differential privacy (DP) mechanism to protect the clients' local parameters. In this paper, from an information-theoretic point of view, we study the utility–privacy trade-off in DML under the DP mechanism. Specifically, three cases are considered: independent clients' local parameters with independent DP noise, and dependent clients' local parameters with either independent or dependent DP noise. Mutual information and conditional mutual information are used to characterize utility and privacy, respectively. First, we show the relationship between utility and privacy for the three cases. Then, we derive the optimal noise variance that achieves the maximal utility under a given level of privacy. Finally, the results are further illustrated numerically.
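In the simplest scalar-Gaussian case the quantities involved have a closed form, which makes the tension easy to see. The sketch below is an assumption-heavy reduction of the paper's multi-client setting to a single parameter: the same mutual information that measures how useful the noisy upload is to the server also bounds what it leaks.

```python
# Toy calculation: for X ~ N(0, sigma_x^2) and independent Gaussian DP noise
# N ~ N(0, sigma_n^2), I(X; X + N) = 0.5 * log2(1 + sigma_x^2 / sigma_n^2).
# Raising the noise variance lowers both what the server learns (utility)
# and what an attacker learns (leakage): the trade-off in one number.
import numpy as np

sigma_x2 = 1.0                                  # variance of a local parameter X
for sigma_n2 in [0.1, 0.5, 1.0, 2.0, 5.0]:      # DP noise variance
    mi_bits = 0.5 * np.log2(1.0 + sigma_x2 / sigma_n2)
    print(f"noise var {sigma_n2:>4}: I(X; X+N) = {mi_bits:.3f} bits")
```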
5

Srivastava, Saurabh, Vinay P. Namboodiri, and T. V. Prabhakar. "Achieving Privacy-Utility Trade-off in existing Software Systems." Journal of Physics: Conference Series 1454 (February 2020): 012004. http://dx.doi.org/10.1088/1742-6596/1454/1/012004.

6

Wunderlich, Dominik, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau, and Thorsten Strufe. "On the Privacy–Utility Trade-Off in Differentially Private Hierarchical Text Classification." Applied Sciences 12, no. 21 (November 4, 2022): 11177. http://dx.doi.org/10.3390/app122111177.

Abstract:
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful for this task, they can unfortunately leak training data information to adversaries due to training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, enabling the models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guarantees, and identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to empirically assess the information leakage of three widely used neural network architectures. We show that large differential privacy parameters already suffice to completely mitigate membership inference attacks, resulting in only a moderate decrease in model utility. More specifically, for large datasets with long texts we observed Transformer-based models to achieve an overall favorable privacy–utility trade-off, while for smaller datasets with shorter texts convolutional neural networks are preferable.
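For orientation, the defence whose utility cost is being measured is DP-SGD; one step of it can be sketched as per-example gradient clipping followed by Gaussian noise. The clip norm, noise multiplier, and learning rate below are arbitrary illustrative values, and this NumPy code is a didactic sketch, not the paper's training pipeline.

```python
# Sketch of one DP-SGD step: clip each example's gradient to bound
# sensitivity, sum, add Gaussian noise, then average and descend.
import numpy as np

rng = np.random.default_rng(3)

def dp_sgd_step(per_example_grads: np.ndarray, clip_norm: float = 1.0,
                noise_multiplier: float = 1.1, lr: float = 0.1) -> np.ndarray:
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    return -lr * noisy_sum / len(per_example_grads)

grads = rng.normal(size=(32, 10))   # 32 examples, 10 parameters (toy values)
print(dp_sgd_step(grads))
```

Larger privacy parameters correspond to a smaller noise multiplier here, which is why the paper can observe only a moderate utility drop while still defeating membership inference.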
7

Mohammed, Kabiru, Aladdin Ayesh, and Eerke Boiten. "Complementing Privacy and Utility Trade-Off with Self-Organising Maps." Cryptography 5, no. 3 (August 17, 2021): 20. http://dx.doi.org/10.3390/cryptography5030020.

Abstract:
In recent years, data-enabled technologies have intensified the rate and scale at which organisations collect and analyse data. Data mining techniques are applied to realise the full potential of large-scale data analysis. These techniques are highly efficient in sifting through big data to extract hidden knowledge and assist evidence-based decisions, offering significant benefits to their adopters. However, this capability is constrained by important legal, ethical and reputational concerns. These concerns arise because they can be exploited to allow inferences to be made on sensitive data, thus posing severe threats to individuals' privacy. Studies have shown that Privacy-Preserving Data Mining (PPDM) can adequately address this privacy risk and permit knowledge extraction in mining processes. Several published works in this area have utilised clustering techniques to enforce anonymisation models on private data, which work by grouping the data into clusters using a quality measure and generalising the data in each group separately to achieve an anonymisation threshold. However, existing approaches do not work well with high-dimensional data, since it is difficult to develop good groupings without incurring excessive information loss. Our work aims to complement this balancing act by optimising utility in PPDM processes. To illustrate this, we propose a hybrid approach that combines self-organising maps with conventional privacy-based clustering algorithms. We demonstrate through experimental evaluation that results from our approach provide more utility for data mining tasks and outperform conventional privacy-based clustering algorithms. This approach can significantly enable large-scale analysis of data in a privacy-preserving and trustworthy manner.
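The "cluster, then generalise each group" pattern that this work builds on can be sketched in a few lines. The version below deliberately omits the paper's self-organising map front end and uses plain k-means followed by per-cluster range generalisation, with range width standing in for information loss; all of this is an illustrative simplification, not the paper's method.

```python
# Sketch: group records by clustering, then publish each group only as
# shared attribute ranges (a crude anonymisation with measurable loss).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
records = rng.normal(size=(60, 3))

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(records)
for c in range(6):
    group = records[labels == c]
    lo, hi = group.min(axis=0), group.max(axis=0)
    loss = float(np.mean(hi - lo))   # wider ranges = more information loss
    print(f"cluster {c}: {len(group):2d} records, generalised width {loss:.2f}")
```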
8

Kiranagi, Manasi, Devika Dhoble, Madeeha Tahoor, and Rekha Patil. "Finding Optimal Path and Privacy Preserving for Wireless Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 10 (October 31, 2022): 360–65. http://dx.doi.org/10.22214/ijraset.2022.46949.

Abstract:
Privacy-preserving routing protocols in wireless networks frequently utilize additional artificial traffic to hide the source-destination identities of the communicating pair. Usually, the addition of artificial traffic is done heuristically, with no guarantee that the transmission cost, latency, etc., are optimized in every network topology. We explicitly examine the privacy-utility trade-off problem for wireless networks and develop a novel privacy-preserving routing algorithm called Optimal Privacy Enhancing Routing Algorithm (OPERA). OPERA uses a statistical decision-making framework to optimize the privacy of the routing protocol given a utility (or cost) constraint. We consider global adversaries with both lossless and lossy observations that use the Bayesian maximum-a-posteriori (MAP) estimation strategy. We formulate the privacy-utility trade-off problem as a linear program which can be efficiently solved.
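A toy instance of such a linear program can be written directly with `scipy.optimize.linprog`: minimise a MAP adversary's best-guess probability over routes (the maximum of the routing distribution, linearised with an auxiliary variable) subject to an expected-cost budget. The three routes, their costs, and the budget are invented for illustration and do not come from the paper.

```python
# Sketch: privacy-utility LP. Variables [x1, x2, x3, t]; minimise t with
# x_i <= t (t bounds the MAP adversary's success), sum(x) = 1, and
# expected transmission cost <= budget (the utility constraint).
import numpy as np
from scipy.optimize import linprog

costs = np.array([1.0, 2.0, 4.0])   # per-route cost (assumed)
budget = 2.0                        # utility constraint (assumed)

c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.vstack([
    np.hstack([np.eye(3), -np.ones((3, 1))]),   # x_i - t <= 0
    np.hstack([costs, 0.0]).reshape(1, -1),     # costs @ x <= budget
])
b_ub = np.array([0.0, 0.0, 0.0, budget])
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])         # x is a distribution
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4)
print("routing distribution:", res.x[:3], " MAP success ~", res.x[3])
```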
9

Cai, Lin, Jinchuan Tang, Shuping Dang, and Gaojie Chen. "Privacy protection and utility trade-off for social graph embedding." Information Sciences 676 (August 2024): 120866. http://dx.doi.org/10.1016/j.ins.2024.120866.

10

Rassouli, Borzoo, and Deniz Gunduz. "Optimal Utility-Privacy Trade-Off With Total Variation Distance as a Privacy Measure." IEEE Transactions on Information Forensics and Security 15 (2020): 594–603. http://dx.doi.org/10.1109/tifs.2019.2903658.


Theses on the topic "Utility-privacy trade-off"

1

Aldà, Francesco. "On the trade-off between privacy and utility in statistical data analysis." Doctoral thesis, Ruhr-Universität Bochum, 2018 (reviewers: Hans Ulrich Simon, Alexander May; Fakultät für Mathematik). http://d-nb.info/1161942416/34.

2

Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic thesis or dissertation, Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.

Abstract:
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques (differential privacy, empirical privacy defenses, and federated learning); (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections. The first study investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups. The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data; however, previous works have largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes. The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates. In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
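As a concrete reading of the WERM idea summarised above, the sketch below takes gradient steps on a weighted sum of training-data and reference-data losses, with a single weight trading their relative exposure. The logistic model, the weight value, and the synthetic data are illustrative assumptions, not the thesis's actual formulation or guarantees.

```python
# Sketch of Weighted Empirical Risk Minimisation: minimise
# alpha * L_train(w) + (1 - alpha) * L_ref(w) by gradient descent.
import numpy as np

rng = np.random.default_rng(5)

def grad_logistic(w, X, y):
    """Gradient of mean log(1 + exp(-y * Xw)) for labels y in {-1, +1}."""
    z = X @ w
    return X.T @ (-y / (1.0 + np.exp(y * z))) / len(y)

def werm_step(w, Xt, yt, Xr, yr, alpha=0.7, lr=0.1):
    return w - lr * (alpha * grad_logistic(w, Xt, yt)
                     + (1 - alpha) * grad_logistic(w, Xr, yr))

X_tr, y_tr = rng.normal(size=(200, 5)), rng.choice([-1.0, 1.0], 200)
X_rf, y_rf = rng.normal(size=(100, 5)), rng.choice([-1.0, 1.0], 100)

w = np.zeros(5)
for _ in range(100):
    w = werm_step(w, X_tr, y_tr, X_rf, y_rf)
print("learned weights:", w)
```

Setting alpha to 1 recovers ordinary training (all privacy risk on the training set); lowering it shifts risk toward the reference data, which is exactly the trade-off the thesis makes explicit.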

Book chapters on the topic "Utility-privacy trade-off"

1

Alvim, Mário S., Miguel E. Andrés, Konstantinos Chatzikokolakis, Pierpaolo Degano, and Catuscia Palamidessi. "Differential Privacy: On the Trade-Off between Utility and Information Leakage." In Lecture Notes in Computer Science, 39–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29420-4_3.

2

Tang, Jingye, Tianqing Zhu, Ping Xiong, Yu Wang, and Wei Ren. "Privacy and Utility Trade-Off for Textual Analysis via Calibrated Multivariate Perturbations." In Network and System Security, 342–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65745-1_20.

3

Rafiei, Majid, Frederik Wangelik, and Wil M. P. van der Aalst. "TraVaS: Differentially Private Trace Variant Selection for Process Mining." In Lecture Notes in Business Information Processing, 114–26. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_9.

Abstract:
In the area of industrial process mining, privacy-preserving event data publication is becoming increasingly relevant. Consequently, the trade-off between high data utility and quantifiable privacy poses new challenges. State-of-the-art research mainly focuses on differentially private trace variant construction based on prefix expansion methods. However, these algorithms face several practical limitations such as high computational complexity, introducing fake variants, removing frequent variants, and a bounded variant length. In this paper, we introduce a new approach for direct differentially private trace variant release which uses anonymized partition selection strategies to overcome the aforementioned restraints. Experimental results on real-life event data show that our algorithm outperforms state-of-the-art methods in terms of both plain data utility and result utility preservation.
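In the spirit of the anonymized partition selection mentioned above (though not the paper's exact algorithm), a minimal noisy-count-and-threshold release of trace variants looks like this; the toy event log, ε, and threshold are made up for illustration.

```python
# Sketch: perturb each trace-variant count with Laplace noise, then release
# only variants whose noisy count clears a threshold, so rare (and thus
# potentially identifying) variants tend to be suppressed.
import numpy as np

rng = np.random.default_rng(6)

variant_counts = {("register", "check", "pay"): 120,
                  ("register", "pay"): 35,
                  ("register", "check", "reject"): 3}   # toy event log

epsilon, threshold = 1.0, 10.0
released = {}
for variant, count in variant_counts.items():
    noisy = count + rng.laplace(scale=1.0 / epsilon)
    if noisy >= threshold:
        released[variant] = round(noisy)
print(released)
```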
4

Thouvenot, Maxime, Olivier Curé, Lynda Temal, Sarra Ben Abbès, and Philippe Calvez. "Knowledge Graph Publishing with Anatomy, Toward a New Privacy and Utility Trade-Off." In Advances in Knowledge Discovery and Management, 55–79. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-40403-0_3.

5

Peng, Lian, and Meikang Qiu. "AI in Healthcare Data Privacy-Preserving: Enhanced Trade-Off Between Security and Utility." In Knowledge Science, Engineering and Management, 349–60. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5498-4_27.

6

Liu, Yang, and Andrew Simpson. "On the Trade-Off Between Privacy and Utility in Mobile Services: A Qualitative Study." In Computer Security, 261–78. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42048-2_17.

7

Ghatak, Debolina, and Kouichi Sakurai. "A Survey on Privacy Preserving Synthetic Data Generation and a Discussion on a Privacy-Utility Trade-off Problem." In Communications in Computer and Information Science, 167–80. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7769-5_13.

8

Demir, Mehmet Özgün, Ali Emre Pusane, Guido Dartmann, and Güneş Karabulut Kurt. "Utility privacy trade-off in communication systems." In Big Data Analytics for Cyber-Physical Systems, 293–314. Elsevier, 2019. http://dx.doi.org/10.1016/b978-0-12-816637-6.00014-2.

9

Kaabachi, Bayrem, Jérémie Despraz, Thierry Meurers, Fabian Prasser, and Jean Louis Raisaro. "Generation and Evaluation of Synthetic Data in a University Hospital Setting." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220420.

Abstract:
In this study, we propose a unified evaluation framework for systematically assessing the utility-privacy trade-off of synthetic data generation (SDG) models. These SDG models are adapted to deal with longitudinal or tabular data stemming from electronic health records (EHR) containing both discrete and numeric features. Our evaluation framework considers different data sharing scenarios and attacker models.
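A minimal version of such a utility-privacy evaluation might pair a train-on-synthetic/test-on-real utility score with a nearest-neighbour distance check as a privacy proxy. Both metrics are common in the synthetic-data literature but are assumptions here, not the paper's exact framework; the "SDG model" below is just real data plus noise.

```python
# Sketch: utility = accuracy of a model trained on synthetic data and
# tested on real data; privacy proxy = mean distance from each synthetic
# record to its nearest real record (near zero suggests memorisation).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
X_real = rng.normal(size=(300, 6))
y_real = (X_real[:, 0] > 0).astype(int)
X_syn = X_real + rng.normal(scale=0.5, size=X_real.shape)   # stand-in generator
y_syn = (X_syn[:, 0] > 0).astype(int)

utility = LogisticRegression().fit(X_syn, y_syn).score(X_real, y_real)

distances, _ = NearestNeighbors(n_neighbors=1).fit(X_real).kneighbors(X_syn)
print(f"utility (TSTR accuracy) = {utility:.2f}, "
      f"privacy proxy (mean NN distance) = {distances.mean():.2f}")
```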
10

Corbucci, Luca, Mikko A. Heikkilä, David Solans Noguero, Anna Monreale, and Nicolas Kourtellis. "PUFFLE: Balancing Privacy, Utility, and Fairness in Federated Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240671.

Abstract:
Training and deploying Machine Learning models that simultaneously adhere to principles of fairness and privacy while ensuring good utility poses a significant challenge. The interplay between these three factors of trustworthiness is frequently underestimated and remains insufficiently explored. Consequently, many efforts focus on ensuring only two of these factors, neglecting one in the process. The decentralization of the datasets and the variations in distributions among the clients exacerbate the complexity of achieving this ethical trade-off in the context of Federated Learning (FL). For the first time in FL literature, we address these three factors of trustworthiness. We introduce PUFFLE, a high-level parameterised approach that can help in the exploration of the balance between utility, privacy, and fairness in FL scenarios. We prove that PUFFLE can be effective across diverse datasets, models, and data distributions, reducing the model unfairness up to 75%, with a maximum reduction in the utility of 17% in the worst-case scenario, while maintaining strict privacy guarantees during the FL training.
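To illustrate the three-way tension PUFFLE parameterises, the sketch below adds a demographic-parity penalty to a federated client's loss gradient before DP clipping and noising. This is a hand-rolled approximation under stated assumptions (a logistic model, a sign-based penalty gradient, arbitrary DP parameters), not PUFFLE's algorithm.

```python
# Sketch: one client update balancing utility (logistic loss), fairness
# (penalised demographic-parity gap), and privacy (clipping + noise).
import numpy as np

rng = np.random.default_rng(8)

def client_update(w, X, y, group, lam=1.0, clip=1.0, sigma=0.5, lr=0.1):
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad_util = X.T @ (preds - y) / len(y)
    gap = preds[group == 1].mean() - preds[group == 0].mean()
    s = preds * (1 - preds)                       # sigmoid derivative
    dgap = (X[group == 1].T @ s[group == 1] / max((group == 1).sum(), 1)
            - X[group == 0].T @ s[group == 0] / max((group == 0).sum(), 1))
    grad = grad_util + lam * np.sign(gap) * dgap  # penalise |gap|
    grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # DP clip
    grad += rng.normal(0.0, sigma * clip, size=grad.shape)   # DP noise
    return w - lr * grad

X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)
group = rng.integers(0, 2, size=64)               # toy sensitive attribute
print(client_update(np.zeros(4), X, y, group))
```

Raising lam reduces the fairness gap at some utility cost, while sigma controls the privacy noise: the same three knobs PUFFLE tunes jointly.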

Conference papers on the topic "Utility-privacy trade-off"

1

Pohlhausen, Jule, Francesco Nespoli, and Joerg Bitzer. "Long-Term Conversation Analysis: Privacy-Utility Trade-Off Under Noise and Reverberation." In 2024 18th International Workshop on Acoustic Signal Enhancement (IWAENC), 404–8. IEEE, 2024. http://dx.doi.org/10.1109/iwaenc61483.2024.10694640.

2

Nam, Seung-Hyun, Hyun-Young Park, and Si-Hyeon Lee. "Achieving the Exactly Optimal Privacy-Utility Trade-Off with Low Communication Cost via Shared Randomness." In 2024 IEEE International Symposium on Information Theory (ISIT), 3065–70. IEEE, 2024. http://dx.doi.org/10.1109/isit57864.2024.10619385.

3

Baselizadeh, Adel, Diana Saplacan Lindblom, Weria Khaksar, Md Zia Uddin, and Jim Torresen. "Comparative Analysis of Vision-Based Sensors for Human Monitoring in Care Robots: Exploring the Utility-Privacy Trade-off." In 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), 1794–801. IEEE, 2024. http://dx.doi.org/10.1109/ro-man60168.2024.10731223.

4

Erdogdu, Murat A., and Nadia Fawaz. "Privacy-utility trade-off under continual observation." In 2015 IEEE International Symposium on Information Theory (ISIT). IEEE, 2015. http://dx.doi.org/10.1109/isit.2015.7282766.

5

Zhou, Yihui, Guangchen Song, Hai Liu, and Laifeng Lu. "Privacy-Utility Trade-Off of K-Subset Mechanism." In 2018 International Conference on Networking and Network Applications (NaNA). IEEE, 2018. http://dx.doi.org/10.1109/nana.2018.8648741.

6

Li, Mengqian, Youliang Tian, Junpeng Zhang, Dandan Fan, and Dongmei Zhao. "The Trade-off Between Privacy and Utility in Local Differential Privacy." In 2021 International Conference on Networking and Network Applications (NaNA). IEEE, 2021. http://dx.doi.org/10.1109/nana53684.2021.00071.

7

Sreekumar, Sreejith, and Deniz Gunduz. "Optimal Privacy-Utility Trade-off under a Rate Constraint." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849330.

8

Alvim, Mário S., Natasha Fernandes, Annabelle McIver, and Gabriel H. Nunes. "The Privacy-Utility Trade-off in the Topics API." In CCS '24: ACM SIGSAC Conference on Computer and Communications Security, 1106–20. New York, NY, USA: ACM, 2024. https://doi.org/10.1145/3658644.3670368.

9

Zhou, Jinhao, Zhou Su, Jianbing Ni, Yuntao Wang, Yanghe Pan, and Rui Xing. "Personalized Privacy-Preserving Federated Learning: Optimized Trade-off Between Utility and Privacy." In GLOBECOM 2022 - 2022 IEEE Global Communications Conference. IEEE, 2022. http://dx.doi.org/10.1109/globecom48099.2022.10000793.

10

Demir, Mehmet Oezguen, Selahattin Goekceli, Guido Dartmann, Volker Luecken, Gerd Ascheid, and Guenes Karabulut Kurt. "Utility Privacy Trade-Off for Noisy Channels in OFDM Systems." In 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall). IEEE, 2017. http://dx.doi.org/10.1109/vtcfall.2017.8288194.
