Journal articles on the topic "Utility-privacy trade-off"

Consult the top 50 scholarly journal articles on the topic "Utility-privacy trade-off".

1

Liu, Hai, Zhenqiang Wu, Yihui Zhou, Changgen Peng, Feng Tian, and Laifeng Lu. "Privacy-Preserving Monotonicity of Differential Privacy Mechanisms". Applied Sciences 8, no. 11 (October 28, 2018): 2081. http://dx.doi.org/10.3390/app8112081.

Abstract:
Differential privacy mechanisms can offer a trade-off between privacy and utility by using privacy metrics and utility metrics. The trade-off of differential privacy shows that one thing increases and another decreases in terms of privacy metrics and utility metrics. However, there is no unified trade-off measurement of differential privacy mechanisms. To this end, we proposed the definition of privacy-preserving monotonicity of differential privacy, which measured the trade-off between privacy and utility. First, to formulate the trade-off, we presented the definition of privacy-preserving monotonicity based on computational indistinguishability. Second, building on privacy metrics of the expected estimation error and entropy, we theoretically and numerically showed privacy-preserving monotonicity of Laplace mechanism, Gaussian mechanism, exponential mechanism, and randomized response mechanism. In addition, we also theoretically and numerically analyzed the utility monotonicity of these several differential privacy mechanisms based on utility metrics of modulus of characteristic function and variant of normalized entropy. Third, according to the privacy-preserving monotonicity of differential privacy, we presented a method to seek trade-off under a semi-honest model and analyzed a unilateral trade-off under a rational model. Therefore, privacy-preserving monotonicity can be used as a criterion to evaluate the trade-off between privacy and utility in differential privacy mechanisms under the semi-honest model. However, privacy-preserving monotonicity results in a unilateral trade-off of the rational model, which can lead to severe consequences.
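
To make the trade-off behavior described above concrete, here is a minimal sketch (not the authors' code; the counting query, sensitivity, and budgets are illustrative assumptions) of the Laplace mechanism, showing the expected absolute error shrinking monotonically as the privacy budget ε grows:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with eps-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon          # noise scale b = Δf / ε
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_count, sensitivity = 1000.0, 1.0      # e.g. a counting query, Δf = 1

for eps in (0.1, 0.5, 1.0, 2.0):
    errors = [abs(laplace_mechanism(true_count, sensitivity, eps, rng) - true_count)
              for _ in range(10_000)]
    # Expected |error| equals Δf/ε, so utility loss shrinks monotonically as ε grows.
    print(f"eps={eps:4.1f}  mean abs error ≈ {np.mean(errors):.2f}")
```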
2

Avent, Brendan, Javier González, Tom Diethe, Andrei Paleyes, and Borja Balle. "Automatic Discovery of Privacy–Utility Pareto Fronts". Proceedings on Privacy Enhancing Technologies 2020, no. 4 (October 1, 2020): 5–23. http://dx.doi.org/10.2478/popets-2020-0060.

Abstract:
Differential privacy is a mathematical framework for privacy-preserving data analysis. Changing the hyperparameters of a differentially private algorithm allows one to trade off privacy and utility in a principled way. Quantifying this trade-off in advance is essential to decision-makers tasked with deciding how much privacy can be provided in a particular application while maintaining acceptable utility. Analytical utility guarantees offer a rigorous tool to reason about this trade-off, but are generally only available for relatively simple problems. For more complex tasks, such as training neural networks under differential privacy, the utility achieved by a given algorithm can only be measured empirically. This paper presents a Bayesian optimization methodology for efficiently characterizing the privacy–utility trade-off of any differentially private algorithm using only empirical measurements of its utility. The versatility of our method is illustrated on a number of machine learning tasks involving multiple models, optimizers, and datasets.
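
The object this paper characterizes is the empirical privacy–utility Pareto front. The sketch below extracts a Pareto front from measured (ε, utility) pairs; it uses made-up measurements and plain enumeration rather than the paper's Bayesian optimization, so it only illustrates the end product:

```python
import numpy as np

def pareto_front(points):
    """Keep (eps, utility) points not dominated by any other (lower eps and higher utility are better)."""
    front = []
    for eps, util in points:
        dominated = any(e2 <= eps and u2 >= util and (e2, u2) != (eps, util)
                        for e2, u2 in points)
        if not dominated:
            front.append((eps, util))
    return sorted(front)

# Placeholder measurements: utility of some DP algorithm at several privacy budgets.
measurements = [(0.1, 0.61), (0.5, 0.74), (1.0, 0.80), (2.0, 0.83), (1.0, 0.72)]
print(pareto_front(measurements))   # non-dominated (privacy, utility) operating points
```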
3

Gobinathan, B., M. A. Mukunthan, S. Surendran, K. Somasundaram, Syed Abdul Moeed, P. Niranjan, V. Gouthami, et al. "A Novel Method to Solve Real Time Security Issues in Software Industry Using Advanced Cryptographic Techniques". Scientific Programming 2021 (December 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/3611182.

Abstract:
In recent times, utility and privacy have been treated as trade-off factors, where improving one tends to sacrifice the other. Therefore, a dataset cannot be published without privacy protection. It is henceforth crucial to maintain an equilibrium between the utility and privacy of data. In this paper, a novel technique for trading off utility and privacy is developed, where the former is handled with a metaheuristic algorithm and the latter with a cryptographic model. Utility is achieved through clustering, while the privacy model encrypts and decrypts the data. At first, the input datasets are clustered, and after clustering, the privacy of the data is maintained. The simulation is conducted on manufacturing datasets against various existing models. The results show that the proposed model achieves better clustering accuracy and data privacy than the existing models. The evaluation shows that the proposed model attains a trade-off between privacy preservation and clustering utility on smart manufacturing datasets.
4

Zeng, Xia, Chuanchuan Yang, and Bin Dai. "Utility–Privacy Trade-Off in Distributed Machine Learning Systems". Entropy 24, no. 9 (September 14, 2022): 1299. http://dx.doi.org/10.3390/e24091299.

Abstract:
In distributed machine learning (DML), though clients’ data are not directly transmitted to the server for model training, attackers can obtain the sensitive information of clients by analyzing the local gradient parameters uploaded by clients. For this case, we use the differential privacy (DP) mechanism to protect the clients’ local parameters. In this paper, from an information-theoretic point of view, we study the utility–privacy trade-off in DML with the help of the DP mechanism. Specifically, three cases including independent clients’ local parameters with independent DP noise, dependent clients’ local parameters with independent/dependent DP noise are considered. Mutual information and conditional mutual information are used to characterize utility and privacy, respectively. First, we show the relationship between utility and privacy for the three cases. Then, we show the optimal noise variance that achieves the maximal utility under a certain level of privacy. Finally, the results of this paper are further illustrated by numerical results.
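
For Gaussian signal and noise models of this kind, mutual information has a closed form, I(W; W+N) = ½·log₂(1 + σ_W²/σ_N²). The snippet below evaluates that textbook formula for several DP noise variances; it is a generic illustration of the information-theoretic trade-off, not the paper's specific three-case analysis:

```python
import math

def gaussian_mutual_information(signal_var, noise_var):
    """I(W; W+N) in bits for W ~ N(0, signal_var) and independent N ~ N(0, noise_var)."""
    return 0.5 * math.log2(1.0 + signal_var / noise_var)

signal_var = 1.0
for noise_var in (0.1, 0.5, 1.0, 4.0):
    utility = gaussian_mutual_information(signal_var, noise_var)   # information the server keeps
    print(f"noise_var={noise_var:4.1f}  I(W; W+N) ≈ {utility:.3f} bits")
# Larger DP noise variance lowers the mutual information: more privacy, less utility.
```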
5

Srivastava, Saurabh, Vinay P. Namboodiri, and T. V. Prabhakar. "Achieving Privacy-Utility Trade-off in existing Software Systems". Journal of Physics: Conference Series 1454 (February 2020): 012004. http://dx.doi.org/10.1088/1742-6596/1454/1/012004.

6

Wunderlich, Dominik, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau, and Thorsten Strufe. "On the Privacy–Utility Trade-Off in Differentially Private Hierarchical Text Classification". Applied Sciences 12, no. 21 (November 4, 2022): 11177. http://dx.doi.org/10.3390/app122111177.

Abstract:
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful to perform this task, unfortunately, they can leak training data information to adversaries due to training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, enabling the models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guarantees, and it identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to empirically assess the information leakage of three widely used neural network architectures. We show that large differential privacy parameters already suffice to completely mitigate membership inference attacks, thus resulting only in a moderate decrease in model utility. More specifically, for large datasets with long texts, we observed Transformer-based models to achieve an overall favorable privacy–utility trade-off, while for smaller datasets with shorter texts, convolutional neural networks are preferable.
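
The leakage measurements in this line of work rely on membership inference. As a rough illustration only (a simple loss-threshold baseline attack on synthetic loss values, not the white-box attack the paper uses), consider:

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Predict 'member' when the per-example loss is below a threshold; return attack accuracy."""
    preds_m = member_losses < threshold        # members tend to have lower loss (memorization)
    preds_n = nonmember_losses < threshold
    correct = preds_m.sum() + (~preds_n).sum()
    return correct / (len(member_losses) + len(nonmember_losses))

rng = np.random.default_rng(1)
# Placeholder loss distributions; with strong DP training the two distributions overlap heavily.
member_losses = rng.normal(0.4, 0.2, 1000)
nonmember_losses = rng.normal(0.9, 0.3, 1000)
print(f"attack accuracy ≈ {loss_threshold_mia(member_losses, nonmember_losses, 0.65):.2f}")
```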
7

Mohammed, Kabiru, Aladdin Ayesh, and Eerke Boiten. "Complementing Privacy and Utility Trade-Off with Self-Organising Maps". Cryptography 5, no. 3 (August 17, 2021): 20. http://dx.doi.org/10.3390/cryptography5030020.

Abstract:
In recent years, data-enabled technologies have intensified the rate and scale at which organisations collect and analyse data. Data mining techniques are applied to realise the full potential of large-scale data analysis. These techniques are highly efficient in sifting through big data to extract hidden knowledge and assist evidence-based decisions, offering significant benefits to their adopters. However, this capability is constrained by important legal, ethical and reputational concerns. These concerns arise because they can be exploited to allow inferences to be made on sensitive data, thus posing severe threats to individuals’ privacy. Studies have shown Privacy-Preserving Data Mining (PPDM) can adequately address this privacy risk and permit knowledge extraction in mining processes. Several published works in this area have utilised clustering techniques to enforce anonymisation models on private data, which work by grouping the data into clusters using a quality measure and generalising the data in each group separately to achieve an anonymisation threshold. However, existing approaches do not work well with high-dimensional data, since it is difficult to develop good groupings without incurring excessive information loss. Our work aims to complement this balancing act by optimising utility in PPDM processes. To illustrate this, we propose a hybrid approach, that combines self-organising maps with conventional privacy-based clustering algorithms. We demonstrate through experimental evaluation, that results from our approach produce more utility for data mining tasks and outperforms conventional privacy-based clustering algorithms. This approach can significantly enable large-scale analysis of data in a privacy-preserving and trustworthy manner.
8

Kiranagi, Manasi, Devika Dhoble, Madeeha Tahoor, and Rekha Patil. "Finding Optimal Path and Privacy Preserving for Wireless Network". International Journal for Research in Applied Science and Engineering Technology 10, no. 10 (October 31, 2022): 360–65. http://dx.doi.org/10.22214/ijraset.2022.46949.

Abstract:
Privacy-preserving routing protocols in wireless networks frequently utilize additional artificial traffic to hide the source-destination identities of the communicating pair. Usually, the addition of artificial traffic is done heuristically with no guarantees that the transmission cost, latency, etc., are optimized in every network topology. We explicitly examine the privacy-utility trade-off problem for wireless networks and develop a novel privacy-preserving routing algorithm called Optimal Privacy Enhancing Routing Algorithm (OPERA). OPERA uses a statistical decision-making framework to optimize the privacy of the routing protocol given a utility (or cost) constraint. We consider global adversaries with both lossless and lossy observations that use the Bayesian maximum-a-posteriori (MAP) estimation strategy. We formulate the privacy-utility trade-off problem as a linear program which can be efficiently solved.
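
The abstract notes that the privacy-utility routing problem is ultimately cast as a linear program. A toy LP in the same spirit, solvable with SciPy, is sketched below; the three candidate paths, their costs, privacy gains, and the budget are all invented for illustration and do not come from OPERA:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: probabilities of routing through 3 candidate paths.
privacy_gain = np.array([0.9, 0.6, 0.3])    # hypothetical anonymity contribution per path
cost = np.array([5.0, 3.0, 1.0])            # hypothetical transmission cost per path
budget = 3.5

res = linprog(
    c=-privacy_gain,                         # linprog minimizes, so negate to maximize privacy
    A_ub=[cost], b_ub=[budget],              # utility (cost) constraint
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],      # routing distribution sums to 1
    bounds=[(0, 1)] * 3,
)
print("optimal routing distribution:", np.round(res.x, 3))
```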
9

Cai, Lin, Jinchuan Tang, Shuping Dang, and Gaojie Chen. "Privacy protection and utility trade-off for social graph embedding". Information Sciences 676 (August 2024): 120866. http://dx.doi.org/10.1016/j.ins.2024.120866.

10

Rassouli, Borzoo, and Deniz Gunduz. "Optimal Utility-Privacy Trade-Off With Total Variation Distance as a Privacy Measure". IEEE Transactions on Information Forensics and Security 15 (2020): 594–603. http://dx.doi.org/10.1109/tifs.2019.2903658.

11

Franzen, Daniel, Claudia Müller-Birn, and Odette Wegwarth. "Communicating the Privacy-Utility Trade-off: Supporting Informed Data Donation with Privacy Decision Interfaces for Differential Privacy". Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–56. http://dx.doi.org/10.1145/3637309.

Abstract:
Data collections, such as those from citizen science projects, can provide valuable scientific insights or help the public to make decisions based on real demand. At the same time, the collected data might cause privacy risks for their volunteers, for example, by revealing sensitive information. Similar but less apparent trade-offs exist for data collected while using social media or other internet-based services. One approach to addressing these privacy risks might be to anonymize the data, for example, by using Differential Privacy (DP). DP allows for tuning and, consequently, communicating the trade-off between the data contributors' privacy and the resulting data utility for insights. However, there is little research that explores how to communicate the existing trade-off to users. We contribute to closing this research gap by designing interactive elements and visualizations that specifically support people's understanding of this privacy-utility trade-off. We evaluated our user interfaces in a user study (N=378). Our results show that a combination of graphical risk visualization and interactive risk exploration best supports the informed decision, i.e., the privacy decision is consistent with users' privacy concerns. Additionally, we found that personal attributes, such as numeracy and the need for cognition, significantly influence the decision behavior and the privacy usability of privacy decision interfaces. In our recommendations, we encourage data collectors, such as citizen science project coordinators, to communicate existing privacy risks to their volunteers, since such communication does not impact donation rates. Understanding such privacy risks can also be part of typical training efforts in citizen science projects. DP allows volunteers to balance their privacy concerns with their wish to contribute to the project. From a design perspective, we emphasize the complexity of the decision situation and the resulting need to design with usability for all population groups in mind. We hope that our study will inspire further research from the human-computer interaction community that will unlock the full potential of DP for a broad audience and ultimately contribute to a societal understanding of acceptable privacy losses in specific data contexts.
12

Tasnim, Naima, Jafar Mohammadi, Anand D. Sarwate, and Hafiz Imtiaz. "Approximating Functions with Approximate Privacy for Applications in Signal Estimation and Learning". Entropy 25, no. 5 (May 22, 2023): 825. http://dx.doi.org/10.3390/e25050825.

Abstract:
Large corporations, government entities and institutions such as hospitals and census bureaus routinely collect our personal and sensitive information for providing services. A key technological challenge is designing algorithms for these services that provide useful results, while simultaneously maintaining the privacy of the individuals whose data are being shared. Differential privacy (DP) is a cryptographically motivated and mathematically rigorous approach for addressing this challenge. Under DP, a randomized algorithm provides privacy guarantees by approximating the desired functionality, leading to a privacy–utility trade-off. Strong (pure DP) privacy guarantees are often costly in terms of utility. Motivated by the need for a more efficient mechanism with better privacy–utility trade-off, we propose Gaussian FM, an improvement to the functional mechanism (FM) that offers higher utility at the expense of a weakened (approximate) DP guarantee. We analytically show that the proposed Gaussian FM algorithm can offer orders of magnitude smaller noise compared to the existing FM algorithms. We further extend our Gaussian FM algorithm to decentralized-data settings by incorporating the CAPE protocol and propose capeFM. Our method can offer the same level of utility as its centralized counterparts for a range of parameter choices. We empirically show that our proposed algorithms outperform existing state-of-the-art approaches on synthetic and real datasets.
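
The approach rests on relaxing pure ε-DP to (ε, δ)-DP so that Gaussian rather than Laplace noise can be used. As a point of reference, here is a sketch of the classical Gaussian mechanism calibration σ = Δ₂·√(2·ln(1.25/δ))/ε (valid for ε < 1); this is the textbook mechanism, not the paper's functional-mechanism variant:

```python
import math
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng):
    """Release value under (epsilon, delta)-DP via the classical Gaussian mechanism (epsilon < 1)."""
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma), sigma

rng = np.random.default_rng(2)
noisy, sigma = gaussian_mechanism(42.0, l2_sensitivity=1.0, epsilon=0.5, delta=1e-5, rng=rng)
print(f"sigma ≈ {sigma:.2f}, noisy release ≈ {noisy:.2f}")
```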
13

Kremer, Steve. "Security and Privacy Column". ACM SIGLOG News 10, no. 1 (January 2023): 3. http://dx.doi.org/10.1145/3584676.3584679.

Abstract:
Differential privacy is nowadays considered the "gold standard" when releasing information, e.g., statistics, on sensitive data. To avoid leaking too much sensitive data, noise-adding mechanisms may be used. ϵ-differential privacy measures the amount of privacy ϵ that such a mechanism ensures. Of course, adding too much noise results in useless, random information, while adding not enough may lead to privacy violations. This problem is known as the privacy-utility trade-off and raises a natural optimality question: how can we maximise utility for a given amount of privacy ϵ?
14

De, Abir, and Soumen Chakrabarti. "Differentially Private Link Prediction with Protected Connections". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 63–71. http://dx.doi.org/10.1609/aaai.v35i1.16078.

Abstract:
Link prediction (LP) algorithms propose to each node a ranked list of nodes that are currently non-neighbors, as the most likely candidates for future linkage. Owing to increasing concerns about privacy, users (nodes) may prefer to keep some of their connections protected or private. Motivated by this observation, our goal is to design a differentially private LP algorithm, which trades off between privacy of the protected node-pairs and the link prediction accuracy. More specifically, we first propose a form of differential privacy on graphs, which models the privacy loss only of those node-pairs which are marked as protected. Next, we develop DPLP, a learning to rank algorithm, which applies a monotone transform to base scores from a non-private LP system, and then adds noise. DPLP is trained with a privacy induced ranking loss, which optimizes the ranking utility for a given maximum allowed level of privacy leakage of the protected node-pairs. Under a recently introduced latent node embedding model, we present a formal trade-off between privacy and LP utility. Extensive experiments with several real-life graphs and several LP heuristics show that DPLP can trade off between privacy and predictive performance more effectively than several alternatives.
15

Miller, Jim. "Who Are You? The Trade-Off between Information Utility and Privacy". IEEE Internet Computing 12, no. 4 (July 2008): 93–96. http://dx.doi.org/10.1109/mic.2008.91.

16

Zhan, Yuting, Hamed Haddadi, and Afra Mashhadi. "Privacy-Aware Adversarial Network in Human Mobility Prediction". Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 556–70. http://dx.doi.org/10.56553/popets-2023-0032.

Abstract:
As mobile devices and location-based services are increasingly developed in different smart city scenarios and applications, many unexpected privacy leakages have arisen due to geolocated data collection and sharing. User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications. Significantly, four spatio-temporal points are enough to uniquely identify 95% of the individuals, which exacerbates personal information leakages. To tackle malicious purposes such as user re-identification, we propose an LSTM-based adversarial mechanism with representation learning to attain a privacy-preserving feature representation of the original geolocated data (i.e., mobility data) for a sharing purpose. These representations aim to maximally reduce the chance of user re-identification and full data reconstruction with a minimal utility budget (i.e., loss). We train the mechanism by quantifying privacy-utility trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability. We report an exploratory analysis that enables the user to assess this trade-off with a specific loss function and its weight parameters. The extensive comparison results on four representative mobility datasets demonstrate the superiority of our proposed architecture in mobility privacy protection and the efficiency of the proposed privacy-preserving features extractor. We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility. Our results also show that by exploring the Pareto optimal setting, we can simultaneously increase both privacy (45%) and utility (32%).
17

Chen, Youqin, Zhengquan Xu, Jianzhang Chen, and Shan Jia. "B-DP: Dynamic Collection and Publishing of Continuous Check-In Data with Best-Effort Differential Privacy". Entropy 24, no. 3 (March 14, 2022): 404. http://dx.doi.org/10.3390/e24030404.

Abstract:
Differential privacy (DP) has become a de facto standard to achieve data privacy. However, the utility of DP solutions with the premise of privacy priority is often unacceptable in real-world applications. In this paper, we propose the best-effort differential privacy (B-DP) to promise the preference for utility first and design two new metrics including the point belief degree and the regional average belief degree to evaluate its privacy from a new perspective of preference for privacy. Therein, the preference for privacy and utility is referred to as expected privacy protection (EPP) and expected data utility (EDU), respectively. We also investigate how to realize B-DP with an existing DP mechanism (KRR) and a newly constructed mechanism (EXPQ) in the dynamic check-in data collection and publishing. Extensive experiments on two real-world check-in datasets verify the effectiveness of the concept of B-DP. Our newly constructed EXPQ can also satisfy a better B-DP than KRR to provide a good trade-off between privacy and utility.
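
KRR, the existing mechanism referenced above, is k-ary randomized response. A standard formulation of KRR with its unbiased frequency estimator is sketched below (category counts and budgets are placeholders; the paper's B-DP extension and EXPQ mechanism are not reproduced):

```python
import numpy as np

def krr_perturb(value, k, epsilon, rng):
    """k-ary randomized response: keep the true value w.p. e^eps/(e^eps+k-1), else report another value."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return value
    others = [v for v in range(k) if v != value]
    return rng.choice(others)

def krr_estimate_frequencies(reports, k, epsilon):
    """Debias the observed histogram to estimate the true category frequencies."""
    n = len(reports)
    p = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    q = 1.0 / (np.exp(epsilon) + k - 1)
    counts = np.bincount(reports, minlength=k)
    return (counts / n - q) / (p - q)

rng = np.random.default_rng(3)
true_data = rng.choice(4, size=20_000, p=[0.5, 0.3, 0.15, 0.05])   # placeholder check-in categories
reports = np.array([krr_perturb(v, k=4, epsilon=1.0, rng=rng) for v in true_data])
print(np.round(krr_estimate_frequencies(reports, k=4, epsilon=1.0), 3))
```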
18

Chandrasekaran, Varun, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, and Suman Banerjee. "Face-Off: Adversarial Face Obfuscation". Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 369–90. http://dx.doi.org/10.2478/popets-2021-0032.

Abstract:
Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology carries significant privacy threats. Coupled with the abundant information they have about users, service providers can associate users with social interactions, visited places, activities, and preferences–some of which the user may not want to share. Additionally, facial recognition models used by various agencies are trained by data scraped from social media platforms. Existing approaches to mitigate associated privacy risks result in an imbalanced trade-off between privacy and utility. In this paper, we address this trade-off by proposing Face-Off, a privacy-preserving framework that introduces strategic perturbations to images of the user’s face to prevent it from being correctly recognized. To realize Face-Off, we overcome a set of challenges related to the black-box nature of commercial face recognition services, and the scarcity of literature for adversarial attacks on metric networks. We implement and evaluate Face-Off to find that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++. Our user study with 423 participants further shows that the perturbations come at an acceptable cost for the users.
19

Wu, Qihong, Jinchuan Tang, Shuping Dang, and Gaojie Chen. "Data privacy and utility trade-off based on mutual information neural estimator". Expert Systems with Applications 207 (November 2022): 118012. http://dx.doi.org/10.1016/j.eswa.2022.118012.

20

Yao, Xin, Juan Yu, Jianmin Han, Jianfeng Lu, Hao Peng, Yijia Wu, and Xiaoqian Cao. "DP-CSM: Efficient Differentially Private Synthesis for Human Mobility Trajectory with Coresets and Staircase Mechanism". ISPRS International Journal of Geo-Information 11, no. 12 (December 5, 2022): 607. http://dx.doi.org/10.3390/ijgi11120607.

Abstract:
Generating differentially private synthetic human mobility trajectories from real trajectories is a commonly used approach for privacy-preserving trajectory publishing. However, existing synthetic trajectory generation methods suffer from the drawbacks of poor scalability and suboptimal privacy–utility trade-off, due to continuous spatial space, high dimentionality of trajectory data and the suboptimal noise addition mechanism. To overcome the drawbacks, we propose DP-CSM, a novel differentially private trajectory generation method using coreset clustering and the staircase mechanism, to generate differentially private synthetic trajectories in two main steps. Firstly, it generates generalized locations for each timestamp, and utilizes coreset-based clustering to improve scalability. Secondly, it reconstructs synthetic trajectories with the generalized locations, and uses the staircase mechanism to avoid the over-perturbation of noises and maintain utility of synthetic trajectories. We choose three state-of-the-art clustering-based generation methods as the comparative baselines, and conduct comprehensive experiments on three real-world datasets to evaluate the performance of DP-CSM. Experimental results show that DP-CSM achieves better privacy–utility trade-off than the three baselines, and significantly outperforms the three baselines in terms of efficiency.
21

Zhang, Xiao-Yu, Stefanie Kuenzel, José-Rodrigo Córdoba-Pachón, and Chris Watkins. "Privacy-Functionality Trade-Off: A Privacy-Preserving Multi-Channel Smart Metering System". Energies 13, no. 12 (June 21, 2020): 3221. http://dx.doi.org/10.3390/en13123221.

Abstract:
While smart meters can provide households with more autonomy regarding their energy consumption, they can also be a significant intrusion into the household’s privacy. There is abundant research implementing protection methods for different aspects (e.g., noise-adding and data aggregation, data down-sampling); while the private data are protected as sensitive information is hidden, some of the compulsory functions such as Time-of-use (TOU) billing or value-added services are sacrificed. Moreover, some methods, such as rechargeable batteries and homomorphic encryption, require an expensive energy storage system or central processor with high computation ability, which is unrealistic for mass roll-out. In this paper, we propose a privacy-preserving smart metering system which is a combination of existing data aggregation and data down-sampling mechanisms. The system takes an angle based on the ethical concerns about privacy and it implements a hybrid privacy-utility trade-off strategy, without sacrificing functionality. In the proposed system, the smart meter plays the role of assistant processor rather than information sender/receiver, and it enables three communication channels to transmit different temporal resolution data to protect privacy and allow freedom of choice: high frequency feed-level/substation-level data are adopted for grid operation and management purposes, low frequency household-level data are used for billing, and a privacy-preserving valued-add service channel to provide third party (TP) services. In the end of the paper, the privacy performance is evaluated to examine whether the proposed system satisfies the privacy and functionality requirements.
22

Zhao, Jianzhe, Keming Mao, Chenxi Huang, and Yuyang Zeng. "Utility Optimization of Federated Learning with Differential Privacy". Discrete Dynamics in Nature and Society 2021 (October 8, 2021): 1–14. http://dx.doi.org/10.1155/2021/3344862.

Abstract:
Secure and trusted cross-platform knowledge sharing is significant for modern intelligent data analysis. To address the trade-off problems between privacy and utility in complex federated learning, a novel differentially private federated learning framework is proposed. First, the impact of data heterogeneity of participants on global model accuracy is analyzed quantitatively based on 1-Wasserstein distance. Then, we design a multilevel and multiparticipant dynamic allocation method of privacy budget to reduce the injected noise, and the utility can be improved efficiently. Finally, they are integrated, and a novel adaptive differentially private federated learning algorithm (A-DPFL) is designed. Comprehensive experiments on redefined non-I.I.D MNIST and CIFAR-10 datasets are conducted, and the results demonstrate the superiority of model accuracy, convergence, and robustness.
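
A rough sketch of the basic building block such DP federated learning frameworks share: clip each client update, add Gaussian noise, and average. The paper's adaptive, multilevel budget allocation is not reproduced here, and the clipping norm and noise multiplier are illustrative assumptions:

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm, noise_multiplier, rng):
    """Average L2-clipped client updates and add Gaussian noise (DP-FedAvg-style aggregation)."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))   # clipping bounds each client's sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=client_updates[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(client_updates)

rng = np.random.default_rng(4)
updates = [rng.normal(0, 1, size=10) for _ in range(8)]   # placeholder client gradients
print(np.round(dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, rng=rng), 3))
```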
23

Xue, Lulu, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, and Dezhong Yao. "Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 6404–12. http://dx.doi.org/10.1609/aaai.v38i6.28460.

Abstract:
Collaborative learning (CL) is a distributed learning framework that aims to protect user privacy by allowing users to jointly train a model by sharing their gradient updates only. However, gradient inversion attacks (GIAs), which recover users' training data from shared gradients, impose severe privacy threats to CL. Existing defense methods adopt different techniques, e.g., differential privacy, cryptography, and perturbation defenses, to defend against the GIAs. Nevertheless, all current defense methods suffer from a poor trade-off between privacy, utility, and efficiency. To mitigate the weaknesses of existing solutions, we propose a novel defense method, Dual Gradient Pruning (DGP), based on gradient pruning, which can improve communication efficiency while preserving the utility and privacy of CL. Specifically, DGP slightly changes gradient pruning with a stronger privacy guarantee. And DGP can also significantly improve communication efficiency with a theoretical analysis of its convergence and generalization. Our extensive experiments show that DGP can effectively defend against the most powerful GIAs and reduce the communication cost without sacrificing the model's utility.
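
The defense builds on gradient pruning. Below is a generic top-k magnitude pruning sketch, showing only the base operation; DGP's specific dual realization and its privacy analysis are not reproduced:

```python
import numpy as np

def topk_prune(gradient, keep_ratio):
    """Zero out all but the largest-magnitude entries; only the kept entries would be shared."""
    flat = gradient.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(np.abs(flat), -k)[-k]     # k-th largest absolute value
    return np.where(np.abs(gradient) >= threshold, gradient, 0.0)

grad = np.random.default_rng(5).normal(size=(4, 4))
print(topk_prune(grad, keep_ratio=0.25))   # ~75% of entries removed before sharing
```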
24

Zhou, Xingcai, and Yu Xiang. "ADMM-Based Differential Privacy Learning for Penalized Quantile Regression on Distributed Functional Data". Mathematics 10, no. 16 (August 16, 2022): 2954. http://dx.doi.org/10.3390/math10162954.

Abstract:
Alternating Direction Method of Multipliers (ADMM) is a widely used machine learning tool in distributed environments. In the paper, we propose an ADMM-based differential privacy learning algorithm (FDP-ADMM) on penalized quantile regression for distributed functional data. The FDP-ADMM algorithm can resist adversary attacks to avoid the possible privacy leakage in distributed networks, which is designed by functional principal analysis, an approximate augmented Lagrange function, ADMM algorithm, and privacy policy via Gaussian mechanism with time-varying variance. It is also a noise-resilient, convergent, and computationally effective distributed learning algorithm, even if for high privacy protection. The theoretical analysis on privacy and convergence guarantees is derived and offers a privacy–utility trade-off: a weaker privacy guarantee would result in better utility. The evaluations on simulation-distributed functional datasets have demonstrated the effectiveness of the FDP-ADMM algorithm even if under high privacy guarantee.
25

Li, Qiyu, Chunlai Zhou, Biao Qin, and Zhiqiang Xu. "Local Differential Privacy for Belief Functions". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10025–33. http://dx.doi.org/10.1609/aaai.v36i9.21241.

Abstract:
In this paper, we propose two new definitions of local differential privacy for belief functions. One is based on Shafer’s semantics of randomly coded messages and the other from the perspective of imprecise probabilities. We show that such basic properties as composition and post-processing also hold for our new definitions. Moreover, we provide a hypothesis testing framework for these definitions and study the effect of "don’t know" in the trade-off between privacy and utility in discrete distribution estimation.
26

Cao, Hui, Shubo Liu, Renfang Zhao, and Xingxing Xiong. "IFed: A novel federated learning framework for local differential privacy in Power Internet of Things". International Journal of Distributed Sensor Networks 16, no. 5 (May 2020): 155014772091969. http://dx.doi.org/10.1177/1550147720919698.

Abstract:
Nowadays, wireless sensor network technology is being increasingly popular which is applied to a wide range of Internet of Things. Especially, Power Internet of Things is an important and rapidly growing section in Internet of Thing systems, which benefited from the application of wireless sensor networks to achieve fine-grained information collection. Meanwhile, the privacy risk is gradually exposed, which is the widespread concern for electricity power consumers. Non-intrusive load monitoring, in particular, is a technique to recover state of appliances from only the energy consumption data, which enables adversary inferring the behavior privacy of residents. There can be no doubt that applying local differential privacy to achieve privacy preserving in the local setting is more trustworthy than centralized approach for electricity customers. Although it is hard to control the risk and achieve the trade-off between privacy and utility by traditional local differential privacy obfuscation mechanisms, some existing obfuscation mechanisms based on artificial intelligence, called advanced obfuscation mechanisms, can achieve it. However, the large computing resource consumption to train the machine learning model is not affordable for most Power Internet of Thing terminal. In this article, to solve this problem, IFed was proposed—a novel federated learning framework that let electric provider who normally is adequate in computing resources to help Power Internet of Thing users. First, the optimized framework was proposed in which the trade-off between local differential privacy, data utility, and resource consumption was incorporated. Concurrently, the following problem of privacy preserving on the machine learning model transport between electricity provider and customers was noted and resolved. Last, users were categorized based on different levels of privacy requirements, and stronger privacy guarantee was provided for sensitive users. The formal local differential privacy analysis and the experiments demonstrated that IFed can fulfill the privacy requirements for Power Internet of Thing users.
27

Boenisch, Franziska, Christopher Mühl, Roy Rinberg, Jannis Ihrig, and Adam Dziedzic. "Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees". Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 158–76. http://dx.doi.org/10.56553/popets-2023-0010.

Abstract:
Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP). Yet, usually, the privacy of the training data comes at the cost of the resulting ML models' utility. One reason for this is that DP uses one uniform privacy budget epsilon for all training data points, which has to align with the strictest privacy requirement encountered among all data holders. In practice, different data holders have different privacy requirements and data points of data holders with lower requirements can contribute more information to the training process of the ML models. To account for this need, we propose two novel methods based on the Private Aggregation of Teacher Ensembles (PATE) framework to support the training of ML models with individualized privacy guarantees. We formally describe the methods, provide a theoretical analysis of their privacy bounds, and experimentally evaluate their effect on the final model's utility using the MNIST, SVHN, and Adult income datasets. Our empirical results show that the individualized privacy methods yield ML models of higher accuracy than the non-individualized baseline. Thereby, we improve the privacy-utility trade-off in scenarios in which different data holders consent to contribute their sensitive data at different individual privacy levels.
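
For orientation, a minimal sketch of the vanilla PATE aggregation step that the individualized variants extend: each teacher votes on a label and Laplace noise is added to the vote histogram before taking the argmax. The vote counts and noise scale are placeholders, and the paper's individualized budget accounting is not shown:

```python
import numpy as np

def pate_noisy_argmax(teacher_votes, num_classes, epsilon, rng):
    """Aggregate teacher predictions via a noisy maximum over the vote histogram."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 1.0 / epsilon, size=num_classes)   # Laplace noise with scale 1/eps
    return int(np.argmax(counts))

rng = np.random.default_rng(6)
votes = np.array([3, 3, 3, 1, 3, 2, 3, 3, 0, 3])   # placeholder predictions from 10 teachers
print(pate_noisy_argmax(votes, num_classes=4, epsilon=1.0, rng=rng))
```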
28

Shibata, Hisaichi, Shouhei Hanaoka, Saori Koshino, Soichiro Miki, Yuki Sonoda, and Osamu Abe. "Identity Diffuser: Preserving Abnormal Region of Interests While Diffusing Identity". Applied Sciences 14, no. 18 (September 20, 2024): 8489. http://dx.doi.org/10.3390/app14188489.

Abstract:
To release medical images that can be freely used in downstream processes while maintaining their utility, it is necessary to remove personal features from the images while preserving the lesion structures. Unlike previous studies that focused on removing lesion structures while preserving the individuality of medical images, this study proposes and validates a new framework that maintains the lesion structures while diffusing individual characteristics. In this framework, we apply local differential privacy techniques to provide theoretical guarantees of privacy protection. Additionally, to enhance the utility of protected medical images, we perform denoising using a diffusion model on the noise-contaminated medical images. Numerous chest X-rays generated by the proposed method were evaluated by physicians, revealing a trade-off between the level of privacy protection and utility. In other words, it was confirmed that increasing the level of personal information protection tends to result in relatively lower utility. This study potentially enables the release of certain types of medical images that were previously difficult to share.
29

Grigoraș, Alexandru, and Florin Leon. "Synthetic Time Series Generation for Decision Intelligence Using Large Language Models". Mathematics 12, no. 16 (August 13, 2024): 2494. http://dx.doi.org/10.3390/math12162494.

Abstract:
A model for generating synthetic time series data using pre-trained large language models is proposed. Starting with the Google T5-base model, which employs an encoder–decoder transformer architecture, the model underwent pre-training on diverse datasets. It was then fine-tuned using the QLoRA technique, which reduces computational complexity by quantizing weight parameters. The process involves the tokenization of time series data through mean scaling and quantization. The performance of the model was evaluated with fidelity, utility, and privacy metrics, showing improvements in fidelity and utility but a trade-off with reduced privacy. The proposed model offers a foundation for decision intelligence systems.
30

Thantharate, Pratik, Shyam Bhojwani, and Anurag Thantharate. "DPShield: Optimizing Differential Privacy for High-Utility Data Analysis in Sensitive Domains". Electronics 13, no. 12 (June 14, 2024): 2333. http://dx.doi.org/10.3390/electronics13122333.

Abstract:
The proliferation of cloud computing has amplified the need for robust privacy-preserving technologies, particularly when dealing with sensitive financial and human resources (HR) data. However, traditional differential privacy methods often struggle to balance rigorous privacy protections with maintaining data utility. This study introduces DPShield, an optimized adaptive framework that enhances the trade-off between privacy guarantees and data utility in cloud environments. DPShield leverages advanced differential privacy techniques, including dynamic noise-injection mechanisms tailored to data sensitivity, cumulative privacy loss tracking, and domain-specific optimizations. Through comprehensive evaluations on synthetic financial and real-world HR datasets, DPShield demonstrated a remarkable 21.7% improvement in aggregate query accuracy over existing differential privacy approaches. Moreover, it maintained machine learning model accuracy within 5% of non-private benchmarks, ensuring high utility for predictive analytics. These achievements signify a major advancement in differential privacy, offering a scalable solution that harmonizes robust privacy assurances with practical data analysis needs. DPShield’s domain adaptability and seamless integration with cloud architectures underscore its potential as a versatile privacy-enhancing tool. This work bridges the gap between theoretical privacy guarantees and practical implementation demands, paving the way for more secure, ethical, and insightful data usage in cloud computing environments.
31

Triastcyn, Aleksei, and Boi Faltings. "Generating Higher-Fidelity Synthetic Datasets with Privacy Guarantees". Algorithms 15, no. 7 (July 1, 2022): 232. http://dx.doi.org/10.3390/a15070232.

Abstract:
We consider the problem of enhancing user privacy in common data analysis and machine learning development tasks, such as data annotation and inspection, by substituting the real data with samples from a generative adversarial network. We propose employing Bayesian differential privacy as the means to achieve a rigorous theoretical guarantee while providing a better privacy-utility trade-off. We demonstrate experimentally that our approach produces higher-fidelity samples compared to prior work, allowing to (1) detect more subtle data errors and biases, and (2) reduce the need for real data labelling by achieving high accuracy when training directly on artificial samples.
32

Vijayan, Naveen Edapurath. "Privacy-Preserving Analytics in HR Tech- Federated Learning and Differential Privacy Techniques for Sensitive Data". International Journal of Scientific Research in Engineering and Management 08, no. 11 (November 10, 2024): 1–6. http://dx.doi.org/10.55041/ijsrem11473.

Abstract:
This paper explores the application of privacy-preserving analytics in human resources (HR), focusing on the synergistic use of federated learning and differential privacy. As HR departments increasingly leverage data-driven insights, the protection of sensitive employee information becomes paramount. Federated learning enables collaborative model training without centralizing raw data, while differential privacy adds calibrated noise to ensure individual data remains indiscernible. Together, these techniques form a robust framework for safeguarding HR data while enabling advanced analytics. The paper discusses the challenges of handling sensitive HR information, examines the implementation of federated learning and differential privacy, and demonstrates their combined effectiveness in maintaining data utility while ensuring privacy. By adopting these approaches, organizations can derive valuable workforce insights, comply with data protection regulations, and foster employee trust. This research contributes to the growing field of ethical data use in HR, offering a blueprint for balancing analytical capabilities with privacy imperatives in the modern workplace. Keywords—Privacy-preserving analytics, Federated learning, Differential privacy, HR analytics, Data protection, Employee privacy, Decentralized learning, GDPR compliance, CCPA compliance, Sensitive data handling, Data-driven HR, Privacy-utility trade-off.
33

Puaschunder, Julia. "Towards a Utility Theory of Privacy and Information Sharing". International Journal of Strategic Information Technology and Applications 10, no. 1 (January 2019): 1–22. http://dx.doi.org/10.4018/ijsita.2019010101.

Abstract:
Sustainability management has originally and—to this day—primarily been focused on environmental aspects. Today, enormous data storage capacities and computational power in the e-big data era have created unforeseen opportunities for big data hoarding corporations to reap hidden benefits from an individual's information sharing, which occurs bit by bit over time. This article presents a novel angle of sustainability, which is concerned with sensitive data protection given by the recently detected trade-off predicament between privacy and information sharing in the digital big data age. When individual decision makers face the privacy versus information sharing predicament in their corporate leadership, dignity and utility considerations could influence risk management and sustainability operations. Yet, to this day, there has not been a clear connection between dignity and utility of privacy and information sharing as risk management and sustainability drivers. The chapter unravels the legal foundations of dignity in privacy but also the behavioral economics of utility in communication and information sharing in order to draw a case of dignity and utility to be integrated into contemporary corporate governance, risk management and sustainability considerations of e-innovation.
34

Buchholz, Erik, Alsharif Abuadbba, Shuo Wang, Surya Nepal, and Salil S. Kanhere. "SoK: Can Trajectory Generation Combine Privacy and Utility?" Proceedings on Privacy Enhancing Technologies 2024, no. 3 (July 2024): 75–93. http://dx.doi.org/10.56553/popets-2024-0068.

Abstract:
While location trajectories represent a valuable data source for analyses and location-based services, they can reveal sensitive information, such as political and religious preferences. Differentially private publication mechanisms have been proposed to allow for analyses under rigorous privacy guarantees. However, the traditional protection schemes suffer from a limiting privacy-utility trade-off and are vulnerable to correlation and reconstruction attacks. Synthetic trajectory data generation and release represent a promising alternative to protection algorithms. While initial proposals achieve remarkable utility, they fail to provide rigorous privacy guarantees. This paper proposes a framework for designing a privacy-preserving trajectory publication approach by defining five design goals, particularly stressing the importance of choosing an appropriate Unit of Privacy. Based on this framework, we briefly discuss the existing trajectory protection approaches, emphasising their shortcomings. This work focuses on the systematisation of the state-of-the-art generative models for trajectories in the context of the proposed framework. We find that no existing solution satisfies all requirements. Thus, we perform an experimental study evaluating the applicability of six sequential generative models to the trajectory domain. Finally, we conclude that a generative trajectory model providing semantic guarantees remains an open research question and propose concrete next steps for future research.
35

Kaplan, Caelin, Chuan Xu, Othmane Marfoq, Giovanni Neglia, and Anderson Santana de Oliveira. "A Cautionary Tale: On the Role of Reference Data in Empirical Privacy Defenses". Proceedings on Privacy Enhancing Technologies 2024, no. 1 (January 2024): 525–48. http://dx.doi.org/10.56553/popets-2024-0031.

Abstract:
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset coming from the same (or a similar) underlying distribution as training data. Despite the common use of reference data, previous works are notably reticent about defining and evaluating reference data privacy. As gains in model utility and/or training data privacy may come at the expense of reference data privacy, it is essential that all three aspects are duly considered. In this paper, we conduct the first comprehensive analysis of empirical privacy defenses. First, we examine the availability of reference data and its privacy treatment in previous works and demonstrate its necessity for fairly comparing defenses. Second, we propose a baseline defense that enables the utility-privacy tradeoff with respect to both training and reference data to be easily understood. Our method is formulated as an empirical risk minimization with a constraint on the generalization error, which, in practice, can be evaluated as a weighted empirical risk minimization (WERM) over the training and reference datasets. Although we conceived of WERM as a simple baseline, our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses using reference data for nearly all relative privacy levels of reference and training data. Our investigation also reveals that these existing methods are unable to trade off reference data privacy for model utility and/or training data privacy, and thus fail to operate outside of the high reference data privacy case. Overall, our work highlights the need for a proper evaluation of the triad model utility / training data privacy / reference data privacy when comparing privacy defenses.
36

Xiao, Taihong, Yi-Hsuan Tsai, Kihyuk Sohn, Manmohan Chandraker, and Ming-Hsuan Yang. "Adversarial Learning of Privacy-Preserving and Task-Oriented Representations". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12434–41. http://dx.doi.org/10.1609/aaai.v34i07.6930.

Abstract:
Data privacy has emerged as an important issue as data-driven deep learning has been an essential component of modern machine learning systems. For instance, there could be a potential privacy risk of machine learning systems via the model inversion attack, whose goal is to reconstruct the input data from the latent representation of deep networks. Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. Specifically, we propose an adversarial reconstruction learning framework that prevents the latent representations decoded into original input data. By simulating the expected behavior of adversary, our framework is realized by minimizing the negative pixel reconstruction loss or the negative feature reconstruction (i.e., perceptual distance) loss. We validate the proposed method on face attribute prediction, showing that our method allows protecting visual privacy with a small decrease in utility performance. In addition, we show the utility-privacy trade-off with different choices of hyperparameter for negative perceptual distance loss at training, allowing service providers to determine the right level of privacy-protection with a certain utility performance. Moreover, we provide an extensive study with different selections of features, tasks, and the data to further analyze their influence on privacy protection.
37

Vepakomma, Praneeth, Julia Balla, and Ramesh Raskar. "PrivateMail: Supervised Manifold Learning of Deep Features with Privacy for Image Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8503–11. http://dx.doi.org/10.1609/aaai.v36i8.20827.

Abstract:
Differential Privacy offers strong guarantees such as immutable privacy under any post-processing. In this work, we propose a differentially private mechanism called PrivateMail for performing supervised manifold learning. We then apply it to the use case of private image retrieval to obtain nearest matches to a client’s target image from a server’s database. PrivateMail releases the target image as part of a differentially private manifold embedding. We give bounds on the global sensitivity of the manifold learning map in order to obfuscate and release embeddings with differential privacy inducing noise. We show that PrivateMail obtains a substantially better performance in terms of the privacy-utility trade off in comparison to several baselines on various datasets. We share code for applying PrivateMail at http://tiny.cc/PrivateMail.
38

Takagi, Shun, Li Xiong, Fumiyuki Kato, Yang Cao, and Masatoshi Yoshikawa. "HRNet: Differentially Private Hierarchical and Multi-Resolution Network for Human Mobility Data Synthesization". Proceedings of the VLDB Endowment 17, no. 11 (July 2024): 3058–71. http://dx.doi.org/10.14778/3681954.3681983.

Abstract:
Human mobility data offers valuable insights for many applications such as urban planning and pandemic response, but its use also raises privacy concerns. In this paper, we introduce the Hierarchical and Multi-Resolution Network (HRNet), a novel deep generative model specifically designed to synthesize realistic human mobility data while guaranteeing differential privacy. We first identify the key difficulties inherent in learning human mobility data under differential privacy. In response to these challenges, HRNet integrates three components: a hierarchical location encoding mechanism, multi-task learning across multiple resolutions, and private pre-training. These elements collectively enhance the model's ability under the constraints of differential privacy. Through extensive comparative experiments utilizing a real-world dataset, HRNet demonstrates a marked improvement over existing methods in balancing the utility-privacy trade-off.
39

Miller, Jim. "Who Are You, Part II: More on the Trade-Off between Information Utility and Privacy". IEEE Internet Computing 12, no. 6 (November 2008): 91–93. http://dx.doi.org/10.1109/mic.2008.135.

40

Chen, E., Yang Cao, and Yifei Ge. "A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11267–75. http://dx.doi.org/10.1609/aaai.v38i10.29005.

Abstract:
The shuffle model of local differential privacy is an advanced method of privacy amplification designed to enhance privacy protection with high utility. It achieves this by randomly shuffling sensitive data, making linking individual data points to specific individuals more challenging. However, most existing studies have focused on the shuffle model based on (ε0,0)-Locally Differentially Private (LDP) randomizers, with limited consideration for complex scenarios such as (ε0,δ0)-LDP or personalized LDP (PLDP). This hinders a comprehensive understanding of the shuffle model's potential and limits its application in various settings. To bridge this research gap, we propose a generalized shuffle framework that can be applied to PLDP setting. This generalization allows for a broader exploration of the privacy-utility trade-off and facilitates the design of privacy-preserving analyses in diverse contexts. We prove that the shuffled PLDP process approximately preserves μ-Gaussian Differential Privacy with μ = O(1/√n). This approach allows us to avoid the limitations and potential inaccuracies associated with inequality estimations. To strengthen the privacy guarantee, we improve the lower bound by utilizing hypothesis testing instead of relying on rough estimations like the Chernoff bound or Hoeffding's inequality. Furthermore, extensive comparative evaluations clearly show that our approach outperforms existing methods in achieving strong central privacy guarantees while preserving the utility of the global model. We have also carefully designed corresponding algorithms for average function, frequency estimation, and stochastic gradient descent.
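
A toy illustration of the shuffle-model pipeline the paper generalizes: each client applies a local randomizer (binary randomized response here), a shuffler permutes the reports to detach them from identities, and the analyst debiases the shuffled counts. The amplification accounting itself (e.g., the μ-GDP bound) is not computed in this sketch:

```python
import numpy as np

def local_randomizer(bit, epsilon, rng):
    """Binary randomized response: report the true bit with probability e^eps / (e^eps + 1)."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

rng = np.random.default_rng(7)
data = rng.binomial(1, 0.3, size=50_000)                       # placeholder private bits
reports = np.array([local_randomizer(b, epsilon=0.5, rng=rng) for b in data])
rng.shuffle(reports)                                           # the shuffler: reports are detached from identities

p = np.exp(0.5) / (np.exp(0.5) + 1.0)
estimate = (reports.mean() - (1 - p)) / (2 * p - 1)            # debiased frequency estimate
print(f"estimated proportion ≈ {estimate:.3f} (true 0.3)")
```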
APA, Harvard, Vancouver, ISO, and other styles
41

Sohail, Syeda Amna, Faiza Allah Bukhsh, and Maurice van Keulen. "Multilevel Privacy Assurance Evaluation of Healthcare Metadata". Applied Sciences 11, no. 22 (November 12, 2021): 10686. http://dx.doi.org/10.3390/app112210686.

Full text source
Abstract:
Healthcare providers are legally bound to ensure the privacy preservation of healthcare metadata. Privacy research, however, usually focuses on providing technical and inter-/intra-organizational solutions in a fragmented manner. As a result, an overarching evaluation of the fundamental (technical, organizational, and third-party) privacy-preserving measures in healthcare metadata handling is missing. This research work therefore provides a multilevel privacy assurance evaluation of the privacy-preserving measures of the Dutch healthcare metadata landscape. The normative and empirical evaluation comprises content analysis as well as process mining discovery and conformance checking techniques applied to real-world healthcare datasets. For clarity, we illustrate our evaluation findings using conceptual modeling frameworks, namely e3-value modeling and the REA ontology. The conceptual modeling frameworks highlight the financial aspect of metadata sharing with a clear description of the vital stakeholders, their mutual interactions, and the respective exchange of information resources. The frameworks are further verified using expert opinions. Based on our empirical and normative evaluations, we provide a multilevel privacy assurance evaluation that identifies where privacy increases and where it decreases. Furthermore, we verify that the privacy-utility trade-off is crucial in shaping these privacy increases and decreases, because data utility in healthcare is vital for efficient, effective healthcare services and the financial facilitation of healthcare enterprises.
APA, Harvard, Vancouver, ISO, and other styles
42

Verma, Kishore S., A. Rajesh, and Adeline J. S. Johnsana. "An Improved Classification Analysis on Utility Aware K-Anonymized Dataset". Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 445–52. http://dx.doi.org/10.1166/jctn.2019.7748.

Full text source
Abstract:
K-anonymization is one of the most widely used approaches in the Privacy-Preserving Data Mining (PPDM) arena for protecting individual records from privacy leakage attacks. Anonymization, however, typically degrades the effectiveness of data mining results, so PPDM researchers continue to direct their efforts toward finding the optimal trade-off between privacy and utility. This work aims to identify, from a set of strong data mining classifiers, the one that produces the most valuable classification results on utility-aware k-anonymized datasets. We analyze datasets anonymized with respect to utility factors such as null-value count and transformation pattern loss. The experiments are performed with three widely used classifiers, HNB, PART, and J48, which are evaluated using Accuracy, F-measure, and ROC-AUC, measures that the literature has established as standard for classification. Our experimental analysis identifies the best classifiers on utility-aware datasets anonymized with Cell-oriented Anonymization (CoA), Attribute-oriented Anonymization (AoA), and Record-oriented Anonymization (RoA).
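As a toy illustration of this kind of evaluation, the sketch below scores a single classifier on a coarsened feature matrix with the same three metrics. The scikit-learn decision tree is only a stand-in for the Weka classifiers named above (HNB, PART, J48), and the rounding step is a toy substitute for real k-anonymization.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_anon = X.round(0)  # toy stand-in for generalizing quasi-identifiers

X_tr, X_te, y_tr, y_te = train_test_split(X_anon, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("Accuracy ", accuracy_score(y_te, pred))
print("F-measure", f1_score(y_te, pred))
print("ROC-AUC  ", roc_auc_score(y_te, proba))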
APA, Harvard, Vancouver, ISO, and other styles
43

Mohammady, Meisam, Momen Oqaily, Lingyu Wang, Yuan Hong, Habib Louafi, Makan Pourzandi, and Mourad Debbabi. "A Multi-view Approach to Preserve Privacy and Utility in Network Trace Anonymization". ACM Transactions on Privacy and Security 24, no. 3 (August 31, 2021): 1–36. http://dx.doi.org/10.1145/3439732.

Full text source
Abstract:
As network security monitoring grows more sophisticated, there is an increasing need for outsourcing such tasks to third-party analysts. However, organizations are usually reluctant to share their network traces due to privacy concerns over sensitive information, e.g., network and system configuration, which may potentially be exploited for attacks. In cases where data owners are convinced to share their network traces, the data are typically subjected to certain anonymization techniques, e.g., CryptoPAn, which replaces real IP addresses with prefix-preserving pseudonyms. However, most such techniques either are vulnerable to adversaries with prior knowledge about some network flows in the traces or require heavy data sanitization or perturbation, which may result in a significant loss of data utility. In this article, we aim to preserve both privacy and utility through shifting the trade-off from between privacy and utility to between privacy and computational cost. The key idea is for the analysts to generate and analyze multiple anonymized views of the original network traces: Those views are designed to be sufficiently indistinguishable even to adversaries armed with prior knowledge, which preserves the privacy, whereas one of the views will yield true analysis results privately retrieved by the data owner, which preserves the utility. We formally analyze the privacy of our solution and experimentally evaluate it using real network traces provided by a major ISP. The experimental results show that our approach can significantly reduce the level of information leakage (e.g., less than 1% of the information leaked by CryptoPAn) with comparable utility.
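The following sketch conveys the multi-view idea in miniature: several pseudonymized views of the same flows are published under different keys, and only the data owner keeps track of which view it will use to interpret the analysts' results. The keyed-hash pseudonymization here is an illustrative assumption; it is not prefix-preserving like CryptoPAn, and this is not the paper's actual construction.

import hashlib
import hmac
import random

def pseudonymize(ip, key):
    # Deterministic per-key pseudonym for an IP address string.
    return hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()[:8]

def make_views(flows, n_views=5, seed=42):
    rng = random.Random(seed)
    keys = [rng.randbytes(16) for _ in range(n_views)]
    true_index = rng.randrange(n_views)  # which view the owner will decode; kept secret
    views = [[(pseudonymize(src, k), pseudonymize(dst, k), size)
              for src, dst, size in flows] for k in keys]
    return views, true_index

flows = [("10.0.0.1", "10.0.0.2", 1500), ("10.0.0.1", "192.168.1.5", 400)]
views, true_index = make_views(flows)
print(len(views), "views published; owner-only true view index:", true_index)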
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Xuan, Genlang Chen, Shiting Wen, and Guanghui Song. "An Improved Sanitization Algorithm in Privacy-Preserving Utility Mining". Mathematical Problems in Engineering 2020 (April 25, 2020): 1–14. http://dx.doi.org/10.1155/2020/7489045.

Full text source
Abstract:
High-utility pattern mining is an effective technique that extracts significant information from varied types of databases. However, the analysis of data containing sensitive private information may cause privacy concerns. To achieve a better trade-off between maximizing utility and preserving privacy, privacy-preserving utility mining (PPUM) has become an important research topic in recent years. The MSICF algorithm is a sanitization algorithm for PPUM. It selects items based on their conflict counts and identifies victim transactions based on the concept of utility. Although MSICF is effective, its heuristic selection strategy can be improved to obtain a lower ratio of side effects. In this paper, we propose an improved sanitization approach named the Improved Maximum Sensitive Itemsets Conflict First Algorithm (IMSICF) to address this issue. It dynamically recalculates the conflict counts of sensitive items during the sanitization process. In addition, IMSICF chooses for modification the transaction that contains the fewest nonsensitive itemsets and yields the maximum utility for a sensitive itemset. Extensive experiments have been conducted on various datasets to evaluate the effectiveness of the proposed algorithm. The results show that IMSICF outperforms other state-of-the-art algorithms in terms of minimizing side effects on nonsensitive information. Moreover, we examine how correlation among itemsets influences the performance of various sanitization algorithms.
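A hedged paraphrase of the victim-selection heuristic described above is sketched below: among the transactions that support a sensitive itemset, prefer the one with the fewest non-sensitive itemsets and, on ties, the highest utility of the sensitive itemset. The data structures and function names are assumptions for illustration, not the published IMSICF pseudocode.

def pick_victim(transactions, sensitive, nonsensitive, utility):
    # transactions: list of (tid, set_of_items); utility(tid, itemset) -> number.
    candidates = [(tid, items) for tid, items in transactions if sensitive <= items]
    def key(entry):
        tid, items = entry
        n_nonsens = sum(1 for s in nonsensitive if s <= items)
        return (n_nonsens, -utility(tid, sensitive))
    return min(candidates, key=key)[0] if candidates else None

transactions = [(1, {"a", "b", "c"}), (2, {"a", "b"}), (3, {"b", "c"})]
utilities = {(1, frozenset({"a", "b"})): 10, (2, frozenset({"a", "b"})): 25}
victim = pick_victim(transactions,
                     sensitive={"a", "b"},
                     nonsensitive=[{"b", "c"}],
                     utility=lambda tid, s: utilities.get((tid, frozenset(s)), 0))
print("victim transaction:", victim)  # transaction 2: it supports no non-sensitive itemset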
APA, Harvard, Vancouver, ISO, and other styles
45

Hirschprung, Ron S., and Shani Alkoby. "A Game Theory Approach for Assisting Humans in Online Information-Sharing". Information 13, no. 4 (April 2, 2022): 183. http://dx.doi.org/10.3390/info13040183.

Full text source
Abstract:
Contemporary information-sharing environments such as Facebook offer a wide range of social and practical benefits. These environments, however, may also lead to privacy and security violations. Moreover, there is usually a trade-off between the benefits gained and the accompanying costs. Due to the uncertain nature of the information-sharing environment and a lack of technological literacy, lay users often fail to balance this trade-off. In this paper, we use game theory concepts to formally model this problem as a "game", in which the players are the users and the pay-off function is a combination of the benefits and costs of the information-sharing process. We introduce a novel theoretical framework called Online Information-Sharing Assistance (OISA) to evaluate the interactive nature of the information-sharing trade-off problem. Using these theoretical foundations, we develop a set of AI agents that attempt to calculate a strategy for balancing this trade-off. Finally, as a proof of concept, we conduct an empirical study in a simulated Facebook environment in which human participants compete against OISA-based AI agents, showing that significantly higher utility can be achieved using OISA.
APA, Harvard, Vancouver, ISO, and other styles
46

Kamalaruban, Parameswaran, Victor Perrier, Hassan Jameel Asghar, and Mohamed Ali Kaafar. "Not All Attributes are Created Equal: dX-Private Mechanisms for Linear Queries". Proceedings on Privacy Enhancing Technologies 2020, no. 1 (January 1, 2020): 103–25. http://dx.doi.org/10.2478/popets-2020-0007.

Full text source
Abstract:
Differential privacy provides strong privacy guarantees while simultaneously enabling useful insights to be drawn from sensitive datasets. However, it provides the same level of protection for all elements (individuals and attributes) in the data. There are practical scenarios where some data attributes need more or less protection than others. In this paper, we consider dX-privacy, an instantiation of the privacy notion introduced in [6], which allows this flexibility by specifying a separate privacy budget for each pair of elements in the data domain. We describe a systematic procedure to tailor any existing differentially private mechanism that takes a query set and a sensitivity vector as input into its dX-private variant, focusing specifically on linear queries. Our proposed meta-procedure has broad applications, as linear queries form the basis of a range of data analysis and machine learning algorithms, and the ability to define a more flexible privacy budget across the data domain results in an improved privacy-utility trade-off in these applications. We propose several dX-private mechanisms and provide theoretical guarantees on the trade-off between utility and privacy. We also experimentally demonstrate the effectiveness of our procedure by evaluating our proposed dX-private Laplace mechanism on both synthetic and real datasets using a set of randomly generated linear queries.
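As a loose illustration of attribute-dependent noise for a linear query, the sketch below scales Laplace noise by the worst per-attribute sensitivity-to-budget ratio. This weighting rule is an assumption made for the sketch and is not the dX-private construction proposed in the paper.

import numpy as np

def private_linear_query(x, q, eps_per_attr, rng=None):
    # x: data vector, q: query coefficients, eps_per_attr: privacy budget per attribute.
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    q = np.asarray(q, dtype=float)
    eps = np.asarray(eps_per_attr, dtype=float)
    # Noise scale driven by the attribute with the worst |q_i| / eps_i ratio,
    # assuming a unit-bounded per-attribute change (an illustrative simplification).
    scale = np.max(np.abs(q) / eps)
    return float(q @ x + rng.laplace(0.0, scale))

x = [3.0, 7.0, 1.0]
q = [1.0, 0.5, 2.0]
eps = [0.5, 2.0, 1.0]  # attribute 0 gets more protection, attribute 1 less
print(private_linear_query(x, q, eps))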
APA, Harvard, Vancouver, ISO, and other styles
47

Ead, Waleed M., et al. "A General Framework Information Loss of Utility-Based Anonymization in Data Publishing". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1450–56. http://dx.doi.org/10.17762/turcomat.v12i5.2102.

Full text source
Abstract:
To build an anonymization, the data anonymizer must settle three issues. First, which data should be preserved? Second, what adversary background knowledge could be used to disclose the anonymized data? Third, how will the anonymized data be used? Different answers to these three questions, that is, different assumptions about adversary background knowledge and information usage (information utility), call for different anonymization techniques, and different anonymization techniques lead to different amounts of information loss. In this paper, we propose a general framework for utility-based anonymization that minimizes the information loss in published data while guaranteeing, as a trade-off, that the required privacy level is achieved.
APA, Harvard, Vancouver, ISO, and other styles
48

Pilgram, Lisa, Thierry Meurers, Bradley Malin, Elke Schaeffner, Kai-Uwe Eckardt, and Fabian Prasser. "The Costs of Anonymization: Case Study Using Clinical Data". Journal of Medical Internet Research 26 (April 24, 2024): e49445. http://dx.doi.org/10.2196/49445.

Full text source
Abstract:
Background: Sharing data from clinical studies can accelerate scientific progress, improve transparency, and increase the potential for innovation and collaboration. However, privacy concerns remain a barrier to data sharing. Certain concerns, such as reidentification risk, can be addressed through the application of anonymization algorithms, whereby data are altered so that it is no longer reasonably related to a person. Yet, such alterations have the potential to influence the data set’s statistical properties, such that the privacy-utility trade-off must be considered. This has been studied in theory, but evidence based on real-world individual-level clinical data is rare, and anonymization has not broadly been adopted in clinical practice.
Objective: The goal of this study is to contribute to a better understanding of anonymization in the real world by comprehensively evaluating the privacy-utility trade-off of differently anonymized data using data and scientific results from the German Chronic Kidney Disease (GCKD) study.
Methods: The GCKD data set extracted for this study consists of 5217 records and 70 variables. A 2-step procedure was followed to determine which variables constituted reidentification risks. To capture a large portion of the risk-utility space, we decided on risk thresholds ranging from 0.02 to 1. The data were then transformed via generalization and suppression, and the anonymization process was varied using a generic and a use case–specific configuration. To assess the utility of the anonymized GCKD data, general-purpose metrics (ie, data granularity and entropy), as well as use case–specific metrics (ie, reproducibility), were applied. Reproducibility was assessed by measuring the overlap of the 95% CI lengths between anonymized and original results.
Results: Reproducibility measured by 95% CI overlap was higher than utility obtained from general-purpose metrics. For example, granularity varied between 68.2% and 87.6%, and entropy varied between 25.5% and 46.2%, whereas the average 95% CI overlap was above 90% for all risk thresholds applied. A nonoverlapping 95% CI was detected in 6 estimates across all analyses, but the overwhelming majority of estimates exhibited an overlap over 50%. The use case–specific configuration outperformed the generic one in terms of actual utility (ie, reproducibility) at the same level of privacy.
Conclusions: Our results illustrate the challenges that anonymization faces when aiming to support multiple likely and possibly competing uses, while use case–specific anonymization can provide greater utility. This aspect should be taken into account when evaluating the associated costs of anonymized data and attempting to maintain sufficiently high levels of privacy for anonymized data.
Trial Registration: German Clinical Trials Register DRKS00003971; https://drks.de/search/en/trial/DRKS00003971
International Registered Report Identifier (IRRID): RR2-10.1093/ndt/gfr456
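To illustrate the reproducibility metric mentioned in the Methods, here is a minimal sketch of a 95% CI overlap computation that takes the length of the intersection of the two intervals relative to the original interval. The exact definition used in the study may differ, and the numbers below are made up.

def ci_overlap(original_ci, anonymized_ci):
    # Fraction of the original 95% CI that is also covered by the anonymized CI.
    lo = max(original_ci[0], anonymized_ci[0])
    hi = min(original_ci[1], anonymized_ci[1])
    overlap_len = max(0.0, hi - lo)
    return overlap_len / (original_ci[1] - original_ci[0])

# Example: a hazard-ratio CI before and after anonymization (illustrative values).
print(ci_overlap((1.10, 1.42), (1.08, 1.38)))  # -> 0.875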
APA, Harvard, Vancouver, ISO, and other styles
49

Song, Yi, Xuesong Lu, Sadegh Nobari, Stéphane Bressan, and Panagiotis Karras. "On the Privacy and Utility of Anonymized Social Networks". International Journal of Adaptive, Resilient and Autonomic Systems 4, no. 2 (April 2013): 1–34. http://dx.doi.org/10.4018/jaras.2013040101.

Full text source
Abstract:
One is either on Facebook or not. Of course, this assessment is controversial and its rationale arguable. It is nevertheless not far, for many, from the reason behind joining social media and publishing and sharing details of their professional and private lives. Not only the personal details that may be revealed, but also the structure of the networks, are sources of invaluable information for any organization wanting to understand and learn about social groups, their dynamics, and their members. These organizations may or may not be benevolent. It is therefore important to devise, design, and evaluate solutions that guarantee some privacy. One approach that reconciles the different stakeholders' requirements is the publication of a modified graph. The perturbation is intended to be sufficient to protect members' privacy while maintaining sufficient utility for analysts wanting to study the social medium as a whole. In this paper, the authors empirically quantify the inevitable trade-off between utility and privacy. They do so for two state-of-the-art graph anonymization algorithms that protect against most structural attacks, the k-automorphism algorithm and the k-degree anonymity algorithm. The authors measure several metrics for a series of real graphs from various social media before and after their anonymization under various settings.
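As a quick reference for one of the two notions evaluated, the sketch below checks the k-degree anonymity property, namely that every degree value in the graph is shared by at least k vertices. It is a property checker written for illustration, not the anonymization algorithm studied in the paper.

from collections import Counter

def is_k_degree_anonymous(edges, k):
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Every distinct degree value must be shared by at least k vertices.
    return all(count >= k for count in Counter(degree.values()).values())

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]  # a 4-cycle: every vertex has degree 2
print(is_k_degree_anonymous(edges, k=2))  # True
print(is_k_degree_anonymous(edges, k=5))  # False: only 4 vertices share degree 2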
APA, Harvard, Vancouver, ISO, and other styles
50

Hemmatazad, Nolan, Robin Gandhi, Qiuming Zhu, and Sanjukta Bhowmick. "The Intelligent Data Brokerage". International Journal of Privacy and Health Information Management 2, no. 1 (January 2014): 22–33. http://dx.doi.org/10.4018/ijphim.2014010102.

Full text source
Abstract:
The anonymization of widely distributed or open data has been a topic of great interest to privacy advocates in recent years. The goal of anonymization in these cases is to make data available to a larger audience, extending the utility of the data to new environments and evolving use cases without compromising the personal information of the individuals whose data are being distributed. The persistent issue with such practices is that any anonymity measure entails a trade-off between privacy and utility, where maximizing one carries a cost to the other. In this paper, the authors propose a framework for the utility-preserving release of anonymized data, based on the idea of intelligent data brokerages. These brokerages act as intermediaries between users requesting access to information resources and an existing database management system (DBMS). Through the use of a formal language for interpreting user information requests, customizable anonymization policies, and optional natural language processing (NLP) capabilities, data brokerages can maximize the utility of data in context when responding to user inquiries.
APA, Harvard, Vancouver, ISO, and other styles