To see other types of publications on this topic, follow the link: Computational Differential Privacy.

Journal articles on the topic "Computational Differential Privacy"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Review the top 50 journal articles for research on the topic "Computational Differential Privacy".

Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, if the relevant details are present in its metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Bhavani Sankar Telaprolu. "Privacy-Preserving Federated Learning in Healthcare - A Secure AI Framework." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (July 16, 2024): 703–7. https://doi.org/10.32628/cseit2410347.

Full text of the source
Abstract:
Federated Learning (FL) has transformed AI applications in healthcare by enabling collaborative model training across multiple institutions while preserving patient data privacy. Despite its advantages, FL remains susceptible to security vulnerabilities, including model inversion attacks, adversarial data poisoning, and communication inefficiencies, necessitating enhanced privacy-preserving mechanisms. In response, this study introduces Privacy-Preserving Federated Learning (PPFL), an advanced FL framework integrating Secure Multi-Party Computation (SMPC), Differential Privacy (DP), and Homomorphic Encryption (HE) to ensure data confidentiality while maintaining computational efficiency. I rigorously evaluate PPFL using Federated Averaging (FedAvg), Secure Aggregation (SecAgg), and Differentially Private Stochastic Gradient Descent (DP-SGD) across real-world healthcare datasets. The results indicate that this approach achieves up to an 85% reduction in model inversion attack success rates, enhances privacy efficiency by 30%, and maintains accuracy retention between 95.2% and 98.3%, significantly improving security without compromising model performance. Furthermore, comparative visual analyses illustrate trade-offs between privacy and accuracy, scalability trends, and computational overhead. This study also explores scalability challenges, computational trade-offs, and real-world deployment considerations in multi-institutional hospital networks, paving the way for secure, scalable, and privacy-preserving AI adoption in healthcare environments.
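As a concrete reference point for the DP-SGD component evaluated above, here is a minimal sketch of a differentially private SGD step (per-example gradient clipping plus calibrated Gaussian noise). The function names and constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update: clip each per-example gradient to an L2 norm bound,
    sum, add Gaussian noise scaled to the clipping norm, average, and step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # enforce ||g||_2 <= clip_norm
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    noisy_mean_grad = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean_grad  # plain gradient-descent update

# toy usage: 8 per-example gradients for a 3-parameter model
grads = [np.random.randn(3) for _ in range(8)]
theta = dp_sgd_step(grads, params=np.zeros(3))
print(theta)
```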
APA, Harvard, Vancouver, ISO, and other styles
2

Jain, Priyank, et al. "Differentially Private Data Release: Bias Weight Perturbation Method - A Novel Approach." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 10 (April 28, 2021): 7165–73. http://dx.doi.org/10.17762/turcomat.v12i10.5607.

Full text of the source
Abstract:
Differential privacy plays an important role in preserving individual data. This research work discusses a novel approach, the Bias Weight Perturbation Method, for releasing differentially private data to the public. The approach aligns with the principles of differential privacy and uses the concepts of statistical distance and statistical sample similarity to quantify the synthetic data generation loss, which is then used to validate the results. The proposed approach makes use of deep generative models to provide privacy and produces a synthetic dataset that can be released to the public for further use.
APA, Harvard, Vancouver, ISO, and other styles
3

Kii, Masanobu, Atsunori Ichikawa, and Takayuki Miura. "Lightweight Two-Party Secure Sampling Protocol for Differential Privacy." Proceedings on Privacy Enhancing Technologies 2025, no. 1 (January 2025): 23–36. http://dx.doi.org/10.56553/popets-2025-0003.

Full text of the source
Abstract:
Secure sampling is a secure multiparty computation protocol that allows a receiver to sample random numbers from a specified non-uniform distribution. It is a fundamental tool for privacy-preserving analysis, since adding controlled noise is the most basic and frequently used method to achieve differential privacy. The well-known approaches to constructing a two-party secure sampling protocol are transforming uniform random values into non-uniform ones by computation (e.g., logarithm or binary circuits) or by table lookup. However, they require a large computational or communication cost to achieve a strong differential privacy guarantee. This work addresses this problem with a novel lightweight two-party secure sampling protocol. Our protocol consists of a random lookup into a small table using 1-out-of-n oblivious transfer and only additions. Furthermore, we provide algorithms for building such a table to achieve differential privacy. Our method reduces the communication cost for (1.0, 2^(-40))-differential privacy from 183 GB (naive construction) to 7.4 MB.
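To illustrate the table-lookup idea in the clear, without the oblivious-transfer machinery the protocol actually relies on, the following sketch quantizes a target noise distribution into a fixed-size table and samples it with one uniform index. All names, the example distribution, and the parameters are assumptions for illustration only.

```python
import math, random

def make_lookup_table(pmf, support, size=1024):
    """Quantize a target noise distribution into a fixed-size table so that a
    single uniform index lookup yields an (approximate) sample. The cited
    protocol performs the lookup obliviously via 1-out-of-n OT; here the table
    is built and read in the clear purely for illustration."""
    weights = [pmf(k) for k in support]
    total = sum(weights)
    table = []
    for k, w in zip(support, weights):
        table.extend([k] * round(size * w / total))   # approximate quantization
    return (table + [0] * size)[:size]                # pad/trim to exactly `size` entries

# example target: two-sided geometric noise with eps = 1, truncated to [-20, 20]
eps = 1.0
table = make_lookup_table(lambda k: math.exp(-eps * abs(k)), range(-20, 21))
noise = table[random.randrange(len(table))]           # "table lookup" with a uniform index
print(noise)
```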
APA, Harvard, Vancouver, ISO, and other styles
4

Meisingseth, Fredrik, and Christian Rechberger. "SoK: Computational and Distributed Differential Privacy for MPC." Proceedings on Privacy Enhancing Technologies 2025, no. 1 (January 2025): 420–39. http://dx.doi.org/10.56553/popets-2025-0023.

Full text of the source
Abstract:
In the last fifteen years, there has been a steady stream of works combining differential privacy with various other cryptographic disciplines, particularly that of multi-party computation, yielding both practical and theoretical unification. As a part of that unification, due to the rich definitional nature of both fields, there have been many proposed definitions of differential privacy adapted to the given use cases and cryptographic tools at hand, resulting in computational and/or distributed versions of differential privacy. In this work, we offer a systemisation of such definitions, with a focus on definitions that are both computational and tailored for a multi-party setting. We order the definitions according to the distribution model and computational perspective and propose a viewpoint on when given definitions should be seen as instantiations of the same generalised notion. The ordering highlights a clear, and sometimes strict, hierarchy between the definitions, where utility (accuracy) can be traded for stronger privacy guarantees or lesser trust assumptions. Further, we survey theoretical results relating the definitions and extend some of them. We also discuss the state of well-known open questions and suggest new open problems to study. Finally, we consider aspects of the practical use of the different notions, hopefully giving guidance also to future applied work.
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Jongwook. "DistOD: A Hybrid Privacy-Preserving and Distributed Framework for Origin–Destination Matrix Computation." Electronics 13, no. 22 (November 19, 2024): 4545. http://dx.doi.org/10.3390/electronics13224545.

Full text of the source
Abstract:
The origin–destination (OD) matrix is a critical tool in understanding human mobility, with diverse applications. However, constructing OD matrices can pose significant privacy challenges, as sensitive information about individual mobility patterns may be exposed. In this paper, we propose DistOD, a hybrid privacy-preserving and distributed framework for the aggregation and computation of OD matrices without relying on a trusted central server. The proposed framework makes several key contributions. First, we propose a distributed method that enables multiple participating parties to collaboratively identify hotspot areas, which are regions frequently traveled between by individuals across these parties. To optimize the data utility and minimize the computational overhead, we introduce a hybrid privacy-preserving mechanism. This mechanism applies distributed differential privacy in hotspot areas to ensure high data utility, while using localized differential privacy in non-hotspot regions to reduce the computational costs. By combining these approaches, our method achieves an effective balance between computational efficiency and the accuracy of the OD matrix. Extensive experiments on real-world datasets show that DistOD consistently provides higher data utility than methods based solely on localized differential privacy, as well as greater efficiency than approaches based solely on distributed differential privacy.
APA, Harvard, Vancouver, ISO, and other styles
6

Fang, Juanru, and Ke Yi. "Privacy Amplification by Sampling under User-level Differential Privacy." Proceedings of the ACM on Management of Data 2, no. 1 (March 12, 2024): 1–26. http://dx.doi.org/10.1145/3639289.

Full text of the source
Abstract:
Random sampling is an effective tool for reducing the computational costs of query processing in large databases. It has also been used frequently for private data analysis, in particular, under differential privacy (DP). An interesting phenomenon that the literature has identified, is that sampling can amplify the privacy guarantee of a mechanism, which in turn leads to reduced noise scales that have to be injected. All existing privacy amplification results only hold in the standard, record-level DP model. Recently, user-level differential privacy (user-DP) has gained a lot of attention as it protects all data records contributed by any particular user, thus offering stronger privacy protection. Sampling-based mechanisms under user-DP have not been explored so far, except naively running the mechanism on a sample without privacy amplification, which results in large DP noises. In fact, sampling is in even more demand under user-DP, since all state-of-the-art user-DP mechanisms have high computational costs due to the complex relationships between users and records. In this paper, we take the first step towards the study of privacy amplification by sampling under user-DP, and give the amplification results for two common user-DP sampling strategies: simple sampling and sample-and-explore. The experimental results show that these sampling-based mechanisms can be a useful tool to obtain some quick and reasonably accurate estimates on large private datasets.
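For context, the classical record-level amplification-by-subsampling bound has a simple closed form; the paper's contribution is the analogous analysis under user-level DP, which is not reproduced here. A minimal sketch of the standard bound, assuming Poisson subsampling at rate q:

```python
import math

def amplified_epsilon(eps, q):
    """Classical record-level privacy amplification by (Poisson) subsampling:
    running an eps-DP mechanism on a q-sampled dataset satisfies
    ln(1 + q*(e^eps - 1))-DP. Shown only as background for the user-level results."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

for q in (0.01, 0.1, 0.5):
    print(q, round(amplified_epsilon(1.0, q), 4))
```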
APA, Harvard, Vancouver, ISO, and other styles
7

Alborch Escobar, Ferran, Sébastien Canard, Fabien Laguillaumie, and Duong Hieu Phan. "Computational Differential Privacy for Encrypted Databases Supporting Linear Queries." Proceedings on Privacy Enhancing Technologies 2024, no. 4 (October 2024): 583–604. http://dx.doi.org/10.56553/popets-2024-0131.

Full text of the source
Abstract:
Differential privacy is a fundamental concept for protecting individual privacy in databases while enabling data analysis. Conceptually, it is assumed that the adversary has no direct access to the database, and therefore, encryption is not necessary. However, with the emergence of cloud computing and the << on-cloud >> storage of vast databases potentially contributed by multiple parties, it is becoming increasingly necessary to consider the possibility of the adversary having (at least partial) access to sensitive databases. A consequence is that, to protect the on-line database, it is now necessary to employ encryption. At PoPETs'19, it was the first time that the notion of differential privacy was considered for encrypted databases, but only for a limited type of query, namely histograms. Subsequently, a new type of query, summation, was considered at CODASPY'22. These works achieve statistical differential privacy, by still assuming that the adversary has no access to the encrypted database. In this paper, we take an essential step further by assuming that the adversary can eventually access the encrypted data, making it impossible to achieve statistical differential privacy because the security of encryption (beyond the one-time pad) relies on computational assumptions. Therefore, the appropriate privacy notion for encrypted databases that we target is computational differential privacy, which was introduced by Beimel et al. at CRYPTO '08. In our work, we focus on the case of functional encryption, which is an extensively studied primitive permitting some authorized computation over encrypted data. Technically, we show that any randomized functional encryption scheme that satisfies simulation-based security and differential privacy of the output can achieve computational differential privacy for multiple queries to one database. Our work also extends the summation query to a much broader range of queries, specifically linear queries, by utilizing inner-product functional encryption. Hence, we provide an instantiation for inner-product functionalities by proving its simulation soundness and present a concrete randomized inner-product functional encryption with computational differential privacy against multiple queries. In terms of efficiency, our protocol is almost as practical as the underlying inner product functional encryption scheme. As evidence, we provide a full benchmark, based on our concrete implementation for databases with up to 1 000 000 entries. Our work can be considered as a step towards achieving privacy-preserving encrypted databases for a wide range of query types and considering the involvement of multiple database owners.
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Hai, Zhenqiang Wu, Yihui Zhou, Changgen Peng, Feng Tian, and Laifeng Lu. "Privacy-Preserving Monotonicity of Differential Privacy Mechanisms." Applied Sciences 8, no. 11 (October 28, 2018): 2081. http://dx.doi.org/10.3390/app8112081.

Full text of the source
Abstract:
Differential privacy mechanisms can offer a trade-off between privacy and utility by using privacy metrics and utility metrics. The trade-off of differential privacy shows that one thing increases and another decreases in terms of privacy metrics and utility metrics. However, there is no unified trade-off measurement of differential privacy mechanisms. To this end, we proposed the definition of privacy-preserving monotonicity of differential privacy, which measured the trade-off between privacy and utility. First, to formulate the trade-off, we presented the definition of privacy-preserving monotonicity based on computational indistinguishability. Second, building on privacy metrics of the expected estimation error and entropy, we theoretically and numerically showed privacy-preserving monotonicity of Laplace mechanism, Gaussian mechanism, exponential mechanism, and randomized response mechanism. In addition, we also theoretically and numerically analyzed the utility monotonicity of these several differential privacy mechanisms based on utility metrics of modulus of characteristic function and variant of normalized entropy. Third, according to the privacy-preserving monotonicity of differential privacy, we presented a method to seek trade-off under a semi-honest model and analyzed a unilateral trade-off under a rational model. Therefore, privacy-preserving monotonicity can be used as a criterion to evaluate the trade-off between privacy and utility in differential privacy mechanisms under the semi-honest model. However, privacy-preserving monotonicity results in a unilateral trade-off of the rational model, which can lead to severe consequences.
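Two of the four mechanisms analyzed above have very short textbook forms. The sketch below shows the standard Laplace mechanism and binary randomized response as a reference point; it is not the authors' code, and the parameter values are arbitrary.

```python
import math, random

def laplace_mechanism(true_value, sensitivity, eps):
    """eps-DP Laplace mechanism: add Laplace(0, sensitivity/eps) noise,
    sampled here as the difference of two exponential variates."""
    b = sensitivity / eps
    return true_value + random.expovariate(1.0 / b) - random.expovariate(1.0 / b)

def randomized_response(bit, eps):
    """eps-DP randomized response for one binary attribute: report the true bit
    with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

print(laplace_mechanism(100.0, sensitivity=1.0, eps=0.5))
print(randomized_response(1, eps=0.5))
```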
APA, Harvard, Vancouver, ISO, and other styles
9

Pavan Kumar Vadrevu. "Scalable Approaches for Enhancing Privacy in Blockchain Networks: A Comprehensive Review of Differential Privacy Techniques." Journal of Information Systems Engineering and Management 10, no. 8s (January 31, 2025): 635–48. https://doi.org/10.52783/jisem.v10i8s.1119.

Full text of the source
Abstract:
The rapid adoption of blockchain technology in a number of industries, such as supply chain management, healthcare, and finance, has intensified concerns surrounding data privacy. As sensitive information is stored and shared on decentralized networks, the inherent cryptographic mechanisms of blockchain provide robust security. However, the transparency of public ledgers can unintentionally expose sensitive data, resulting in potential privacy risks and regulatory challenges. Differential privacy has emerged as a promising approach to protect individual data while preserving the usability of shared datasets. By enabling data analysis without revealing individual data points, differential privacy is well-suited for anonymizing transactions, smart contract interactions, and other blockchain activities. However, integrating differential privacy into blockchain systems presents several challenges, including ensuring scalability, balancing privacy with data utility, and managing computational overhead. This review, "Scalable Approaches for Enhancing Privacy in Blockchain Networks: A Comprehensive Review of Differential Privacy Techniques," examines 50 recent studies published between 2023 and 2024 that investigate differential privacy techniques in blockchain networks. It highlights various scalable approaches and their effectiveness in enhancing privacy. The findings indicate that these methods can significantly improve privacy protection, provide flexibility for both public and private blockchains, and assist in complying with regulatory requirements. This establishes differential privacy as a vital tool for secure blockchain implementation.
APA, Harvard, Vancouver, ISO, and other styles
10

Hong, Yiyang, Xingwen Zhao, Hui Zhu, and Hui Li. "A Blockchain-Integrated Divided-Block Sparse Matrix Transformation Differential Privacy Data Publishing Model." Security and Communication Networks 2021 (December 7, 2021): 1–15. http://dx.doi.org/10.1155/2021/2418539.

Full text of the source
Abstract:
With the rapid development of information technology, people benefit more and more from big data. At the same time, it becomes a great concern that how to obtain optimal outputs from big data publishing and sharing management while protecting privacy. Many researchers seek to realize differential privacy protection in massive high-dimensional datasets using the method of principal component analysis. However, these algorithms are inefficient in processing and do not take into account the different privacy protection needs of each attribute in high-dimensional datasets. To address the above problem, we design a Divided-block Sparse Matrix Transformation Differential Privacy Data Publishing Algorithm (DSMT-DP). In this algorithm, different levels of privacy budget parameters are assigned to different attributes according to the required privacy protection level of each attribute, taking into account the privacy protection needs of different levels of attributes. Meanwhile, the use of the divided-block scheme and the sparse matrix transformation scheme can improve the computational efficiency of the principal component analysis method for handling large amounts of high-dimensional sensitive data, and we demonstrate that the proposed algorithm satisfies differential privacy. Our experimental results show that the mean square error of the proposed algorithm is smaller than the traditional differential privacy algorithm with the same privacy parameters, and the computational efficiency can be improved. Further, we combine this algorithm with blockchain and propose an Efficient Privacy Data Publishing and Sharing Model based on the blockchain. Publishing and sharing private data on this model not only resist strong background knowledge attacks from adversaries outside the system but also prevent stealing and tampering of data by not-completely-honest participants inside the system.
APA, Harvard, Vancouver, ISO, and other styles
11

Meisingseth, Fredrik, Christian Rechberger, and Fabian Schmid. "Practical Two-party Computational Differential Privacy with Active Security." Proceedings on Privacy Enhancing Technologies 2025, no. 1 (January 2025): 341–60. http://dx.doi.org/10.56553/popets-2025-0019.

Full text of the source
Abstract:
In this work we revisit the problem of using general-purpose MPC schemes to emulate the trusted dataholder in differential privacy (DP), to achieve the same accuracy but without the need to trust one single dataholder. In particular, we consider the two-party model where two computational parties (or dataholders), each with their own dataset, wish to compute a canonical DP mechanism on their combined data and to do so with active security. We start by remarking that available definitions of computational DP (CDP) for protocols are somewhat ill-suited for such a use-case, due to them either poorly capturing some strong security guarantees commonly given by general-purpose MPC protocols, or having too strict requirements in the sense that they need significant adjustment in order to be satisfiable by using common DP and MPC techniques. With this in mind, we propose a new version of simulation-based CDP, called SIM*-CDP, and prove it to be stronger than the IND-CDP and SIM-CDP and incomparable to SIM+-CDP. We demonstrate the usability of the SIM*-CDP definition by showing how to satisfy it by the use of an available distributed protocol for sampling truncated geometric noise. Further, we use the protocol to compute two-party inner-products with CDP and active security, and with accuracy equal to that of the central model, being the first to do so. Finally, we provide an open-sourced implementation and benchmark its practical performance. Our implementation generates a truncated geometric sample in between about 0.035 and 3.5 seconds (amortized), depending on network and parameter settings, comparing favourably to existing implementations.
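For orientation, the noise distribution named above (truncated geometric) can be sampled centrally in a few lines, assuming the symmetric two-sided form is what is meant. The cited protocol's point is to produce such samples jointly between two distrusting parties with active security, which this plain sketch does not attempt.

```python
import math, random

def truncated_geometric_noise(eps, bound):
    """Sample two-sided geometric ('discrete Laplace') noise with parameter
    alpha = e^(-eps), truncated to [-bound, bound], via inverse-CDF over the
    renormalised probabilities. Centralized sampling for illustration only."""
    alpha = math.exp(-eps)
    support = range(-bound, bound + 1)
    weights = [alpha ** abs(k) for k in support]
    r = random.random() * sum(weights)
    acc = 0.0
    for k, w in zip(support, weights):
        acc += w
        if r <= acc:
            return k
    return bound

print(truncated_geometric_noise(eps=1.0, bound=20))
```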
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Jongwook, and Sae-Hong Cho. "A Differential Privacy Framework with Adjustable Efficiency–Utility Trade-Offs for Data Collection." Mathematics 13, no. 5 (February 28, 2025): 812. https://doi.org/10.3390/math13050812.

Full text of the source
Abstract:
The widespread use of mobile devices has led to the continuous collection of vast amounts of user-generated data, supporting data-driven decisions across a variety of fields. However, the growing volume of these data raises significant privacy concerns, especially when they include personal information vulnerable to misuse. Differential privacy (DP) has emerged as a prominent solution to these concerns, enabling the collection of user-generated data for data-driven decision-making while protecting user privacy. Despite their strengths, existing DP-based data collection frameworks are often faced with a trade-off between the utility of the data and the computational overhead. To address these challenges, we propose the differentially private fractional coverage model (DPFCM), a DP-based framework that adaptively balances data utility and computational overhead according to the requirements of data-driven decisions. DPFCM introduces two parameters, α and β, which control the fractions of collected data elements and user data, respectively, to ensure both data diversity and representative user coverage. In addition, we propose two probability-based methods for effectively determining the minimum data each user should provide to satisfy the DPFCM requirements. Experimental results on real-world datasets validate the effectiveness of DPFCM, demonstrating its high data utility and computational efficiency, especially for applications requiring real-time decision-making.
APA, Harvard, Vancouver, ISO, and other styles
13

Palkar, Samadhan, Raghav Mehra, and Lingaraj Hadimani. "Hyper Parameters Optimization for Gaussian Mechanism with Coyote-Badger and Kriging Model for EHR." International Research Journal on Advanced Engineering Hub (IRJAEH) 3, no. 02 (February 14, 2025): 152–55. https://doi.org/10.47392/irjaeh.2025.0020.

Full text of the source
Abstract:
Differential privacy (DP) is a cornerstone of privacy-preserving data analysis. Among its mechanisms, the Gaussian mechanism stands out for its ability to provide robust privacy guarantees by adding Gaussian noise to computations. However, the mechanism’s hyper parameters, including the noise scale (σ) and privacy budget (ϵ), require careful optimization to balance privacy and utility. This paper explores the application of Coyote Optimization Algorithm (COA) and Badger Optimization Algorithm (BOA) for hyper- parameter optimization, coupled with the Kriging surrogate model to enhance computational efficiency. Comparative evaluations demonstrate that these methods outperform traditional approaches, achieving better convergence rates and improved privacy-utility trade-offs.
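The quantities being tuned above (σ and ε) are linked, for the classical Gaussian mechanism, by a standard closed-form calibration. The sketch below shows only that textbook baseline, not the Coyote/Badger search or the Kriging surrogate described in the paper; the helper names are assumptions.

```python
import math, random

def gaussian_sigma(eps, delta, sensitivity=1.0):
    """Classical (eps, delta)-DP Gaussian mechanism calibration (valid for eps < 1):
    sigma >= sqrt(2 * ln(1.25/delta)) * sensitivity / eps."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps

def gaussian_mechanism(value, eps, delta, sensitivity=1.0):
    """Release a value with Gaussian noise at the calibrated scale."""
    return value + random.gauss(0.0, gaussian_sigma(eps, delta, sensitivity))

print(gaussian_sigma(0.5, 1e-5))            # noise scale for eps = 0.5, delta = 1e-5
print(gaussian_mechanism(42.0, 0.5, 1e-5))  # one noisy release of the value 42
```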
APA, Harvard, Vancouver, ISO, and other styles
14

Ni, Guangyuan, and Jiaxin Sun. "Differential privacy protection algorithm for large data sources based on normalized information entropy Bayesian network." Journal of Physics: Conference Series 2813, no. 1 (August 1, 2024): 012012. http://dx.doi.org/10.1088/1742-6596/2813/1/012012.

Full text of the source
Abstract:
With the rapid development of the Internet, privacy protection in big data has become an important research hotspot in the field of information security. Differential privacy is the most widely used big data privacy protection technology today. However, existing differential privacy techniques cannot fully and effectively deal with high-dimensional private big data. Although private data publishing methods based on a Bayesian network can effectively convert high-dimensional data sets into low-dimensional data sets, they also have some defects and deficiencies. Therefore, this paper proposes a differential privacy protection algorithm for large data sources (the NIE-PrivBayes algorithm) based on a normalized information entropy Bayesian network and introduces three improvements to the original PrivBayes algorithm. First, the original data set is protected by differential privacy on the client side. Second, the Bayesian network is constructed from the joint probability distribution obtained by the EM algorithm. Finally, the existing Bayesian network construction algorithm is optimized with respect to normalized information entropy. While significantly reducing the computational overhead, the method achieves effective and usable publishing of high-dimensional data sets under the premise of differential privacy protection.
APA, Harvard, Vancouver, ISO, and other styles
15

Jain, Pinkal, Vikas Thada, and Deepak Motwani. "Providing Highest Privacy Preservation Scenario for Achieving Privacy in Confidential Data." International Journal of Experimental Research and Review 39, Spl Volume (May 30, 2024): 190–99. http://dx.doi.org/10.52756/ijerr.2024.v39spl.015.

Full text of the source
Abstract:
Machine learning algorithms have been extensively employed in multiple domains, presenting an opportunity to enable privacy. However, their effectiveness is dependent on enormous data volumes and high computational resources, usually available online. It entails personal and private data like mobile telephone numbers, identification numbers, and medical histories. Developing efficient and economical techniques to protect this private data is critical. In this context, the current research suggests a novel way to accomplish this, combining modified differential privacy with a more complicated machine learning (ML) model. It is possible to assess the privacy-specific characteristics of single or multiple-level models using the suggested method, as demonstrated by this work. It then employs the gradient values from the stochastic gradient descent algorithm to determine the scale of Gaussian noise, thereby preserving sensitive information within the data. The experimental results show that by fine-tuning the parameters of the modified differential privacy model based on the varied degrees of private information in the data, our suggested model outperforms existing methods in terms of accuracy, efficiency and privacy.
APA, Harvard, Vancouver, ISO, and other styles
16

Mudassar, Bakhtawar, Shahzaib Tahir, Fawad Khan, Syed Aziz Shah, Syed Ikram Shah, and Qammer Hussain Abbasi. "Privacy-Preserving Data Analytics in Internet of Medical Things." Future Internet 16, no. 11 (November 5, 2024): 407. http://dx.doi.org/10.3390/fi16110407.

Full text of the source
Abstract:
The healthcare sector has changed dramatically in recent years as it depends more and more on big data to improve patient care, improve operational effectiveness, and advance medical research. Protecting patient privacy in the era of digital health records is a major challenge, as there could be a chance of privacy leakage during the process of collecting patient data. To overcome this issue, we propose a secure, privacy-preserving scheme for healthcare data that ensures maximum privacy of an individual while maintaining data utility and allowing queries based on sensitive attributes to be performed under differential privacy. We implemented differential privacy on two publicly available healthcare datasets, the Breast Cancer Prediction Dataset and the Nursing Home COVID-19 Dataset. Moreover, we examined the impact of varying privacy parameter (ε) values on both the privacy and utility of the data. A significant part of this study involved the selection of ε, which determines the degree of privacy protection. We also conducted a computational time comparison by performing multiple complex queries on these datasets to analyse the computational overhead introduced by differential privacy. The outcomes demonstrate that, despite a slight increase in query processing time, it remains within reasonable bounds, ensuring the practicality of differential privacy for real-time applications.
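A minimal illustration of the ε sweep described above, using a Laplace-noised count query on a made-up count; the study's actual datasets and query workloads are not reproduced, and the numbers are purely illustrative.

```python
import random

def laplace_count(true_count, eps):
    """Release a count with eps-DP Laplace noise (count queries have sensitivity 1)."""
    b = 1.0 / eps
    return true_count + random.expovariate(1.0 / b) - random.expovariate(1.0 / b)

true_count = 2314  # e.g. records matching a sensitive attribute (made-up number)
for eps in (0.1, 0.5, 1.0, 2.0):
    errs = [abs(laplace_count(true_count, eps) - true_count) for _ in range(1000)]
    print(f"eps={eps}: mean absolute error ~ {sum(errs)/len(errs):.2f}")
```

Smaller ε values give stronger privacy but visibly larger average error, which is the trade-off the study quantifies.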
APA, Harvard, Vancouver, ISO, and other styles
17

Almadhoun, Nour, Erman Ayday, and Özgür Ulusoy. "Inference attacks against differentially private query results from genomic datasets including dependent tuples." Bioinformatics 36, Supplement_1 (July 1, 2020): i136–i145. http://dx.doi.org/10.1093/bioinformatics/btaa475.

Full text of the source
Abstract:
Motivation: The rapid decrease in sequencing technology costs leads to a revolution in medical research and clinical care. Today, researchers have access to large genomic datasets to study associations between variants and complex traits. However, availability of such genomic datasets also results in new privacy concerns about personal information of the participants in genomic studies. Differential privacy (DP) is one of the rigorous privacy concepts, which received widespread interest for sharing summary statistics from genomic datasets while protecting the privacy of participants against inference attacks. However, DP has a known drawback as it does not consider the correlation between dataset tuples. Therefore, privacy guarantees of DP-based mechanisms may degrade if the dataset includes dependent tuples, which is a common situation for genomic datasets due to the inherent correlations between genomes of family members. Results: In this article, using two real-life genomic datasets, we show that exploiting the correlation between the dataset participants results in significant information leak from differentially private results of complex queries. We formulate this as an attribute inference attack and show the privacy loss in minor allele frequency (MAF) and chi-square queries. Our results show that using the results of differentially private MAF queries and utilizing the dependency between tuples, an adversary can reveal up to 50% more sensitive information about the genome of a target (compared to original privacy guarantees of standard DP-based mechanisms), while differentially private chi-square queries can reveal up to 40% more sensitive information. Furthermore, we show that the adversary can use the inferred genomic data obtained from the attribute inference attack to infer the membership of a target in another genomic dataset (e.g. associated with a sensitive trait). Using a log-likelihood-ratio test, our results also show that the inference power of the adversary can be significantly high in such an attack even using inferred (and hence partially incorrect) genomes. Availability and implementation: https://github.com/nourmadhoun/Inference-Attacks-Differential-Privacy
APA, Harvard, Vancouver, ISO, and other styles
18

C. Kanmani Pappa. "Zero-Trust Cryptographic Protocols and Differential Privacy Techniques for Scalable Secure Multi-Party Computation in Big Data Analytics." Journal of Electrical Systems 20, no. 5s (April 13, 2024): 2114–23. http://dx.doi.org/10.52783/jes.2550.

Full text of the source
Abstract:
This research explores the integration of zero-trust cryptographic protocols and differential privacy techniques to establish scalable secure multi-party computation in the context of big data analytics. The study delves into the challenges of collaborative data processing and presents a comprehensive framework that addresses the intricate balance between security, scalability, and privacy. The framework focuses on zero-trust cryptographic protocols, advocating for a fundamental shift in trust assumptions within distributed systems. Differential privacy techniques are then seamlessly integrated to preserve individual privacy during collaborative data analytics. This model employs a layered approach and distributed architecture and leverages serverless and edge computing fusion to enhance scalability and responsiveness in dynamic big data environments. This also explores the optimization of computational resources and real-time processing capabilities through serverless and edge computing fusion. A distributed architecture facilitates efficient collaboration across multiple parties, allowing for seamless data integration, preprocessing, analytics, and visualization. Privacy preservation takes centre stage in the big data privacy component of the framework. Context-aware attribute analysis, distributed federated learning nodes, and Attribute-Based Access Control (ABAC) with cryptographic enforcement are introduced to ensure fine-grained access control, contextual understanding of attributes, and collaborative model training without compromising sensitive information. Smart Multi-Party Computation Protocols (SMPCP) further enhance security, enabling joint computation of functions over private inputs while ensuring the integrity and immutability of data transactions. In essence, the achieved results manifest a paradigm shift where the layered approach, distributed architecture, and advanced privacy techniques converge to heighten data security, drive efficient computation, and robustly preserve privacy in the expansive landscape of big data analytics. Fault tolerance and resource utilization exhibit significant advancements, with fault tolerance experiencing a 10% boost and resource utilization optimizing by 12%. These enhancements underscore the robustness and efficiency of the system's design, ensuring resilience and optimized resource allocation.
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Hyeong-Geon, Jinmyeong Shin, and Yoon-Ho Choi. "Human-Unrecognizable Differential Private Noised Image Generation Method." Sensors 24, no. 10 (May 16, 2024): 3166. http://dx.doi.org/10.3390/s24103166.

Full text of the source
Abstract:
Differential privacy has emerged as a practical technique for privacy-preserving deep learning. However, recent studies on privacy attacks have demonstrated vulnerabilities in the existing differential privacy implementations for deep models. While encryption-based methods offer robust security, their computational overheads are often prohibitive. To address these challenges, we propose a novel differential privacy-based image generation method. Our approach employs two distinct noise types: one makes the image unrecognizable to humans, preserving privacy during transmission, while the other maintains features essential for machine learning analysis. This allows the deep learning service to provide accurate results, without compromising data privacy. We demonstrate the feasibility of our method on the CIFAR100 dataset, which offers a realistic complexity for evaluation.
APA, Harvard, Vancouver, ISO, and other styles
20

Abdulbaqi, Azmi Shawkat, Adil M. Salman, and Sagar B. Tambe. "Privacy-Preserving Data Mining Techniques in Big Data: Balancing Security and Usability." SHIFRA 2023 (January 10, 2023): 1–9. http://dx.doi.org/10.70470/shifra/2023/001.

Full text of the source
Abstract:
The exponential growth of big data across industries presents both opportunities and challenges, particularly regarding the protection of sensitive information while maintaining data utility. The problem lies in balancing privacy preservation with the ability to extract meaningful insights from large datasets, which are often vulnerable to re-identification, breaches, and misuse. Current privacy-preserving data mining (PPDM) techniques, such as anonymization, differential privacy, and cryptographic methods, provide important solutions but introduce trade-offs in terms of data utility, computational performance, and compliance with privacy regulations. The objective of this study is to evaluate these PPDM methods, focusing on their effectiveness in safeguarding privacy while minimizing the impact on data accuracy and system performance. Additionally, the study seeks to assess the compliance of these methods with legal frameworks such as GDPR and HIPAA, which impose strict data protection requirements. By conducting an exhaustive analysis of privacy-utility trade-offs, computation times, and communication complexities, this work attempts to outline the respective strengths and weaknesses of each method. The results indicate that anonymization techniques contribute more to data utility by reducing the risk of re-identification, whereas differential privacy guarantees high privacy at the cost of accuracy due to the noise introduced through a privacy budget epsilon. Cryptographic techniques, such as homomorphic encryption and secure multiparty computation, are computationally expensive and hard to scale but offer strong security. In that respect, this work concludes that these techniques protect privacy effectively; however, a number of trade-offs between privacy, data usability, and performance must be made. Future research should focus on enhancing the scalability and efficiency of these methods to meet the needs of real-time big data analytics applications without loss of privacy.
APA, Harvard, Vancouver, ISO, and other styles
21

Gruska, Damas P. "Differential Privacy and Security." Fundamenta Informaticae 143, no. 1-2 (February 2, 2016): 73–87. http://dx.doi.org/10.3233/fi-2016-1304.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Xiao, Xiaokui, Guozhang Wang, and Johannes Gehrke. "Differential Privacy via Wavelet Transforms." IEEE Transactions on Knowledge and Data Engineering 23, no. 8 (August 2011): 1200–1214. http://dx.doi.org/10.1109/tkde.2010.247.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Guowei, Shengjian Zhang, Zhiyi Man, Chenlin Cui, and Wenli Hu. "Location Privacy Protection in Edge Computing: Co-Design of Differential Privacy and Offloading Mode." Electronics 13, no. 13 (July 7, 2024): 2668. http://dx.doi.org/10.3390/electronics13132668.

Full text of the source
Abstract:
Edge computing has emerged as an innovative paradigm that decentralizes computation to the network’s periphery, empowering edge servers to manage user-initiated complex tasks. This strategy alleviates the computational load on end-user devices and increases task processing efficiency. Nonetheless, the task offloading process can introduce a critical vulnerability, as adversaries may infer a user’s location through an analysis of their offloading mode, thereby threatening the user’s location privacy. To counteract this vulnerability, this study introduces differential privacy as a protective mechanism to obscure the user’s offloading mode, thereby safeguarding their location information. This research specifically addresses the issue of location privacy leakage stemming from the correlation between a user’s location and their task offloading ratio. The proposed strategy is based on differential privacy. It aims to increase the efficiency of offloading services and the benefits of task offloading. At the same time, it ensures privacy protection. An innovative optimization technique for task offloading that maintains location privacy is presented. Utilizing this technique, users can make informed offloading decisions, dynamically adjusting the level of obfuscation in response to the state of the wireless channel and their privacy requirements. This study substantiates the feasibility and effectiveness of the proposed mechanism through rigorous theoretical analysis and extensive empirical testing. The numerical results demonstrate that the proposed strategy can achieve a balance between offloading privacy and processing overhead.
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Lin, Xingang Xu, Xuhui Zhao, Baozhu Li, Ruijuan Zheng, and Qingtao Wu. "A randomized block policy gradient algorithm with differential privacy in Content Centric Networks." International Journal of Distributed Sensor Networks 17, no. 12 (December 2021): 155014772110599. http://dx.doi.org/10.1177/15501477211059934.

Full text of the source
Abstract:
Policy gradient methods are effective means to solve the problems of mobile multimedia data transmission in Content Centric Networks. Current policy gradient algorithms impose high computational cost in processing high-dimensional data. Meanwhile, the issue of privacy disclosure has not been taken into account. However, privacy protection is important in data training. Therefore, we propose a randomized block policy gradient algorithm with differential privacy. In order to reduce computational complexity when processing high-dimensional data, we randomly select a block coordinate to update the gradients at each round. To solve the privacy protection problem, we add a differential privacy protection mechanism to the algorithm, and we prove that it preserves the [Formula: see text]-privacy level. We conduct extensive simulations in four environments, which are CartPole, Walker, HalfCheetah, and Hopper. Compared with the methods such as important-sampling momentum-based policy gradient, Hessian-Aided momentum-based policy gradient, REINFORCE, the experimental results of our algorithm show a faster convergence rate than others in the same environment.
APA, Harvard, Vancouver, ISO, and other styles
25

Du, Yuntao, Yujia Hu, Zhikun Zhang, Ziquan Fang, Lu Chen, Baihua Zheng, and Yunjun Gao. "LDPTrace: Locally Differentially Private Trajectory Synthesis." Proceedings of the VLDB Endowment 16, no. 8 (April 2023): 1897–909. http://dx.doi.org/10.14778/3594512.3594520.

Full text of the source
Abstract:
Trajectory data has the potential to greatly benefit a wide-range of real-world applications, such as tracking the spread of the disease through people's movement patterns and providing personalized location-based services based on travel preference. However, privacy concerns and data protection regulations have limited the extent to which this data is shared and utilized. To overcome this challenge, local differential privacy provides a solution by allowing people to share a perturbed version of their data, ensuring privacy as only the data owners have access to the original information. Despite its potential, existing point-based perturbation mechanisms are not suitable for real-world scenarios due to poor utility, dependence on external knowledge, high computational overhead, and vulnerability to attacks. To address these limitations, we introduce LDPTrace, a novel locally differentially private trajectory synthesis framework. Our framework takes into account three crucial patterns inferred from users' trajectories in the local setting, allowing us to synthesize trajectories that closely resemble real ones with minimal computational cost. Additionally, we present a new method for selecting a proper grid granularity without compromising privacy. Our extensive experiments using real-world as well as synthetic data, various utility metrics and attacks, demonstrate the efficacy and efficiency of LDPTrace.
APA, Harvard, Vancouver, ISO, and other styles
26

Lu, Kangjie. "Noise Addition Strategies for Differential Privacy in Stochastic Gradient Descent." Transactions on Computer Science and Intelligent Systems Research 5 (August 12, 2024): 960–67. http://dx.doi.org/10.62051/f2kew975.

Full text of the source
Abstract:
Differential privacy technology is increasingly widely used in the field of machine learning, especially in the stochastic gradient descent (SGD) algorithm. Protecting data privacy by adding noise has become a hot topic of research. This paper reviews noise addition strategies for differentially private SGD from multiple dimensions, including adjustment based on noise distribution, adjustment based on gradient norm, adjustment based on privacy budget, and methods based on model architecture. Each strategy performs differently in terms of privacy protection level, model performance loss, and computational complexity. This article compares and analyzes these differences in detail, aiming to provide a valuable reference for researchers and practitioners. It also discusses how to combine federated learning and differential privacy technology to protect data privacy more efficiently in a secure multi-party computation (MPC) environment. The review shows the wide application of differential privacy in machine learning and deep learning and its importance in the field of privacy protection, and it also outlines directions and challenges for future research.
APA, Harvard, Vancouver, ISO, and other styles
27

Adeyinka Ogunbajo, Itunu Taiwo, Adefemi Quddus Abidola, Oluwadamilola Fisayo Adediran, and Israel Agbo-Adediran. "Privacy preserving AI models for decentralized data management in federated information systems." GSC Advanced Research and Reviews 22, no. 2 (February 28, 2025): 104–12. https://doi.org/10.30574/gscarr.2025.22.2.0043.

Full text of the source
Abstract:
Federated information systems represent a transformative approach to decentralized data management and privacy-preserving artificial intelligence. This review critically examines the architectural innovations, technological challenges, and emerging paradigms in federated learning and distributed computing environments. By enabling collaborative model training across disparate data sources without direct data sharing, these systems address critical privacy concerns while maintaining computational efficiency. The research synthesizes current implementation strategies across domains such as healthcare, financial services, and edge computing, highlighting the potential of decentralized machine learning architectures. Comparative assessments reveal significant advancements in maintaining data confidentiality while extracting meaningful insights. Persistent challenges include communication overhead, model aggregation complexities, and heterogeneous data distribution problems. The investigation explores advanced cryptographic techniques, secure multi-party computation mechanisms, and differential privacy approaches that underpin federated AI models. Emerging research directions emphasize developing robust standardization protocols, enhancing cryptographic safeguards, and creating adaptive federated learning algorithms capable of dynamically responding to evolving privacy and computational requirements.
APA, Harvard, Vancouver, ISO, and other styles
28

Shin, Hyejin, Sungwook Kim, Junbum Shin, and Xiaokui Xiao. "Privacy Enhanced Matrix Factorization for Recommendation with Local Differential Privacy." IEEE Transactions on Knowledge and Data Engineering 30, no. 9 (September 1, 2018): 1770–82. http://dx.doi.org/10.1109/tkde.2018.2805356.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Fang. "Generalized Gaussian Mechanism for Differential Privacy." IEEE Transactions on Knowledge and Data Engineering 31, no. 4 (April 1, 2019): 747–56. http://dx.doi.org/10.1109/tkde.2018.2845388.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Laeuchli, Jesse, Yunior Ramírez-Cruz, and Rolando Trujillo-Rasua. "Analysis of centrality measures under differential privacy models." Applied Mathematics and Computation 412 (January 2022): 126546. http://dx.doi.org/10.1016/j.amc.2021.126546.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
31

Han, Yuchen. "Research on machine learning technology with privacy protection strategy in recommendation field." Applied and Computational Engineering 43, no. 1 (February 26, 2024): 294–99. http://dx.doi.org/10.54254/2755-2721/43/20230848.

Full text of the source
Abstract:
Technologies such as machine learning can achieve accurate personalized recommendations. However, due to the collection and utilization of a large amount of user information in this process, people are widely worried about data security and privacy issues. This paper first introduces two key issues of privacy protection in the field of machine learning, namely data privacy and model privacy. On this basis, this paper introduces and analyzes homomorphic encryption, differential privacy and federated learning, and compares their advantages and disadvantages. Among them, homomorphic encryption technology has a large computational cost, differential privacy technology has a negative impact on system accuracy, and federated learning technology has a high training and communication cost. Therefore, it will be the future research direction to study more efficient and accurate recommendation models.
APA, Harvard, Vancouver, ISO, and other styles
32

Munn, Luke, Tsvetelina Hristova, and Liam Magee. "Clouded data: Privacy and the promise of encryption." Big Data & Society 6, no. 1 (January 2019): 205395171984878. http://dx.doi.org/10.1177/2053951719848781.

Full text of the source
Abstract:
Personal data is highly vulnerable to security exploits, spurring moves to lock it down through encryption, to cryptographically ‘cloud’ it. But personal data is also highly valuable to corporations and states, triggering moves to unlock its insights by relocating it in the cloud. We characterise this twinned condition as ‘clouded data’. Clouded data constructs a political and technological notion of privacy that operates through the intersection of corporate power, computational resources and the ability to obfuscate, gain insights from and valorise a dependency between public and private. First, we survey prominent clouded data approaches (blockchain, multiparty computation, differential privacy, and homomorphic encryption), suggesting their particular affordances produce distinctive versions of privacy. Next, we perform two notional code-based experiments using synthetic datasets. In the field of health, we submit a patient’s blood pressure to a notional cloud-based diagnostics service; in education, we construct a student survey that enables aggregate reporting without individual identification. We argue that these technical affordances legitimate new political claims to capture and commodify personal data. The final section broadens the discussion to consider the political force of clouded data and its reconstitution of traditional notions such as the public and the private.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhao, Jianzhe, Mengbo Yang, Ronglin Zhang, Wuganjing Song, Jiali Zheng, Jingran Feng, and Stan Matwin. "Privacy-Enhanced Federated Learning: A Restrictively Self-Sampled and Data-Perturbed Local Differential Privacy Method." Electronics 11, no. 23 (December 2, 2022): 4007. http://dx.doi.org/10.3390/electronics11234007.

Full text of the source
Abstract:
As a popular distributed learning framework, federated learning (FL) enables clients to conduct cooperative training without sharing data, thus having higher security and enjoying benefits in processing large-scale, high-dimensional data. However, by sharing parameters in the federated learning process, the attacker can still obtain private information from the sensitive data of participants by reverse parsing. Local differential privacy (LDP) has recently worked well in preserving privacy for federated learning. However, it faces the inherent problem of balancing privacy, model performance, and algorithm efficiency. In this paper, we propose a novel privacy-enhanced federated learning framework (Optimal LDP-FL) which achieves local differential privacy protection by the client self-sampling and data perturbation mechanisms. We theoretically analyze the relationship between the model accuracy and client self-sampling probability. Restrictive client self-sampling technology is proposed which eliminates the randomness of the self-sampling probability settings in existing studies and improves the utilization of the federated system. A novel, efficiency-optimized LDP data perturbation mechanism (Adaptive-Harmony) is also proposed, which allows an adaptive parameter range to reduce variance and improve model accuracy. Comprehensive experiments on the MNIST and Fashion MNIST datasets show that the proposed method can significantly reduce computational and communication costs with the same level of privacy and model utility.
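The data perturbation building block referenced above (Adaptive-Harmony) appears to extend the standard Harmony-style mechanism for numeric values under local differential privacy. The sketch below shows only the basic, non-adaptive one-bit perturbation as commonly described in the LDP literature; it is an assumption-labelled illustration, not the paper's adaptive variant.

```python
import math, random

def harmony_perturb(x, eps):
    """Basic Harmony-style eps-LDP perturbation of a value x in [-1, 1]:
    report +C or -C with C = (e^eps + 1)/(e^eps - 1), with probabilities chosen
    so the report is an unbiased estimate of x."""
    e = math.exp(eps)
    c = (e + 1.0) / (e - 1.0)
    p = (x * (e - 1.0) + e + 1.0) / (2.0 * (e + 1.0))  # probability of reporting +C
    return c if random.random() < p else -c

# mean estimation from many perturbed reports remains approximately unbiased
reports = [harmony_perturb(0.3, eps=1.0) for _ in range(20000)]
print(sum(reports) / len(reports))   # close to 0.3
```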
APA, Harvard, Vancouver, ISO, and other styles
34

Xu, Shasha, and Xiufang Yin. "Recommendation System for Privacy-Preserving Education Technologies." Computational Intelligence and Neuroscience 2022 (April 16, 2022): 1–8. http://dx.doi.org/10.1155/2022/3502992.

Full text of the source
Abstract:
Given the priority placed on personalized and fully customized learning systems, innovative computational intelligence systems for personalized educational technologies are a timely research area. Because machine learning models reflect the data over which they were trained, data that carry privacy and other sensitivities associated with learners' educational abilities can be vulnerable. This work proposes a recommendation system for privacy-preserving education technologies that uses machine learning and differential privacy to overcome this issue. Specifically, each student is automatically classified on their skills into a category using a directed acyclic graph method. In the next step, the model uses differential privacy, a technology that makes it possible to obtain useful information from databases containing individuals' personal information without divulging sensitive identifying details about any individual. In addition, an intelligent recommendation mechanism based on collaborative filtering offers personalized real-time recommendations while preserving users' privacy.
APA, Harvard, Vancouver, ISO, and other styles
35

Vyas, Bhuman. "PRIVACY –PRESERVING DATA VAULTS: SAFE GUARDING PILL INFORMATION IN THE DIGITAL AGE." International Journal of Innovative Research in Advanced Engineering 06, no. 10 (October 30, 2019): 616–23. http://dx.doi.org/10.26562/ijirae.2019.v0610.04.

Full text of the source
Abstract:
Protecting sensitive medical data, including prescription and pill data, during its handling and storage is critical in the digital era. Data vaults that protect privacy provide a strong way to protect this information, guaranteeing that patient information is kept private yet available for authorised uses. Focussing on the safe preservation of pharmaceutical data, this project investigates the creation of sophisticated algorithms for privacy-preserving data vaults. We start by contrasting the suggested innovative technique, which combines elements of Zero-Knowledge Proofs with Enhanced Homomorphic Encryption, with other known cryptographic and data masking algorithms, such as Differential Privacy, Secure Multi-Party Computation, and Homomorphic Encryption. Data integrity, computational efficiency, and resistance to different attack vectors are some of the characteristics used in the comparison. As the results show, the suggested method offers improved performance against confidentiality compromises, especially in real-time data retrieval scenarios, while existing techniques offer varied degrees of efficiency and security. But this comes with more implementation complexity and processing overhead. Improved security characteristics, like less data leakage and strong user authentication systems, are benefits of the suggested approach. Large-scale applications may experience latency problems and require more powerful hardware, which are drawbacks. The trade-offs between various data privacy strategies are highlighted in this study, which also underscores the necessity for ongoing innovation in privacy-preserving technology, making a contribution to the field.
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Yang, Zhijie, Xiaolong Yan, Guoguang Chen, Mingli Niu, and Xiaoli Tian. "Towards Federated Robust Approximation of Nonlinear Systems with Differential Privacy Guarantee." Electronics 14, no. 5 (February 26, 2025): 937. https://doi.org/10.3390/electronics14050937.

Повний текст джерела
Анотація:
Nonlinear systems, characterized by their complex and often unpredictable dynamics, are essential in various scientific and engineering applications. However, accurately modeling these systems remains challenging due to their nonlinearity, high-dimensional interactions, and the privacy concerns inherent in data-sensitive domains. Existing federated learning approaches struggle to model such complex behaviors, particularly due to their inability to capture high-dimensional interactions and their failure to maintain privacy while ensuring robust model performance. This paper presents a novel federated learning framework for the robust approximation of nonlinear systems, addressing these challenges by integrating differential privacy to protect sensitive data without compromising model utility. The proposed framework enables decentralized training across multiple clients, ensuring privacy through differential privacy mechanisms that mitigate risks of information leakage via gradient updates. Advanced neural network architectures are employed to effectively approximate nonlinear dynamics, with stability and scalability ensured by rigorous theoretical analysis. We compare our approach with both centralized and decentralized federated models, highlighting the advantages of our framework, particularly in terms of privacy preservation. Comprehensive experiments on benchmark datasets, such as the Lorenz system and real-world climate data, demonstrate that our federated model achieves comparable accuracy to centralized approaches while offering strong privacy guarantees. The system efficiently handles data heterogeneity and dynamic nonlinear behavior, scaling well with both the number of clients and model complexity. These findings demonstrate a pathway for the secure and scalable deployment of machine learning models in nonlinear system modeling, effectively balancing accuracy, privacy, and computational performance.
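For orientation only, a minimal Python sketch of the kind of clip-and-noise client update used in DP-SGD-style federated training; the clipping norm and noise multiplier are assumed values, and the paper's specific architectures and stability analysis are not reproduced:

    import numpy as np

    def dp_client_update(grad, clip_norm=1.0, noise_multiplier=1.0):
        # Clip the client gradient to bound its sensitivity, then add Gaussian
        # noise (the Gaussian mechanism), as in DP-SGD-style federated updates.
        scale = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
        return grad * scale + np.random.normal(0.0, noise_multiplier * clip_norm, grad.shape)

    # The server would average such noisy updates across clients each round.
    client_grad = np.random.randn(100)
    protected = dp_client_update(client_grad)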
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Kim, Jong-Wook. "Differential Privacy-Based Data Collection for Improving Data Utility and Reducing Computational Overhead." Transactions of The Korean Institute of Electrical Engineers 74, no. 1 (January 31, 2025): 102–8. https://doi.org/10.5370/kiee.2025.74.1.102.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Parvandeh, Saeid, Hung-Wen Yeh, Martin P. Paulus, and Brett A. McKinney. "Consensus features nested cross-validation." Bioinformatics 36, no. 10 (January 27, 2020): 3093–98. http://dx.doi.org/10.1093/bioinformatics/btaa046.

Повний текст джерела
Анотація:
Abstract Summary Feature selection can improve the accuracy of machine-learning models, but appropriate steps must be taken to avoid overfitting. Nested cross-validation (nCV) is a common approach that chooses the classification model and features to represent a given outer fold based on features that give the maximum inner-fold accuracy. Differential privacy is a related technique to avoid overfitting that uses a privacy-preserving noise mechanism to identify features that are stable between training and holdout sets. We develop consensus nested cross-validation (cnCV) that combines the idea of feature stability from differential privacy with nCV. Feature selection is applied in each inner fold and the consensus of top features across folds is used as a measure of feature stability or reliability instead of classification accuracy, which is used in standard nCV. We use simulated data with main effects, correlation and interactions to compare the classification accuracy and feature selection performance of the new cnCV with standard nCV, Elastic Net optimized by cross-validation, differential privacy and private evaporative cooling (pEC). We also compare these methods using real RNA-seq data from a study of major depressive disorder. The cnCV method has similar training and validation accuracy to nCV, but cnCV has much shorter run times because it does not construct classifiers in the inner folds. The cnCV method chooses a more parsimonious set of features with fewer false positives than nCV. The cnCV method has similar accuracy to pEC and cnCV selects stable features between folds without the need to specify a privacy threshold. We show that cnCV is an effective and efficient approach for combining feature selection with classification. Availability and implementation Code available at https://github.com/insilico/cncv. Supplementary information Supplementary data are available at Bioinformatics online.
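As a rough sketch of the consensus idea (not the authors' implementation, which is available at the linked repository), the following Python fragment keeps the features that appear in the top k of every inner fold, using a univariate F-score as the assumed inner-fold feature ranking:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.feature_selection import f_classif

    def consensus_features(X, y, k=20, n_inner=5):
        # In each inner fold, rank features by a univariate score and keep the
        # top k; features selected in every fold form the consensus set, which
        # replaces inner-fold classifier accuracy as the selection criterion.
        folds = KFold(n_splits=n_inner, shuffle=True, random_state=0).split(X)
        chosen = []
        for train_idx, _ in folds:
            scores, _ = f_classif(X[train_idx], y[train_idx])
            chosen.append(set(np.argsort(scores)[-k:]))
        return set.intersection(*chosen)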
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Wang, Ji, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, and Philip S. Yu. "Private Model Compression via Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1190–97. http://dx.doi.org/10.1609/aaai.v33i01.33011190.

Повний текст джерела
Анотація:
The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn is associated with an increase in computational expense far surpassing mobile devices’ capacity. What is worse, app service providers need to collect and utilize a large volume of users’ data, which contain sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risk. To benefit from the on-device deep learning without the capacity and privacy concerns, we design a private model compression framework RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations as well as the implementation on an Android mobile device show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10⁻⁶)-differential privacy is guaranteed, the compact model trained by RONA can obtain 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.
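As a schematic illustration only (RONA's actual bounding, perturbation, and query selection are more involved), a minimal Python sketch of clipping teacher outputs and adding Gaussian noise before they are used to train the compact student; the bound and noise scale are assumed values:

    import numpy as np

    def private_distillation_targets(teacher_logits, bound=5.0, sigma=1.0):
        # Bound the distilled knowledge to limit its sensitivity, then perturb
        # it with Gaussian noise before releasing it to the student model.
        clipped = np.clip(teacher_logits, -bound, bound)
        return clipped + np.random.normal(0.0, sigma * bound, clipped.shape)

    student_targets = private_distillation_targets(np.random.randn(32, 10) * 4.0)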
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Özdel, Süleyman, Efe Bozkir, and Enkelejda Kasneci. "Privacy-preserving Scanpath Comparison for Pervasive Eye Tracking." Proceedings of the ACM on Human-Computer Interaction 8, ETRA (May 20, 2024): 1–28. http://dx.doi.org/10.1145/3655605.

Повний текст джерела
Анотація:
As eye tracking becomes pervasive with screen-based devices and head-mounted displays, privacy concerns regarding eye-tracking data have escalated. While state-of-the-art approaches for privacy-preserving eye tracking mostly involve differential privacy and empirical data manipulations, previous research has not focused on methods for scanpaths. We introduce a novel privacy-preserving scanpath comparison protocol designed for the widely used Needleman-Wunsch algorithm, a generalized version of the edit distance algorithm. Particularly, by incorporating the Paillier homomorphic encryption scheme, our protocol ensures that no private information is revealed. Furthermore, we introduce a random processing strategy and a multi-layered masking method to obfuscate the values while preserving the original order of encrypted editing operation costs. This minimizes communication overhead, requiring a single communication round for each iteration of the Needleman-Wunsch process. We demonstrate the efficiency and applicability of our protocol on three publicly available datasets with comprehensive computational performance analyses and make our source code publicly accessible.
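To illustrate the additive homomorphism the protocol relies on (this is not the authors' implementation), a minimal sketch using the python-paillier (`phe`) library to add encrypted editing-operation costs without decrypting them:

    # pip install phe
    from phe import paillier

    pub, priv = paillier.generate_paillier_keypair(n_length=2048)

    # One party encrypts its editing-operation costs; the other party can add
    # them under encryption and never sees the plaintext values.
    costs = [3, 1, 4]
    encrypted = [pub.encrypt(c) for c in costs]
    encrypted_total = encrypted[0] + encrypted[1] + encrypted[2]
    assert priv.decrypt(encrypted_total) == sum(costs)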
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Min, Minghui, Zeqian Liu, Jincheng Duan, Peng Zhang, and Shiyin Li. "Safe-Learning-Based Location-Privacy-Preserved Task Offloading in Mobile Edge Computing." Electronics 13, no. 1 (December 25, 2023): 89. http://dx.doi.org/10.3390/electronics13010089.

Повний текст джерела
Анотація:
Mobile edge computing (MEC) integration with 5G/6G technologies is an essential direction in mobile communications and computing. However, it is crucial to be aware of the potential privacy implications of task offloading in MEC scenarios, specifically the leakage of user location information. To address this issue, this paper proposes a location-privacy-preserved task offloading (LPTO) scheme based on safe reinforcement learning to balance computational cost and privacy protection. This scheme uses the differential privacy technique to perturb the user’s actual location to achieve location privacy protection. We model the privacy-preserving location perturbation problem as a Markov decision process (MDP), and we develop a safe deep Q-network (DQN)-based LPTO (SDLPTO) scheme to select the offloading policy and location perturbation policy dynamically. This approach effectively mitigates the selection of high-risk state–action pairs by conducting a risk assessment for each state–action pair. Simulation results show that the proposed SDLPTO scheme has a lower computational cost and location privacy leakage than the benchmarks. These results highlight the significance of our approach in protecting user location privacy while achieving improved performance in MEC environments.
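As a simplified illustration of the perturbation step only (the paper's safe-DQN policy for deciding when and how much to perturb is not shown), a minimal Python sketch that adds coordinate-wise Laplace noise to a reported location; the sensitivity value is an assumption:

    import numpy as np

    def perturb_location(x, y, eps, sensitivity=1.0):
        # Coordinate-wise Laplace perturbation of a reported location; the
        # noise scale is sensitivity / eps, the standard Laplace calibration.
        scale = sensitivity / eps
        return x + np.random.laplace(scale=scale), y + np.random.laplace(scale=scale)

    reported = perturb_location(40.7128, -74.0060, eps=0.5, sensitivity=0.01)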
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Chen, Xiang, Dun Zhang, Zhan-Qi Cui, Qing Gu, and Xiao-Lin Ju. "DP-Share: Privacy-Preserving Software Defect Prediction Model Sharing Through Differential Privacy." Journal of Computer Science and Technology 34, no. 5 (September 2019): 1020–38. http://dx.doi.org/10.1007/s11390-019-1958-0.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Dong, Yipeng, Wei Luo, Xiangyang Wang, Lei Zhang, Lin Xu, Zehao Zhou, and Lulu Wang. "Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation." Sensors 25, no. 1 (January 3, 2025): 233. https://doi.org/10.3390/s25010233.

Повний текст джерела
Анотація:
With the advancement of federated learning (FL), there is a growing demand for schemes that support multi-task learning on multi-modal data while ensuring robust privacy protection, especially in applications like intelligent connected vehicles. Traditional FL schemes often struggle with the complexities introduced by multi-modal data and diverse task requirements, such as increased communication overhead and computational burdens. In this paper, we propose a novel privacy-preserving scheme for multi-task federated split learning across multi-modal data (MTFSLaMM). Our approach leverages the principles of split learning to partition models between clients and servers, employing a modular design that reduces computational demands on resource-constrained clients. To ensure data privacy, we integrate differential privacy to protect intermediate data and employ homomorphic encryption to safeguard client models. Additionally, our scheme employs an optimized attention mechanism guided by mutual information to achieve efficient multi-modal data fusion, maximizing information integration while minimizing computational overhead and preventing overfitting. Experimental results demonstrate the effectiveness of the proposed scheme in addressing the challenges of multi-modal data and multi-task learning while offering robust privacy protection, with MTFSLaMM achieving a 15.3% improvement in BLEU-4 and an 11.8% improvement in CIDEr scores compared with the baseline.
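As an assumed, simplified sketch of one ingredient (protecting the intermediate activations a client sends to the server), not the full MTFSLaMM scheme with homomorphic encryption and attention-based fusion:

    import numpy as np

    def protect_smashed_data(h, clip=1.0, sigma=0.5):
        # h: (batch, features) intermediate activations ("smashed data") that a
        # client sends to the server in split learning. Clip each sample's norm
        # to bound sensitivity, then add Gaussian noise.
        norms = np.linalg.norm(h, axis=1, keepdims=True) + 1e-12
        h = h * np.minimum(1.0, clip / norms)
        return h + np.random.normal(0.0, sigma * clip, h.shape)

    smashed = protect_smashed_data(np.random.randn(8, 256))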
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Elhattab, Fatima, Sara Bouchenak, and Cédric Boscher. "PASTEL." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (December 19, 2023): 1–29. http://dx.doi.org/10.1145/3633808.

Повний текст джерела
Анотація:
Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model, while keeping their local training data private and sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, in particular to membership inference attacks that allow adversaries to determine whether a given data sample belongs to participants' training data, thus raising a significant threat in sensitive ubiquitous computing systems. Indeed, membership inference attacks are based on a binary classifier that is able to differentiate between member data samples used to train a model and non-member data samples not used for training. In this context, several defense mechanisms, including differential privacy, have been proposed to counter such privacy attacks. However, the main drawback of these methods is that they may reduce model accuracy while incurring non-negligible computational costs. In this paper, we precisely address this problem with PASTEL, an FL privacy-preserving mechanism that is based on a novel multi-objective learning function. On the one hand, PASTEL decreases the generalization gap to reduce the difference between member data and non-member data, and on the other hand, PASTEL reduces model loss and leverages adaptive gradient descent optimization for preserving high model accuracy. Our experimental evaluations conducted on eight widely used datasets and five model architectures show that PASTEL significantly reduces membership inference attack success rates by up to 28%, reaching optimal privacy protection in most cases, with low to no perceptible impact on model accuracy.
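As a schematic rendering of the multi-objective idea only (not PASTEL's actual learning function), a short Python sketch in which a penalty on the member/non-member confidence gap is added to the task loss; the weight lam and the confidence inputs are hypothetical:

    import numpy as np

    def privacy_aware_objective(task_loss, member_conf, holdout_conf, lam=0.5):
        # Combine the usual task loss with a penalty on the confidence gap
        # between training (member) and held-out (non-member) samples, which is
        # the signal that membership-inference classifiers exploit.
        gap = abs(np.mean(member_conf) - np.mean(holdout_conf))
        return task_loss + lam * gap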
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Öksüz, Abdullah Çağlar, Erman Ayday, and Uğur Güdükbay. "Privacy-preserving and robust watermarking on sequential genome data using belief propagation and local differential privacy." Bioinformatics 37, no. 17 (February 25, 2021): 2668–74. http://dx.doi.org/10.1093/bioinformatics/btab128.

Повний текст джерела
Анотація:
Abstract Motivation Genome data has been a subject of study for both biology and computer science since the start of the Human Genome Project in 1990. Since then, genome sequencing for medical and social purposes has become increasingly available and affordable. Genome data can be shared on public websites or with service providers (SPs). However, this sharing compromises the privacy of donors even under partial sharing conditions. We mainly focus on the liability issues that ensue from the unauthorized sharing of these genome data. One of the techniques to address liability issues in data sharing is the watermarking mechanism. Results To detect malicious correspondents and SPs whose aim is to share genome data without individuals' consent and remain undetected, we propose a novel watermarking method on sequential genome data using the belief propagation algorithm. Our method must satisfy two criteria: (i) embedding robust watermarks so that malicious adversaries cannot tamper with the watermark through modification and are identified with high probability, and (ii) achieving ϵ-local differential privacy in all data sharings with SPs. To preserve system robustness against single-SP and collusion attacks, we consider publicly available genomic information such as Minor Allele Frequency, Linkage Disequilibrium, Phenotype Information and Familial Information. Our proposed scheme achieves a 100% detection rate against single-SP attacks with only 3% watermark length. For the worst-case scenario of collusion attacks (50% of SPs are malicious), 80% detection is achieved with 5% watermark length and 90% detection with 10% watermark length. In all cases, the impact of ϵ on precision remained negligible and high privacy is ensured. Availability and implementation https://github.com/acoksuz/PPRW_SGD_BPLDP Supplementary information Supplementary data are available at Bioinformatics online.
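As an illustration of the ϵ-LDP sharing step only (the belief-propagation watermark embedding is not shown), a minimal Python sketch of generalised randomized response over SNP states:

    import numpy as np

    def randomized_response(value, eps, domain=(0, 1, 2)):
        # Generalised randomized response over SNP states {0, 1, 2}: report the
        # true state with probability e^eps / (e^eps + k - 1), otherwise report
        # one of the other states uniformly at random, which satisfies eps-LDP.
        k = len(domain)
        if np.random.rand() < np.exp(eps) / (np.exp(eps) + k - 1):
            return value
        others = [v for v in domain if v != value]
        return others[np.random.randint(len(others))]

    shared_snps = [randomized_response(s, eps=3.0) for s in [0, 1, 2, 1, 0]]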
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Luo, Yuan, and Nicholas R. Jennings. "A Differential Privacy Mechanism that Accounts for Network Effects for Crowdsourcing Systems." Journal of Artificial Intelligence Research 69 (December 3, 2020): 1127–64. http://dx.doi.org/10.1613/jair.1.12158.

Повний текст джерела
Анотація:
In crowdsourcing systems, it is important for the crowdsourcing campaign initiator to incentivize users to share their data to produce results of the desired computational accuracy. This problem becomes especially challenging when users are concerned about the privacy of their data. To overcome this challenge, existing work often aims to provide users with differential privacy guarantees to incentivize privacy-sensitive users to share their data. However, this work neglects the network effect that a user enjoys greater privacy protection when he aligns his participation behaviour with that of other users. To explore this network effect, we formulate the interaction among users regarding their participation decisions as a population game, because a user’s welfare from the interaction depends not only on his own participation decision but also on the distribution of others’ decisions. We show that the Nash equilibrium of this game consists of a threshold strategy, where all users whose privacy sensitivity is below a certain threshold will participate and the remaining users will not. We characterize the existence and uniqueness of this equilibrium, which depends on the privacy guarantee, the reward provided by the initiator and the population size. Based on this equilibrium analysis, we design the PINE (Privacy Incentivization with Network Effects) mechanism and prove that it maximizes the initiator’s payoff while providing participating users with a guaranteed degree of privacy protection. Numerical simulations, on both real and synthetic data, show that (i) PINE improves the initiator’s expected payoff by up to 75%, compared to state-of-the-art mechanisms that do not consider this effect; (ii) the performance gain from exploiting the network effect is particularly good when the majority of users are flexible over their privacy attitudes and when there are a large number of low-quality task performers.
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Rahman, Ashequr, Asif Iqbal, Emon Ahmed, Tanvirahmedshuvo ., and Md Risalat Hossain Ontor. "PRIVACY-PRESERVING MACHINE LEARNING: TECHNIQUES, CHALLENGES, AND FUTURE DIRECTIONS IN SAFEGUARDING PERSONAL DATA MANAGEMENT." Frontline Marketing, Management and Economics Journal 04, no. 12 (December 1, 2024): 84–106. https://doi.org/10.37547/marketing-fmmej-04-12-07.

Повний текст джерела
Анотація:
This paper explores the intersection of machine learning and personal data privacy, examining the challenges and solutions for preserving privacy in data-driven systems. As machine learning algorithms increasingly rely on large datasets, concerns about data leakage and breaches have intensified. To address these issues, we investigate various privacy-preserving techniques, including differential privacy, federated learning, adversarial training, and data anonymization. The findings highlight the effectiveness of these methods in protecting sensitive information while maintaining model performance. However, trade-offs in accuracy, computational efficiency, and model interpretability remain significant challenges. The paper also emphasizes the need for transparent and explainable models to ensure ethical data use and foster trust in AI systems. Ultimately, the study concludes that while privacy-preserving machine learning methods show great promise, ongoing research is essential to balance privacy and performance in future applications.
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Rahman, Ashequr, Asif Iqbal, Emon Ahmed, Tanvirahmedshuvo ., and Md Risalat Hossain Ontor. "PRIVACY-PRESERVING MACHINE LEARNING: TECHNIQUES, CHALLENGES, AND FUTURE DIRECTIONS IN SAFEGUARDING PERSONAL DATA MANAGEMENT." International journal of business and management sciences 04, no. 12 (December 15, 2024): 18–32. https://doi.org/10.55640/ijbms-04-12-03.

Повний текст джерела
Анотація:
This paper explores the intersection of machine learning and personal data privacy, examining the challenges and solutions for preserving privacy in data-driven systems. As machine learning algorithms increasingly rely on large datasets, concerns about data leakage and breaches have intensified. To address these issues, we investigate various privacy-preserving techniques, including differential privacy, federated learning, adversarial training, and data anonymization. The findings highlight the effectiveness of these methods in protecting sensitive information while maintaining model performance. However, trade-offs in accuracy, computational efficiency, and model interpretability remain significant challenges. The paper also emphasizes the need for transparent and explainable models to ensure ethical data use and foster trust in AI systems. Ultimately, the study concludes that while privacy-preserving machine learning methods show great promise, ongoing research is essential to balance privacy and performance in future applications.
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Zhang, Lei, and Lina Ge. "A clustering-based differential privacy protection algorithm for weighted social networks." Mathematical Biosciences and Engineering 21, no. 3 (2024): 3755–33. http://dx.doi.org/10.3934/mbe.2024166.

Повний текст джерела
Анотація:
Weighted social networks play a crucial role in fields such as social media analysis, healthcare, and recommendation systems. However, with their widespread application, privacy issues have become increasingly prominent, including concerns about sensitive information leakage, individual behavior analysis, and privacy attacks. Although traditional differential privacy algorithms can protect privacy for edges carrying sensitive information, directly adding noise to edge weights may introduce excessive noise and thereby reduce data utility. To address these challenges, we propose a privacy protection algorithm for weighted social networks called DCDP. The algorithm uses the density clustering algorithm OPTICS to partition the weighted social network into multiple sub-clusters and adds noise to different sub-clusters at random sampling frequencies. To better balance privacy protection, we design a novel privacy parameter calculation method. Through theoretical derivation and experimentation, the DCDP algorithm demonstrates its capability to achieve differential privacy protection for weighted social networks while effectively maintaining data accuracy. Compared to traditional privacy protection algorithms, the DCDP algorithm reduces the average relative error by approximately 20% and increases the proportion of unchanged shortest paths by about 10%. In summary, we aim to address privacy issues in weighted social networks, providing an effective method to protect user-sensitive information while ensuring the accuracy and utility of data analysis.
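As a rough sketch of the cluster-then-perturb idea (the paper's sampling frequencies and privacy-parameter calculation are not reproduced), a minimal Python fragment that clusters edge weights with OPTICS and adds Laplace noise within each sub-cluster; the sensitivity value is an assumption:

    import numpy as np
    from sklearn.cluster import OPTICS

    def dp_edge_weights(weights, eps, sensitivity=1.0, min_samples=5):
        # Density-cluster the edge weights with OPTICS, then add Laplace noise
        # within each sub-cluster; the noise scale is sensitivity / eps.
        w = np.asarray(weights, dtype=float)
        labels = OPTICS(min_samples=min_samples).fit(w.reshape(-1, 1)).labels_
        noisy = w.copy()
        for lbl in np.unique(labels):
            idx = labels == lbl
            noisy[idx] += np.random.laplace(scale=sensitivity / eps, size=idx.sum())
        return noisy

    noisy_w = dp_edge_weights(np.random.uniform(1, 10, size=200), eps=1.0)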
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Yeow, Sin-Qian, and Kok-Why Ng. "Neural Network Based Data Encryption: A Comparison Study among DES, AES, and HE Techniques." JOIV : International Journal on Informatics Visualization 7, no. 3-2 (November 30, 2023): 2086. http://dx.doi.org/10.30630/joiv.7.3-2.2336.

Повний текст джерела
Анотація:
As technology advances and neural network techniques continue to expand and mature, their application in computer network security plays an increasingly important role. However, the development of neural networks is accompanied by new threats and challenges. This paper proposes encrypting model weight data with standard encryption algorithms and embedding image encryption algorithms to further improve the security of the protected data. The aim is to assess the feasibility and effectiveness of modern encryption algorithms for protecting machine learning data in response to data privacy breaches. The approach consists of training a neural network as a representative machine learning model and then encrypting it using Data Encryption Standard (DES), Advanced Encryption Standard (AES), and Homomorphic Encryption (HE) techniques, respectively. Performance is evaluated on encryption/decryption accuracy and computational efficiency. The results indicate that combining DES with Blowfish offers moderate encryption and decryption speeds but is less secure than AES and HE. AES provides a practical solution, balancing security and performance, offering a relatively swift encryption and decryption process while maintaining high security. However, Fernet and HE present viable alternatives if data privacy is the top priority. Encryption and decryption times increase with file size and require sufficient computational resources. Future research should explore image encryption techniques that balance security with accurate image retrieval during decryption. Advanced privacy-preserving approaches, such as differential privacy and secure multi-party computation, may further enhance security and confidentiality in digital encryption and decryption processes.
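As a small illustration of symmetric weight encryption at rest (not the paper's full pipeline), a minimal Python sketch using the AES-based Fernet scheme from the `cryptography` package; the weight shape is arbitrary:

    import numpy as np
    from cryptography.fernet import Fernet

    # Serialise trained weights, encrypt them at rest, and recover them exactly.
    weights = np.random.randn(128, 10).astype(np.float32)
    key = Fernet.generate_key()
    vault = Fernet(key)

    token = vault.encrypt(weights.tobytes())          # encrypted weight blob
    restored = np.frombuffer(vault.decrypt(token), dtype=np.float32).reshape(128, 10)
    assert np.array_equal(weights, restored)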
Стилі APA, Harvard, Vancouver, ISO та ін.