Journal articles on the topic 'Data privacy'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Data privacy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Yerbulatov, Sultan. "Data Security and Privacy in Data Engineering." International Journal of Science and Research (IJSR) 13, no. 4 (April 5, 2024): 232–36. http://dx.doi.org/10.21275/es24318121241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Torra, Vicenç, and Guillermo Navarro-Arribas. "Data privacy." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 4, no. 4 (June 2, 2014): 269–80. http://dx.doi.org/10.1002/widm.1129.

3

Basha, M. John, T. Satyanarayana Murthy, A. S. Valarmathy, Ahmed Radie Abbas, Djuraeva Gavhar, R. Rajavarman, and N. Parkunam. "Privacy-Preserving Data Mining and Analytics in Big Data." E3S Web of Conferences 399 (2023): 04033. http://dx.doi.org/10.1051/e3sconf/202339904033.

Abstract:
Privacy concerns have received more attention as Big Data has spread. The difficulty of striking a balance between the value of data and individual privacy has made privacy-preserving data mining and analytics a crucial area of research. This abstract gives an overview of the major ideas, methods, and developments in privacy-preserving data mining and analytics in the context of Big Data. Privacy-preserving data mining aims to extract useful insights from huge databases while shielding the private data of individuals. Sharing or pooling data, as is common in traditional data mining methods, can have serious privacy implications. Privacy-preserving data mining strategies, by contrast, concentrate on creating procedures and algorithms that enable analysis without jeopardizing personal information. In sum, privacy-preserving data mining and analytics in the Big Data age bring important difficulties and opportunities. This abstract underscores the value of privacy in the era of data-driven decision-making and the need for effective privacy-preserving solutions that safeguard sensitive personal data while facilitating insightful analysis of huge datasets.
4

COSTEA, Ioan. "Data Privacy Assurance in Virtual Private Networks." International Journal of Information Security and Cybercrime 1, no. 2 (December 21, 2012): 40–47. http://dx.doi.org/10.19107/ijisc.2012.02.05.

5

Mohapatra, Shubhankar, Jianqiao Zong, Florian Kerschbaum, and Xi He. "Differentially Private Data Generation with Missing Data." Proceedings of the VLDB Endowment 17, no. 8 (April 2024): 2022–35. http://dx.doi.org/10.14778/3659437.3659455.

Abstract:
Despite several works that succeed in generating synthetic data with differential privacy (DP) guarantees, they are inadequate for generating high-quality synthetic data when the input data has missing values. In this work, we formalize the problems of DP synthetic data with missing values and propose three effective adaptive strategies that significantly improve the utility of the synthetic data on four real-world datasets with different types and levels of missing data and privacy requirements. We also identify the relationship between privacy impact for the complete ground truth data and incomplete data for these DP synthetic data generation algorithms. We model the missing mechanisms as a sampling process to obtain tighter upper bounds for the privacy guarantees to the ground truth data. Overall, this study contributes to a better understanding of the challenges and opportunities for using private synthetic data generation algorithms in the presence of missing data.
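The paper's idea of modelling missingness as a sampling process echoes the standard privacy-amplification-by-subsampling bound. The sketch below is a generic illustration of that bound, not the authors' exact analysis: a mechanism that is epsilon-DP on the observed (incomplete) records enjoys a tighter guarantee with respect to the complete ground-truth data when only a fraction q of records is observed.

```python
import math

def amplified_epsilon(epsilon: float, q: float) -> float:
    """Textbook privacy-amplification-by-subsampling bound: running an
    epsilon-DP mechanism on a q-fraction sample of the data is
    ln(1 + q * (exp(epsilon) - 1))-DP toward the full dataset."""
    return math.log(1.0 + q * (math.exp(epsilon) - 1.0))

eps_sample = 1.0   # guarantee proved on the observed, incomplete data
q = 0.7            # fraction of ground-truth records actually observed
eps_ground_truth = amplified_epsilon(eps_sample, q)  # strictly smaller
```

With q = 1 (no missing data) the bound collapses to the original epsilon, so the amplification only helps when some records are unobserved.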
6

Sramka, Michal. "Data mining as a tool in privacy-preserving data publishing." Tatra Mountains Mathematical Publications 45, no. 1 (December 1, 2010): 151–59. http://dx.doi.org/10.2478/v10127-010-0011-z.

Abstract:
Many databases contain data about individuals that are valuable for research, marketing, and decision making. Sharing or publishing data about individuals is however prone to privacy attacks, breaches, and disclosures. The concern here is about individuals' privacy: keeping the sensitive information about individuals private to them. Data mining in this setting has been shown to be a powerful tool to breach privacy and make disclosures. In contrast, data mining can also be used in practice to aid data owners in their decision on how to share and publish their databases. We present and discuss the role and uses of data mining in these scenarios and also briefly discuss other approaches to private data analysis.
7

Heubl, B. "News - Briefing. Data privacy: Data privacy group found to have breached online privacy rules." Engineering & Technology 15, no. 3 (April 1, 2020): 9. http://dx.doi.org/10.1049/et.2020.0317.

8

JAKŠIĆ, SVETLANA, JOVANKA PANTOVIĆ, and SILVIA GHILEZAN. "Linked data privacy." Mathematical Structures in Computer Science 27, no. 1 (March 18, 2015): 33–53. http://dx.doi.org/10.1017/s096012951500002x.

Abstract:
Web of Linked Data introduces a common format and principles for publishing and linking data on the Web. Such a network of linked data is publicly available and easily consumable. This paper introduces a calculus for modelling networks of linked data with encoded privacy preferences. In that calculus, a network is a parallel composition of users, where each user is named and consists of data, representing the user's profile, and a process. Data is a parallel composition of triples with names (resources) as components. Associated with each name and each triple of names are their privacy protection policies, which are represented by queries. A data triple is accessible to a user if the user's data satisfies the query assigned to that triple. The main contribution of this model lies in the type system which, together with the introduced query order, ensures that static type-checking prevents privacy violations. We say that a network is well behaved if (1) access to a triple is more restrictive than access to its components and less restrictive than access to the user name it is enclosed with, (2) each user can completely access their own profile, (3) each user can update or partly delete profiles that they own (can access the whole profiles), and (4) each user can update the privacy preference policy of data of another profile that they own, or write data to another profile, only if the newly obtained profile stays fully accessible to its owner. We prove that any well-typed network is well behaved.
9

Winarsih, Winarsih, and Irwansyah Irwansyah. "PROTEKSI PRIVASI BIG DATA DALAM MEDIA SOSIAL." Jurnal Audience 3, no. 1 (October 19, 2020): 1–33. http://dx.doi.org/10.33633/ja.v3i1.3722.

Abstract:
The development of social media in Indonesia has been rapid, with the number of users continuing to grow. This growth, however, is not matched by awareness of privacy in relation to the big data generated by service providers. Service providers issue policies in the form of terms and conditions, but public awareness of personal data privacy generally remains low. This study aims to identify solutions to big data privacy problems in social media, analyzed through communication privacy management theory. The method used is meta-analysis, which processes findings from previous studies. The results offer solutions for protecting individual data privacy during data creation, storage, and processing. Keywords: big data, Indonesia, policy, social media, privacy
10

Smith, J. H., and JS Horne. "Data privacy and DNA data." IASSIST Quarterly 47, no. 3-4 (December 14, 2023): 1–3. http://dx.doi.org/10.29173/iq1094.

Abstract:
The letter to the Editor is in response to the manuscript by Hertzog et al. (2023) titled "Data management instruments to Protect the personal information of Children and Adolescents in sub-Saharan Africa." The letter elaborates on personal data protection, particularly the POPI Act's data management requirements. The DNA Act mandates specific measures to ensure the integrity and security of the NFDD's information, and it criminalises the misuse or compromise of the data's integrity within the NFDD. The DNA Act also established the National Forensic Oversight and Ethical Board (NFOEB), which is responsible for overseeing ethical compliance, implementing the Act, and preserving data integrity within the NFDD. The NFOEB is further responsible for investigating any complaints regarding DNA forensics and the management of the NFDD.
11

Abdul Manap, Nazura, Mohamad Rizal Abd Rahman, and Siti Nur Farah Atiqah Salleh. "HEALTH DATA OWNERSHIP IN MALAYSIA PUBLIC AND PRIVATE HEALTHCARE: A LEGAL ANALYSIS OF HEALTH DATA PRIVACY IN THE AGE OF BIG DATA." International Journal of Law, Government and Communication 7, no. 30 (December 31, 2022): 33–41. http://dx.doi.org/10.35631/ijlgc.730004.

Abstract:
Health data ownership in big data is a new legal issue. The problem lies between public and private healthcare as the main proprietors of health data. In Malaysia, health data ownership falls under the jurisdiction of government hospitals and private healthcare. Whoever owns the data is responsible for safeguarding it, including its privacy. Various technical methods are applied to protect health data, such as aggregation and anonymization. But are these technical methods still reliable for safeguarding privacy in big data? In terms of legal protection, private healthcare is governed by the Personal Data Protection Act 2010, while the government is not bound by the same Act. With the advancement of big data, public and private healthcare are trying to extract value from health data by processing big data and its analytical outcomes. Since health data is sensitive by nature, containing the personal information of individuals or patients, the question arises whether the proprietor can provide adequate legal protection for health data. The Personal Data Protection Act 2010 still applies to the protection of health data in private healthcare, but what laws govern health data privacy in public healthcare? This article aims to answer these questions by analyzing legal sources relevant to health data privacy in big data. We propose a regulatory guideline that follows the GDPR as a legal reference model to harmonize public and private healthcare ownership of health data, the better to protect the privacy of individuals in big data.
12

DE CAPITANI DI VIMERCATI, SABRINA, SARA FORESTI, GIOVANNI LIVRAGA, and PIERANGELA SAMARATI. "DATA PRIVACY: DEFINITIONS AND TECHNIQUES." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 20, no. 06 (December 2012): 793–817. http://dx.doi.org/10.1142/s0218488512400247.

Abstract:
The proper protection of data privacy is a complex task that requires a careful analysis of what actually has to be kept private. Several definitions of privacy have been proposed over the years, from traditional syntactic privacy definitions, which capture the protection degree enjoyed by data respondents with a numerical value, to more recent semantic privacy definitions, which take into consideration the mechanism chosen for releasing the data. In this paper, we illustrate the evolution of the definitions of privacy, and we survey some data protection techniques devised for enforcing such definitions. We also illustrate some well-known application scenarios in which the discussed data protection techniques have been successfully used, and present some open issues.
13

Gayathri, Tata, and N. Durga. "Privacy Preserving Approaches for High Dimensional Data." International Journal of Trend in Scientific Research and Development 1, no. 5 (August 31, 2017): 1120–25. http://dx.doi.org/10.31142/ijtsrd2430.

14

Qamar, T., N. Z. Bawany, and N. A. Khan. "EDAMS: Efficient Data Anonymization Model Selector for Privacy-Preserving Data Publishing." Engineering, Technology & Applied Science Research 10, no. 2 (April 4, 2020): 5423–27. http://dx.doi.org/10.48084/etasr.3374.

Abstract:
The evolution of internet to the Internet of Things (IoT) gives an exponential rise to the data collection process. This drastic increase in the collection of a person’s private information represents a serious threat to his/her privacy. Privacy-Preserving Data Publishing (PPDP) is an area that provides a way of sharing data in their anonymized version, i.e. keeping the identity of a person undisclosed. Various anonymization models are available in the area of PPDP that guard privacy against numerous attacks. However, selecting the optimum model which balances utility and privacy is a challenging process. This study proposes the Efficient Data Anonymization Model Selector (EDAMS) for PPDP which generates an optimized anonymized dataset in terms of privacy and utility. EDAMS inputs the dataset with required parameters and produces its anonymized version by incorporating PPDP techniques while balancing utility and privacy. EDAMS is currently incorporating three PPDP techniques, namely k-anonymity, l-diversity, and t-closeness. It is tested against different variations of three datasets. The results are validated by testing each variation explicitly with the stated techniques. The results show the effectiveness of EDAMS by selecting the optimum model with minimal effort.
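Of the three criteria EDAMS selects among, k-anonymity and l-diversity are easy to check mechanically. Below is a minimal sketch of the generic definitions (not EDAMS code; the toy table and column names are invented for illustration).

```python
from collections import defaultdict

def equivalence_classes(rows, qi_cols):
    """Group records by their quasi-identifier values."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[c] for c in qi_cols)].append(row)
    return groups

def satisfies_k_anonymity(rows, qi_cols, k):
    """Every quasi-identifier combination appears at least k times."""
    return all(len(g) >= k for g in equivalence_classes(rows, qi_cols).values())

def satisfies_l_diversity(rows, qi_cols, sensitive_col, l):
    """Every equivalence class contains at least l distinct sensitive values."""
    return all(len({r[sensitive_col] for r in g}) >= l
               for g in equivalence_classes(rows, qi_cols).values())

table = [
    {"age": "3*", "zip": "130**", "disease": "flu"},
    {"age": "3*", "zip": "130**", "disease": "cancer"},
    {"age": "4*", "zip": "148**", "disease": "flu"},
    {"age": "4*", "zip": "148**", "disease": "flu"},
]
# satisfies_k_anonymity(table, ["age", "zip"], 2) -> True
# satisfies_l_diversity(table, ["age", "zip"], "disease", 2) -> False
```

The example shows why a selector is useful: the same generalized table can pass k-anonymity while failing l-diversity, since the second equivalence class holds only one distinct disease.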
15

Lobo-Vesga, Elisabet, Alejandro Russo, and Marco Gaboardi. "A Programming Language for Data Privacy with Accuracy Estimations." ACM Transactions on Programming Languages and Systems 43, no. 2 (July 2021): 1–42. http://dx.doi.org/10.1145/3452096.

Abstract:
Differential privacy offers a formal framework for reasoning about the privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing private data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data, and the accuracy of the data analysis results, inferring useful properties about the population. The compositional nature of differential privacy has motivated the design and implementation of several programming languages to ease the implementation of differentially private analyses. Even though these programming languages provide support for reasoning about privacy, most of them disregard reasoning about the accuracy of data analyses. To overcome this limitation, we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy, and their trade-offs. The distinguishing feature of DPella is a novel component that statically tracks the accuracy of different data analyses. To provide tight accuracy estimations, this component leverages taint analysis for automatically inferring statistical independence of the different noise quantities added for guaranteeing privacy. We evaluate our approach by implementing several classical queries from the literature and showing how data analysts can calibrate the privacy parameters to meet the accuracy requirements, and vice versa.
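DPella itself is not sketched here, but the kind of accuracy reasoning it automates can be illustrated with the textbook tail bound for the Laplace mechanism: with noise scale sensitivity/epsilon, the absolute error exceeds (sensitivity/epsilon) * ln(1/beta) with probability at most beta. A hedged, generic sketch:

```python
import math

def laplace_error_bound(sensitivity: float, epsilon: float, beta: float) -> float:
    """With probability at least 1 - beta, a Laplace mechanism with
    scale sensitivity/epsilon errs by at most
    (sensitivity/epsilon) * ln(1/beta)."""
    return (sensitivity / epsilon) * math.log(1.0 / beta)

# The privacy/accuracy trade-off: tightening privacy (smaller epsilon)
# loosens the 95%-confidence error bound for a sensitivity-1 count.
for eps in (1.0, 0.5, 0.1):
    bound = laplace_error_bound(sensitivity=1.0, epsilon=eps, beta=0.05)
```

Iterating this bound over candidate epsilons is exactly the calibration loop the abstract describes: pick the largest epsilon (weakest privacy cost) whose bound still meets the accuracy requirement, or vice versa.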
16

Gertner, Yael, Yuval Ishai, Eyal Kushilevitz, and Tal Malkin. "Protecting Data Privacy in Private Information Retrieval Schemes." Journal of Computer and System Sciences 60, no. 3 (June 2000): 592–629. http://dx.doi.org/10.1006/jcss.1999.1689.

17

Luo, Xiaohui. "A Method for Privacy-Safe Synthetic Health Data." Academic Journal of Science and Technology 10, no. 1 (March 27, 2024): 445–50. http://dx.doi.org/10.54097/f7fjss40.

Abstract:
Private health records are important for medical research but hard to obtain because of legal restrictions. This shortage of data can be addressed with generative models such as GANs, which produce new, similar data. But GANs can leak private information. To address this, we propose a GAN with a privacy-protection component, called DP-ACTGAN, which uses differential privacy to keep the original data safe. We also add a classifier to the GAN to ensure the generated data stays close to the real data. Experiments show that DP-ACTGAN can generate good-quality data without giving away private information. Data can thus be used effectively without breaking privacy, which supports ethical research and innovation while preserving privacy.
18

Riyana, Surapon, Nobutaka Ito, Tatsanee Chaiya, Uthaiwan Sriwichai, Natthawud Dussadee, Tanate Chaichana, Rittichai Assawarachan, Thongchai Maneechukate, Samerkhwan Tantikul, and Noppamas Riyana. "Privacy Threats and Privacy Preservation Techniques for Farmer Data Collections Based on Data Shuffling." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 16, no. 3 (June 25, 2022): 289–301. http://dx.doi.org/10.37936/ecti-cit.2022163.246469.

Abstract:
Aside from smart technologies, farm data collection is also important for smart farms, including farm environment data collection and farmer survey data collection. Farm data collections are generally proposed for use within smart farm systems. However, they can also be released for use outside the scope of the collecting organization for appropriate business reasons, such as improving the smart farm system, product quality, and customer service. Moreover, farmer survey data collections often include sensitive data, the private data of farmers, which can lead to privacy violation issues when released. To address these issues, an anatomization model can be proposed to protect the users' private data available in farmer survey data collections. However, anatomization still suffers from disorganization issues and privacy violation issues in the sensitive table that must be addressed. To remove these vulnerabilities of anatomization models, a new privacy preservation model based on data shuffling is proposed in this work. The proposed model is evaluated by conducting extensive experiments. The experimental results indicate that the proposed model is more efficient than the anatomization model for farmer survey data collections. That is, once a farmer survey data collection satisfies the privacy preservation constraint of the proposed model, the adversary's confidence in re-identifying any sensitive value available in the collection is at most 1/l. Furthermore, after the collection satisfies the constraint, it no longer has disorganization issues or privacy violation issues arising from the sensitive values.
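The 1/l confidence constraint mentioned above can be checked directly on the sensitive column of a released grouping. A minimal sketch of that accounting (generic l-diversity-style bookkeeping, not the authors' shuffling algorithm; the example values are invented):

```python
from collections import Counter

def max_disclosure_confidence(groups):
    """For an anatomized release, an adversary who links a target to a
    group can guess the target's sensitive value with confidence equal
    to the most frequent sensitive value's share within that group."""
    return max(max(Counter(g).values()) / len(g) for g in groups)

def satisfies_inverse_l(groups, l):
    """Privacy constraint in the spirit of the paper: confidence <= 1/l."""
    return max_disclosure_confidence(groups) <= 1.0 / l

# Sensitive columns of two published groups (quasi-identifiers are kept
# in a separate table and linked only by group id).
groups = [["diabetes", "flu", "asthma"], ["flu", "cancer", "hiv"]]
# Each group holds 3 distinct values, so confidence is 1/3.
```

Shuffling sensitive values within such groups preserves the marginal statistics of each group while keeping this confidence bound intact.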
19

Yang, Qing, Cheng Wang, Teng Hu, Xue Chen, and Changjun Jiang. "Implicit privacy preservation: a framework based on data generation." Security and Safety 1 (2022): 2022008. http://dx.doi.org/10.1051/sands/2022008.

Abstract:
This paper addresses a special and imperceptible class of privacy, called implicit privacy. In contrast to traditional (explicit) privacy, implicit privacy has two essential properties: (1) It is not initially defined as a privacy attribute; (2) it is strongly associated with privacy attributes. In other words, attackers could utilize it to infer privacy attributes with a certain probability, indirectly resulting in the disclosure of private information. To deal with the implicit privacy disclosure problem, we give a measurable definition of implicit privacy, and propose an ex-ante implicit privacy-preserving framework based on data generation, called IMPOSTER. The framework consists of an implicit privacy detection module and an implicit privacy protection module. The former uses normalized mutual information to detect implicit privacy attributes that are strongly related to traditional privacy attributes. Based on the idea of data generation, the latter equips the Generative Adversarial Network (GAN) framework with an additional discriminator, which is used to eliminate the association between traditional privacy attributes and implicit ones. We elaborate a theoretical analysis for the convergence of the framework. Experiments demonstrate that with the learned generator, IMPOSTER can alleviate the disclosure of implicit privacy while maintaining good data utility.
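The detection module's use of normalized mutual information can be sketched from first principles. The following is a generic NMI computation (using the geometric-mean normalization; the paper may normalize differently), with invented attribute data:

```python
import math
from collections import Counter

def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def normalized_mutual_information(xs, ys):
    """NMI in [0, 1]; values near 1 flag an attribute that almost
    determines (or is determined by) a sensitive attribute - the
    'implicit privacy' signal described in the abstract."""
    hx, hy = entropy(xs), entropy(ys)
    if hx == 0.0 or hy == 0.0:
        return 0.0
    return mutual_information(xs, ys) / math.sqrt(hx * hy)

# Hypothetical example: 'district' perfectly tracks 'income_band'.
district = ["north", "north", "south", "south"]
income = ["high", "high", "low", "low"]
# normalized_mutual_information(district, income) -> 1.0
```

An attribute scoring near 1.0 against a declared privacy attribute would be promoted to implicit-privacy status and passed to the generation module for decorrelation.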
20

Jain, Pinkal, Vikas Thada, and Deepak Motwani. "Providing Highest Privacy Preservation Scenario for Achieving Privacy in Confidential Data." International Journal of Experimental Research and Review 39, Spl Volume (May 30, 2024): 190–99. http://dx.doi.org/10.52756/ijerr.2024.v39spl.015.

Abstract:
Machine learning algorithms have been extensively employed in multiple domains, presenting an opportunity to enable privacy. However, their effectiveness is dependent on enormous data volumes and high computational resources, usually available online. It entails personal and private data like mobile telephone numbers, identification numbers, and medical histories. Developing efficient and economical techniques to protect this private data is critical. In this context, the current research suggests a novel way to accomplish this, combining modified differential privacy with a more complicated machine learning (ML) model. It is possible to assess the privacy-specific characteristics of single or multiple-level models using the suggested method, as demonstrated by this work. It then employs the gradient values from the stochastic gradient descent algorithm to determine the scale of Gaussian noise, thereby preserving sensitive information within the data. The experimental results show that by fine-tuning the parameters of the modified differential privacy model based on the varied degrees of private information in the data, our suggested model outperforms existing methods in terms of accuracy, efficiency and privacy.
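The recipe described above, using per-example gradients from stochastic gradient descent to calibrate Gaussian noise, resembles standard DP-SGD. A generic sketch under that assumption (not the authors' exact modified mechanism; the gradient values are invented):

```python
import math
import random

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD-style sanitization sketch: clip each per-example gradient
    to L2 norm <= clip_norm, average, then add Gaussian noise with
    standard deviation noise_multiplier * clip_norm / batch_size."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    n = len(clipped)
    mean = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    return [m + rng.gauss(0.0, sigma) for m in mean]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, 0.4]]   # first gradient has norm 5, so it is clipped
noisy = privatize_gradients(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

Tuning `noise_multiplier` per the sensitivity of the data is the knob the abstract alludes to when it speaks of fine-tuning the modified differential privacy model to the degree of private information present.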
21

Ekta, Mrinal. "Secured Cloud Data Sharing: Privacy-Preserving Storage Optimization with Data Confidentiality." International Journal of Research Publication and Reviews 4, no. 8 (August 2023): 2957–66. http://dx.doi.org/10.55248/gengpi.4.823.51935.

22

Zhu, Xiao Ming. "Research on Privacy Preserving Data Mining Association Rules Protocol." Advanced Materials Research 756-759 (September 2013): 1661–64. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1661.

Abstract:
Privacy preservation is a significant direction in data mining, and privacy concerns have drawn growing interest in data mining research. Privacy-preserving data mining concentrates on developing accurate models without sharing precise individual data records. A privacy-preserving association rule mining algorithm is introduced that preserves the privacy of individual values by computing a scalar product. Data mining and secure multiparty computation are briefly introduced, and an implementation of the privacy-preserving mining protocol based on a secure multiparty computation protocol is proposed.
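The scalar-product step can be realized in several ways; one classic commodity-server construction (in the style of Du and Atallah, shown here as a plausible sketch rather than the paper's exact protocol) leaves each party with an additive share of the dot product while neither raw vector is revealed:

```python
import random

def secure_dot_product(x, y, rng):
    """Commodity-server secure scalar product sketch. A semi-trusted
    server hands out correlated randomness; Alice (holding x) and Bob
    (holding y) then exchange only masked vectors and end up with
    additive shares whose sum is x . y."""
    n = len(x)
    # Commodity server: random Ra, Rb and ra + rb = Ra . Rb
    Ra = [rng.uniform(-1, 1) for _ in range(n)]
    Rb = [rng.uniform(-1, 1) for _ in range(n)]
    ra = rng.uniform(-1, 1)
    rb = sum(a * b for a, b in zip(Ra, Rb)) - ra
    # Alice -> Bob: x + Ra ; Bob -> Alice: y + Rb (originals stay private)
    x_masked = [xi + ai for xi, ai in zip(x, Ra)]
    y_masked = [yi + bi for yi, bi in zip(y, Rb)]
    share_bob = sum(xm * yi for xm, yi in zip(x_masked, y)) + rb
    share_alice = ra - sum(ai * ym for ai, ym in zip(Ra, y_masked))
    return share_alice, share_bob   # share_alice + share_bob == x . y

rng = random.Random(42)
sa, sb = secure_dot_product([1.0, 0.0, 1.0], [1.0, 1.0, 0.0], rng)
# sa + sb reconstructs the dot product 1.0 (up to float rounding)
```

For association rules, 0/1 transaction vectors make the shared dot product a support count, which the parties can reveal jointly without exposing who bought what.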
23

BRYZHKO, V. "Data privacy in cloud technologies." INFORMATION AND LAW, no. 4(19) (December 15, 2016): 47–59. http://dx.doi.org/10.37750/2616-6798.2016.4(19).272976.

Abstract:
On privacy and personal data protection in the context of modern technological development. Suggestions are provided concerning the introduction in Ukraine of the institute of a person's right of private ownership of personal information.
24

Duan, Huabin, Jie Yang, and Huanjun Yang. "A Blockchain-Based Privacy Protection Application for Logistics Big Data." Journal of Cases on Information Technology 24, no. 5 (February 21, 2022): 1–12. http://dx.doi.org/10.4018/jcit.295249.

Abstract:
Logistics business is generally managed by logistics orders in plain text, and there is a risk of disclosure of customer privacy information in every business link. In order to solve the problem of privacy protection in logistics big data systems, a new logistics user privacy data protection scheme is proposed. First, an access rights management mechanism is designed by combining blockchain and anonymous authentication to realize the control and management of users' access rights to private data. Then, privacy and confidentiality protection between different services is realized by dividing and storing the data of different services. Finally, the participants of the intra-chain private data are specified by embedding fields in the logistics information. The blockchain node receiving the transaction is used as the transit node to synchronize the intra-chain privacy data, so as to improve intra-chain privacy protection within the business. Experimental results show that the proposed method can satisfy the privacy requirements and ensure considerable performance.
25

Du, Jiawen, and Yong Pi. "Research on Privacy Protection Technology of Mobile Social Network Based on Data Mining under Big Data." Security and Communication Networks 2022 (January 13, 2022): 1–9. http://dx.doi.org/10.1155/2022/3826126.

Abstract:
With the advent of the era of big data, people’s lives have undergone earth-shaking changes, not only getting rid of the cumbersome traditional data collection but also collecting and sorting information directly from people’s footprints on social networks. This paper explores and analyzes the privacy issues in current social networks and puts forward the protection strategies of users’ privacy data based on data mining algorithms so as to truly ensure that users’ privacy in social networks will not be illegally infringed in the era of big data. The data mining algorithm proposed in this paper can protect the user’s identity from being identified and the user’s private information from being leaked. Using differential privacy protection methods in social networks can effectively protect users’ privacy information in data publishing and data mining. Therefore, it is of great significance to study data publishing, data mining methods based on differential privacy protection, and their application in social networks.
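Differential privacy for data publishing, as invoked above, reduces in the simplest case to the Laplace mechanism on a count query, whose sensitivity is 1. A minimal generic sketch with invented user records (not the paper's specific algorithm):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) by inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(records, predicate, epsilon, rng):
    """epsilon-DP count release: adding or removing one record changes
    the count by at most 1, so Laplace noise of scale 1/epsilon
    suffices for epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Publish how many users in a social network list their city publicly,
# without letting the release depend too heavily on any single user.
users = [{"city_public": True}, {"city_public": False}, {"city_public": True}]
noisy = dp_count(users, lambda u: u["city_public"], epsilon=0.5, rng=random.Random(7))
```

The same primitive underlies DP releases of degree distributions and other aggregate social-network statistics mentioned in the abstract.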
26

Silva, Paulo, Carolina Gonçalves, Nuno Antunes, Marilia Curado, and Bogdan Walek. "Privacy risk assessment and privacy-preserving data monitoring." Expert Systems with Applications 200 (August 2022): 116867. http://dx.doi.org/10.1016/j.eswa.2022.116867.

27

Yao-Huai, Lü. "Privacy and Data Privacy Issues in Contemporary China." Ethics and Information Technology 7, no. 1 (March 2005): 7–15. http://dx.doi.org/10.1007/s10676-005-0456-y.

28

Ingale, Indrajeet. "Privacy Preserving of Collaborative Data Publishing WithM-Privacy." International Journal on Recent and Innovation Trends in Computing and Communication 3, no. 2 (2015): 845–47. http://dx.doi.org/10.17762/ijritcc2321-8169.150290.

29

Appenzeller, Arno, Moritz Leitner, Patrick Philipp, Erik Krempel, and Jürgen Beyerer. "Privacy and Utility of Private Synthetic Data for Medical Data Analyses." Applied Sciences 12, no. 23 (December 1, 2022): 12320. http://dx.doi.org/10.3390/app122312320.

Abstract:
The increasing availability and use of sensitive personal data raises a set of issues regarding the privacy of the individuals behind the data. These concerns become even more important when health data are processed, as they are considered sensitive according to most global regulations. Privacy-enhancing technologies (PETs) attempt to protect the privacy of individuals whilst preserving the utility of data. One of the most popular technologies recently is differential privacy (DP), which was used for the 2020 U.S. Census. Another trend is to combine synthetic data generators with DP to create so-called private synthetic data generators. The objective is to preserve statistical properties as accurately as possible, while the generated data should be as different as possible from the original data regarding private features. While these technologies seem promising, there is a gap between academic research on DP and synthetic data and the practical application and evaluation of these techniques for real-world use cases. In this paper, we evaluate three different private synthetic data generators (MWEM, DP-CTGAN, and PATE-CTGAN) on their use-case-specific privacy and utility. For the use case, continuous heart rate measurements from different individuals are analyzed. This work shows that private synthetic data generators have tremendous advantages over traditional techniques, but also require in-depth analysis depending on the use case. Furthermore, it can be seen that each technology has different strengths, so there is no clear winner. However, DP-CTGAN often performs slightly better than the other technologies, so it can be recommended for a continuous medical data use case.
30

Vishnoi, Meenakshi, and Seeja K. R. "Privacy Preserving Data Mining using Attribute Encryption and Data Perturbation." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 6, no. 3 (May 25, 2013): 370–78. http://dx.doi.org/10.24297/ijct.v6i3.4461.

Abstract:
Data mining is a very active research area that deals with the extraction of knowledge from very large databases. Data mining has made knowledge extraction and decision making easy. The extracted knowledge could reveal personal information if the data contains private and sensitive attributes about an individual. This poses a threat to personal information, as there is a possibility of the information being misused behind the scenes without the knowledge of the individual. Privacy therefore becomes a great concern for data owners and organizations, as no organization would like to share its data. To solve this problem, privacy-preserving data mining techniques have emerged, providing the benefit of data mining without compromising the privacy of individuals. This paper proposes a privacy-preserving data mining technique that uses randomized perturbation and a cryptographic technique. The performance evaluation of the proposed technique shows the same results with the modified data as with the original data.
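Randomized perturbation of the kind the paper combines with encryption can be illustrated by Warner-style randomized response, which perturbs each individual's answer yet lets the aggregate be recovered (a generic sketch, not the authors' exact perturbation scheme):

```python
import random

def randomized_response(value: bool, p_truth: float, rng) -> bool:
    """Report the true value with probability p_truth, otherwise answer
    with a fair coin flip, so any single answer is plausibly deniable."""
    if rng.random() < p_truth:
        return value
    return rng.random() < 0.5

def estimate_true_rate(observed_yes_rate: float, p_truth: float) -> float:
    """Invert the perturbation: observed = p * pi + (1 - p) / 2."""
    return (observed_yes_rate - (1.0 - p_truth) / 2.0) / p_truth

# If 34% answer "yes" under p_truth = 0.8, the estimated true rate is
# 30%: estimate_true_rate(0.34, 0.8) -> 0.3 (up to float rounding).
```

This captures the abstract's claim that mining results on the modified data can match those on the original data: individual records are noisy, but the aggregates needed by the miner are recoverable.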
APA, Harvard, Vancouver, ISO, and other styles
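A minimal sketch of the two ideas combined in the entry above: randomized perturbation of numeric attributes plus masking of identifying attributes. The paper pairs perturbation with a cryptographic technique; here a salted hash stands in for that encryption step, and all names and parameters are illustrative assumptions:

```python
import hashlib
import random

def perturb_column(values, noise_scale):
    """Add zero-mean uniform noise to each sensitive numeric value.

    Individual records are masked, but because the noise averages out,
    aggregate statistics (what data mining actually consumes) stay close
    to those of the original column.
    """
    return [v + random.uniform(-noise_scale, noise_scale) for v in values]

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifying attribute with a salted hash (a simple
    stand-in for the paper's attribute-encryption step)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]
```

After perturbation, the column mean of a large dataset remains nearly unchanged even though every individual value differs from the original, which mirrors the paper's finding that mining the modified data gives the same results as mining the original data.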
31

Wang, Yi-Ren, and Yun-Cheng Tsai. "The Protection of Data Sharing for Privacy in Financial Vision." Applied Sciences 12, no. 15 (July 23, 2022): 7408. http://dx.doi.org/10.3390/app12157408.

Full text
Abstract:
The primary motivation is to address difficulties in data interpretation and reductions in model accuracy. Although differential privacy can provide data privacy guarantees, it also creates problems, and the appropriate noise setting for differential privacy is currently inconclusive. This paper's main contribution is finding a balance between privacy and accuracy. The training data of deep learning models may contain private or sensitive corporate information, which is vulnerable to attacks that can leak private data when data are shared. Many strategies exist for privacy protection, and differential privacy is the most widely applied. Google proposed federated learning in 2016 to solve the problem of data silos. The technology can share information without exchanging the original data and has made significant progress in the medical field. However, there is still a risk of data leakage in federated learning; thus, many models are now combined with differential privacy mechanisms to minimize that risk. Data in the financial field are similar to medical data in that they contain a substantial amount of personal information, and leakage may cause uncontrollable consequences, making data exchange and sharing difficult. If differential privacy were applied to the financial field, financial institutions could provide customers with higher-value, personalized services and automate credit scoring and risk management. Unfortunately, differential privacy is rarely applied in the financial sector, and there is no consensus on parameter settings. This study compares the data security of non-private and differentially private financial vision models and finds a balance between privacy protection and model accuracy. The results show that when the privacy loss parameter ϵ is between 5.41 and 12.62, the private models protect the training data and accuracy does not decrease too much.
APA, Harvard, Vancouver, ISO, and other styles
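The privacy/accuracy balance the entry above reports (a workable ϵ between roughly 5.41 and 12.62) comes from the fact that required noise grows as ϵ shrinks. A one-line sketch of that relationship for the standard Laplace mechanism, with the sensitivity value assumed for illustration:

```python
def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Scale b of Laplace noise sufficient for epsilon-differential privacy:
    b = sensitivity / epsilon. A smaller epsilon (stronger privacy) forces
    a larger b, i.e. more noise and lower model or query accuracy."""
    return sensitivity / epsilon

# Sweep epsilon across and beyond the paper's reported workable range,
# for a query of sensitivity 1.
for eps in (12.62, 8.0, 5.41, 1.0, 0.1):
    print(f"epsilon={eps:>5}: required noise scale b={laplace_scale(1.0, eps):.3f}")
```

The monotone growth of the noise scale as ϵ falls is what makes parameter selection a genuine trade-off rather than a free choice, which is why the paper searches for the range in which training data are protected without accuracy collapsing.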
32

Kapil, Gayatri, Alka Agrawal, and R. A. Khan. "Big Data Security and Privacy Issues." Asian Journal of Computer Science and Technology 7, no. 2 (August 5, 2018): 128–32. http://dx.doi.org/10.51983/ajcst-2018.7.2.1861.

Full text
Abstract:
Big data has gradually become a hot topic of research and business and has been growing at an exponential rate. It is a combination of structured, semi-structured, and unstructured data generated constantly from various sources on different platforms, such as web servers, mobile devices, social networks, and private and public clouds. As big data is used in many organisations and enterprises, big data security and privacy have become increasing concerns. However, there is a clear tension between big data security and privacy and the widespread use of big data. In this paper, we outline the challenges of security and privacy in big data, and then present some possible methods and techniques to ensure big data security and privacy.
APA, Harvard, Vancouver, ISO, and other styles
33

Xu, Xiaolong, Xuan Zhao, Feng Ruan, Jie Zhang, Wei Tian, Wanchun Dou, and Alex X. Liu. "Data Placement for Privacy-Aware Applications over Big Data in Hybrid Clouds." Security and Communication Networks 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/2376484.

Full text
Abstract:
Nowadays, a large number of organisations choose to deploy their applications to cloud platforms, especially in the big data era. The hybrid cloud is currently one of the most popular computing paradigms for hosting privacy-aware applications, driven by the requirements of privacy protection and cost saving. However, it remains a challenge to place data while considering both the energy consumption of the private cloud and the cost of renting public cloud services. In view of this challenge, a cost- and energy-aware data placement method for privacy-aware applications over big data in hybrid clouds, named CEDP, is proposed. Technically, a formalized analysis of cost, access time, and energy consumption is conducted in the hybrid cloud environment. A corresponding data placement method is then designed to reduce the cost of renting public cloud services and the energy consumed by task execution within the private cloud platform. Experimental evaluations validate the efficiency and effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
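To make the placement problem in the entry above concrete, here is a deliberately simplified greedy sketch (not the paper's CEDP algorithm): privacy-sensitive datasets are pinned to the private cloud, and each remaining dataset goes wherever its estimated cost is lower. All class fields, names, and cost figures are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    sensitive: bool      # privacy-aware data must not leave the private cloud
    public_cost: float   # estimated rental cost on the public cloud
    private_cost: float  # estimated energy cost on the private cloud

def place(datasets):
    """Greedy placement: pin sensitive data to the private cloud, then
    send everything else to the cheaper site. Returns the placement plan
    and its total cost."""
    plan, total = {}, 0.0
    for d in datasets:
        if d.sensitive or d.private_cost <= d.public_cost:
            plan[d.name] = "private"
            total += d.private_cost
        else:
            plan[d.name] = "public"
            total += d.public_cost
    return plan, total
```

The actual method additionally models access time and per-task energy; this sketch only shows why the two cost terms (public rental versus private energy) pull placement decisions in different directions.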
34

Li, Zhiping, Jiagui Xie, Likun Gao, and Fanjie Nie. "Data Privacy Protection in Data Fusion." Journal of Physics: Conference Series 2033, no. 1 (September 1, 2021): 012179. http://dx.doi.org/10.1088/1742-6596/2033/1/012179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Habegger, Benjamin. "Big Data vs. Privacy Big Data." Services Transactions on Big Data 1, no. 1 (January 2014): 25–35. http://dx.doi.org/10.29268/stbd.2014.1.1.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Thuraisingham, Bhavani. "Privacy-Preserving Data Mining." Journal of Database Management 16, no. 1 (January 2005): 75–87. http://dx.doi.org/10.4018/jdm.2005010106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Determann, L. "Privacy and Data Protection." Moscow Journal of International Law 2019, no. 1 (March 30, 2019): 18–26. http://dx.doi.org/10.24833/0869-0049-2019-1-18-26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Labrinidis, Alexandros. "Privacy-Preserving Data Publishing." Foundations and Trends® in Databases 2, no. 3 (2009): 169–266. http://dx.doi.org/10.1561/1900000005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Bee-Chung, Daniel Kifer, Kristen LeFevre, and Ashwin Machanavajjhala. "Privacy-Preserving Data Publishing." Foundations and Trends® in Databases 2, no. 1-2 (2009): 1–167. http://dx.doi.org/10.1561/1900000008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ravi, A. T., and S. Chitra. "Privacy Preserving Data Mining." Research Journal of Applied Sciences, Engineering and Technology 9, no. 8 (March 15, 2015): 616–21. http://dx.doi.org/10.19026/rjaset.9.1445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Shengling, Lina Shi, Qin Hu, Junshan Zhang, Xiuzhen Cheng, and Jiguo Yu. "Privacy-Aware Data Trading." IEEE Transactions on Information Forensics and Security 16 (2021): 3916–27. http://dx.doi.org/10.1109/tifs.2021.3099699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Suleiman, James, and Terry Huston. "Data Privacy and Security." International Journal of Information Security and Privacy 3, no. 2 (April 2009): 42–53. http://dx.doi.org/10.4018/jisp.2009040103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Totterdale, Robert L. "Globalization and Data Privacy." International Journal of Information Security and Privacy 4, no. 2 (April 2010): 19–35. http://dx.doi.org/10.4018/jisp.2010040102.

Full text
Abstract:
Global organizations operate in multiple countries and are subject to both local and federal laws in each of the jurisdictions in which they conduct business. The collection, storage, processing, and transfer of data between countries or operating locations are often subject to a multitude of data privacy laws, regulations, and legal systems that are at times in conflict. Companies struggle to have the proper policies, processes, and technologies in place to comply with a myriad of constantly changing laws. Using an established privacy management framework, this study summarizes major data privacy laws in the U.S., Europe, and India and their implications for businesses. Additionally, relationships between age, country of residence, and attitudes toward and awareness of business rules and data privacy laws are explored for 331 business professionals located in the U.S. and India.
APA, Harvard, Vancouver, ISO, and other styles
44

Butler, Declan. "Data sharing threatens privacy." Nature 449, no. 7163 (October 2007): 644. http://dx.doi.org/10.1038/449644a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jasny, Barbara R. "Sharing data, protecting privacy." Science 357, no. 6352 (August 17, 2017): 656.9–657. http://dx.doi.org/10.1126/science.357.6352.656-i.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Southey, Hugh, and Adam Straw. "Surveillance, Data and Privacy." Judicial Review 18, no. 4 (December 20, 2013): 440–45. http://dx.doi.org/10.1080/10854681.2013.11426812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Gaff, Brian M., Thomas J. Smedinghoff, and Socheth Sor. "Privacy and Data Security." Computer 45, no. 3 (March 2012): 8–10. http://dx.doi.org/10.1109/mc.2012.102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gaff, Brian M., Heather Egan Sussman, and Jennifer Geetter. "Privacy and Big Data." Computer 47, no. 6 (June 2014): 7–9. http://dx.doi.org/10.1109/mc.2014.161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

McGraw, Deven. "Data Identifiability and Privacy." American Journal of Bioethics 10, no. 9 (September 9, 2010): 30–31. http://dx.doi.org/10.1080/15265161.2010.494224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Martin, Kelly D., Jisu J. Kim, Robert W. Palmatier, Lena Steinhoff, David W. Stewart, Beth A. Walker, Yonggui Wang, and Scott K. Weaven. "Data Privacy in Retail." Journal of Retailing 96, no. 4 (December 2020): 474–89. http://dx.doi.org/10.1016/j.jretai.2020.08.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles