Academic literature on the topic 'Privacy attacks on genomic data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Privacy attacks on genomic data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Privacy attacks on genomic data"

1

Ayoz, Kerem, Erman Ayday, and A. Ercument Cicek. "Genome Reconstruction Attacks Against Genomic Data-Sharing Beacons." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (April 27, 2021): 28–48. http://dx.doi.org/10.2478/popets-2021-0036.

Full text
Abstract:
Sharing genome data in a privacy-preserving way stands as a major bottleneck in front of the scientific progress promised by the big data era in genomics. A community-driven protocol named the genomic data-sharing beacon protocol has been widely adopted for sharing genomic data. The system aims to provide a secure, easy-to-implement, and standardized interface for data sharing by only allowing yes/no queries on the presence of specific alleles in the dataset. However, the beacon protocol was recently shown to be vulnerable against membership inference attacks. In this paper, we show that privacy threats against genomic data-sharing beacons are not limited to membership inference. We identify and analyze a novel vulnerability of genomic data-sharing beacons: genome reconstruction. We show that it is possible to successfully reconstruct a substantial part of the genome of a victim when the attacker knows the victim has been added to the beacon in a recent update. In particular, we show how an attacker can use the inherent correlations in the genome and clustering techniques to run such an attack in an efficient and accurate way. We also show that even if multiple individuals are added to the beacon during the same update, it is possible to identify the victim's genome with high confidence using traits that are easily accessible by the attacker (e.g., eye color or hair type). Moreover, we show how a reconstructed genome using a beacon that is not associated with a sensitive phenotype can be used for membership inference attacks against beacons with sensitive phenotypes (e.g., HIV+). The outcome of this work will guide beacon operators on when and how to update the content of the beacon and help them (along with the beacon participants) make informed decisions.
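Illustrative note: the reconstruction attack summarized above combines yes/no beacon answers with the genome's inherent correlations (linkage disequilibrium). The minimal sketch below, with an invented two-SNP correlation table and toy beacon, shows only that basic idea; the paper's actual attack clusters over many correlated variants using real LD statistics.

```python
# Hypothetical sketch: inferring an unqueried variant from a correlated one.
# The correlation table, threshold, and toy beacon are invented for illustration.

# Assumed P(victim carries SNP_B | victim's SNP_A state), standing in for LD.
LD_TABLE = {1: 0.92, 0: 0.07}

def beacon_query(beacon_genomes, position):
    """Beacon-style yes/no answer: is the alternate allele at `position` present?"""
    return any(genome.get(position, 0) == 1 for genome in beacon_genomes)

def reconstruct_snp_b(victim_has_a, threshold=0.5):
    """Guess the victim's unqueried SNP_B from its assumed correlation with SNP_A."""
    p_b = LD_TABLE[1 if victim_has_a else 0]
    return (1 if p_b >= threshold else 0), p_b

if __name__ == "__main__":
    # Toy beacon content after the victim's genome was added in an update.
    # Attributing the "yes" to the victim assumes the attacker compared answers
    # before and after the update, as in the paper's threat model.
    beacon = [{"SNP_A": 1, "SNP_B": 1}, {"SNP_A": 0, "SNP_B": 0}]
    victim_has_a = beacon_query(beacon, "SNP_A")
    guess, confidence = reconstruct_snp_b(victim_has_a)
    print(f"Inferred SNP_B = {guess} (assumed confidence {confidence:.2f})")
```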
APA, Harvard, Vancouver, ISO, and other styles
2

Almadhoun, Nour, Erman Ayday, and Özgür Ulusoy. "Inference attacks against differentially private query results from genomic datasets including dependent tuples." Bioinformatics 36, Supplement_1 (July 1, 2020): i136–i145. http://dx.doi.org/10.1093/bioinformatics/btaa475.

Full text
Abstract:
Motivation: The rapid decrease in sequencing technology costs leads to a revolution in medical research and clinical care. Today, researchers have access to large genomic datasets to study associations between variants and complex traits. However, availability of such genomic datasets also results in new privacy concerns about personal information of the participants in genomic studies. Differential privacy (DP) is one of the rigorous privacy concepts, which received widespread interest for sharing summary statistics from genomic datasets while protecting the privacy of participants against inference attacks. However, DP has a known drawback as it does not consider the correlation between dataset tuples. Therefore, privacy guarantees of DP-based mechanisms may degrade if the dataset includes dependent tuples, which is a common situation for genomic datasets due to the inherent correlations between genomes of family members. Results: In this article, using two real-life genomic datasets, we show that exploiting the correlation between the dataset participants results in significant information leak from differentially private results of complex queries. We formulate this as an attribute inference attack and show the privacy loss in minor allele frequency (MAF) and chi-square queries. Our results show that using the results of differentially private MAF queries and utilizing the dependency between tuples, an adversary can reveal up to 50% more sensitive information about the genome of a target (compared to original privacy guarantees of standard DP-based mechanisms), while differentially private chi-square queries can reveal up to 40% more sensitive information. Furthermore, we show that the adversary can use the inferred genomic data obtained from the attribute inference attack to infer the membership of a target in another genomic dataset (e.g. associated with a sensitive trait). Using a log-likelihood-ratio test, our results also show that the inference power of the adversary can be significantly high in such an attack even using inferred (and hence partially incorrect) genomes. Availability and implementation: https://github.com/nourmadhoun/Inference-Attacks-Differential-Privacy
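Illustrative note: the MAF queries analyzed above are typically answered under standard differential privacy by perturbing the allele count with Laplace noise. A minimal, generic sketch of such a query follows; the epsilon value, the assumed sensitivity of 2 per individual, and the toy cohort are illustrative choices, and the dependent-tuple inference itself is not modeled here.

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_maf(genotypes, epsilon):
    """Differentially private minor allele frequency over 0/1/2 genotype counts.

    One individual changes the allele count by at most 2 (assumed sensitivity),
    so the count is perturbed with Laplace(2 / epsilon) noise before normalising.
    """
    n = len(genotypes)
    noisy_count = sum(genotypes) + laplace_noise(2.0 / epsilon)
    return max(0.0, min(1.0, noisy_count / (2 * n)))

if __name__ == "__main__":
    # Toy cohort: number of minor-allele copies (0, 1 or 2) per participant.
    cohort = [0, 1, 2, 0, 1, 0, 0, 2, 1, 0]
    print("True MAF:", sum(cohort) / (2 * len(cohort)))
    print("DP MAF  :", round(dp_maf(cohort, epsilon=1.0), 3))
```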
APA, Harvard, Vancouver, ISO, and other styles
3

Mohammed Yakubu, Abukari, and Yi-Ping Phoebe Chen. "Ensuring privacy and security of genomic data and functionalities." Briefings in Bioinformatics 21, no. 2 (February 12, 2019): 511–26. http://dx.doi.org/10.1093/bib/bbz013.

Full text
Abstract:
In recent times, the reduced cost of DNA sequencing has resulted in a plethora of genomic data that is being used to advance biomedical research and improve clinical procedures and healthcare delivery. These advances are revolutionizing areas in genome-wide association studies (GWASs), diagnostic testing, personalized medicine and drug discovery. This, however, comes with security and privacy challenges as the human genome is sensitive in nature and uniquely identifies an individual. In this article, we discuss the genome privacy problem and review relevant privacy attacks, classified into identity tracing, attribute disclosure and completion attacks, which have been used to breach the privacy of an individual. We then classify state-of-the-art genomic privacy-preserving solutions based on their application and computational domains (genomic aggregation, GWASs and statistical analysis, sequence comparison and genetic testing) that have been proposed to mitigate these attacks and compare them in terms of their underlying cryptographic primitives, security goals and complexities—computation and transmission overheads. Finally, we identify and discuss the open issues, research challenges and future directions in the field of genomic privacy. We believe this article will provide researchers with the current trends and insights on the importance and challenges of privacy and security issues in the area of genomics.
APA, Harvard, Vancouver, ISO, and other styles
4

Raisaro, Jean Louis, Florian Tramèr, Zhanglong Ji, Diyue Bu, Yongan Zhao, Knox Carey, David Lloyd, et al. "Addressing Beacon re-identification attacks: quantification and mitigation of privacy risks." Journal of the American Medical Informatics Association 24, no. 4 (February 20, 2017): 799–805. http://dx.doi.org/10.1093/jamia/ocw167.

Full text
Abstract:
The Global Alliance for Genomics and Health (GA4GH) created the Beacon Project as a means of testing the willingness of data holders to share genetic data in the simplest technical context—a query for the presence of a specified nucleotide at a given position within a chromosome. Each participating site (or “beacon”) is responsible for assuring that genomic data are exposed through the Beacon service only with the permission of the individual to whom the data pertains and in accordance with the GA4GH policy and standards. While recognizing the inference risks associated with large-scale data aggregation, and the fact that some beacons contain sensitive phenotypic associations that increase privacy risk, the GA4GH adjudged the risk of re-identification based on the binary yes/no allele-presence query responses as acceptable. However, recent work demonstrated that, given a beacon with specific characteristics (including relatively small sample size and an adversary who possesses an individual’s whole genome sequence), the individual’s membership in a beacon can be inferred through repeated queries for variants present in the individual’s genome. In this paper, we propose three practical strategies for reducing re-identification risks in beacons. The first two strategies manipulate the beacon such that the presence of rare alleles is obscured; the third strategy budgets the number of accesses per user for each individual genome. Using a beacon containing data from the 1000 Genomes Project, we demonstrate that the proposed strategies can effectively reduce re-identification risk in beacon-like datasets.
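Illustrative note: of the three mitigation strategies summarized above, the per-user query budget is the simplest to sketch. The toy version below charges more budget for queries on rare (more identifying) alleles and refuses to answer once the budget is spent; the budget values, costs, and rarity threshold are invented and do not reflect the paper's risk model.

```python
class BudgetedBeacon:
    """Toy beacon that charges a per-user budget for informative (rare-allele) queries.

    A simplified illustration of the 'query budget' mitigation, not the
    risk-weighted budgeting scheme evaluated in the paper.
    """

    def __init__(self, genomes, allele_freqs, budget=3.0, rare_threshold=0.05):
        self.genomes = genomes              # list of sets of variant identifiers
        self.allele_freqs = allele_freqs    # variant -> population frequency
        self.budgets = {}                   # user -> remaining budget
        self.default_budget = budget
        self.rare_threshold = rare_threshold

    def query(self, user, variant):
        budget = self.budgets.setdefault(user, self.default_budget)
        # Rare alleles are the informative ones for re-identification, so they cost more.
        cost = 1.0 if self.allele_freqs.get(variant, 0.0) < self.rare_threshold else 0.1
        if budget < cost:
            return None  # refuse to answer: budget exhausted
        self.budgets[user] = budget - cost
        return any(variant in g for g in self.genomes)

if __name__ == "__main__":
    beacon = BudgetedBeacon(
        genomes=[{"rs1", "rs9"}, {"rs2"}, {"rs1", "rs3"}],
        allele_freqs={"rs1": 0.30, "rs2": 0.25, "rs3": 0.01, "rs9": 0.002},
    )
    for v in ["rs1", "rs9", "rs3", "rs9", "rs3"]:
        print(v, "->", beacon.query("attacker", v))
```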
APA, Harvard, Vancouver, ISO, and other styles
5

Aziz, Md Momin Al, Shahin Kamali, Noman Mohammed, and Xiaoqian Jiang. "Online Algorithm for Differentially Private Genome-wide Association Studies." ACM Transactions on Computing for Healthcare 2, no. 2 (March 2021): 1–27. http://dx.doi.org/10.1145/3431504.

Full text
Abstract:
Digitization of healthcare records has contributed to a large volume of functional scientific data that can help researchers to understand the behaviour of many diseases. However, the privacy implications of this data, particularly genomics data, have surfaced recently as the collection, dissemination, and analysis of human genomics data is highly sensitive. There have been multiple privacy attacks relying on the uniqueness of the human genome that reveal a participant's or a certain group's presence in a dataset. Therefore, current data sharing policies have ruled out any public dissemination and adopted precautionary measures prior to genomics data release, which hinders timely scientific innovation. In this article, we investigate an approach that only releases the statistics from genomic data rather than the whole dataset and propose a generalized Differentially Private mechanism for Genome-wide Association Studies (GWAS). Our method provides a quantifiable privacy guarantee that adds noise to the intermediate outputs but ensures satisfactory accuracy of the private results. Furthermore, the proposed method offers multiple adjustable parameters that the data owners can set based on the optimal privacy requirements. These variables are presented as equalizers that balance the privacy and utility of the GWAS. The method also incorporates an Online Bin Packing technique [1], which further bounds the privacy loss linearly, growing with the number of open bins and scaling with the incoming queries. Finally, we implemented and benchmarked our approach using seven different GWAS studies to test the performance of the proposed methods. The experimental results demonstrate that for 1,000 arbitrary online queries, our algorithms are more than 80% accurate with reasonable privacy loss and exceed the state-of-the-art approaches on multiple studies (i.e., EigenStrat, LMM, TDT).
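Illustrative note: the central difficulty described above is keeping cumulative privacy loss bounded while GWAS queries arrive online. As a loose illustration only, the sketch below uses naive additive (sequential-composition) accounting and refuses queries once the budget runs out; the authors' mechanism replaces this with an online bin-packing accountant, and the statistics, sensitivity, and epsilon values here are placeholders.

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) noise as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

class OnlineDPAccountant:
    """Naive sequential-composition accountant for online GWAS-style queries.

    Each answered query spends its epsilon; once the budget is gone, further
    queries are refused rather than answered with a weakened guarantee.
    """

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def answer(self, true_value, sensitivity, epsilon):
        if epsilon > self.remaining:
            return None  # budget exhausted
        self.remaining -= epsilon
        return true_value + laplace_noise(sensitivity / epsilon)

if __name__ == "__main__":
    accountant = OnlineDPAccountant(total_epsilon=1.0)
    # Pretend these are chi-square statistics for arriving SNP queries;
    # the sensitivity value is an illustrative placeholder.
    for snp, stat in [("rs10", 8.2), ("rs11", 1.4), ("rs12", 5.9), ("rs13", 3.3)]:
        noisy = accountant.answer(stat, sensitivity=4.0, epsilon=0.3)
        print(snp, "->", "refused" if noisy is None else round(noisy, 2))
```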
APA, Harvard, Vancouver, ISO, and other styles
6

Öksüz, Abdullah Çağlar, Erman Ayday, and Uğur Güdükbay. "Privacy-preserving and robust watermarking on sequential genome data using belief propagation and local differential privacy." Bioinformatics 37, no. 17 (February 25, 2021): 2668–74. http://dx.doi.org/10.1093/bioinformatics/btab128.

Full text
Abstract:
Motivation: Genome data has been a subject of study for both biology and computer science since the start of the Human Genome Project in 1990. Since then, genome sequencing for medical and social purposes has become more and more available and affordable. Genome data can be shared on public websites or with service providers (SPs). However, this sharing compromises the privacy of donors even under partial sharing conditions. We mainly focus on the liability aspect ensued by the unauthorized sharing of these genome data. One of the techniques to address the liability issues in data sharing is the watermarking mechanism. Results: To detect malicious correspondents and SPs whose aim is to share genome data without individuals' consent and undetected, we propose a novel watermarking method on sequential genome data using the belief propagation algorithm. In our method, we have two criteria to satisfy: (i) embedding robust watermarks so that malicious adversaries cannot tamper with the watermark by modification and are identified with high probability; (ii) achieving ϵ-local differential privacy in all data sharings with SPs. For the preservation of system robustness against single-SP and collusion attacks, we consider publicly available genomic information like Minor Allele Frequency, Linkage Disequilibrium, Phenotype Information and Familial Information. Our proposed scheme achieves a 100% detection rate against single-SP attacks with only 3% watermark length. For the worst-case scenario of collusion attacks (50% of SPs are malicious), 80% detection is achieved with 5% watermark length and 90% detection is achieved with 10% watermark length. For all cases, the impact of ϵ on precision remained negligible and high privacy is ensured. Availability and implementation: https://github.com/acoksuz/PPRW_SGD_BPLDP. Supplementary information: Supplementary data are available at Bioinformatics online.
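Illustrative note: one of the two goals above is ϵ-local differential privacy for each copy shared with a service provider. A standard way to obtain LDP on categorical values is generalized randomized response; the sketch below applies it to 0/1/2 genotype states purely as a generic illustration, not as the paper's belief-propagation watermarking scheme.

```python
import math
import random

def randomized_response(value, epsilon, domain=(0, 1, 2)):
    """Generalized randomized response over a small categorical domain.

    The true genotype is kept with probability e^eps / (e^eps + k - 1) and
    otherwise replaced by a uniformly random other value, which satisfies
    epsilon-local differential privacy for the reported state.
    """
    k = len(domain)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return value
    return random.choice([v for v in domain if v != value])

if __name__ == "__main__":
    random.seed(7)
    genome = [0, 1, 2, 0, 0, 1, 2, 2, 1, 0]               # true genotype states
    shared = [randomized_response(g, epsilon=2.0) for g in genome]
    changed = sum(1 for a, b in zip(genome, shared) if a != b)
    print("shared copy:", shared)
    print("positions perturbed:", changed, "of", len(genome))
```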
APA, Harvard, Vancouver, ISO, and other styles
7

Ayoz, Kerem, Miray Aysen, Erman Ayday, and A. Ercument Cicek. "The effect of kinship in re-identification attacks against genomic data sharing beacons." Bioinformatics 36, Supplement_2 (December 2020): i903–i910. http://dx.doi.org/10.1093/bioinformatics/btaa821.

Full text
Abstract:
Motivation: The big data era in genomics promises a breakthrough in medicine, but sharing data in a private manner limits the pace of the field. The widely accepted 'genomic data sharing beacon' protocol provides a standardized and secure interface for querying genomic datasets. The data are only shared if the desired information (e.g. a certain variant) exists in the dataset. Various studies showed that beacons are vulnerable to re-identification (or membership inference) attacks. As beacons are generally associated with sensitive phenotype information, re-identification creates a significant risk for the participants. Unfortunately, proposed countermeasures against such attacks have failed to be effective, as they do not consider the utility of the beacon protocol. Results: In this study, for the first time, we analyze the mitigation effect of the kinship relationships among beacon participants against re-identification attacks. We argue that having multiple family members in a beacon can garble the information for attacks since a substantial number of variants are shared among kin-related people. Using family genomes from HapMap and synthetically generated datasets, we show that having one of the parents of a victim in the beacon causes (i) a significant decrease in the power of attacks and (ii) a substantial increase in the number of queries needed to confirm an individual's beacon membership. We also show how the protection effect attenuates when more distant relatives, such as grandparents, are included alongside the victim. Furthermore, we quantify the utility loss due to adding relatives and show that it is smaller compared with flipping-based techniques.
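Illustrative note: the re-identification attacks discussed here build on a likelihood-ratio membership test over the beacon's yes/no answers. A heavily simplified version of such a statistic is sketched below: it compares the probability of each answer when the victim is in the beacon versus when only the other participants could explain it. The allele frequencies, beacon size, and mismatch rate are invented, and the paper's kinship analysis is not modeled.

```python
import math

def membership_llr(responses, freqs, beacon_size, mismatch=0.01):
    """Simplified log-likelihood-ratio membership statistic for a beacon.

    responses : beacon yes/no answers to queries for the victim's alleles
    freqs     : population frequency of each queried allele
    mismatch  : small probability the victim's copy is absent despite membership
    A large positive statistic favours 'victim is in the beacon'.
    """
    llr = 0.0
    for yes, f in zip(responses, freqs):
        p_yes_out = 1.0 - (1.0 - f) ** (2 * beacon_size)   # someone else carries it
        p_yes_in = 1.0 - mismatch * (1.0 - p_yes_out)      # victim (almost) guarantees a yes
        if yes:
            llr += math.log(p_yes_in / p_yes_out)
        else:
            llr += math.log((1.0 - p_yes_in) / (1.0 - p_yes_out))
    return llr

if __name__ == "__main__":
    # Rare alleles answered 'yes' push the statistic up sharply.
    answers = [True, True, False, True]
    frequencies = [0.01, 0.005, 0.02, 0.001]
    print("LLR:", round(membership_llr(answers, frequencies, beacon_size=50), 2))
```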
APA, Harvard, Vancouver, ISO, and other styles
8

Humbert, Mathias, Kévin Huguenin, Joachim Hugonot, Erman Ayday, and Jean-Pierre Hubaux. "De-anonymizing Genomic Databases Using Phenotypic Traits." Proceedings on Privacy Enhancing Technologies 2015, no. 2 (June 1, 2015): 99–114. http://dx.doi.org/10.1515/popets-2015-0020.

Full text
Abstract:
People increasingly have their genomes sequenced and some of them share their genomic data online. They do so for various purposes, including to find relatives and to help advance genomic research. An individual’s genome carries very sensitive, private information such as its owner’s susceptibility to diseases, which could be used for discrimination. Therefore, genomic databases are often anonymized. However, an individual’s genotype is also linked to visible phenotypic traits, such as eye or hair color, which can be used to re-identify users in anonymized public genomic databases, thus raising severe privacy issues. For instance, an adversary can identify a target’s genome using her known phenotypic traits and subsequently infer her susceptibility to Alzheimer’s disease. In this paper, we quantify, based on various phenotypic traits, the extent of this threat in several scenarios by implementing de-anonymization attacks on a genomic database of OpenSNP users sequenced by 23andMe. Our experimental results show that the proportion of correct matches reaches 23% with a supervised approach in a database of 50 participants. Our approach outperforms the baseline by a factor of four, in terms of the proportion of correct matches, in most scenarios. We also evaluate the adversary’s ability to predict individuals’ predisposition to Alzheimer’s disease, and we observe that the inference error can be halved compared to the baseline. We also analyze the effect of the number of known phenotypic traits on the success rate of the attack. As progress is made in genomic research, especially for genotype-phenotype associations, the threat presented in this paper will become more serious.
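Illustrative note: at its core, the de-anonymization attack above ranks the genomes in an anonymized database by how well they explain the victim's visible traits. The sketch below scores records with made-up genotype-to-trait probability tables; the trait models, SNP choices, and record names are hypothetical, whereas the paper learns such associations from real genotype-phenotype data.

```python
# Hypothetical genotype -> P(trait observed) tables; a real attack would learn
# these from published genotype-phenotype associations.
TRAIT_MODELS = {
    "blue_eyes": {0: 0.05, 1: 0.30, 2: 0.85},   # probability of blue eyes per genotype
    "red_hair":  {0: 0.01, 1: 0.10, 2: 0.80},
}

def match_score(observed_traits, genome):
    """Product of P(observed trait | genotype at the relevant SNP) over all traits."""
    score = 1.0
    for trait, present in observed_traits.items():
        p = TRAIT_MODELS[trait][genome[trait]]
        score *= p if present else (1.0 - p)
    return score

def deanonymize(observed_traits, database):
    """Rank anonymized genomes by how well they explain the victim's visible traits."""
    scored = [(match_score(observed_traits, g), name) for name, g in database.items()]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    # Anonymized records: copies (0/1/2) of the trait-associated allele per trait SNP.
    db = {
        "record_A": {"blue_eyes": 2, "red_hair": 0},
        "record_B": {"blue_eyes": 0, "red_hair": 2},
        "record_C": {"blue_eyes": 1, "red_hair": 1},
    }
    victim = {"blue_eyes": True, "red_hair": False}   # traits visible to the adversary
    for score, name in deanonymize(victim, db):
        print(name, round(score, 4))
```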
APA, Harvard, Vancouver, ISO, and other styles
9

Asgiani, Piping, Chriswardani Suryawati, and Farid Agushybana. "A literature review: Security Aspects in the Implementation of Electronic Medical Records in Hospitals." MEDIA ILMU KESEHATAN 10, no. 2 (January 29, 2022): 161–66. http://dx.doi.org/10.30989/mik.v10i2.561.

Full text
Abstract:
Background: Electronic medical records contain complete, integrated, and up-to-date patient health data, and because they combine clinical and genomic data they pose a great risk of data disclosure. The priority for privacy is therefore data security, so that data will not leak to other parties. Cyber attacks can be suppressed by strengthening cybersecurity, namely by conducting regular evaluation and testing of security levels. Objectives: To determine the security techniques that maintain the privacy of electronic medical records. Methods: This study uses a literature review method. Results: Data security techniques are determined by each type of health service. Techniques that can be applied include cryptographic methods, firewalls, access control, and other security measures. These methods have proven to be very promising and successful techniques for safeguarding the privacy and security of electronic medical records. Conclusion: Patient medical records are very private and sensitive because they store all data about complaints, diagnoses, disease histories, actions, and treatments, so the information contained therein must be kept confidential, and the hospital, as the medical record manager, is required to apply data security techniques that protect patient privacy.
APA, Harvard, Vancouver, ISO, and other styles
10

Narayan, Ashwin. "Current regulations will not protect patient privacy in the age of machine learning." MIT Science Policy Review 1 (August 20, 2020): 3–9. http://dx.doi.org/10.38105/spr.ax4o7jkyr3.

Full text
Abstract:
Machine learning (ML) has shown great promise in advancing health outcomes by parsing ever more effectively through massive clinical and genomic datasets. These advances are tempered by fears that they come at the cost of privacy. Since data relating to health are particularly sensitive because of immutability and comprehensiveness, these privacy concerns must be seriously addressed. We consider examples (the Golden State Killer, the Personal Genome Project, and the rise of wearable fitness trackers) where the tension between technological progress and lost privacy is already apparent. We discuss, in light of ML capabilities, the current state of privacy regulation in healthcare. We note the Constitutional right to privacy does not yet in general protect voluntary disclosures of data; HIPAA, the current law regulating healthcare data in the US, does not apply to the burgeoning field of healthcare-adjacent companies and organizations collecting health data; and access controls remain subject to re-identification attacks. We then discuss the active research in algorithmic paradigms for privacy, highlighting their promise but also their limitations. In order to encourage technological progress, reframing privacy for the age of ML might involve extending the Constitutional right to privacy, extending the applicability of HIPAA, and/or enforcing transparent privacy policies.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Privacy attacks on genomic data"

1

Shang, Hui. "Privacy Preserving Kin Genomic Data Publishing." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1594835227299524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Maouche, Mohamed. "Protection against re-identification attacks in location privacy." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI089.

Full text
Abstract:
With the wide propagation of handheld devices, more and more mobile sensors are being used by end users on a daily basis. Those sensors can be leveraged to gather useful mobility data for city planners, business analysts and researchers. However, gathering and exploiting mobility data raises many privacy threats. Sensitive information such as one’s home or workplace, hobbies, religious beliefs, or political or sexual preferences can be inferred from the gathered data. In the last decade, Location Privacy Protection Mechanisms (LPPMs) have been proposed to protect user data privacy. They alter mobility data to enforce formal guarantees (e.g., k-anonymity or differential privacy), hide sensitive information (e.g., erase points of interest) or act as countermeasures against particular attacks. In this thesis, we focus on the threat of re-identification, which aims at re-linking an anonymous mobility trace to the known past mobility of its user. First, we propose re-identification attacks (AP-Attack and ILL-Attack) that find vulnerabilities in and stress current state-of-the-art LPPMs to quantify their effectiveness. We also propose a new protection mechanism, HMC, that uses heat maps to guide the transformation of mobility data so as to change the behaviour of a user, making her look similar to someone else rather than to her past self, which protects her from re-identification attacks. This alteration of the mobility trace is constrained by utility measures on the data in order to minimize the distortion in the quality of the analyses performed on it.
APA, Harvard, Vancouver, ISO, and other styles
3

Nuñez, del Prado Cortez Miguel. "Inference attacks on geolocated data." Thesis, Toulouse, INSA, 2013. http://www.theses.fr/2013ISAT0028/document.

Full text
Abstract:
In recent years, we have observed the development of connected and nomadic devices such as smartphones, tablets and even laptops, allowing individuals to use location-based services (LBSs), which personalize the service they offer according to the positions of users, on a daily basis. Nonetheless, LBSs raise serious privacy issues, which are often not perceived by the end users. In this thesis, we are interested in understanding the privacy risks related to the dissemination and collection of location data. To address this issue, we developed inference attacks such as the extraction of points of interest (POIs) and their semantics, the prediction of the next location, as well as the de-anonymization of mobility traces, based on a mobility model that we have coined the mobility Markov chain. Afterwards, we propose a classification of inference attacks in the context of location data based on the objectives of the adversary. In addition, we evaluated the effectiveness of some sanitization measures in limiting the efficiency of inference attacks. Finally, we have developed a generic platform called GEPETO (for GEoPrivacy Enhancing Toolkit) that can be used to test the developed inference attacks.
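Illustrative note: the mobility Markov chain mentioned above reduces to transition probabilities between a user's points of interest. The sketch below builds such a first-order chain from a toy trace and predicts the most likely next location; it is a generic illustration on invented data, not the GEPETO implementation.

```python
from collections import defaultdict

def build_mobility_markov_chain(trace):
    """Estimate first-order transition probabilities between points of interest."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(trace, trace[1:]):
        counts[current][nxt] += 1
    chain = {}
    for place, nexts in counts.items():
        total = sum(nexts.values())
        chain[place] = {nxt: c / total for nxt, c in nexts.items()}
    return chain

def predict_next(chain, current):
    """Most likely next point of interest given the current one."""
    if current not in chain:
        return None
    return max(chain[current], key=chain[current].get)

if __name__ == "__main__":
    # Toy trace of daily points of interest for one user.
    trace = ["home", "work", "gym", "home", "work", "home", "work", "gym", "home"]
    mmc = build_mobility_markov_chain(trace)
    print("P(next | work):", mmc["work"])
    print("Predicted after 'work':", predict_next(mmc, "work"))
```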
APA, Harvard, Vancouver, ISO, and other styles
4

Chini, Foroushan Amir Hossein. "Protecting Location-Data Against Inference Attacks Using Pre-Defined Personas." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-66792.

Full text
Abstract:
Usage of location data is getting more popular day by day. Location-aware applications, context-aware applications and ubiquitous applications are some of the major categories of applications that are based on location data. One of the most concerning issues regarding such applications is how to protect users' privacy against malicious attackers. Failing in this task would result in a total failure for the project, considering how privacy concerns are getting more and more important for the end users. In this project, we propose a theoretical solution for protecting user privacy in location-based applications against inference attacks. Our solution is based on categorizing target users into pre-defined groups (a.k.a. personas) and utilizing their common characteristics in order to synthesize access control rules for the collected data.
APA, Harvard, Vancouver, ISO, and other styles
5

Miracle, Jacob M. "De-Anonymization Attack Anatomy and Analysis of Ohio Nursing Workforce Data Anonymization." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1482825210051101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Wenhai. "Towards Secure Outsourced Data Services in the Public Cloud." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84396.

Full text
Abstract:
The past few years have witnessed a dramatic shift for IT infrastructures from a self-sustained model to a centralized and multi-tenant elastic computing paradigm -- Cloud Computing, which significantly reshapes the landscape of existing data utilization services. In truth, public cloud service providers (CSPs), e.g. Google and Amazon, offer us unprecedented benefits, such as ubiquitous and flexible access, considerable capital expenditure savings and on-demand resource allocation. The cloud has also become the virtual "brain" that supports and propels many important applications and system designs, for example, artificial intelligence, the Internet of Things, and so forth; on the flip side, security and privacy are among the primary concerns with the adoption of cloud-based data services, in that the user loses control of her/his outsourced data. Encrypting the sensitive user information certainly ensures confidentiality. However, encryption places an extra layer of ambiguity, and its direct use may be at odds with practical requirements and defeat the purpose of cloud computing technology. We believe that security in nature should not be in contravention of the cloud outsourcing model. Rather, it is expected to complement the current achievements to further fuel the wide adoption of the public cloud service. This, in turn, requires us not to decouple them from the very beginning of the system design. Drawing on the successes and failures of both academia and industry, we attempt to answer the challenges of realizing efficient and useful secure data services in the public cloud. In particular, we pay attention to security and privacy in two essential functions of the cloud "brain", i.e. data storage and processing. Our first work centers on the secure chunk-based deduplication of encrypted data for cloud backup and achieves performance comparable to plaintext cloud storage deduplication while effectively mitigating the information leakage from low-entropy chunks. On the other hand, we comprehensively study the promising yet challenging issue of search over encrypted data in the cloud environment, which allows a user to delegate her/his search task to a CSP server that hosts a collection of encrypted files while still guaranteeing some measure of query privacy. In order to accomplish this grand vision, we explore both software-based secure computation research that often relies on cryptography and concentrates on algorithmic design and theoretical proof, and trusted execution solutions that depend on hardware-based isolation and trusted computing. Hopefully, through the lens of our efforts, insights can be furnished into future research in the related areas.
Ph.D.
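Illustrative note: the storage half of this dissertation concerns deduplicating encrypted backups. A standard building block for that is message-locked (convergent) encryption, where the key is derived from the chunk itself so identical chunks produce identical ciphertexts and can be deduplicated server-side. The sketch below shows only that property, using a toy keystream cipher that is not secure and is not the dissertation's leakage-mitigation scheme.

```python
import hashlib

def convergent_key(chunk: bytes) -> bytes:
    """Message-locked key: derived deterministically from the chunk content."""
    return hashlib.sha256(chunk).digest()

def toy_encrypt(chunk: bytes, key: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream. Illustration only: NOT a secure cipher."""
    keystream = b""
    counter = 0
    while len(keystream) < len(chunk):
        keystream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(c ^ k for c, k in zip(chunk, keystream))

class DedupStore:
    """Server-side store that deduplicates ciphertexts by their hash."""

    def __init__(self):
        self.blobs = {}

    def put(self, ciphertext: bytes) -> str:
        blob_id = hashlib.sha256(ciphertext).hexdigest()
        self.blobs.setdefault(blob_id, ciphertext)   # identical chunks stored once
        return blob_id

if __name__ == "__main__":
    store = DedupStore()
    chunk = b"the same backup chunk from two different users"
    ids = [store.put(toy_encrypt(chunk, convergent_key(chunk))) for _ in range(2)]
    print("same blob id for both users:", ids[0] == ids[1])
    print("blobs actually stored:", len(store.blobs))
```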
APA, Harvard, Vancouver, ISO, and other styles
7

Sharma, Sagar. "Towards Data and Model Confidentiality in Outsourced Machine Learning." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567529092809275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Carlander-Reuterfelt, Gallo Matias. "Estimating human resilience to social engineering attacks through computer configuration data : A literature study on the state of social engineering vulnerabilities." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277921.

Full text
Abstract:
Social engineering as a method of attack is increasingly becoming a problem for both corporations and individuals. From identity theft to enormous financial losses, this form of attack is notorious for affecting complex structures, yet it is often very simple in its form. Whereas for other forms of cyber-attack, tools like antivirus and antimalware are now industry standard and have proven to be reliable ways to keep private and confidential data safe, there is no such equivalent for social engineering attacks. There is not, as of this day, a trustworthy and precise way of estimating resilience to these attacks while still keeping private data private. The purpose of this report is to compile the different aspects of a user's computer data that have been proven to be significantly indicative of their susceptibility to these kinds of attacks, and with them devise a system that can, with some degree of precision, estimate the user's resilience to social engineering. This report is a literature study on the topic of social engineering and how it relates to computer program data, configuration and personality. The different phases of research each led to a more comprehensive way of linking the different pieces of data together and devising a rudimentary way of estimating human resilience to social engineering through the observation of a few configuration aspects. For the purposes of this report, the data had to be reasonably accessible, respect privacy, and be something that can be easily extrapolated from one user to another. Based on findings ranging from psychological data and behavioral patterns to network configurations, we conclude that, even though there is data that supports the possibility of estimating resilience, there is, as of this day, no empirically proven way of doing so in a precise manner. An estimation model is provided at the end of the report, but the limitations of this project did not allow for an experiment to prove its validity beyond the theories it is based upon.
APA, Harvard, Vancouver, ISO, and other styles
9

Wilson, Aponte Natalia. "La protección de la intimidad y de la autonomía en relación con los datos genómicos." Doctoral thesis, Universitat de Girona, 2020. http://hdl.handle.net/10803/672234.

Full text
Abstract:
The automated processing of genomic data can cause serious problems, both to data subjects and to third parties, especially regarding the infringement of the fundamental rights to privacy and to informational self-determination. At the same time, the processing of genomic data of the members of certain communities may clash with the interests of the respective community and cause harm, both to the members of that community and to the community itself. In this thesis, the risks related to the misuse of genomic data are examined, taking into account that it is necessary to find a balance between the benefits evidenced by genetic research and the need to prevent these risks through the development of certain protection measures. Likewise, reference is made to the protection mechanisms of an administrative and civil nature available to the victim of personal data processing when the corresponding legal regime has been violated or damage has been caused.
Programa de Doctorat Interuniversitari en Dret, Economia i Empresa
APA, Harvard, Vancouver, ISO, and other styles
10

"DeRef: a privacy-preserving defense mechanism against request forgery attacks." 2011. http://library.cuhk.edu.hk/record=b5894609.

Full text
Author: Fung, Siu Yuen.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 58-63). Abstracts in English and Chinese.
Contents: Introduction; Background and Related Work (Request Forgery Attacks; Current Defense Approaches; Lessons Learned); Design of DeRef (Threat Model; Fine-Grained Access Control; Two-Phase Privacy-Preserving Checking; Putting It All Together; Implementation); Deployment Case Studies (WordPress; Joomla! and Drupal); Evaluation (Performance Overhead of DeRef in Real Deployment; Performance Overhead of DeRef with Various Configurations); Conclusions.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Privacy attacks on genomic data"

1

California. Legislature. Senate. Committee on Privacy. Informational hearing: Privacy vs. security : the increase [sic] tension between privacy and security issues as a result of the September 11th terrorist attack. Sacramento, Calif. [1020 N St., B-53, Sacramento 95814]: Senate Publications, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

United States. Congress. Senate. Committee on Commerce, Science, and Transportation. Protecting personal consumer information from cyber attacks and data breaches: Hearing before the Committee on Commerce, Science, and Transportation, United States Senate, One Hundred Thirteenth Congress, second session, March 26, 2014. Washington: U.S. Government Publishing Office, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

California. Legislature. Senate. Committee on Privacy. Informational hearing: Recent hacking of state employee records at the Teale Data Center. Sacramento, CA. [1020 N St., B-53, Sacramento 95814]: Senate Publications, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hallinan, Dara. Protecting Genetic Privacy in Biobanking through Data Protection Law. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192896476.001.0001.

Full text
Abstract:
Biobanks are critical infrastructure for medical research. Biobanks, however, are also the subject of considerable ethical and legal uncertainty. Given that biobanks process large quantities of genomic data, questions have emerged as to how genetic privacy should be protected. What types of genetic privacy rights and rights holders should be protected and to what extent? Since 25 May 2018, the General Data Protection Regulation (GDPR) has applied and now occupies a key position in the European legal framework for the regulation of biobanking. This book takes an in-depth look at the function, problems, and opportunities presented by European data protection law under the GDPR as a framework for the protection of genetic privacy in biobanking. It argues that the substantive framework presented by the GDPR already offers an admirable baseline level of protection for the range of genetic privacy rights engaged by biobanking. The book further contends that while numerous problems with this standard of protection are indeed identifiable, the GDPR offers the flexibility to accommodate solutions to these problems, as well as the procedural mechanisms to realise these solutions.
APA, Harvard, Vancouver, ISO, and other styles
5

Regan, Priscilla M. Global Privacy Issues. Oxford University Press, 2017. http://dx.doi.org/10.1093/acrefore/9780190846626.013.205.

Full text
Abstract:
Despite cultural differences, privacy tends to be rather universally viewed as important in protecting some realms of life that are seen as off limits to society more generally. Yet privacy has also been the cause of significant global issues over the years. In the late 1960s and early 1970s, government agencies and private sector organizations increasingly adopted computers to maintain records, precipitating a concern with the rights of the individuals who were subjects of that data and with the responsibilities of the organizations processing the information. During the 1980s, international and regional bodies recognized that domestic laws could affect the flow of personal information into and out of a country, bringing scholarly and policy attention to the issue of transborder data flows. Somewhat paralleling the principally business dominated debate and analyses over transborder data flows was a broader discussion about privacy issues resulting from global communication and information systems, particularly the internet, during the 1990s. The focus in policy and scholarship was less on variations in national laws and more on two features of networked communication systems: first, the technical infrastructure supporting the flow of information; and second, the globalization of communication systems and information flows. Later on, the privacy landscape and discourse changed dramatically throughout the world after the terrorist attacks in the US on September 11, 2001. Concerns about privacy and civil liberties were trumped by concerns about security and identifying possible terrorists.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Privacy attacks on genomic data"

1

Humbert, Mathias, Erman Ayday, Jean-Pierre Hubaux, and Amalio Telenti. "On Non-cooperative Genomic Privacy." In Financial Cryptography and Data Security, 407–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-47854-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ayday, Erman. "Cryptographic Solutions for Genomic Privacy." In Financial Cryptography and Data Security, 328–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-53357-4_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ayday, Erman, and Jean-Pierre Hubaux. "Threats and Solutions for Genomic Data Privacy." In Medical Data Privacy Handbook, 463–92. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23633-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shen, Hong, and Jian Ma. "Privacy Challenges of Genomic Big Data." In Healthcare and Big Data Management, 139–48. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6041-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ebrahimi, Maryam, Ahmed J. Obaid, and Kamran Yeganegi. "Protecting Cloud Data Privacy Against Attacks." In Learning and Analytics in Intelligent Systems, 421–34. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-65407-8_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Qu, Youyang, Mohammad Reza Nosouhi, Lei Cui, and Shui Yu. "Leading Attacks in Privacy Protection Domain." In Personalized Privacy Protection in Big Data, 15–21. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3750-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ayday, Erman, Jean Louis Raisaro, Urs Hengartner, Adam Molyneaux, and Jean-Pierre Hubaux. "Privacy-Preserving Processing of Raw Genomic Data." In Data Privacy Management and Autonomous Spontaneous Security, 133–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54568-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Laouir, Ala Eddine, and Abdessamad Imine. "On Privacy of Multidimensional Data Against Aggregate Knowledge Attacks." In Privacy in Statistical Databases, 92–104. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13945-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shirazi, Hossein, Bruhadeshwar Bezawada, Indrakshi Ray, and Charles Anderson. "Adversarial Sampling Attacks Against Phishing Detection." In Data and Applications Security and Privacy XXXIII, 83–101. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-22479-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ge, Linqiang, Wei Yu, Paul Moulema, Guobin Xu, David Griffith, and Nada Golmie. "Detecting Data Integrity Attacks in Smart Grid." In Security and Privacy in Cyber-Physical Systems, 281–303. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119226079.ch14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Privacy attacks on genomic data"

1

Huang, Zhicong, Erman Ayday, Jacques Fellay, Jean-Pierre Hubaux, and Ari Juels. "GenoGuard: Protecting Genomic Data against Brute-Force Attacks." In 2015 IEEE Symposium on Security and Privacy (SP). IEEE, 2015. http://dx.doi.org/10.1109/sp.2015.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Goodrich, Michael T. "The Mastermind Attack on Genomic Data." In 2009 30th IEEE Symposium on Security and Privacy (SP). IEEE, 2009. http://dx.doi.org/10.1109/sp.2009.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Junjie, Wendy Hui Wang, and Xinghua Shi. "Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data." In Pacific Symposium on Biocomputing 2021. WORLD SCIENTIFIC, 2020. http://dx.doi.org/10.1142/9789811232701_0003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Naveed, Muhammad. "Hurdles for Genomic Data Usage Management." In 2014 IEEE Security and Privacy Workshops (SPW). IEEE, 2014. http://dx.doi.org/10.1109/spw.2014.44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Simmons, Sean, Bonnie Berger, and Cenk Sahinalp. "Protecting Genomic Data Privacy with Probabilistic Modeling." In Proceedings of the Pacific Symposium. WORLD SCIENTIFIC, 2018. http://dx.doi.org/10.1142/9789813279827_0037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Poon, Anna, Steve Jankly, and Tingting Chen. "Privacy Preserving Fisher’s Exact Test on Genomic Data." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622575.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shang, Hui, and Zaobo He. "Kin Genomic Data Inference Attacks Through Factor Graph." In 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). IEEE, 2019. http://dx.doi.org/10.1109/massw.2019.00010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yilmaz, Emre, Tianxi Ji, Erman Ayday, and Pan Li. "Genomic Data Sharing under Dependent Local Differential Privacy." In CODASPY '22: Twelveth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3508398.3511519.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Oprisanu, Bristena, Georgi Ganev, and Emiliano De Cristofaro. "On Utility and Privacy in Synthetic Genomic Data." In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2022. http://dx.doi.org/10.14722/ndss.2022.24092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gkountouna, O., K. Lepenioti, and M. Terrovitis. "Privacy against aggregate knowledge attacks." In 2013 IEEE 29th International Conference on Data Engineering Workshops (ICDEW 2013). IEEE, 2013. http://dx.doi.org/10.1109/icdew.2013.6547435.

Full text
APA, Harvard, Vancouver, ISO, and other styles