Dissertations / Theses on the topic 'Data privacy'
Consult the top 50 dissertations / theses for your research on the topic 'Data privacy.'
Zhang, Nan. "Privacy-preserving data mining." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1080.
Nguyen, Benjamin. "Privacy-Centric Data Management." Habilitation à diriger des recherches, Université de Versailles-Saint Quentin en Yvelines, 2013. http://tel.archives-ouvertes.fr/tel-00936130.
Lin, Zhenmin. "Privacy Preserving Distributed Data Mining." UKnowledge, 2012. http://uknowledge.uky.edu/cs_etds/9.
Aron, Yotam. "Information privacy for linked data." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85215.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 77-79).
As data mining over massive amounts of linked data becomes more and more prevalent in research applications, information privacy becomes a more important issue. This is especially true in the biological and medical fields, where information sensitivity is high. Previous experience has shown that simple anonymization techniques, such as removing an individual's name from a data set, are inadequate to fully protect the data's participants. While strong privacy guarantees have been studied for relational databases, these are virtually non-existent for graph-structured linked data. This line of research is important, however, since the aggregation of data across different web sources may lead to privacy leaks. The ontological structure of linked data especially aids these attacks on privacy. The purpose of this thesis is two-fold. The first is to investigate differential privacy, a strong privacy guarantee, and how to construct differentially-private mechanisms for linked data. The second involves the design and implementation of the SPARQL Privacy Insurance Module (SPIM). Using a combination of well-studied techniques, such as authentication and access control, and the mechanisms developed to maintain differential privacy over linked data, it attempts to limit privacy hazards for SPARQL queries. By using these privacy-preservation techniques, data owners may be more willing to share their data sets with other researchers without the fear that it will be misused. Consequently, we can expect greater sharing of information, which will foster collaboration and improve the types of data that researchers can have access to.
by Yotam Aron.
M. Eng.
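The differential privacy guarantee that Aron's thesis builds on can be illustrated with the classic Laplace mechanism for a count query. This is a generic textbook sketch, not the thesis's SPIM module, and the patient records used here are invented:

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one individual
    changes the result by at most 1), so adding Laplace(1/epsilon) noise
    to the true count yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5            # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon                # sensitivity / epsilon
    # Inverse-transform sampling from the Laplace(0, scale) distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical usage: release how many records match a sensitive predicate.
patients = [{"condition": "diabetes"}, {"condition": "asthma"}, {"condition": "diabetes"}]
noisy = dp_count(patients, lambda p: p["condition"] == "diabetes", epsilon=0.5)
```

Smaller epsilon values add more noise and give a stronger guarantee; repeated queries consume privacy budget, which is what modules like SPIM must track.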
Jawad, Mohamed. "Data privacy in P2P Systems." Nantes, 2011. http://www.theses.fr/2011NANT2020.
Online peer-to-peer (P2P) communities, such as professional ones (e.g., medical or research communities), are becoming popular due to increasing needs for data sharing. P2P environments offer valuable characteristics but limited guarantees when sharing sensitive data. They can be considered hostile because data can be accessed by everyone (including potentially malicious peers) and used for anything (e.g., for marketing or for activities against the owner's preferences or ethics). This thesis proposes a privacy service that allows sharing sensitive data in P2P systems while protecting their privacy. The first contribution is an analysis of existing techniques for data privacy in P2P architectures. The second contribution is a privacy model for P2P systems, named PriMod, which allows data owners to specify their privacy preferences in privacy policies and to associate them with their data. The third contribution is the development of PriServ, a privacy service located on top of DHT-based P2P systems, which implements PriMod to prevent data privacy violations. Among other techniques, PriServ uses trust techniques to predict peer behavior.
Foresti, S. "Preserving privacy in data outsourcing." Doctoral thesis, Università degli Studi di Milano, 2010. http://hdl.handle.net/2434/156360.
Livraga, G. "Preserving Privacy in Data Release." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/233324.
Loukides, Grigorios. "Data utility and privacy protection in data publishing." Thesis, Cardiff University, 2008. http://orca.cf.ac.uk/54743/.
Sobati Moghadam, Somayeh. "Contributions to Data Privacy in Cloud Data Warehouses." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2020.
Nowadays, data outsourcing scenarios are ever more common with the advent of cloud computing. Cloud computing appeals to businesses and organizations because of a wide variety of benefits, such as cost savings and service benefits. Moreover, cloud computing provides higher availability, scalability, and more effective disaster recovery than in-house operations. One of the most notable cloud outsourcing services is database outsourcing (Database-as-a-Service), where individuals and organizations outsource data storage and management to a Cloud Service Provider (CSP). Naturally, such services allow storing a data warehouse (DW) on a remote, untrusted CSP and running on-line analytical processing (OLAP). Although cloud data outsourcing brings many benefits, it also raises security and, in particular, privacy concerns. A typical solution to preserve data privacy is encrypting data locally before sending them to an external server. Secure database management systems use various encryption schemes, but they either induce computational and storage overhead or reveal some information about the data, which jeopardizes privacy. In this thesis, we propose a new secure secret splitting scheme (S4) inspired by Shamir's secret sharing. S4 implements an additive homomorphic scheme, i.e., additions can be computed directly over encrypted data. S4 addresses the shortcomings of existing approaches by reducing storage and computational overhead while still enforcing a reasonable level of privacy. S4 is efficient both in terms of storage and computing, which is ideal for data outsourcing scenarios in which the user has limited computation and storage resources. Experimental results confirm the efficiency of S4 in terms of computation and storage overhead with respect to existing solutions. Moreover, we also present new order-preserving schemes, order-preserving indexing (OPI) and wrap-around order-preserving indexing (waOPI), which are practical for cloud-outsourced DWs.
We focus on the problem of performing range and exact-match queries over encrypted data. In contrast to existing solutions, our schemes prevent an adversary from performing statistical and frequency analysis. While providing data privacy, the proposed schemes deliver good performance and require minimal changes to existing software.
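The additive homomorphic property that S4 enforces can be illustrated with plain additive secret splitting, a simplified stand-in rather than S4 itself (whose construction the abstract does not detail): a value is split into random shares that sum to it modulo a prime, and component-wise sums of shares decrypt to sums of values.

```python
import random

PRIME = 2**61 - 1  # all share arithmetic is modulo this prime

def split(value, n_shares):
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Additive homomorphism: adding shares component-wise adds the secrets,
# so an untrusted server can aggregate without seeing plaintext values.
a = split(120, 3)
b = split(80, 3)
total_shares = [(x + y) % PRIME for x, y in zip(a, b)]
print(reconstruct(total_shares))  # 200
```

Because each server holds only uniformly random-looking shares, an aggregate such as a sum can be computed without revealing any single record, which is the property that makes such schemes attractive for outsourced OLAP.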
Ma, Jianjie. "Learning from perturbed data for privacy-preserving data mining." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Summer2006/j%5Fma%5F080406.pdf.
Swapna, B., and R. VijayaPrakash. "Privacy Preserving Data Mining Operations without Disrupting Data Quality." International Journal of Computer Science and Network (IJCSN), 2012. http://hdl.handle.net/10150/271473.
Data mining operations help discover business intelligence from historical data. The extracted business intelligence, or actionable knowledge, helps in making well-informed decisions that lead to profit for the organization that makes use of it. While performing mining, data privacy has to be given the utmost importance. To achieve this, PPDM (Privacy Preserving Data Mining) came into existence, sanitizing the database so as to prevent the discovery of association rules. However, this leads to modification of the data and thus disrupts data quality. This paper proposes a new technique and algorithms that can perform privacy-preserving data mining operations while ensuring that data quality is not lost. The empirical results revealed that the proposed technique is useful and can be used in real-world applications.
Heerde, Harold Johann Wilhelm van. "Privacy-aware data management by means of data degradation." Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0031.
Service providers collect more and more privacy-sensitive information, even though it is hard to protect this information against hackers, abuse of weak privacy policies, negligence, and malicious database administrators. In this thesis, we take the position that endless retention of privacy-sensitive information will inevitably lead to unauthorized data disclosure. Limiting the retention of privacy-sensitive information limits the amount of stored data and therefore the impact of such a disclosure. Removing data from a database system is not a straightforward task; data degradation has an impact on the storage structure, indexing, transaction management, and logging mechanisms. To show the feasibility of data degradation, we provide several techniques to implement it, mainly a combination of keeping data sorted on degradation time and using encryption techniques where possible. The techniques are supported by a prototype implementation and a theoretical analysis.
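The retention idea at the heart of data degradation, keeping records ordered by degradation time so that expired data forms a cheap-to-purge prefix, can be sketched as a toy store. The class below is illustrative only and ignores the indexing, transaction, and logging issues the thesis addresses:

```python
import bisect

class DegradingStore:
    """Toy store keeping records sorted by degradation deadline so that
    expired data can be purged by cutting off a prefix of the list.
    Illustrative sketch: a real system must also degrade indexes, logs,
    and backups, which is what makes the problem non-trivial."""

    def __init__(self):
        self._items = []  # sorted list of (degrade_at, record) pairs

    def put(self, record, degrade_at):
        bisect.insort(self._items, (degrade_at, record))

    def degrade(self, now):
        # All expired records form a prefix of the sorted list.
        i = 0
        while i < len(self._items) and self._items[i][0] <= now:
            i += 1
        self._items = self._items[i:]

    def records(self):
        return [record for _, record in self._items]

store = DegradingStore()
store.put("search: flu symptoms", degrade_at=5)   # retain until t=5
store.put("visit: 2024-01-03", degrade_at=10)     # retain until t=10
store.degrade(now=7)                              # at t=7 the first record expires
print(store.records())  # ['visit: 2024-01-03']
```

Combining this ordering with per-record encryption (destroying a key degrades the record without rewriting storage) is the direction the abstract hints at.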
Bonatti, Piero A., Bert Bos, Stefan Decker, Javier David Fernandez Garcia, Sabrina Kirrane, Vassilios Peristeras, Axel Polleres, and Rigo Wenning. "Data Privacy Vocabularies and Controls: Semantic Web for Transparency and Privacy." CEUR Workshop Proceedings, 2018. http://epub.wu.ac.at/6490/1/SW4SG_2018.pdf.
Ataei, Mehrnaz. "Location data privacy : principles to practice." Doctoral thesis, Universitat Jaume I, 2018. http://hdl.handle.net/10803/666740.
Location data is essential to the provision of relevant and tailored information in location-based services (LBS) but has the potential to reveal sensitive information about users. Unwanted disclosure of location data is associated with various threats, known as dataveillance, which can lead to risks such as loss of control, (continuous) monitoring, identification, and social profiling. Striking a balance between providing a service based on the user's location and protecting their (location) privacy is thus a key challenge in this area. Although many solutions have been developed to mitigate data privacy-related threats, the aspects involving users (i.e., user interfaces (UI)) and the way in which location data management can affect (location) data privacy have not received much attention in the literature. This thesis develops and evaluates approaches to facilitate the design and development of privacy-aware LBS. The work focuses explicitly on three areas: location data management in LBS, the design of UIs for LBS, and compliance with (location) data privacy regulation. To address location data management, this thesis proposes modifications to LBS architectures and introduces the concept of temporal and spatial ephemerality as an alternative way to manage location privacy. The modifications include adding two components to the LBS architecture: one dedicated to managing decisions about collected location data, such as restricting how long the service provider stores the data, and one for adjusting location data privacy settings for the users of LBS. This thesis then develops a set of UI controls for fine-grained management of location privacy settings based on privacy theory (Westin), privacy-by-design principles, and general UI design principles.
Finally, this thesis brings forth a set of guidelines for the design and development of privacy-aware LBS through the analysis of the General Data Protection Regulation (GDPR) and expert recommendations. Service providers, designers, and developers of LBS can benefit from the contributions of this work as the proposed architecture and UI model can help them to recognise and address privacy issues during the LBS development process. The developed guidelines, on the other hand, can be helpful when developers and designers face difficulties understanding (location) data privacy-related regulations. The guidelines include both a list of legal requirements derived from GDPR’s text and expert suggestions for developers and designers of LBS in the process of complying with data privacy regulation.
Sivakumar, Anusha. "Enhancing Privacy Of Data Through Anonymization." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177349.
A sharp increase in the availability of personally related data has created endless opportunities for data scientists to exploit these data for research. One consequence is that it becomes difficult to preserve individuals' privacy, given the enormous amount of information available. To protect privacy, traditional methods such as pseudonyms and aliases can be applied before personal data are published. These traditional methods alone are not sufficient: there are always ways to link the data back to real individuals. A potential solution to this problem is to use anonymization techniques that modify data about an individual in a controlled way, making it harder to link the data to that person. In studies involving personal data, anonymization is becoming increasingly important. However, if we modify data to preserve the privacy of research participants before publication, the resulting data become almost useless for many studies. It is therefore essential both to preserve the privacy of the individuals represented in the data and to minimize the data loss caused by privacy preservation. In this thesis, we study the situations in which attacks can occur, the different forms of attack, and existing solutions for preventing them. After carefully reviewing the literature and the problem, we propose a theoretical solution that preserves the privacy of research participants as far as possible while keeping the data useful for research. In support of our solution, we consider digital footprints that store Facebook data with the users' consent and release the stored information through various user interfaces.
Sang, Lin. "Social Big Data and Privacy Awareness." Thesis, Uppsala universitet, Institutionen för informatik och media, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242444.
Lazarovich, Amir. "Invisible Ink : blockchain for data privacy." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98626.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 81-85).
The problem of maintaining complete control over and transparency with regard to our digital identity is growing more urgent as our lives become more dependent on online and digital services. What once was rightfully ours and under our control is now spread among uncountable entities across many locations. We have built a platform that securely distributes encrypted user-sensitive data. It uses the Bitcoin blockchain to keep a trustless audit trail for data interactions and to manage access to user data. Our platform offers advantages to both users and service providers: the user enjoys heightened transparency, control, and security of their personal data, while the service provider becomes much less vulnerable to single points of failure and breaches, which in turn decreases their exposure to information-security liability, saving them money and protecting their brand. Our work extends an idea developed by the author and two collaborators: a peer-to-peer network that uses blockchain technology and off-blockchain storage to securely distribute sensitive data in a decentralized manner using a custom blockchain protocol. Our two main contributions are: 1. developing this platform and 2. analyzing its feasibility in real-world applications. This includes designing a protocol for data authentication that runs on an Internet-scale peer-to-peer network, abstracting complex interactions with encrypted data, building a dashboard for data auditing and management, as well as building servers and sample services that use this platform for testing and evaluation. This work has been supported by the MIT Communication Futures Program and the Digital Life Consortium.
by Amir Lazarovich.
S.M.
DeYoung, Mark E. "Privacy Preserving Network Security Data Analytics." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82909.
Ph. D.
Shang, Hui. "Privacy Preserving Kin Genomic Data Publishing." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1594835227299524.
Lin, Zehua. "Privacy Preserving Social Network Data Publishing." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1610045108271476.
Gonçalves, João Miguel Ribeiro. "Context-awareness privacy in data communications." Doctoral thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/15760.
Internet users consume online targeted advertising based on information collected about them and voluntarily share personal information in social networks. Sensor information and data from smartphones are collected and used by applications, sometimes in unclear ways. As happens today with smartphones, in the near future sensors will be shipped in all types of connected devices, enabling ubiquitous information gathering from the physical environment and realizing the vision of Ambient Intelligence. The value of gathered data, if not obvious, can be harnessed through data mining techniques and put to use by enabling personalized and tailored services as well as business intelligence practices, fueling the digital economy. However, the ever-expanding information gathering and use undermines the privacy conceptions of the past. Natural social practices of managing privacy in daily relations are overridden by socially awkward communication tools, service providers struggle with security issues resulting in harmful data leaks, governments use mass surveillance techniques, the incentives of the digital economy threaten consumer privacy, and the advancement of consumer-grade data-gathering technology enables new inter-personal abuses. A wide range of fields attempts to address technology-related privacy problems, but they vary immensely in terms of assumptions, scope, and approach. Privacy in future use cases is typically handled vertically, instead of building upon previous work that can be re-contextualized, while current privacy problems are typically addressed per type in a more focused way. Because significant effort was required to make sense of the relations and structure of privacy-related work, this thesis attempts to transmit a structured view of it. It is multi-disciplinary, from cryptography to economics, including distributed systems and information theory, and addresses privacy issues of different natures.
As existing work is framed and discussed, the contributions to the state-of-the-art made in the scope of this thesis are presented. The contributions add to five distinct areas: 1) identity in distributed systems; 2) future context-aware services; 3) event-based context management; 4) low-latency information flow control; 5) high-dimensional dataset anonymity. Finally, having laid out such a landscape of privacy-preserving work, current and future privacy challenges are discussed, considering not only technical but also socio-economic perspectives.
Thomas, Dilys. "Algorithms and architectures for data privacy /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Smith, Tanshanika Turner. "Examining Data Privacy Breaches in Healthcare." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2623.
Full textde, Souza Tulio. "Data-level privacy through data perturbation in distributed multi-application environments." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:2b818039-bde4-41d6-96ca-0367704a53f0.
Zheng, Yao. "Privacy Preservation for Cloud-Based Data Sharing and Data Analytics." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73796.
Ph. D.
Liyanaarachchi, Gajendra P. "Data Privacy Considerations for Redesigning Organizational Strategy." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/417671.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept of Marketing
Griffith Business School
Fernandez Garcia, Javier D., Fajar J. Ekaputra, Peb Ruswono Aryan, Amr Azzam, and Elmar Kiesling. "Privacy-aware Linked Widgets." ACM Press, 2019. http://epub.wu.ac.at/6859/1/Privacy_aware_Linked_Data_Widgets___WWW_19__Camera_Ready.pdf.
Joines, Amy. "Impact of private data mining on personal privacy from agents of government." [Ames, Iowa : Iowa State University], 2009.
Stouppa, Phiniki. "Deciding Data Privacy for ALC Knowledge Bases." [S.l.]: [s.n.], 2009. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.
Balla, Stefano. "Privacy-Preserving Data Mining: un approccio verticale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17517/.
Chen, Xiaoqiang. "Privacy Preserving Data Publishing for Recommender System." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155785.
Scheffler, Thomas. "Privacy enforcement with data owner-defined policies." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6793/.
This dissertation develops a framework for the enforcement of privacy policies which assumes that these policies are created directly by the owners of the data and can be enforced automatically. The protection of private data is a very important topic in electronic communication, and it gains further significance through increasing device interconnection and the availability and use of private data in online services. In the past, various techniques for protecting private data have been developed: the so-called Privacy Enhancing Technologies. Many of these technologies work on the principle of data minimization and anonymization and thus conflict with modern network use in social media. This leads to a situation in which private data are widely distributed and used without the data owner being able to exercise targeted control over the distribution and use of their private data. Existing policy-based data protection techniques generally assume that the data user, not the data owner, specifies the policies for handling private data. This approach simplifies the management and enforcement of access restrictions for the data user, but leaves the data owner only the alternatives of agreeing to the data user's policies or not sharing any data. Our approach was therefore to strengthen the interests of the data owner by giving them the ability to formulate their own policies. The underlying access-control model is known as Owner-Retained Access Control (ORAC) and was formulated by McCollum et al. in 1990. The basic principle of this model is that the authority over access decisions always remains with the originator of the data. Two challenges arise from this approach.
First, the owner of the data must be enabled to formulate meaningful and correct policies for the handling of their data. Since data owners are ordinary computer users, it must be assumed that they will also make mistakes when creating policies. We solved this problem by dividing the privacy policy into three separate parts with different priorities. The part with the lowest priority defines basic protection properties. The data owner can override these properties with their own medium-priority rules. In addition, a part containing high-priority safety rules ensures that certain access rights are always preserved. The second challenge is the targeted communication of the policies and their enforcement on the data user's system. To make the policies known to the data user, we use so-called sticky policies: the policies are attached to the data to be protected through a suitable encoding, so that they can be referred to at any time and the owners' privacy requirements are preserved even when the data are distributed. For enforcing the policies on the data user's system, we developed two different approaches. We built a reference monitor that controls every access to the private data and decides, based on the rules stored in the sticky policy, whether the data user is granted access to the data or not. This reference monitor was implemented, on the one hand, as a client-side solution that builds on the security concept of the Java programming language.
On the other hand, a solution for servers was also developed, which can control access to particular methods of a program with the help of aspect-oriented programming. In the client-side reference monitor, privacy policies are translated into Java permissions and enforced automatically by the Java Security Manager against arbitrary applications. Since this approach requires an application restart whenever data with a different privacy policy are accessed, a different approach was chosen for the server-side reference monitor: with the help of the Java Reflection API and aspect-oriented programming methods, data accesses in existing applications are intercepted, and access is allowed or denied only after the privacy policy has been checked. Both solutions were evaluated for performance and extend the previously known techniques for protecting private data.
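The three-priority policy scheme described for this framework can be sketched as a small evaluator; the class and the rules below are hypothetical illustrations, not taken from Scheffler's implementation:

```python
# Sketch of owner-retained, three-tier policy evaluation: low-priority
# defaults, owner-defined rules that override them, and high-priority
# safety rules that always win. All names here are hypothetical.

DENY, ALLOW = "deny", "allow"

class StickyPolicy:
    def __init__(self, defaults, owner_rules, safety_rules):
        # Each tier maps an (actor, action) pair to ALLOW or DENY;
        # tiers are consulted from highest to lowest priority.
        self.tiers = [safety_rules, owner_rules, defaults]

    def decide(self, actor, action):
        # The highest-priority tier that mentions the request wins.
        for tier in self.tiers:
            if (actor, action) in tier:
                return tier[(actor, action)]
        return DENY  # default-deny when no rule matches

policy = StickyPolicy(
    defaults={("anyone", "read"): ALLOW},
    owner_rules={("marketing", "read"): DENY},   # owner overrides the default
    safety_rules={("owner", "read"): ALLOW},     # owner access always preserved
)

print(policy.decide("marketing", "read"))  # deny
print(policy.decide("owner", "read"))      # allow
```

Attaching such a policy object to the data it governs (the "sticky" part) is what lets a reference monitor on the data user's side enforce the owner's rules wherever the data travels.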
Sweatt, Brian M. (Brian Michael). "A privacy-preserving personal sensor data ecosystem." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91875.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-82).
Despite the ubiquity of passively collected sensor data (primarily attained via smartphones), there does not currently exist a comprehensive system for authorizing the collection of such data and for collecting, storing, analyzing, and visualizing it in a manner that preserves the privacy of the user generating the data. This thesis shows the design and implementation of such a system, named openPDS, from both the client and server perspectives. Two server-side components are implemented: a centralized registry server for authentication and authorization of all entities in the system, and a distributed Personal Data Store that allows analysis to be run against the stored sensor data and aggregated across multiple Personal Data Stores in a privacy-preserving fashion. The client, implemented for the Android mobile phone operating system, makes use of the Funf Open Sensing framework to collect data and adds the ability for users to authenticate against the registry server, authorize third-party applications to analyze data once it reaches their Personal Data Store, and finally, visualize the results of such analysis within a mobile phone or web browser. A number of example quantified-self and social applications are built on top of this framework to demonstrate the feasibility of the system from both development and user perspectives.
by Brian M. Sweatt.
M. Eng.
Paradesi, Sharon M. (Sharon Myrtle) 1986. "User-controlled privacy for personal mobile data." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93839.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 81-82).
Smartphones collect a wide range of sensor data, ranging from the basic, such as location, accelerometer, and Bluetooth, to the more advanced, such as heart rate. Mobile apps on the Android and iOS platforms provide users with "all-or-nothing" controls during installation to get permission for data collection and use. Users have to either agree to have the app collect and use all the requested data or not use the app at all. This is slowly changing with the iOS framework, which now allows users to turn off location sharing with specific apps even after installation. The MIT Living Lab platform is a mobile app development platform that uses openPDS to provide MIT users with personal data stores, but it currently lacks user controls for privacy. This thesis presents PrivacyMate, a suite of tools for MIT Living Labs that provides user-controllable privacy mechanisms for mobile apps. PrivacyMate aims to enable users to maintain better control over their mobile personal data. It extends the model of iOS and allows users to select or deselect various types of data (more than just location information) for collection and use by apps. Users can also provide temporal and spatial specifications to indicate a context in which they are comfortable sharing their data with certain apps. We incorporate the privacy mechanisms offered by PrivacyMate into two mobile apps built on the MIT Living Lab platform: ScheduleME and MIT-FIT. ScheduleME enables users to schedule meetings without disclosing either their locations or points of interest. MIT-FIT enables users to track personal and aggregate high-activity regions and times, as well as view personalized fitness-related event recommendations. The MIT Living Lab team is planning to eventually deploy PrivacyMate and MIT-FIT to the entire MIT community.
by Sharon Myrtle Paradesi.
Elec. E. in Computer Science
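The Paradesi entry above describes context-dependent sharing controls: users attach temporal and spatial specifications to each data type. A minimal sketch of such a rule check, with hypothetical names (`SharingRule`, `may_share`) that are not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class SharingRule:
    """Hypothetical context rule in the spirit of PrivacyMate: a data type
    is shared with an app only inside a time window and a named location."""
    data_type: str
    start_hour: int   # inclusive, 0-23
    end_hour: int     # exclusive
    location: str

def may_share(rules: list[SharingRule], data_type: str, hour: int, location: str) -> bool:
    """Deny by default; allow only if some rule covers this exact context."""
    return any(
        r.data_type == data_type
        and r.start_hour <= hour < r.end_hour
        and r.location == location
        for r in rules
    )
```

The deny-by-default design mirrors the move away from "all-or-nothing" permissions: anything not explicitly covered by a user rule stays private.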
Simmons, Sean Kenneth. "Preserving patient privacy in biomedical data analysis." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101821.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-154).
The growing number of large biomedical databases and electronic health records promise to be an invaluable resource for biomedical researchers. Recent work, however, has shown that sharing this data, even when aggregated to produce p-values, regression coefficients, count queries, and minor allele frequencies (MAFs), may compromise patient privacy. This raises a fundamental question: how do we protect patient privacy while still making the most of the data? In this thesis, we develop various methods to perform privacy-preserving analysis of biomedical data, with an eye towards genomic data. We begin by introducing a model-based measure, PrivMAF, that allows us to decide when it is safe to release MAFs. We modify this measure to deal with perturbed data, and show that we are able to achieve privacy guarantees while adding less noise (and thus preserving more useful information) than previous methods. We also consider using differentially private methods to preserve patient privacy. Motivated by cohort selection in medical studies, we develop an improved method for releasing differentially private medical count queries. We then turn to differentially private genome-wide association studies (GWAS). We improve the runtime and utility of various privacy-preserving methods for genome analysis, bringing these methods much closer to real-world applicability. Building on this result, we develop differentially private versions of more powerful statistics based on linear mixed models.
by Sean Kenneth Simmons.
Ph. D.
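The Simmons abstract mentions releasing differentially private medical count queries. The standard tool for such releases is the Laplace mechanism; a minimal stdlib-only sketch (an illustration of the general technique, not code from the thesis):

```python
import random

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise: the difference of two independent
    exponentials is Laplace-distributed."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count query under epsilon-differential privacy.

    A count query has sensitivity 1: adding or removing one patient
    changes the answer by at most 1, so Laplace(1/epsilon) noise suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means stronger privacy and more noise; cohort-selection counts can then be published without revealing any single patient's presence.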
Mittal, Nupur. "Data, learning and privacy in recommendation systems." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S084/document.
Full text
Recommendation systems have gained tremendous popularity, both in academia and industry. They have evolved into many varieties depending mostly on the techniques and ideas used in their implementation, a categorization that also marks the boundary of their application domain. Regardless of type, recommendation systems are complex and multi-disciplinary, involving subjects like information retrieval, data cleansing and preprocessing, and data mining. In our work, we identify three challenges (among many possible) involved in the process of making recommendations and provide solutions to them. We elaborate on the challenges of obtaining user-demographic data and processing it to render it useful for making recommendations. The focus here is on using Online Social Networks to access publicly available user data to help recommendation systems. Using user-demographic data to improve personalized recommendations has many other advantages, such as dealing with the famous cold-start problem, and it is one of the founding pillars of hybrid recommendation systems. With this work, we underline how a user's publicly available information, such as tweets, posts, and votes, can be used to infer more private details about her. As the second challenge, we aim at improving the learning process of recommendation systems. Our goal is to provide a k-nearest-neighbor method that deals with very large datasets with billions of users. We propose a generic, fast and scalable k-NN graph construction algorithm that significantly improves performance compared with state-of-the-art approaches. Our idea is to leverage the bipartite nature of the underlying dataset and use a preprocessing phase to reduce the number of similarity computations in later iterations. As a result, we gain a speed-up of 14 over other significant approaches from the literature.
Finally, we also consider the issue of privacy. Instead of viewing it directly in conventional recommendation systems, we analyze it on Online Social Networks. First, we reason about how OSNs can be seen as a form of recommendation system and how information dissemination is similar to broadcasting opinions/reviews in conventional recommendation systems. Following this parallelism, we identify a privacy threat in information diffusion in OSNs and provide a privacy-preserving algorithm for it. Our algorithm, Riposte, quantifies privacy in terms of differential privacy, and with the help of experimental datasets we demonstrate how Riposte maintains the desirable information-diffusion properties of a network.
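Riposte, described above, provides differential privacy for information diffusion. Classic randomized response illustrates the underlying idea; the function below is an illustrative sketch of that general mechanism, not the thesis's actual algorithm:

```python
import math
import random

def noisy_repost(likes: bool, epsilon: float) -> bool:
    """Decide whether to repost an item while hiding the user's true opinion.

    Randomized response: report the true decision with probability
    e^eps / (1 + e^eps) and flip it otherwise. The likelihood ratio between
    the two possible opinions is bounded by e^eps, which is exactly the
    epsilon-differential-privacy guarantee.
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return likes if random.random() < p_truth else not likes
```

Because every repost decision is noisy, an observer of the diffusion cannot confidently infer any individual's opinion, yet aggregate diffusion statistics remain estimable.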
Keerthi, Thomas. "Distilling mobile privacy requirements from qualitative data." Thesis, Open University, 2014. http://oro.open.ac.uk/40121/.
Full text
Ainslie, Mandi. "Big data and privacy : a modernised framework." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/59805.
Full text
Mini Dissertation (MBA)--University of Pretoria, 2017.
Gordon Institute of Business Science (GIBS)
MBA
Unrestricted
Nan, Lihao. "Privacy Preserving Representation Learning For Complex Data." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20662.
Full text
Huang, Xueli. "Achieving Data Privacy and Security in Cloud." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/372805.
Full text
Ph.D.
The growing concern about the privacy of data stored in the public cloud has restrained the widespread adoption of cloud computing. The traditional method of protecting data privacy is to encrypt data before sending it to the public cloud, but this approach always introduces heavy computation, especially for image and video data, which involve far more data than text. Another way is to take advantage of a hybrid cloud by separating sensitive from non-sensitive data and storing them in a trusted private cloud and an un-trusted public cloud, respectively. But if we adopt this method directly, all the images and videos containing sensitive data have to be stored in the private cloud, which makes the method meaningless. Moreover, the emergence of the Software-Defined Networking (SDN) paradigm, which decouples the control logic from the closed and proprietary implementations of traditional network devices, enables researchers and practitioners to design innovative network functions and protocols in a much easier, more flexible, and more powerful way. The data plane asks the control plane to update flow rules when it receives new network packets it does not know how to handle, and the control plane then dynamically deploys and configures flow rules according to the data plane's requests, allowing the whole network to be managed and controlled efficiently. However, this reactive control model can be exploited by attackers launching Distributed Denial-of-Service (DDoS) attacks that send large numbers of new requests from the data plane to the control plane. For image data, we divide the image into pieces of equal size to speed up the encryption process, and propose two methods to cut the relationships between the edges of adjacent pieces.
One is to add random noise to each piece; the other is to design a one-to-one mapping function for each piece that maps each pixel value to a different one, which cuts off the relationships between pixels as well as the edges. Our mapping function takes a random parameter as input so that each piece can randomly choose a different mapping. Finally, we shuffle the pieces with another random parameter, which makes recovering the shuffled image an NP-complete problem. For video data, we propose two different methods for intra frames (I-frames) and inter frames (P-frames), based on their different characteristics. For I-frames, we propose a hybrid selective video encryption scheme for H.264/AVC based on the Advanced Encryption Standard (AES) and the video data themselves. For each P-slice of a P-frame, we extract only a small part into the private cloud, based on the characteristics of the intra-prediction mode, which efficiently prevents the P-frame from being decoded. For a cloud running SDN, we propose a framework to keep the controller away from DDoS attacks. We first periodically predict the number of new requests for each switch based on its previous information; new requests are sent to the controller if the predicted total is below a threshold. Otherwise, the requests are directed to a security gateway that checks whether an attack is among them. The requests that cause a dramatic decrease in entropy are filtered out by our algorithm, rules for these requests are created and sent to the controller, and the controller distributes the rules to each switch so that flows matching the rules are directed to a honeypot.
Temple University--Theses
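The Huang abstract above describes two steps for image data: a per-piece one-to-one pixel mapping, then a keyed shuffle of the pieces. The toy sketch below illustrates those two steps on lists of 8-bit pixel values (the function name and key handling are hypothetical, and real pieces would be 2-D blocks):

```python
import random

def encrypt_pieces(pieces: list[list[int]], key: int) -> list[list[int]]:
    """Toy sketch: per-piece pixel-value permutation + keyed piece shuffle.

    `pieces` is a list of equally sized blocks of 8-bit pixel values.
    """
    rng = random.Random(key)  # keyed PRNG so the result is reproducible
    out = []
    for piece in pieces:
        # one-to-one mapping: a random permutation of the 256 pixel values,
        # chosen independently per piece to break correlations across edges
        table = list(range(256))
        rng.shuffle(table)
        out.append([table[p] for p in piece])
    rng.shuffle(out)  # finally, shuffle the pieces themselves
    return out
```

Because each piece gets its own mapping and the piece order is permuted, adjacent-piece edge correlations that would aid reassembly are destroyed.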
Brown, Emily Elizabeth. "Adaptable Privacy-preserving Model." Diss., NSUWorks, 2019. https://nsuworks.nova.edu/gscis_etd/1069.
Full text
Huang, Zhengli. "Privacy and utility analysis of the randomization approach in Privacy-Preserving Data Publishing." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.
Full text
Olsson, Mattias. "Klassificeringsalgoritmer vs differential privacy : Effekt på klassificeringsalgoritmer vid användande av numerisk differential privacy." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15680.
Full text
Sehatkar, Morvarid. "Towards a Privacy Preserving Framework for Publishing Longitudinal Data." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31629.
Full text
Stroud, Caleb Zachary. "Implementing Differential Privacy for Privacy Preserving Trajectory Data Publication in Large-Scale Wireless Networks." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84548.
Full text
Master of Science
Sallaku, Redlon <1994>. "Privacy and Protecting Privacy: Using Static Analysis for legal compliance. General Data Protection Regulation." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14682.
Full text
Casas, Roma Jordi. "Privacy-preserving and data utility in graph mining." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/285566.
Full text
In recent years, an explosive amount of graph-formatted data has been made publicly available. Embedded within this data is private information about the users who appear in it. Therefore, data owners must respect the privacy of users before releasing datasets to third parties. In this scenario, anonymization processes become an important concern. However, anonymization processes usually introduce some kind of noise into the anonymous data, altering the data and also the results of graph mining processes. Generally, the higher the privacy, the larger the noise. Thus, data utility is an important factor to consider in anonymization processes. The necessary trade-off between data privacy and data utility can be reached by using measures and metrics that lead the anonymization process to minimize information loss and, therefore, to maximize data utility. In this thesis we have covered the fields of user privacy preservation in social networks and the utility and quality of the released data. A trade-off between both fields is a critical point for achieving good anonymization methods for subsequent graph mining processes. Part of this thesis has focused on data utility and information loss. Firstly, we studied the relation between generic information loss measures and clustering-specific ones, in order to evaluate whether the generic measures are indicative of the usefulness of the data for subsequent data mining processes. We found strong correlation between some generic information loss measures (average distance, betweenness centrality, closeness centrality, edge intersection, clustering coefficient and transitivity) and the precision index over the results of several clustering algorithms, demonstrating that these measures are able to predict the perturbation introduced in anonymous data. Secondly, two measures to reduce the information loss in graph modification processes have been presented.
The first, edge neighbourhood centrality, is based on information flow through the 1-neighbourhood of a specific edge in the graph. The second is based on the core number sequence and better preserves the underlying graph structure, retaining more data utility. Through an extensive experimental set-up, we demonstrated that both methods preserve the most important edges in the network, keeping the basic structural and spectral properties close to the original ones. The other important topic of this thesis has been privacy-preserving methods. We presented our random-based algorithm, which uses edge neighbourhood centrality to drive the edge modification process so as to better preserve the most important edges in the graph, achieving lower information loss and higher data utility in the released data. Our method obtains a better trade-off between data utility and data privacy than other methods. Finally, two different approaches to k-degree anonymity on graphs have been developed. First, an algorithm based on evolutionary computing was presented and tested on several small and medium real networks. Although this method fulfils the desired privacy level, it has two main drawbacks: the information loss is quite large for some structural properties of the graph, and it is not fast enough to work with large networks. Therefore, a second algorithm was presented, which uses univariate micro-aggregation to anonymize the degree sequence while reducing its distance from the original one. This method is quasi-optimal and results in lower information loss and better data utility.
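The second k-degree anonymity approach above anonymizes the degree sequence by univariate micro-aggregation. A toy sketch of that idea (groups of at least k in ranked order, each replaced by its rounded mean; this is a simplification, not the thesis's quasi-optimal algorithm, and it assumes len(degrees) >= k):

```python
def k_anonymize_degrees(degrees: list[int], k: int) -> list[int]:
    """Micro-aggregate a degree sequence so that every output degree
    value is shared by at least k nodes (k-degree anonymity on degrees)."""
    order = sorted(range(len(degrees)), key=lambda i: degrees[i])
    anonymized = degrees[:]
    for start in range(0, len(order), k):
        group = order[start:start + k]
        if len(group) < k and start > 0:
            # fold a short tail into the previous group of k
            group = order[start - k:]
        mean = round(sum(degrees[i] for i in group) / len(group))
        for i in group:
            anonymized[i] = mean
    return anonymized
```

The anonymized sequence would then drive edge additions/deletions so the released graph realizes it; keeping group means close to the originals is what limits information loss.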
Rodríguez, García María Mercedes. "Semantic perturbative privacy-preserving methods for nominal data." Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/435689.
Full text
The exploitation of personal microdata (such as census data, preferences or medical records) is of great interest for the data mining community. Such data often include sensitive information that can be directly or indirectly related to individuals. Therefore, privacy-preserving measures should be undertaken to minimize the risk of re-identification and, hence, of disclosing confidential information about the individuals. In the past, many privacy-preserving methods have been developed to deal with numerical data, but approaches tackling the protection of nominal values are scarce. Since the utility of this kind of data is closely related to the preservation of its semantics, in this work we exploit several semantic technologies to enable a semantically coherent protection of nominal data. Specifically, we use ontologies as the ground on which to propose a semantic framework that enables an appropriate management of nominal data in data protection tasks; this framework consists of a set of operators that characterize and transform nominal data while taking into account their semantics. Then, we use this framework to adapt perturbative privacy-preserving methods to the nominal domain. Specifically, we focus on methods based on the two main principles underlying data protection: permutation-based approaches, i.e., rank swapping, and noise addition. The proposed methods have been extensively evaluated with real datasets. Experimental results show that a semantically coherent management of nominal data significantly improves the semantic interpretability and the utility of the protected outcomes.
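Rank swapping, one of the two perturbative principles named above, is easiest to see in its classic numerical form: each value may be swapped only with another value at most a bounded distance away in the ranked order. A toy sketch of that baseline (the thesis's semantic, ontology-based variant for nominal data is substantially more involved):

```python
import random

def rank_swap(values: list[float], max_swap: int, seed: int = 0) -> list[float]:
    """Classic rank swapping: swap each value with a partner at most
    `max_swap` positions away in rank order, so the released data is a
    constrained permutation of the original."""
    rng = random.Random(seed)
    order = sorted(range(len(values)), key=lambda i: values[i])
    swapped = values[:]
    done = set()
    for pos, i in enumerate(order):
        if i in done:
            continue
        # candidate partners within the allowed rank distance, not yet swapped
        hi = min(pos + max_swap, len(order) - 1)
        candidates = [order[p] for p in range(pos + 1, hi + 1) if order[p] not in done]
        if candidates:
            j = rng.choice(candidates)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            done.update((i, j))
    return swapped
```

Because values only move within a small rank window, marginal distributions and rank statistics are preserved almost exactly while record-to-value links are broken.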
An, Nan. "Protect Data Privacy in E-Healthcare in Sweden." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1619.
Full text
Swedish healthcare has adopted a great deal of ICT (information and communication technology) and is a highly information-intensive environment. This thesis gives a brief description of the background of healthcare in Sweden and of ICT adoption in healthcare, introduces an information system security model, describes the technology and law concerning data privacy, and reports a case study carried out through a questionnaire and an interview.
Wang, Hui. "Secure query answering and privacy-preserving data publishing." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31721.
Full text
Science, Faculty of
Computer Science, Department of
Graduate