To see the other types of publications on this topic, follow the link: Data protection.

Dissertations on the topic "Data protection"

Consult the top 50 dissertations for research on the topic "Data protection".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Budd, Chris. „Data Protection and Data Elimination“. International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596395.

Annotation:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
Data security is becoming increasingly important in all areas of storage. News services frequently run stories about lost or stolen storage devices and the panic they cause. Data security in an SSD usually involves two components: data protection and data elimination. Data protection includes passwords to protect against unauthorized access and encryption to protect against recovering data from the flash chips. Data elimination includes erasing the encryption key and erasing the flash. Telemetry applications frequently add requirements such as write protection, external erase triggers, and overwriting the flash after the erase. This presentation will review these data security features.
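The combination of encryption and key erasure described here is commonly known as cryptographic erasure: if everything written to flash is encrypted under one media key, destroying that key renders the stored ciphertext unrecoverable. A minimal Python sketch of the idea, using the third-party cryptography package (the data and key names are invented for illustration):

# Cryptographic erasure: encrypted data becomes unrecoverable once the key is gone.
# Illustrative sketch only; a real SSD controller does this in hardware.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # media encryption key (held by the controller)
nonce = os.urandom(12)
plaintext = b"telemetry record 42: sensitive payload"

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)  # what the flash actually stores

# Normal read path: key present, data recoverable.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == plaintext

# "Data elimination": erase only the key, not the flash contents.
key = None

# Without the key the stored ciphertext is computationally unrecoverable;
# as the abstract notes, telemetry requirements may additionally overwrite the flash.
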
2

Sebé, Feixas Francesc. „Transparent Protection of Data“. Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/7026.

Annotation:
This dissertation is about the protection of data that has to be made available to possibly dishonest users. Data must be protected while keeping it usable. Such protection must be imperceptible, so as not to disrupt correct use of the data, and effective against unauthorized uses.
The study is divided according to the two kinds of data whose transparent protection is studied: multimedia content and statistical microdata.

Regarding multimedia content, protection is addressed in two ways: 1) copyright protection; 2) integrity protection and authentication.

In electronic commerce of multimedia content, merchants sell data to untrusted buyers that may redistribute it. In this respect, intellectual property rights of content providers must be ensured.
Focusing on digital images, several contributions are presented on the two main electronic copyright protection techniques: watermarking and fingerprinting.
Two new watermarking schemes for digital images are presented. The first is semi-blind and robust against compression, filtering and scaling attacks. The second is oblivious and robust against compression, filtering, scaling and moderate geometric distortion attacks. Next, a new technique based on mixing watermarked digital objects is proposed, which allows robustness to be increased by combining the robustness properties of different current watermarking schemes.
In the field of fingerprinting, a new construction to obtain binary collusion-secure fingerprinting codes robust against collusions of up to three buyers is presented. This construction provides, for a moderate number of possible buyers, shorter codewords than those offered by current proposals.

Rather often, multimedia contents are published on untrusted sites where they may suffer malicious alterations. In this situation, watermarking can be applied to protect data by providing integrity and authentication. A spatial-domain spread-spectrum watermarking algorithm is described and proven suitable for lossless image authentication.

The other kind of data addressed in this dissertation are statistical microdata.

When statistical files containing information about individual entities are released for public use, privacy is a major concern. Such data files must be released in a way that combines statistical utility with protection of the privacy of the entities concerned. Methods that perturb data to this end are called statistical disclosure control methods. In this field, a modification to a current score for measuring information loss and disclosure risk is proposed, which makes it possible to evaluate masked data sets whose number of records differs from that of the original data set.
Next, a post-masking optimization procedure which reduces information loss while keeping disclosure risk approximately unchanged is proposed. Through this procedure, the two best performing masking methods are enhanced: 'multivariate microaggregation' and 'rankswapping'.

Finally, a novel application to providing multilevel access to precision-critical data is presented. In this way, protected data are made available to different users, who, depending on their clearance, can remove part of the noise introduced by protection, thus obtaining better data quality.
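To make the spatial-domain spread-spectrum idea concrete, the following sketch embeds a keyed pseudorandom plus/minus-one pattern into an image at low amplitude and detects it by correlation. It is a generic textbook scheme with invented parameters, not a reconstruction of the thesis's algorithms.

# Minimal additive spread-spectrum watermarking in the spatial domain.
# Generic textbook scheme; parameters (alpha, image size) are illustrative.
import numpy as np

rng = np.random.default_rng(seed=42)        # the seed acts as the watermark key
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in host image

w = rng.choice([-1.0, 1.0], size=image.shape)  # pseudorandom +/-1 pattern
alpha = 2.0                                    # embedding strength (imperceptibility trade-off)

marked = image + alpha * w                     # embed

def detect(candidate, original, pattern):
    """Correlate the residual with the key pattern (non-blind detection)."""
    residual = candidate - original
    return float(np.mean(residual * pattern))

print(detect(marked, image, w))   # close to alpha: watermark present
print(detect(image, image, w))    # 0.0: watermark absent
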
3

Loukides, Grigorios. „Data utility and privacy protection in data publishing“. Thesis, Cardiff University, 2008. http://orca.cf.ac.uk/54743/.

Annotation:
Data about individuals is being increasingly collected and disseminated for purposes such as business analysis and medical research. This has raised some privacy concerns. In response, a number of techniques have been proposed which attempt to transform data prior to its release so that sensitive information about the individuals contained within it is protected. K-anonymisation is one such technique that has attracted much recent attention from the database research community. K-anonymisation works by transforming data in such a way that each record is made identical to at least k-1 other records with respect to those attributes that are likely to be used to identify individuals. This helps prevent sensitive information associated with individuals from being disclosed, as each individual is represented by at least k records in the dataset. Ideally, a k-anonymised dataset should maximise both data utility and privacy protection, i.e. it should allow intended data analytic tasks to be carried out without loss of accuracy while preventing sensitive information disclosure, but these two notions are conflicting and only a trade-off between them can be achieved in practice. The existing works, however, focus on how either the utility or the protection requirement may be satisfied, which often results in anonymised data with an unnecessarily and/or unacceptably low level of utility or protection. In this thesis, we study how to construct k-anonymous data that satisfies both data utility and privacy protection requirements. We propose new criteria to capture utility and protection requirements, and new algorithms that allow k-anonymisations with required utility/protection trade-offs or guarantees to be generated. Our extensive experiments using both benchmarking and synthetic datasets show that our methods are efficient, can produce k-anonymised data with desired properties, and outperform the state-of-the-art methods in retaining data utility and providing privacy protection.
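To make the k-anonymity property concrete, the sketch below groups records by their quasi-identifier values and checks that every group contains at least k records; the toy records and attribute names are invented for illustration.

# Check k-anonymity: every combination of quasi-identifier values
# must occur in at least k records. Toy data for illustration.
from collections import Counter

records = [
    {"zip": "441**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "441**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "441**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "448**", "age": "20-29", "diagnosis": "diabetes"},
    {"zip": "448**", "age": "20-29", "diagnosis": "flu"},
]
quasi_identifiers = ("zip", "age")

def is_k_anonymous(rows, qi, k):
    groups = Counter(tuple(r[a] for a in qi) for r in rows)
    return min(groups.values()) >= k

print(is_k_anonymous(records, quasi_identifiers, 2))  # True
print(is_k_anonymous(records, quasi_identifiers, 3))  # False: one group has only 2 rows
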
4

Islam, Naveed. „Cryptography based Visual Data Protection“. Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20178/document.

Annotation:
Due to the advancements in information and communication technologies, the transmission of multimedia data over secure or insecure communication channels has increased exponentially. The security of data in applications like safe storage, authentication, copyright protection, remote military image communication or confidential video-conferencing requires new strategies for secure transmission. Two techniques are commonly used for the secure transmission of visual data, i.e. cryptography and steganography. Cryptography achieves security by using secret keys to make the data illegible, while steganography aims to hide the data in some innocent carrier signal. For shared trust and distributed environments, secret sharing schemes provide sufficient security in various communication applications. The principal objective of this thesis is to achieve protection of visual data, especially images, through modern cryptographic techniques. In this context, the focus of the work is twofold. The first part of our work focuses on the security of image data in shared environments, while the second part focuses on the integrity of image data in the encrypted domain during transmission. We propose a new sharing scheme for images which exploits the additive and multiplicative homomorphic properties of two well-known public key cryptosystems, namely RSA and Paillier. In traditional secret sharing schemes, the dealer partitions the secret into shares and distributes the shares to each of the players. Thus, none of the involved players participate in the creation of the shared secret and there is always a possibility that the dealer can cheat some player. On the contrary, the proposed approach employs the secret sharing scheme in a way that limits the influence of the dealer over the protocol by allowing each player to participate. The second part of our thesis emphasizes the integrity of visual data during transmission. Data integrity means that the data keeps its complete structure during any operation like storage, transfer or retrieval. A single bit change in encrypted data can have a catastrophic impact on the decrypted data. We address the problem of error correction in images encrypted using the symmetric key cryptosystem of the Advanced Encryption Standard (AES) algorithm. Three methods are proposed that exploit the local statistics of the visual data and the encryption algorithm to successfully correct the errors.
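The homomorphic properties the scheme relies on can be exhibited with textbook-sized numbers: raw RSA multiplies under encryption, while Paillier adds. The following sketch uses deliberately tiny, insecure parameters purely to show the algebra; it is not the sharing protocol itself.

# Toy demonstration of the homomorphic properties used by the sharing scheme:
# raw RSA multiplies under encryption, Paillier adds. Insecure toy parameters.
from math import gcd

# Textbook RSA (no padding): E(m) = m^e mod n, multiplicatively homomorphic.
rsa_n, rsa_e = 3233, 17                          # p=61, q=53
rsa_enc = lambda m: pow(m, rsa_e, rsa_n)
assert rsa_enc(4) * rsa_enc(5) % rsa_n == rsa_enc(4 * 5)

# Toy Paillier: E(m) = (n+1)^m * r^n mod n^2, additively homomorphic.
p, q = 11, 13
n = p * q                                        # 143
nsq = n * n
lam = 60                                         # lcm(p-1, q-1)
mu = pow(lam, -1, n)                             # with g = n+1, L(g^lam mod n^2) = lam mod n

def paillier_enc(m, r):
    assert gcd(r, n) == 1                        # r must be a unit mod n
    return pow(n + 1, m, nsq) * pow(r, n, nsq) % nsq

def paillier_dec(c):
    L = (pow(c, lam, nsq) - 1) // n              # L(x) = (x - 1) / n
    return L * mu % n

c_sum = paillier_enc(4, r=7) * paillier_enc(5, r=9) % nsq
assert paillier_dec(c_sum) == 4 + 5              # multiplying ciphertexts adds plaintexts
print("homomorphic checks passed")
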
5

Kasneci, Dede. „Data protection law: recent developments“. Doctoral thesis, Università degli studi di Trieste, 2010. http://hdl.handle.net/10077/3578.

Annotation:
2008/2009
Privacy and data protection concern everyone and are issues of profound importance around the world. Privacy has been hailed as "an integral part of our humanity", the "heart of our liberty" and "the beginning of all freedoms" (Solove, 2008). Given its importance, privacy is recognized as a fundamental human right in many international instruments, such as the United Nations Universal Declaration of Human Rights of 1948 (Article 12), the International Covenant on Civil and Political Rights (Article 17), the European Convention on Human Rights of 1950 (Article 8), the Charter of Fundamental Rights of the European Union of 2007 (Article 8) and the Treaty of Lisbon of 2008 (Article 16 TFEU). However, beyond this worldwide consensus about the importance of privacy and the need for its protection, privacy is difficult to conceptualize. It is a contested legal concept, with several understandings and more misunderstandings. Privacy is actually shorthand for a complex bundle of issues, ranging from dignity to discrimination, and rooted in our need to control what we tell others about ourselves. The main obstacle to a satisfying conceptualization of privacy is that there are some eternal privacy tensions: the interests protected by privacy and data protection laws are inherently in conflict with other legitimate interests such as freedom of speech, public security and the free flow of information. While it is impossible to belong to a community and withhold all data, the collection and processing of our data carry with them many risks and dangers. One such risk is that the data will be abused by those who access it, whether authorized or not. Data which was consensually provided for one purpose might be used against us in a different context. Other privacy tensions are driven by the technology which gave rise to the emergence of data protection law: the falling cost of data storage and communication makes it easier for merchants and governments to collect more data on people, and thus to violate privacy more efficiently. The development of computer technology in the 1960s and 1970s and the enormous potential of the digital revolution worried civil libertarians. The nightmare of the all-seeing, all-knowing "Big Brother" of George Orwell's "1984" no longer belonged to the realm of fiction, but was a reality. And as the enormous potential of the digital revolution became more apparent, and together with it the dangers posed to privacy, the calls for specific measures to protect individuals grew louder. Data protection rules originally developed at the national level in the 1970s as a response to the threats posed to privacy by the technological developments of the 1960s and 1970s. Data protection emerged as a new legal field, separate from privacy law but dependent upon it. The task of personal data law is to provide a legal framework capable of reconciling the needs and interests of those who make use of personal data (data controllers or data processors) with those of the persons to whom these data relate (data subjects). Europe has proven to be the leader in protecting the privacy and personal data of individuals in the digital age.
At the EU level, the first legal instrument in this field was the Data Protection Directive, passed in 1995 to harmonize national data protection laws within the European Community, with the aim of protecting the fundamental rights and freedoms of individuals, including their privacy and personal data. After 15 years, the question is whether Directive 95/46/EC still fits the objectives for which it was adopted in 1995. The European Commission considers that Directive 95/46/EC fulfils its original objectives and therefore does not need to be amended. This thesis questions this static approach of the European Commission to the data protection regime and argues that the increasing pressure on privacy, due to the development of privacy-destroying technologies and the growing use of and demand for personal information by the public and private sectors, requires a quick legal answer and constant revision of data protection legislation. The research carried out for this thesis shows that the social and regulatory environment surrounding the creation, management and use of personal data has evolved significantly since the adoption of Directive 95/46/EC. The Directive is showing its age and is failing to meet the new challenges posed to privacy by factors such as the huge growth of personal information online and the growing availability and ability of new technologies to process, use and abuse personal information in many ways. These factors have challenged the means and methods used by the Directive to protect personal data and have altered the environment for its implementation. Thus, it is clear that the context in which the Data Protection Directive was created has changed fundamentally, and certain basic assumptions of the Directive have already been challenged in approach, in law and in practice. All these factors show that the Directive is out of step with the technological, social and legal challenges of the 21st century and therefore needs to be reviewed and amended.
XXI Ciclo
6

Lloyd, Ian J. „Data processing and individual freedom : data protection and beyond“. Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233213.

7

Neophytou, Andonis. „Computer security : data control and protection“. Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834504.

Annotation:
Computer security is a crucial area for any organization based on electronic devices that process data. The security of the devices themselves and the data they process is the backbone of the organization. To date there have been no completely secure systems or procedures, and a lot of research is being done in this area. It is impossible for a machine or a mechanical procedure to "guess" all possible events and lead to conclusive, cohesive and comprehensive secure systems, because of: 1) the human factor, and 2) acts of nature (fire, flood, etc.). However, proper managerial control can alleviate the extent of the damage caused by those factors. The purpose of this study is to examine the different frameworks of computer security. Emphasis is given to data/database security and the various kinds of attacks on the data. Controls over these attacks and preventative measures will be discussed, and high level language programs will demonstrate the protection issues. The Oracle SQL query language will be used to demonstrate these controls and prevention measures. In addition, the FORTRAN high level language will be used in conjunction with SQL (only the FORTRAN and COBOL compilers are available for embedded SQL). The C language will be used to show attacks on password files and also as an encryption/decryption program. This study was based mainly on research. An investigation of literature spanning the past decade was examined to produce the ideas and methods of prevention and control discussed in the study.
Department of Computer Science
8

Pizzolante, Raffaele. „Compression and protection of multidimensional data“. Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/1943.

Annotation:
2013 - 2014
The main objective of this thesis is to explore and discuss novel techniques related to the compression and protection of multidimensional data (i.e., 3-D medical images, hyperspectral images, 3-D microscopy images and 5-D functional Magnetic Resonance Images). First, we outline a lossless compression scheme based on a predictive model, denoted as the Medical Images Lossless Compression algorithm (MILC). MILC is characterized by a good trade-off between compression performance and reduced usage of hardware resources. Since in the medical and medical-related fields the execution speed of an algorithm can be a "critical" parameter, we investigate the parallelization of the compression strategy of the MILC algorithm, denoted as Parallel MILC. Parallel MILC can be executed on heterogeneous devices (i.e., CPUs, GPUs, etc.) and provides significant results in terms of speedup with respect to MILC. This is followed by the important aspects related to the protection of two sensitive typologies of multidimensional data: 3-D medical images and 3-D microscopy images. Regarding the protection of 3-D medical images, we outline a novel hybrid approach, which allows for the efficient compression of 3-D medical images as well as the embedding of a digital watermark at the same time. In relation to the protection of 3-D microscopy images, the simultaneous embedding of two watermarks is explained. It should be noted that 3-D microscopy images are often used in delicate tasks (e.g., forensic analysis). Subsequently, we review a novel predictive structure that is appropriate for the lossless compression of different typologies of multidimensional data... [edited by Author]
XIII n.s.
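The predictive idea behind lossless coders of the MILC family can be illustrated in a few lines: predict each sample from its neighbour and entropy-code the small residuals instead of the raw samples. The sketch below uses a trivial previous-sample predictor and zlib as a stand-in entropy coder; it is a generic illustration, not the MILC algorithm.

# Predictive lossless coding in miniature: store residuals (sample minus
# prediction) instead of samples. Trivial previous-sample predictor; zlib
# acts as a stand-in entropy coder. Not the actual MILC predictor.
import random
import zlib

random.seed(1)
# A slowly varying signal with small noise, as for neighbouring image samples.
samples = [(100 + i // 8 + random.choice((0, 1, 2))) % 256 for i in range(4096)]
raw = bytes(samples)

# Residual of each sample against the previous one (the prediction).
residuals = bytes((samples[i] - (samples[i - 1] if i else 0)) % 256
                  for i in range(len(samples)))

print(len(zlib.compress(raw)))        # baseline size
print(len(zlib.compress(residuals)))  # smaller: residuals concentrate near zero

# Lossless reconstruction: cumulatively sum the residuals back.
rec, acc = [], 0
for r in residuals:
    acc = (acc + r) % 256
    rec.append(acc)
assert rec == samples
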
9

Ng, Chi-kwan Miranda. „A study of the administrative provisions governing personal data protection in the Hong Kong Government“. [Hong Kong] : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12335368.

10

Pavlovic, Dusan <1981>. „Online Gambling in the UE: from Data Protection to Gambler Protection“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8721/1/PAVLOVIC_DUSAN_tesi.pdf.

Annotation:
The thesis seeks to answer the question: how does the processing of online gamblers' personal data, used for identification and commercial communication purposes, affect the protection of online gamblers in the EU? After an introduction to the context of online gambling in Europe, the risks that jeopardize the protection of online gamblers, and the concept of responsible gambling, the first part of the thesis sheds light on the relationship between the processing of online gamblers' data and the provocation of problem gambling. Attention is given, on one side, to gambling-related commercial communication and its role in provoking problem gambling. On the other, data processing to identify gamblers is taken as a contributing factor to the recognition of problem gamblers and the prevention of negative consequences deriving from gambling. In the second part of the thesis, the relations, tensions, and conflicts between the protection of online gamblers and the protection of personal data, as processed by online gambling service providers, are analyzed. The work of Jaap-Henk Hoepman on privacy design strategies was used as an inspirational source for the design of strategies for processing online gamblers' personal data. The strategies for the processing of online gamblers' personal data and accompanying tactics, based on Hoepman's proposals for data minimization and data separation strategies and their antipodes (data maximization and data linking strategies), outline scenarios that may prevent or provoke problem gambling. The thesis analyzes business practice regarding the types of online gambler data that are processed for gambling-related activities, including data processed for the protection of online gamblers. Finally, the legal analysis answers whether and to what extent the implementation of privacy-invasive strategies (data maximization and data linking) could be lawful.
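Hoepman-style minimization can be made concrete with a small sketch in which each processing purpose receives only the fields it needs; the purposes and field names below are invented for illustration.

# Data minimization in the Hoepman sense: each purpose receives only the
# attributes it strictly needs. Purposes and fields are illustrative.
ALLOWED_FIELDS = {
    "age_verification":         {"player_id", "date_of_birth"},
    "responsible_gambling":     {"player_id", "deposit_history", "session_length"},
    "commercial_communication": {"player_id", "marketing_opt_in"},
}

def minimise(record, purpose):
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

gambler = {
    "player_id": "p-1093",
    "date_of_birth": "1981-04-02",
    "deposit_history": [50, 200, 500],
    "session_length": 340,
    "marketing_opt_in": False,
    "home_address": "example street 1",   # never needed by the purposes above
}

print(minimise(gambler, "responsible_gambling"))
# The antipode ("data maximization") would instead join all fields across
# purposes and sources: the privacy-invasive scenario the thesis examines.
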
11

Degli, Esposti Sara. „From dataveillance to data economy : firm view on data protection“. Thesis, Open University, 2016. http://oro.open.ac.uk/53072/.

Annotation:
The increasing availability of electronic records and the expanded reliance on online communications and services have made available a huge amount of data about people's behaviours, characteristics, and preferences. Advancements in data processing technology, known as big data, offer opportunities to increase organisational efficiency and competitiveness. Analytically sophisticated companies excel in their ability to extract value from the analysis of digital data. However, in order to exploit the potential economic benefits produced by big data and analytics, issues of data privacy and information security need to be addressed. In Europe, organisations processing personal data are required to implement basic data protection principles, which are considered difficult to implement in big data environments. Little is known in the privacy studies literature about how companies manage the trade-off between data usage and data protection. This study explores the corporate data privacy environment by focusing on the interrelationship between the data protection legal regime, the application of big data analytics to achieve corporate objectives, and the creation of an organisational privacy culture. It also draws insights from surveillance studies, particularly the idea of dataveillance, to identify potential limitations of the current legal privacy regime. The findings from the analysis of survey data show that big data and data protection support each other, but also that some frictions can emerge around data collection and data fusion. The demand for the integration of different data sources poses challenges to the implementation of data protection principles. However, this study finds no evidence that data protection laws prevent data gathering. Implications relevant to the debate on the reform of European data protection law are also drawn from these findings.
12

Duricu, Alexandra. „Data Protection Impact Assessment (DPIA) and Risk Assessment in the context of the General Data Protection Regulation (GDPR)“. Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74384.

13

Ammar, Bassem AbuBakr. „Error protection and security for data transmission“. Thesis, Lancaster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.421640.

14

Lee, Ryan 1980. „Personal data protection in the semantic web“. Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29656.

Annotation:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 167-168).
Growing concerns over the abuse of personal information via the World Wide Web can be addressed at political, social, and technical levels. As the Web evolves into the Semantic Web, where machines understand the information they process, technical solutions such as PEDAL become feasible. PEDAL, the Personal Data Access Language, defines a vocabulary for composing policies that describe characteristics of clients who are allowed or denied access to the personal information a policy governs. Policies can be merged together using PEDAL negotiation rules. Semantic Web logic processors reason through policies, arriving at a final determination on information distribution for each request. Software for implementing PEDAL and test cases for exercising its features demonstrate basic PEDAL functionality.
by Ryan Lee.
M.Eng. and S.B.
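PEDAL itself is expressed in Semantic Web vocabularies, so the sketch below is only a rough, hypothetical analogue of what a policy processor does: it evaluates allow/deny rules against a requesting client's attributes and merges policies with deny taking precedence. The rule format is invented, not PEDAL syntax.

# Hypothetical analogue of a PEDAL-style policy processor: policies list
# client characteristics that are allowed or denied access; merged policies
# are evaluated with deny overriding allow. Not actual PEDAL syntax.

def matches(rule, client):
    return all(client.get(attr) == value for attr, value in rule.items())

def decide(policies, client):
    merged_allow = [r for p in policies for r in p.get("allow", [])]
    merged_deny = [r for p in policies for r in p.get("deny", [])]
    if any(matches(r, client) for r in merged_deny):
        return "deny"
    if any(matches(r, client) for r in merged_allow):
        return "allow"
    return "deny"   # default-deny for personal information

email_policy = {"allow": [{"role": "colleague"}], "deny": [{"domain": "ads.example"}]}
site_policy = {"deny": [{"role": "crawler"}]}

print(decide([email_policy, site_policy], {"role": "colleague", "domain": "mit.edu"}))  # allow
print(decide([email_policy, site_policy], {"role": "crawler"}))                        # deny
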
15

Shamsuri, Nurul A. „PROPOSAL ON REGIONAL DATA PROTECTION FOR ASEAN“. Thesis, Юриспруденція в сучасному інформаційному просторі: [Матеріали ІХ Міжнародної науково-практичної конференції, м. Київ, Національний авіаційний університет, 1 березня 2019 р.] Том 1. – Тернопіль: Вектор, 2019. – 394 с, 2019. http://er.nau.edu.ua/handle/NAU/38075.

16

Snell, Mark A. „Data protection and transborder data flow : a British and Australian perspective“. Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360014.

17

Cannon, Jennifer Elizabeth. „Strategies for Improving Data Protection to Reduce Data Loss from Cyberattacks“. ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7277.

Annotation:
Accidental and targeted data breaches threaten sustainable business practices and personal privacy, exposing all types of businesses to increased data loss and financial impacts. This single case study was conducted in a medium-sized enterprise located in Brevard County, Florida, to explore the successful data protection strategies employed by the information system and information technology business leaders. Actor-network theory was the conceptual framework for the study with a graphical syntax to model data protection strategies. Data were collected from semistructured interviews of 3 business leaders, archival documents, and field notes. Data were analyzed using thematic, analytic, and software analysis, and methodological triangulation. Three themes materialized from the data analyses: people--inferring security personnel, network engineers, system engineers, and qualified personnel to know how to monitor data; processes--inferring the activities required to protect data from data loss; and technology--inferring scientific knowledge used by people to protect data from data loss. The findings are indicative of successful application of data protection strategies and may be modeled to assess vulnerabilities from technical and nontechnical threats impacting risk and loss of sensitive data. The implications of this study for positive social change include the potential to alter attitudes toward data protection, creating a better environment for people to live and work; reduce recovery costs resulting from Internet crimes, improving social well-being; and enhance methods for the protection of sensitive, proprietary, and personally identifiable information, which advances the privacy rights for society.
18

Scalavino, Enrico. „A data protection architecture for derived data control in partially disconnected networks“. Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10203.

Annotation:
Every organisation needs to exchange and disseminate data constantly amongst its employees, members, customers and partners. Disseminated data is often sensitive or confidential and access to it should be restricted to authorised recipients. Several enterprise rights management (ERM) systems and data protection solutions have been proposed by both academia and industry to enable usage control on disseminated data, i.e. to allow data originators to retain control over whom accesses their information, under which circumstances, and how it is used. This is often obtained by means of cryptographic techniques and thus by disseminating encrypted data that only trustworthy recipients can decrypt. Most of these solutions assume data recipients are connected to the network and able to contact remote policy evaluation authorities that can evaluate usage control policies and issue decryption keys. This assumption oversimplifies the problem by neglecting situations where connectivity is not available, as often happens in crisis management scenarios. In such situations, recipients may not be able to access the information they have received. Also, while using data, recipients and their applications can create new derived information, either by aggregating data from several sources or transforming the original data’s content or format. Existing solutions mostly neglect this problem and do not allow originators to retain control over this derived data despite the fact that it may be more sensitive or valuable than the data originally disseminated. In this thesis we propose an ERM architecture that caters for both derived data control and usage control in partially disconnected networks. We propose the use of a novel policy lattice model based on information flow and mandatory access control. Sets of policies controlling the usage of data can be specified and ordered in a lattice according to the level of protection they provide. At the same time, their association with specific data objects is mandated by rules (content verification procedures) defined in a data sharing agreement (DSA) stipulated amongst the organisations sharing information. When data is transformed, the new policies associated with it are automatically determined depending on the transformation used and the policies currently associated with the input data. The solution we propose takes into account transformations that can both increase or reduce the sensitivity of information, thus giving originators a flexible means to control their data and its derivations. When data must be disseminated in disconnected environments, the movement of users and the ad hoc connections they establish can be exploited to distribute information. To allow users to decrypt disseminated data without contacting remote evaluation authorities, we integrate our architecture with a mechanism for authority devolution, so that users moving in the disconnected area can be granted the right to evaluate policies and issue decryption keys. This allows recipients to contact any nearby user that is also a policy evaluation authority to obtain decryption keys. The mechanism has been shown to be efficient so that timely access to data is possible despite the lack of connectivity. Prototypes of the proposed solutions that protect XML documents have been developed. 
A realistic crisis management scenario has been used to show both the flexibility of the presented approach for derived data control and the efficiency of the authority devolution solution when handling data dissemination in simulated partially disconnected networks. While existing systems do not offer any means to control derived data and only offer partial solutions to the problem of lack of connectivity (e.g. by caching decryption keys), we have defined a set of solutions that help data originators faced with the shortcomings of current proposals to control their data in innovative, problem-oriented ways.
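The policy lattice idea, in which derived data automatically acquires policies at least as restrictive as those of its inputs unless a transformation is declared to lower sensitivity, can be sketched as a join (least upper bound) over ordered protection levels. The levels and transformation effects below are illustrative, not the thesis's DSA rules.

# Sketch of lattice-based labelling for derived data: the label of an output
# is the join (max) of its inputs' labels, shifted by the transformation's
# declared effect on sensitivity. Levels and transforms are illustrative.
LEVELS = ["public", "internal", "confidential", "secret"]   # totally ordered chain

def join(labels):
    return max(labels, key=LEVELS.index)

# Declared sensitivity effect of each transformation (+1 raises, -1 lowers).
TRANSFORM_EFFECT = {"aggregate": +1, "anonymise": -1, "reformat": 0}

def derived_label(input_labels, transform):
    idx = LEVELS.index(join(input_labels)) + TRANSFORM_EFFECT[transform]
    return LEVELS[max(0, min(idx, len(LEVELS) - 1))]        # clamp to the chain

# Aggregating internal and confidential feeds yields secret data;
# anonymising confidential data lowers it to internal.
print(derived_label(["internal", "confidential"], "aggregate"))   # secret
print(derived_label(["confidential"], "anonymise"))               # internal
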
19

Sallaku, Redlon <1994&gt. „Privacy and Protecting Privacy: Using Static Analysis for legal compliance. General Data Protection Regulation“. Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14682.

Annotation:
The main purpose of this thesis is to study privacy and how to protect it, including the new regulatory framework proposed by the EU, the GDPR; to investigate how static analysis could help GDPR enforcement; and to develop a new static analysis prototype that fulfills this task in practice. The GDPR (General Data Protection Regulation) is a recent European regulation to harmonize and enforce data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organizations deal with sensitive data. The regulation has been enforced since May 2018. While it is already clear that there is no unique solution covering the whole spectrum of the GDPR, it is still unclear how static analysis might help enterprises fulfill the constraints imposed by this regulation.
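As a toy illustration of how static analysis might support GDPR compliance checking, the sketch below walks a Python abstract syntax tree and flags calls that pass variables with personal-data-like names into network sinks. The PII and sink name lists are assumptions, and a real analyser would track data flow rather than variable names; this is not the prototype developed in the thesis.

# Toy static check: flag source code that passes personal-data-like variables
# to network sinks. Name-based and purely illustrative; real GDPR-oriented
# analyses track data flows across procedures.
import ast

PII_NAMES = {"email", "date_of_birth", "ip_address", "full_name"}
NETWORK_SINKS = {"post", "send", "publish"}

SOURCE = """
import requests
def register(full_name, email):
    requests.post("https://tracker.example/collect", data={"e": email})
"""

def pii_leak_warnings(source):
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute) \
                and node.func.attr in NETWORK_SINKS:
            names = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            for hit in sorted(names & PII_NAMES):
                warnings.append(f"line {node.lineno}: '{hit}' flows into "
                                f"network sink '{node.func.attr}'")
    return warnings

print("\n".join(pii_leak_warnings(SOURCE)))
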
20

Puglisi, Silvia. „Analysis, modelling and protection of online private data“. Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/456205.

Annotation:
Online communications generate a considerable amount of data flowing between users, services and applications. This information results from the interactions among different parties, and once collected, it is used for a variety of purposes, from marketing profiling to product recommendations, from news filtering to relationship suggestions. Understanding how data is shared and used by services on behalf of users is the motivation behind this work. When a user creates a new account on a certain platform, this creates a logical container that will be used to store the user's activity. The service aims to profile the user. Therefore, every time some data is created, shared or accessed, information about the user's behaviour and interests is collected and analysed. Users produce this data but are unaware of how it will be handled by the service, and of whom it will be shared with. More importantly, once aggregated, this data could reveal more over time than the same users initially intended. Information revealed by one profile could be used to obtain access to another account, or during social engineering attacks. The main focus of this dissertation is modelling and analysing how user data flows among different applications and how this represents an important threat for privacy. A framework defining privacy violation is used to classify threats and identify issues where user data is effectively mishandled. User data is modelled as categorised events, and aggregated as histograms of relative frequencies of online activity along predefined categories of interests. Furthermore, a paradigm based on hypermedia to model online footprints is introduced. This emphasises the interactions between different user-generated events and their effects on the user's measured privacy risk. Finally, the lessons learnt from applying the paradigm to different scenarios are discussed.
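The footprint model described here, online activity aggregated into relative-frequency histograms over predefined interest categories, reduces to a few lines of Python. The event categories are invented, and the closing divergence score is shown only as one plausible way to quantify how far a profile deviates from the population.

# A user footprint as a histogram of relative frequencies over predefined
# interest categories. Categories and events are invented for illustration.
import math
from collections import Counter

events = ["sports", "news", "sports", "shopping", "news", "sports", "travel"]

counts = Counter(events)
total = sum(counts.values())
profile = {category: count / total for category, count in counts.items()}
print(profile)   # e.g. {'sports': 0.428..., 'news': 0.285..., ...}

# One plausible uniqueness score (an assumption, not the thesis's metric):
# KL divergence of the user's profile from a population baseline.
population = {c: 0.25 for c in ("sports", "news", "shopping", "travel")}
risk = sum(p * math.log(p / population[c]) for c, p in profile.items())
print(risk)
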
21

Øztarman, Jo Mehmet Sollihagen. „End-to-End Data Protection of SMS Messages“. Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-15075.

Annotation:
Short Message Service (SMS) has become a very commonly used service. It does not only work as a substitute for voice telephony, but is also used for automated services. Some of these services are related to security issues, like SMS banking or one-time passwords, even though SMS messages can be spoofed or eavesdropped. We propose a design where we add security to SMS by making an easily configurable module that utilizes a fast cryptographic scheme called Elliptic Curve Signcryption. To prove our concept, we implement an SMS client for Android smart phones that utilizes our security module and provides end-to-end data protection of SMS messages with the same security level as Top Secret content.
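Signcryption performs the work of a signature and an encryption in a single, cheaper primitive. Reproducing the exact Elliptic Curve Signcryption scheme is beyond a sketch, so the following shows the functionally similar sign-then-encrypt composition over elliptic curves, using the third-party cryptography package; it is an analogue under stated assumptions, not the module described in the thesis.

# Sign-then-encrypt over elliptic curves as a functional analogue of
# signcryption (which fuses both into one cheaper primitive). Uses the
# 'cryptography' package; not the Elliptic Curve Signcryption scheme itself.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

sender_sign = Ed25519PrivateKey.generate()       # sender's long-term signing key
receiver_kex = X25519PrivateKey.generate()       # receiver's long-term key

def protect_sms(plaintext: bytes):
    signature = sender_sign.sign(plaintext)      # authenticity and non-repudiation
    ephemeral = X25519PrivateKey.generate()      # fresh key pair per message
    shared = ephemeral.exchange(receiver_kex.public_key())
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"sms-e2e").derive(shared)
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, signature + plaintext, None)
    return ephemeral.public_key(), nonce, ct

def unprotect_sms(eph_pub, nonce, ct):
    shared = receiver_kex.exchange(eph_pub)
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"sms-e2e").derive(shared)
    blob = AESGCM(key).decrypt(nonce, ct, None)
    signature, plaintext = blob[:64], blob[64:]  # Ed25519 signatures are 64 bytes
    sender_sign.public_key().verify(signature, plaintext)  # raises if forged
    return plaintext

print(unprotect_sms(*protect_sms(b"one-time password: 493211")))
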
22

Sundqvist, Erik. „Protection of Non-Volatile Data in IaaS-environments“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112954.

Annotation:
Infrastructure-as-a-Service (IaaS) cloud solutions continue to experience growth, but many enterprises and organizations are of the opinion that cloud adoption has decreased security in several aspects. This thesis addresses protection of IaaS-environment non-volatile data. A risk analysis is conducted, using the CORAS method, to identify and evaluate risks, and to propose treatments to those risks considered non-acceptable. The complex and distributed nature of an IaaS deployment is investigated to identify different approaches to data protection using encryption in combination with Trusted Computing principles. Additionally, the outcome of the risk analysis is used to decide the advantages and/or drawbacks of the different approaches; encryption on the storage host, on the compute host or inside the virtual machine. As a result of this thesis, encryption on the compute host is decided to be most beneficial due to minimal needs for trust, minimal data exposure and key management aspects. At the same time, a high grade of automation can be obtained, retaining usability for cloud consumers without any specific security knowledge. A revisited risk analysis shows that both non-acceptable and acceptable risks are mitigated and partly eliminated, but leaves virtual machine security as an important topic for further research. Along with the risk analysis and treatment proposal, this thesis provides a proof-of-concept implementation using encryption and Trusted Computing on the compute host to protect block storage data in an OpenStack environment. The implementation directly follows the Domain-Based Storage Protection (DBSP) protocol, invented by Ericsson Research and SICS, for key management and attestation of involved hosts.
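The preferred placement, encryption on the compute host with keys released by an external manager only to attested hosts, can be sketched schematically. Attestation and key release are stubbed below; the sketch follows the general shape of such designs, not the actual DBSP protocol messages.

# Schematic of compute-host volume encryption: the key manager releases a
# volume key only to hosts that pass attestation; storage hosts see only
# ciphertext. Attestation and key release are stubbed; not the DBSP protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

TRUSTED_MEASUREMENTS = {"sha256:abc123"}          # known-good host configurations
volume_keys = {"vol-7": AESGCM.generate_key(bit_length=256)}

def release_key(volume_id, host_measurement):
    # Stand-in for TPM-based remote attestation of the compute host.
    if host_measurement not in TRUSTED_MEASUREMENTS:
        raise PermissionError("host failed attestation")
    return volume_keys[volume_id]

# On an attested compute host: encrypt before data ever reaches storage.
key = release_key("vol-7", "sha256:abc123")
nonce = os.urandom(12)
block = AESGCM(key).encrypt(nonce, b"guest filesystem block", None)

# The storage host persists only (nonce, block); plaintext never leaves
# the compute host, which minimises the components that must be trusted.
print(len(block))
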
23

Holovach, M. „OS for data protection in modern tablet devices“. Thesis, Sumy State University, 2014. http://essuir.sumdu.edu.ua/handle/123456789/45439.

Annotation:
Nowadays, tablet devices are steadily increasing in popularity with modern users. Because of their portability, more and more people are getting used to them. The new possibilities make it easy to create and carry data; as a consequence, protection is needed to secure our personal information.
24

Eskandari, Mojtaba. „Smartphone Data Transfer Protection According to Jurisdiction Regulations“. Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/367700.

Annotation:
The prevalence of mobile devices and their capability to access high speed Internet have transformed them into a portable pocket cloud interface. The sensitivity of a user's personal data demands an adequate level of protection in the cloud. In this regard, the European Union Data Protection regulations (e.g., article 25.1) restrict the transfer of European users' personal data to certain locations. The matter of concern, however, is the enforcement of such regulations. Since cloud service provision is independent of physical location and data can travel to various servers, it is a challenging task to determine the location of data and enforce jurisdiction policies. In this dissertation, we first demonstrate how mobile apps mishandle personal data collection and transfer by analyzing a wide range of popular Android apps in Europe. Then we investigate approaches to monitor and enforce the location restrictions on collected personal data. Since there are multiple entities such as mobile devices, mobile apps, data controllers and cloud providers in the process of collecting and transferring data, we study each one separately. We introduce the design and prototyping of a suitable approach to perform, or at least facilitate, the enforcement procedure with respect to the duty of each entity. Cloud service providers provide their infrastructure to data controllers in the form of virtual machines or containers; therefore, we design and implement a tool, named VLOC, to verify the physical location of a virtual machine in the cloud. Since VLOC requires the collaboration of the data controller, we design a framework, called DLOC, which enables end users to determine the location of their data after it has been transferred to the cloud and possibly replicated. DLOC is a distributed framework which does not need the data controller or cloud provider to participate or modify their systems; thus, it is economical to implement and can be used widely.
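How VLOC verifies a virtual machine's physical location is specific to the thesis, but delay-based geolocation gives the general flavour: packets cannot travel faster than light in fibre, so a round-trip time measured from a trusted landmark bounds the server's distance. The figures below are illustrative.

# Generic delay-based location bounding (the general idea behind verifying
# where a cloud VM runs; not necessarily VLOC's actual method). A round-trip
# time measured from a trusted landmark upper-bounds the server's distance.
SPEED_IN_FIBRE_KM_PER_MS = 200.0   # roughly two thirds of the speed of light

def max_distance_km(rtt_ms, processing_delay_ms=0.2):
    one_way_ms = max(rtt_ms - processing_delay_ms, 0.0) / 2.0
    return one_way_ms * SPEED_IN_FIBRE_KM_PER_MS

# A landmark in Frankfurt measures a 4 ms RTT to the VM.
bound = max_distance_km(4.0)
print(f"server is within {bound:.0f} km of the landmark")   # 380 km

# Multiple landmarks intersect such disks to decide whether the VM can
# plausibly sit inside the jurisdiction the contract requires.
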
25

Eskandari, Mojtaba. „Smartphone Data Transfer Protection According to Jurisdiction Regulations“. Doctoral thesis, University of Trento, 2017. http://eprints-phd.biblio.unitn.it/2597/1/thesis-mojizz2.pdf.

Annotation:
The prevalence of mobile devices and their capability to access high speed Internet have transformed them into a portable pocket cloud interface. The sensitivity of a user's personal data demands an adequate level of protection in the cloud. In this regard, the European Union Data Protection regulations (e.g., article 25.1) restrict the transfer of European users' personal data to certain locations. The matter of concern, however, is the enforcement of such regulations. Since cloud service provision is independent of physical location and data can travel to various servers, it is a challenging task to determine the location of data and enforce jurisdiction policies. In this dissertation, we first demonstrate how mobile apps mishandle personal data collection and transfer by analyzing a wide range of popular Android apps in Europe. Then we investigate approaches to monitor and enforce the location restrictions on collected personal data. Since there are multiple entities such as mobile devices, mobile apps, data controllers and cloud providers in the process of collecting and transferring data, we study each one separately. We introduce the design and prototyping of a suitable approach to perform, or at least facilitate, the enforcement procedure with respect to the duty of each entity. Cloud service providers provide their infrastructure to data controllers in the form of virtual machines or containers; therefore, we design and implement a tool, named VLOC, to verify the physical location of a virtual machine in the cloud. Since VLOC requires the collaboration of the data controller, we design a framework, called DLOC, which enables end users to determine the location of their data after it has been transferred to the cloud and possibly replicated. DLOC is a distributed framework which does not need the data controller or cloud provider to participate or modify their systems; thus, it is economical to implement and can be used widely.
26

Koutsias, Marios. „Consumer protection and global trade in the digital environment : the case of data protection“. Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437826.

27

Ng, Chi-kwan Miranda, and 吳志坤. „A study of the administrative provisions governing personal data protection in the Hong Kong Government“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31975136.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Nyamunda, James. „Mandatory Business-To-Government Data Sharing: Exploring data protection through International Investment Law“. Thesis, Uppsala universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-443655.

Der volle Inhalt der Quelle
Annotation:
As more data is gathered, analysed and stored, private companies create new products and unlock new commercial frontiers. Simultaneously, governments are beginning to realise that the laws in place require a revamp for the good of commercial innovation and for the execution of governmental prerogatives. Hence, in a bid to catch up with the data economy, governments have begun looking for new legal measures that allow them to legally access the data that is held by private companies. Amongst the existing solutions and sprouting suggestions, mandatory business-to-government data sharing often features as a measure through which obligations may be imposed upon private data-holding companies to share their data with governments. Some governments have already put in place laws and adopted practices that impose mandatory business-to-government data sharing obligations on private companies. Many of the countries where private enterprises carry out their businesses have entered into International Investment Agreements (IIAs), which invariably entitle investors to fair and equitable treatment and prohibit unlawful expropriation. Against this background, this thesis discusses the subject of mandatory business-to-government data sharing by dwelling on three main issues, that is, (i) whether data is protected as an investment, (ii) whether mandatory business-to-government data sharing obligations may infringe the Fair and Equitable Treatment standard, and (iii) whether mandatory business-to-government data sharing obligations may amount to unlawful expropriation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Routly, Wayne A. „SIDVI: a model for secure distributed data integration“. Thesis, Port Elizabeth Technikon, 2004. http://hdl.handle.net/10948/261.

Der volle Inhalt der Quelle
Annotation:
The new millennium has brought about an increase in the use of business intelligence and knowledge management systems. The very foundations of these systems are the multitude of source databases that store the data. The ability to derive information from these databases is brought about by means of data integration. With the current emphasis on security in all walks of information and communication technology, a renewed interest must be placed in the systems that provide us with information: data integration systems. This dissertation investigates security issues at specific stages in the data integration cycle, with special reference to problems when performing data integration in a peer-to-peer environment, as in distributed data integration. In the database environment we are concerned with the database itself and the media used to connect to and from the database. In distributed data integration, the concept of the database is redefined into the source database, from which we extract data, and the storage database, in which the integrated data is stored. This postulates three distinct areas in which to apply security: the data source, the network medium and the data store. All of these areas encompass data integration and must be considered holistically when implementing security. Data integration is never only one server or one database; it is various geographically dispersed components working together towards a common goal. It is important, then, that we consider all aspects involved when attempting to provide security for data integration. This dissertation focuses on these areas of security threats and investigates a model to ensure the integrity and security of data during the entire integration process. To ensure effective security in a data integration environment, that security should be present at all stages and should provide end-to-end protection.
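One concrete ingredient of such end-to-end protection is integrity tagging at the data source, verified at the data store after the record has crossed the network medium. A minimal sketch, assuming a shared key agreed out of band (key management and in-transit encryption are out of scope here):

```python
# Source-to-store integrity check: the source tags each record with an HMAC,
# and the data store verifies the tag after transport.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-agreed-out-of-band"  # assumption for illustration

def tag(record: dict) -> str:
    # Canonical serialization so source and store hash identical bytes.
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

record = {"id": 7, "value": 42.0}
sent = {"record": record, "mac": tag(record)}               # at the data source
ok = hmac.compare_digest(sent["mac"], tag(sent["record"]))  # at the data store
print(ok)  # True unless the record was modified in transit
```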
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Wallace, Amelia. „Protection of Personal Data in Blockchain Technology : An investigation on the compatibility of the General Data Protection Regulation and the public blockchain“. Thesis, Stockholms universitet, Institutet för rättsinformatik (IRI), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-167303.

Der volle Inhalt der Quelle
Annotation:
On 25 May 2018 the General Data Protection Regulation, GDPR, came into force in the EU. The regulation strengthened the rights of data subjects in relation to data controllers and processors and gave them more control over their personal data. The recitals of the GDPR state that it was the rapid development in technology and globalisation that brought new challenges for the protection of personal data. Private companies and public authorities were making use of personal data on an unprecedented scale in order to pursue their own activities. The protection should be technologically neutral and not dependent on the technique used. This leads to questions on whether the protection offered through the GDPR is de facto applicable to all technologies. One particular technology which has caught the interest of both private companies and public authorities is the blockchain. The public distributed blockchain is completely decentralized, meaning it is the users who decide the rules and its content. There are no intermediaries in power, and transactions of value or other information are sent peer-to-peer. By using asymmetric cryptography and advanced hash algorithms, the transactions sent in the blockchain are secured. While the interest in and use of blockchain are increasing and the GDPR attempts to be applicable to all techniques, the characteristics of the public blockchain must be analysed under the terms of the GDPR. The thesis examines whether natural persons can be identified in a public blockchain, who is considered data controller and data processor of a public blockchain, and whether the principles of the GDPR can be applied in such a decentralised and publicly distributed technology.
On 25 May 2018 the new data protection regulation, the GDPR, entered into force in the EU, bearing down harder on data controllers and data processors than the previous Data Protection Directive had. With the reform, the EU wanted to strengthen the protection of personal data by giving data subjects more control over their personal data. The recitals of the regulation state that it was the rapid technological development and globalisation that created new challenges for that protection, as private companies and public authorities today use personal data on an entirely new scale. The protection should therefore be technologically neutral and not dependent on the technique used. This opens up questions as to whether the protection the GDPR offers is in fact applicable to all technologies. A particular technology that has caught the interest of private individuals as well as companies and public authorities is the blockchain. The publicly distributed blockchain technology is completely decentralised, which means that it is its users who govern and decide over its content. There are no intermediaries; value transactions and other transfers of information are sent directly between users. The transfers made via the blockchain are secured through asymmetric cryptography and advanced hash algorithms. What has drawn attention during the growing use of and interest in the blockchain, and the entry into force of the GDPR, is how personal data should be handled in such a decentralised technology, where no intermediaries can bear responsibility for any processing of personal data. Several properties of the public blockchain technology need to be problematised, above all its openness and accessibility to every person in the world, and its prohibition of rectification and erasure of recorded data. This thesis addresses the questions of whether natural persons can be identified in a public blockchain, who can be considered data controller and data processor of a public blockchain, and whether the principles and requirements laid down in the GDPR can be complied with in such a decentralised and publicly distributed technology.
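The erasure tension described above follows directly from how blocks are hash-chained. A minimal sketch (illustrative only; real blockchains add signatures, consensus and Merkle trees):

```python
# Each block commits to the hash of its predecessor, so retroactive
# rectification or erasure of a recorded payload breaks every later block.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payload: str) -> None:
    chain.append({
        "prev": block_hash(chain[-1]) if chain else "0" * 64,
        "time": time.time(),
        "payload": payload,  # if this is personal data, it is now immutable
    })

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, "tx: alice -> bob, 1 unit")
append_block(chain, "tx: bob -> carol, 1 unit")
assert verify(chain)
chain[0]["payload"] = "erased"  # an attempt at GDPR-style erasure...
assert not verify(chain)        # ...is detectable: the chain no longer verifies
```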
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Marés, Soler Jordi. „Categorical Data Protection on Statistical Datasets and Social Networks“. Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/129327.

Der volle Inhalt der Quelle
Annotation:
The continuous increase in the publication of data with sensitive content has raised the risk of violating the privacy of individuals and/or institutions. Nowadays this increase is ever faster owing to the great expansion of the Internet. This makes it very important to check the performance of the protection methods used. For these checks there are two types of measures to take into account: information loss and disclosure risk. Another area where privacy plays an increasing role is that of social networks. Social networks have become an essential ingredient of interpersonal communication in the modern world. They let users express and share their interests and comment on daily events with all the people they are connected with. The rapid rise in the popularity of social networks has thus led to their adoption as an area of interest for specific communities. However, the volume of shared data can be very dangerous in terms of privacy. Besides the explicit information shared through each user's posts, there is implicit semantic information hidden in the body of information each user shares. For these and other reasons, the protection of the data belonging to each user must be addressed. The main contributions of this thesis are therefore: • the development of protection methods based on evolutionary algorithms, which automatically search for better protections in terms of information loss and disclosure risk; • the development of an evolutionary method to optimise the transition probability matrix on which the Post-Randomization Method relies, in order to generate better protections; • the definition of a protection method for categorical data based on running a clustering algorithm before protecting, in order to obtain protected data with better utility; • the definition of how both implicit and explicit information can be extracted from a real social network such as Twitter, the development of a protection method for social networks, and the definition of new measures to evaluate the quality of protections in these scenarios.
The continuous growth of publicly available sensitive data has increased the risk of breaching the privacy of the people or institutions in those datasets. This growth is nowadays even faster because of the expansion of the Internet. This fact makes it very important to assess the performance of all the methods used to protect those datasets. To check the performance there exist two kinds of measures: information loss and disclosure risk. Another area where privacy has an increasing role is that of social networks. They have become an essential ingredient of interpersonal communication in the modern world. They enable users to express and share common interests and to comment upon everyday events with all the people with whom they are connected. Indeed, the growth of social media has been rapid and has resulted in the adoption of social networks to meet specific communities of interest. However, this shared information space can prove to be dangerous in respect of user privacy. In addition to explicit "posts" there is much implicit semantic information that is not explicitly given in the posts that the user shares. For these and other reasons, the protection of information pertaining to each user needs to be supported. This thesis presents new approaches to these problems. The main contributions are: • the development of an approach for protecting microdata datasets based on evolutionary algorithms, which searches automatically for better protections in terms of information loss and disclosure risk; • the development of an evolutionary approach to optimize the transition matrices used in the Post-Randomization masking method, which yields better protections; • the definition of an approach to categorical microdata protection based on pre-clustering, achieving protected data with better utility; • the definition of a way to extract both implicit and explicit information from a real social network like Twitter, as well as the development of a protection method to deal with this information and new measures to evaluate the protection quality.
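For the Post-Randomization Method named among the contributions, each categorical value is stochastically replaced according to a row of a transition probability matrix. A minimal sketch with a made-up matrix (the thesis's evolutionary algorithm searches for matrices that balance information loss against disclosure risk; that search is not reproduced here):

```python
# PRAM: publish each category i as category j with probability P[i][j].
import random

CATEGORIES = ["employed", "unemployed", "retired"]
# P[i][j] = probability that true category i is published as category j.
P = [
    [0.90, 0.05, 0.05],
    [0.10, 0.85, 0.05],
    [0.05, 0.05, 0.90],
]

def pram(value: str) -> str:
    i = CATEGORIES.index(value)
    return random.choices(CATEGORIES, weights=P[i])[0]

microdata = ["employed", "retired", "unemployed", "employed"]
protected = [pram(v) for v in microdata]
print(protected)  # e.g. ['employed', 'retired', 'unemployed', 'unemployed']
```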
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Baena, Mirabete Daniel. „Exact and heuristic methods for statistical tabular data protection“. Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/456809.

Der volle Inhalt der Quelle
Annotation:
One of the main purposes of National Statistical Agencies (NSAs) is to provide citizens and researchers with a large amount of trustworthy and high-quality statistical information. NSAs must guarantee that no confidential individual information can be obtained from the released statistical outputs. The discipline of statistical disclosure control (SDC) aims to prevent confidential information from being derived from released data while, at the same time, maintaining as much data utility as possible. NSAs work with two types of data: microdata and tabular data. Microdata files contain records of individuals or respondents (persons or enterprises) with attributes. For instance, a national census might collect attributes such as age, address, salary, etc. Tabular data contains aggregated information obtained by crossing one or more categorical variables from those microdata files. Several SDC methods are available to ensure that no confidential individual information can be obtained from the released microdata or tabular data. This thesis focuses on tabular data protection, although the research carried out can be applied to other classes of problems. Controlled Tabular Adjustment (CTA) and the Cell Suppression Problem (CSP) have concentrated most of the recent research in the tabular data protection field. Both methods formulate Mixed Integer Linear Programming problems (MILPs), which are challenging for tables of moderate size. Even finding a feasible initial solution may be a challenging task for large instances. Because many end users give priority to fast executions and are thus satisfied, in practice, with suboptimal solutions, as a first result of this thesis we present an improvement of a known and successful heuristic for finding feasible solutions of MILPs, called the feasibility pump. The new approach, based on the computation of analytic centers, is named the Analytic Center Feasibility Pump. The second contribution consists in the application of the fix-and-relax heuristic (FR) to the CTA method. FR (alone or in combination with other heuristics) is shown to be competitive with CPLEX branch-and-cut in terms of quickly finding either a feasible solution or a good upper bound. The last contribution of this thesis deals with general Benders decomposition, which is improved with the application of stabilization techniques. A stabilized Benders decomposition is presented, which focuses on finding new solutions in the neighborhood of "good" points. This approach is efficiently applied to the solution of realistic and real-world CSP instances, outperforming alternative approaches. The first two contributions are already published in indexed journals (Operations Research Letters and Computers and Operations Research). The third contribution is a working paper to be submitted soon.
One of the main objectives of National Statistical Institutes (NSIs) is to provide citizens and researchers with a large amount of reliable and accurate statistical data. At the same time, NSIs must guarantee statistical confidentiality, ensuring that no personal data can be obtained from the disseminated statistical outputs. The discipline of statistical disclosure control (SDC) is concerned with guaranteeing that no individual data can be derived from the published statistical outputs, while at the same time preserving as much data richness as possible. NSIs work with two types of data: microdata and tabular data. Microdata are files with individual records of persons or enterprises and a set of attributes; for example, a national census collects attributes such as age, sex, address or salary, among others. Tabular data are aggregated data obtained by crossing one or more categorical attributes or variables of the microdata files. Several SDC methods are available to avoid statistical disclosure in microdata files or tabular data. This thesis focuses on tabular data protection, although the research carried out can also be applied to other types of problems. The CTA (Controlled Tabular Adjustment) and CSP (Cell Suppression Problem) methods have concentrated most of the research done in the field of tabular data protection. Both methods formulate MILP (mixed integer linear programming) problems that are difficult to solve for tables of moderate size; even finding feasible initial solutions can be very difficult. Given that many end users give priority to obtaining fast, good solutions even if they are not optimal, the first contribution of the thesis presents an improvement of a well-known and successful heuristic for finding feasible solutions of MILPs, called the feasibility pump. The new approach, based on the computation of analytic centers, is called the Analytic Center Feasibility Pump. The second contribution consists of applying the fix-and-relax (FR) heuristic to the CTA method. FR (alone or in combination with other heuristics) is shown to be competitive with CPLEX branch-and-cut in terms of quickly finding feasible solutions or good upper bounds. The last contribution of this thesis deals with the general Benders decomposition problem, improving it with the application of stabilization techniques. We present a method called stabilized Benders decomposition that focuses on finding new solutions close to points previously considered good. This approach has been efficiently applied to the CSP problem, obtaining very good results on real tabular data and improving on other known alternatives of the CSP method. The first two contributions have already been published in indexed journals (Operations Research Letters and Computers and Operations Research). We are currently working on the publication of the third contribution, which will soon be submitted for review.
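The core of CTA can be illustrated on a toy table: perturb interior cells as little as possible while keeping the totals exact and pushing a sensitive cell at least a protection level away from its true value. A minimal sketch with an off-the-shelf LP solver (the protection direction is fixed upward here for simplicity; the full CTA model chooses it with binary variables):

```python
# CTA toy model: minimize the L1 deviation of a 2x2 table subject to
# additivity (row/column totals unchanged) and a protection bound on one cell.
import numpy as np
from scipy.optimize import linprog

table = np.array([[20.0, 30.0], [25.0, 25.0]])
g = 5.0  # protection level for sensitive cell (0, 0)

# Deviation of cell k is dev_k = x[k] - x[k+4], all x >= 0 (default bounds),
# so the objective sum(x) equals the total absolute deviation.
c = np.ones(8)

def dev_row(coeffs):
    """Coefficient row expressing a linear form over dev = plus - minus."""
    return np.concatenate([coeffs, -np.asarray(coeffs, dtype=float)])

A_eq = np.array([
    dev_row([1, 1, 0, 0]),  # row 0 additivity
    dev_row([0, 0, 1, 1]),  # row 1 additivity
    dev_row([1, 0, 1, 0]),  # column 0 additivity
    dev_row([0, 1, 0, 1]),  # column 1 additivity
])
b_eq = np.zeros(4)

A_ub = np.array([dev_row([-1, 0, 0, 0])])  # -(dev_00) <= -g, i.e. dev_00 >= g
b_ub = np.array([-g])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
dev = (res.x[:4] - res.x[4:]).reshape(2, 2)
print(table + dev)  # e.g. [[25. 25.] [20. 30.]] -- totals preserved
```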
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Völp, Marcus. „Provable Protection of Confidential Data in Microkernel-Based Systems“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-66757.

Der volle Inhalt der Quelle
Annotation:
Although modern computer systems process increasing amounts of sensitive, private, and valuable information, most of today’s operating systems (OSs) fail to protect confidential data against unauthorized disclosure over covert channels. Securing the large code bases of these OSs and checking the secured code for the absence of covert channels would come at enormous costs. Microkernels significantly reduce the necessarily trusted code. However, cost-efficient, provable confidential-data protection in microkernel-based systems is still challenging. This thesis makes two central contributions to the provable protection of confidential data against disclosure over covert channels: • A budget-enforcing, fixed-priority scheduler that provably eliminates covert timing channels in open microkernel-based systems; and • A sound control-flow-sensitive security type system for low-level operating-system code. To prevent scheduling-related timing channels, the proposed scheduler treats possibly leaking, blocked threads as if they were runnable. When it selects such a thread, it runs a higher classified budget consumer. A characterization of budget-consumer time as a blocking term makes it possible to reuse a large class of existing admission tests to determine whether the proposed scheduler can meet the real-time guarantees of all threads we envisage to run. Compared to contemporary information-flow-secure schedulers, significantly more real-time threads can be admitted for the proposed scheduler. The role of the proposed security type system is to prove those system components free of security policy violating information flows that simultaneously operate on behalf of differently classified clients. In an open microkernel-based system, these are the microkernel and the necessarily trusted multilevel servers. To reduce the complexity of the security type system, C++ operating-system code is translated into a corresponding Toy program, which in turn is complemented with calls to Toy procedures describing the side effects of interactions with the underlying hardware. Toy is a non-deterministic intermediate programming language, which I have designed specifically for this purpose. A universal lattice for shared-memory programs enables the type system to check the resulting Toy code for potentially harmful information flows, even if the security policy of the system is not known at the time of the analysis. I demonstrate the feasibility of the proposed analysis in three case studies: a virtual-memory access, L4 inter-process communication and a secure buffer cache. In addition, I prove Osvik’s countermeasure effective against AES cache side-channel attacks. To my best knowledge, this is the first security-type-system-based proof of such a countermeasure. The ability of a security type system to tolerate temporary breaches of confidentiality in lock-protected shared-memory regions turned out to be fundamental for this proof.
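The scheduler's central rule can be paraphrased in a few lines: a possibly leaking blocked thread is still charged for its slot, during which a higher-classified budget consumer runs instead. A minimal sketch with illustrative data structures (not the kernel's actual scheduler):

```python
# Budget-enforcing fixed-priority selection: blocked-but-possibly-leaking
# threads stay in the candidate set so their (in)activity cannot modulate
# the timing observable by lower-classified threads.
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int
    runnable: bool
    may_leak: bool  # could this thread's blocking pattern be observed?
    budget_consumer: "Thread | None" = None  # higher-classified stand-in

def pick_next(threads: list) -> "Thread | None":
    candidates = [t for t in threads if t.runnable or t.may_leak]
    if not candidates:
        return None
    chosen = max(candidates, key=lambda t: t.priority)
    if not chosen.runnable:
        # Blocked but charged: its budget is consumed by the stand-in
        # (None here would mean the slot idles, which is equally opaque).
        return chosen.budget_consumer
    return chosen
```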
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Bredtmann, Oliver [Verfasser]. „Unequal Error Protection Coding of Quantized Data / Oliver Bredtmann“. Aachen : Shaker, 2011. http://d-nb.info/1098039947/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Sweeney, Latanya. „Computational disclosure control : a primer on data privacy protection“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8589.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (leaves 213-216) and index.
Today's globally networked society places great demand on the dissemination and sharing of person-specific data for many new and exciting uses. When these data are linked together, they provide an electronic shadow of a person or organization that is as identifying and personal as a fingerprint, even when the information contains no explicit identifiers such as name and phone number. Other distinctive data, such as birth date and ZIP code, often combine uniquely and can be linked to publicly available information to re-identify individuals. Producing anonymous data that remains specific enough to be useful is often a very difficult task, and practice today tends either to incorrectly believe confidentiality is maintained when it is not or to produce data that are practically useless. The goal of the work presented in this book is to explore computational techniques for releasing useful information in such a way that the identity of any individual or entity contained in the data cannot be recognized while the data remain practically useful. I begin by demonstrating ways to learn information about entities from publicly available information. I then provide a formal framework for reasoning about disclosure control and the ability to infer the identities of entities contained within the data. I formally define and present null-map, k-map and wrong-map as models of protection. Each model provides protection by ensuring that released information maps to no, k or incorrect entities, respectively. The book ends by examining four computational systems that attempt to maintain privacy while releasing electronic information. These systems are: (1) my Scrub System, which locates personally identifying information in letters between doctors and notes written by clinicians; (2) my Datafly II System, which generalizes and suppresses values in field-structured data sets; (3) Statistics Netherlands' μ-Argus System, which is becoming a European standard for producing public-use data; and (4) my k-Similar algorithm, which finds optimal solutions such that data are minimally distorted while still providing adequate protection. By introducing anonymity and quality metrics, I show that Datafly II can overprotect data, Scrub and μ-Argus can fail to provide adequate protection, but k-Similar finds optimal results.
by Latanya Sweeney.
Ph.D.
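The k-map model described above lends itself to a compact check: a release satisfies k-map if every quasi-identifier combination it contains matches at least k individuals in the population it could be linked against. A minimal sketch on toy data (real linkage uses external registers such as voter lists):

```python
# k-map check: each released quasi-identifier tuple (ZIP, birth year, sex)
# must map to at least k individuals in the reference population.
from collections import Counter

population = [  # hypothetical external data an adversary could link against
    ("02139", 1965, "F"), ("02139", 1965, "F"), ("02139", 1964, "M"),
    ("02138", 1972, "M"), ("02138", 1972, "M"), ("02138", 1972, "M"),
]
release = [("02139", 1965, "F"), ("02138", 1972, "M")]

def satisfies_k_map(release, population, k):
    counts = Counter(population)
    return all(counts[qi] >= k for qi in release)

print(satisfies_k_map(release, population, k=2))  # True
print(satisfies_k_map(release, population, k=3))  # False: first tuple maps to only 2
```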
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Laribi, Atika. „A protection model for distributed data base management systems“. Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53883.

Der volle Inhalt der Quelle
Annotation:
Security is important for Centralized Data Base Management Systems (CDBMS) and becomes crucial for Distributed Data Base Management Systems (DDBMS) when different organizations share information. Secure cooperation can be achieved only if each participating organization is assured that the data it makes available will not be abused by other users. In this work differences between CDBMS and DDBMS that characterize the nature of the protection problem in DDBMS are identified. These differences are translated into basic protection requirements. Policies that a distributed data base management protection system should allow are described. The system proposed in this work is powerful enough to satisfy the stated requirements and allow for variations on the policies. This system is a hybrid one where both authorizations and constraints can be defined. The system is termed hybrid because it combines features of both open and closed protection systems. In addition the hybrid system, although designed to offer the flexibility of discretionary systems, incorporates the flow control of information between users, a feature found only in some nondiscretionary systems. Furthermore, the proposed system is said to be integrated because authorizations and constraints can be defined on any of the data bases supported by the system including the data bases containing the authorizations, and the constraints themselves. The hybrid system is incorporated in a general model of DDBMS protection. A modular approach is taken for the design of the model. This approach allows us to represent the different options for the model depending on the set of policy choices taken. Three levels of abstraction describing different aspects of DDBMS protection problems are defined. The conceptual level describes the protection control of the DDBMS transactions and information flows. The logical level is concerned with the interaction between the different organizations participating in the DDBMS. The physical level is involved with the architectural implementation of the logical level.
Ph. D.
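The hybrid idea, combining explicit authorizations (closed-system flavour) with constraints that can deny (open-system flavour), reduces to a two-part access check. A minimal sketch with illustrative names:

```python
# Hybrid access decision: access must be covered by an authorization AND
# must not violate any constraint.
authorizations = {("alice", "read", "salaries"), ("bob", "read", "projects")}
constraints = [
    # Constraint: bob may never touch salaries, whatever is granted.
    lambda subject, action, obj: not (subject == "bob" and obj == "salaries"),
]

def allowed(subject: str, action: str, obj: str) -> bool:
    granted = (subject, action, obj) in authorizations
    permitted = all(c(subject, action, obj) for c in constraints)
    return granted and permitted

print(allowed("alice", "read", "salaries"))  # True
print(allowed("bob", "read", "salaries"))    # False: neither granted nor permitted
```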
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Montecuollo, Ferdinando. „Compression and indexing of genomic data with confidentiality protection“. Doctoral thesis, Universita degli studi di Salerno, 2015. http://hdl.handle.net/10556/1945.

Der volle Inhalt der Quelle
Annotation:
2013 - 2014
The problem of data compression with specific security properties to guarantee users' privacy is a live issue. On the other hand, high-throughput systems in genomics (e.g., the so-called Next Generation Sequencers) generate massive amounts of genetic data at affordable costs. As a consequence, huge DBMSs integrating many types of genomic information, clinical data and other (personal, environmental, historical, etc.) information types are on the way. This will allow for an unprecedented capability of doing large-scale, comprehensive and in-depth analysis of human beings and diseases; however, it will also constitute a formidable threat to users' privacy. While the confidential storage of clinical data can be done with well-known methods in the field of relational databases, the same is not true for genomic data; the main goal of my research work was therefore the design of new compressed indexing schemas for the management of genomic data with confidentiality protection. For the effective processing of a huge amount of such data, a key point is the possibility of doing high-speed search operations in secondary storage, operating directly on the data in compressed and encrypted form; I therefore devoted considerable effort to obtaining algorithms and data structures that enable pattern-search operations on compressed and encrypted data in secondary storage, so that there is no need to preload data into main memory before starting those operations. [edited by Author]
XIII n.s.
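Self-indexes built on the Burrows-Wheeler transform are the canonical way to count pattern occurrences without decompressing the text, and a plausible building block for the schemas described above (the thesis's own structures and its encryption layer are not reproduced here). A minimal sketch with naive, quadratic construction:

```python
# Toy FM-index: build the BWT, then count pattern occurrences with
# backward search -- no access to the original text is needed.
def bwt(text: str) -> str:
    text += "$"  # unique terminator, lexicographically smallest
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(bwt_text: str, pattern: str) -> int:
    first_col = sorted(bwt_text)
    # C[c] = number of characters in the text strictly smaller than c.
    C = {c: first_col.index(c) for c in set(bwt_text)}

    def rank(c: str, i: int) -> int:  # occurrences of c in bwt_text[:i]
        return bwt_text[:i].count(c)

    lo, hi = 0, len(bwt_text)  # backward search, right to left over the pattern
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

genome = "ACGTACGTGACG"
index = bwt(genome)                     # in practice stored compressed
print(count_occurrences(index, "ACG"))  # 3
```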
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Yang, Cao. „Rigorous and Flexible Privacy Protection Framework for Utilizing Personal Spatiotemporal Data“. 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225733.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Benson, Glenn Stuart. „A formal protection model of security in distributed systems“. Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/12238.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Widener, Patrick M. (Patrick McCall). „Dynamic Differential Data Protection for High-Performance and Pervasive Applications“. Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7239.

Der volle Inhalt der Quelle
Annotation:
Modern distributed applications are long-lived, are expected to provide flexible and adaptive data services, and must meet the functionality and scalability challenges posed by dynamically changing user communities in heterogeneous execution environments. The practical implications of these requirements are that reconfiguration and upgrades are increasingly necessary, but opportunities to perform such tasks offline are greatly reduced. Developers are responding to this situation by dynamically extending or adjusting application functionality and by tuning application performance, a typical method being the incorporation of client- or context-specific code into applications' execution loops. Our work addresses a basic roadblock in deploying such solutions: the protection of key application components and sensitive data in distributed applications. Our approach, termed Dynamic Differential Data Protection (D3P), provides fine-grain methods for providing component-based protection in distributed applications. Context-sensitive, application-specific security methods are deployed at runtime to enforce restrictions in data access and manipulation. D3P is suitable for low- or zero-downtime environments, since deployments are performed while applications run. D3P is appropriate for high performance environments and for highly scalable applications like publish/subscribe, because it creates native codes via dynamic binary code generation. Finally, due to its integration into middleware, D3P can run across a wide variety of operating system and machine platforms. This dissertation introduces D3P, using sample applications from the high performance and pervasive computing domains to illustrate the problems addressed by our D3P solution. It also describes how D3P can be integrated into modern middleware. We present experimental evaluations which demonstrate the fine-grain nature of D3P, that is, its ability to capture individual end users' or components' needs for data protection, and also describe the performance implications of using D3P in data-intensive applications.
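The D3P idea of attaching per-client protection logic to the data path can be miniaturized as follows: each subscriber gets a filter derived from its credentials, applied as events flow through the middleware. The thesis generates native code dynamically; plain closures stand in for that here, and the field names are illustrative:

```python
# Per-subscriber protection filters on a publish/subscribe data path.
def make_filter(clearance: str):
    def guard(event: dict) -> dict:
        # Untrusted subscribers never see directly identifying fields.
        if clearance != "trusted":
            event = {k: v for k, v in event.items() if k != "patient_name"}
        return event
    return guard

publishers_event = {"patient_name": "J. Doe", "heart_rate": 71}
for clearance in ("trusted", "public"):
    deliver = make_filter(clearance)
    print(clearance, deliver(dict(publishers_event)))
# trusted {'patient_name': 'J. Doe', 'heart_rate': 71}
# public  {'heart_rate': 71}
```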
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

DASHTI, SALIMEH. „An Assisted Methodology to Conduct a Data Protection Impact Assessment“. Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1050120.

Der volle Inhalt der Quelle
Annotation:
The uptake of digital technologies in our everyday activities today is unlike any time in history. Consequently, the amount of personal data produced and shared is staggering. Indeed, personal data have become the primary asset for many businesses. While users benefit from online engagement, an increasing number of critics have voiced their privacy concerns. To protect people's fundamental rights concerning the processing of their personal data, the General Data Protection Regulation (GDPR) has been introduced. The GDPR requires conducting a Data Protection Impact Assessment (DPIA) when data processing is likely to result in a high risk to the rights and freedoms of individuals, for example where the processing may lead to discrimination, damage to reputation, or loss of confidentiality of personal data. A DPIA therefore requires assessing security risks and privacy risks; we learned that identifying the latter is not easy even for information security and data protection experts. The GDPR is not clear about when and how to conduct a DPIA. Thus, academic works and legal bodies have introduced guidelines and tools to help controllers conduct the DPIA. However, these works fail to either provide assistance, include all steps of the DPIA, or be applicable to all domains. These shortcomings motivated us to propose an assisted methodology to conduct a DPIA. The methodology provides assistance from identifying the required data types for a given data processing to identifying and evaluating privacy and security risks. We have applied our methodology to conduct a DPIA-compliance risk analysis for OAuth/OIDC-based financial services, because of (1) the growth of open banking, (2) the necessity of deploying appropriate identity management solutions, as stated in PSD2, which requires respecting the GDPR requirements, and (3) the wide usage of OAuth/OIDC identity management solutions, which are secure but error-prone. The methodology can also be used for any OAuth/OIDC-based service.
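A tiny illustration of what "assisted" can mean at the triggering step: checking declared processing properties against high-risk criteria. The three criteria shown paraphrase a subset of the nine in common EU guidance; the list and threshold are illustrative assumptions, not the thesis's actual rule base:

```python
# Assisted "is a DPIA indicated?" check over declared processing properties.
HIGH_RISK_CRITERIA = {
    "systematic and extensive profiling",
    "large-scale processing of special categories of data",
    "systematic monitoring of publicly accessible areas",
}

def dpia_indicated(processing: set, threshold: int = 2) -> bool:
    """Guidance commonly suggests a DPIA when two or more criteria apply."""
    return len(processing & HIGH_RISK_CRITERIA) >= threshold

print(dpia_indicated({"systematic and extensive profiling",
                      "large-scale processing of special categories of data"}))  # True
print(dpia_indicated({"systematic and extensive profiling"}))  # False
```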
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

„Transparent Protection of Data“. Universitat Politècnica de Catalunya, 2003. http://www.tesisenxarxa.net/TDX-0307103-114315/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

„Data Protection over Cloud“. Master's thesis, 2016. http://hdl.handle.net/2286/R.I.38668.

Der volle Inhalt der Quelle
Annotation:
abstract: Data protection has long been a point of contention and a vastly researched field. With the advent of technology and advances in Internet technologies, securing data has become much more challenging these days. Cloud services have become very popular. Given the ease of access and availability of these systems, it is hard to avoid using the cloud to store data. This, however, poses a significant risk to data security, as more of your data is available to a third party. Given the easy transmission and almost infinite storage of data, securing one's sensitive information has become a major challenge. Cloud service providers may not be trusted completely with your data. It is not uncommon for providers to snoop over the data for interesting patterns to generate ad revenue, or to divulge your information to a third party, e.g., government and law enforcement agencies. For enterprises who use cloud services, this poses a risk to their intellectual property and business secrets. With more and more employees using the cloud for their day-to-day work, businesses now face a risk of losing or leaking information. In this thesis, I have focused on ways to protect data and information over the cloud (a third party not authorized to use your data) while still utilizing cloud services for the transfer and availability of data. This research proposes an alternative to an on-premise secure infrastructure, giving the user flexibility in protecting the data and control over it. The project uses cryptography to protect data and creates a secure architecture for secret-key migration in order to decrypt the data securely for the intended recipient. It utilizes Intel's technology, which gives it an added advantage over other existing solutions.
Dissertation/Thesis
Masters Thesis Computer Science 2016
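The pattern sketched in the abstract resembles envelope encryption: data is encrypted client-side with a symmetric key, and only that key, wrapped for the intended recipient, accompanies the ciphertext into the cloud. A minimal sketch using the Python cryptography package (the thesis additionally anchors key migration in Intel hardware features, which is not modelled here):

```python
# Envelope encryption: AES-GCM for the payload, RSA-OAEP to wrap the data key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender side: encrypt the payload, then wrap the data key for the recipient.
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"business secret", None)
wrapped_key = recipient_key.public_key().encrypt(data_key, oaep)

# Recipient side: unwrap the data key, then decrypt the payload.
unwrapped = recipient_key.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"business secret"
```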
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Ting, Kuan-Chi, und 丁冠齊. „Big Data and the Personal Data Protection Law“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/43499798353409527461.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Qian, Li, und 錢. 力. „The “Precautionary Protection” Mechanism under the GDPR: Focusing on the Principles of “Data Protection by Design” and “Data Protection Impact Assessment”“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/pjbm8k.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

CHEN, CHIEN-YUN, und 陳倩韻. „The Theory and Practice of Data Protection after the Amendment of the New Data Protection Act“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/97560954807508456263.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Cao, Ming. „Privacy Protection on RFID Data Publishing“. Thesis, 2009. http://spectrum.library.concordia.ca/976641/1/MR63109.pdf.

Der volle Inhalt der Quelle
Annotation:
Radio Frequency IDentification (RFID) is a technology for automatic object identification. Retailers and manufacturers have created compelling business cases for deploying RFID in their supply chains. Yet the uniquely identifiable objects pose a privacy threat to individuals. In this thesis, we study the privacy threats caused by publishing RFID data. Even if explicit identifying information, such as name and social security number, has been removed from the published RFID data, an adversary may identify a target victim's record or infer her sensitive value by matching a priori known visited locations and times. RFID data is by nature high-dimensional and sparse, so applying traditional k-anonymity to RFID data suffers from the curse of high dimensionality and results in poor information usefulness. We define a new privacy model and develop an anonymization algorithm to accommodate the special challenges of RFID data. We then evaluate its effectiveness on synthetic data sets.
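The linkage attack described above is easy to make concrete: an adversary who knows a few (location, time) pairs visited by the victim counts how many published trajectories contain them all. A minimal sketch on toy data:

```python
# Trajectory linkage: a published record is re-identified when the
# adversary's background knowledge matches it uniquely.
trajectories = [
    {("gate3", 9), ("lobby", 10), ("lab", 11)},
    {("gate3", 9), ("cafe", 10)},
    {("gate1", 9), ("lobby", 10), ("lab", 11)},
]
adversary_knowledge = {("gate3", 9), ("lobby", 10)}

matches = [t for t in trajectories if adversary_knowledge <= t]
print(len(matches))  # 1 -> the victim's record is uniquely re-identified
```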
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Chen, Yi-Jie, und 陳羿傑. „Privacy Information Protection through Data Anonymization“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/3bpp9z.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Engineering
103
Mobile apps are the moving power behind the prevalence of intelligent mobile devices, which in turn brings an exponentially growing number of mobile apps being developed. The personalized and ubiquitous character of intelligent mobile devices, together with their added variety of record-taking and data-sensing capabilities, becomes a serious threat to user privacy when linked with the communication ability of these devices. How to enjoy all these conveniences and services without privacy risk is an important issue for all users of mobile devices. The available privacy protection schemes or methods either require changes to the mobile device's system framework and core, or demand complicated technical processes and skills. In this thesis, we propose a proxy-server-based approach to develop a solution practical for ordinary users. A prototype has been implemented to demonstrate the practicality and usability of the privacy protection mechanism.
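The proxy approach can be illustrated by the sanitization step applied before a request is forwarded: identifiers that profile the device or user are stripped, and precise values are coarsened. A minimal sketch (field names are illustrative assumptions; a deployable proxy also handles TLS interception, response rewriting and app-specific rules):

```python
# Request sanitization at the proxy: drop tracking headers, blur location.
SENSITIVE_HEADERS = {"x-device-id", "x-ad-id", "cookie"}

def sanitize(headers: dict, params: dict):
    clean_headers = {k: v for k, v in headers.items()
                     if k.lower() not in SENSITIVE_HEADERS}
    clean_params = dict(params)
    if "lat" in clean_params and "lon" in clean_params:
        # Round to 0.1 degree (~10 km) so precise movements are not exposed.
        clean_params["lat"] = f"{float(clean_params['lat']):.1f}"
        clean_params["lon"] = f"{float(clean_params['lon']):.1f}"
    return clean_headers, clean_params

h, p = sanitize({"X-Device-Id": "abc123", "Accept": "*/*"},
                {"lat": "41.38994", "lon": "2.11234", "q": "cafes"})
print(h, p)  # identifiers removed, location coarsened
```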
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

HSU, MING-WEI, und 許銘瑋. „Cloud Services and Personal Data Protection“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/nd6344.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Soochow University
Department of Law
105
With the progress of science and technology as well as the popularity of the Internet, cloud services have emerged in recent years. Cloud services mean that individuals store their personal resources in a remote data center managed and operated by others, and access those resources through the Internet. While cloud computing allows its users to easily access their information at any time and anywhere, as long as there is an Internet connection, such technology brings serious data security and privacy concerns. This article first introduces the concepts of cloud services, including their features, architecture, service patterns, key technologies and challenges. Second, it discusses issues related to personal data protection law: from the personal data protection point of view, cloud services require a study of the legal relations among cloud computing providers, cloud service users, and data subjects. The information stored in the cloud can be divided into personal data and non-personal data. Personal data falls under the Personal Data Protection Act; for non-personal data, this article mainly explores the criminal law protection of digital data. In addition, since the relevant parties may not know where personal data is located at any particular time, it is also worth considering whether this characteristic may adversely affect data protection. This paper thus aims to comprehensively review the related issues based on the newly enacted Personal Data Protection Act, and to provide suggestions for further discussion in the field.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Silva, Paulo Miguel Guimarães da. „Data Privacy Protection for the Cloud“. Master's thesis, 2016. http://hdl.handle.net/10316/93238.

Der volle Inhalt der Quelle
Annotation:
Master's dissertation in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra.
Privacy has long been a concern whenever data is discussed. Nowadays, with an increasing amount of personal and confidential data being transmitted and stored online, data curators have to give certain guarantees of data protection and privacy. This Master's dissertation presents a background of anonymization and concealing techniques. Their characteristics and capabilities are described, as well as tools to implement and evaluate anonymization and concealing. The evaluation of the applicability of the DNA-inspired concealing algorithm is the main objective of this work. Usually, various metrics are used to measure aspects like the risk or utility of the anonymized data. This work presents a new approach to evaluating how well concealed the data is. Using cosine similarity as a measure of similarity between the private and the concealed data, this metric proves its worth not only in information retrieval or text mining applications but also in the analysis of concealed or anonymized files. Nowadays there is a continuously growing demand for cloud services and storage. The evaluation in this Master's dissertation is directed at finding how suitable the application of the DNA-inspired concealing algorithm is for data being stored or transmitted in the cloud. The evaluation is made by analyzing the concealing results as well as the performance of the algorithm itself. The algorithm is applied to various text and audio files with different characteristics, such as size or contents. Both file types are unstructured data, which is an advantage since the algorithm accepts them as input, unlike many anonymization algorithms, which demand structured data. With the final results and analysis, it will be possible to determine the applicability and performance of the referred algorithm for a possible integration with the cloud.
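The evaluation idea is straightforward to reproduce: represent the private and the concealed file as term-frequency vectors and take their cosine similarity as a proxy for how much of the original still shines through, lower meaning better concealed. A minimal sketch on toy strings:

```python
# Cosine similarity between term-frequency vectors of two texts.
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

private = "patient john doe diagnosed with diabetes in 2015"
concealed = "patient x y diagnosed with condition z in yyyy"
print(round(cosine_similarity(private, concealed), 2))  # ~0.47
```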
APA, Harvard, Vancouver, ISO und andere Zitierweisen