Dissertations / Theses on the topic 'Data security and Data privacy'

To see the other types of publications on this topic, follow the link: Data security and Data privacy.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Data security and Data privacy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

DeYoung, Mark E. "Privacy Preserving Network Security Data Analytics." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The problem of revealing accurate statistics about a population while maintaining the privacy of individuals is extensively studied in several related disciplines. Statisticians, information security experts, and computational theory researchers, to name a few, have produced extensive bodies of work regarding privacy preservation. Still, the need to improve our ability to control the dissemination of potentially private information is driven home by an incessant rhythm of data breaches, data leaks, and privacy exposure. History has shown that both public and private sector organizations are not immune to loss of control over data due to lax handling, incidental leakage, or adversarial breaches. Prudent organizations should consider the sensitive nature of network security data and network operations performance data recorded as logged events. These logged events often contain data elements that are directly correlated with sensitive information about people and their activities -- often at the same level of detail as sensor data. Privacy preserving data publication has the potential to support reproducibility and exploration of new analytic techniques for network security. Providing sanitized data sets de-couples privacy protection efforts from analytic research. De-coupling privacy protections from analytical capabilities enables specialists to tease out the information and knowledge hidden in high dimensional data while, at the same time, providing some degree of assurance that people's private information is not exposed unnecessarily. In this research, we propose methods that support a risk-based approach to privacy preserving data publication for network security data. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner.
Our intent is to produce a database which holds network security data representative of a contextualized network and people's interaction with the network mid-points and end-points without the problems of identifiability.
Ph. D.
2

Ma, Jianjie. "Learning from perturbed data for privacy-preserving data mining." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Summer2006/j%5Fma%5F080406.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Xueli. "Achieving Data Privacy and Security in Cloud." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/372805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Computer and Information Science
Ph.D.
Growing concerns about the privacy of data stored in the public cloud have restrained the widespread adoption of cloud computing. The traditional way to protect data privacy is to encrypt data before sending them to the public cloud, but this approach always introduces heavy computation, especially for image and video data, which contain far more data than text. Another way is to take advantage of a hybrid cloud by separating sensitive from non-sensitive data and storing them in a trusted private cloud and an un-trusted public cloud, respectively. But if we adopt this method directly, all images and videos containing sensitive data have to be stored in the private cloud, which makes the method meaningless. Moreover, the emergence of the Software-Defined Networking (SDN) paradigm, which decouples the control logic from the closed and proprietary implementations of traditional network devices, enables researchers and practitioners to design innovative network functions and protocols in a much easier, more flexible, and more powerful way. The data plane asks the control plane to update flow rules when it receives new network packets that it does not know how to handle, and the control plane then dynamically deploys and configures flow rules according to the data plane's requests, which allows the whole network to be managed and controlled efficiently. However, this reactive control model can be exploited by attackers launching Distributed Denial-of-Service (DDoS) attacks that send large numbers of new requests from the data plane to the control plane. For image data, we divide the image into pieces of equal size to speed up the encryption process, and propose two methods to break the correlation across piece edges.
One is to add random noise to each piece; the other is to design a one-to-one mapping function for each piece that maps each pixel value to a different one, which breaks the correlation between pixels as well as across the edges. Our mapping function takes a random parameter as input so that each piece can randomly choose a different mapping. Finally, we shuffle the pieces with another random parameter, which makes recovering the shuffled image an NP-complete problem. For video data, we propose two different methods for intra frames (I-frames) and inter frames (P-frames), based on their different characteristics. A hybrid selective video encryption scheme for H.264/AVC, based on the Advanced Encryption Standard (AES) and the video data themselves, is proposed for I-frames. For each P-slice of a P-frame, we extract only a small part into the private cloud, based on the characteristics of the intra prediction mode, which efficiently prevents P-frames from being decoded. For a cloud running with SDN, we propose a framework to protect the controller from DDoS attacks. We first periodically predict the number of new requests for each switch based on its previous information; the new requests are sent to the controller if the predicted total is below a threshold. Otherwise, the requests are directed to a security gateway that checks whether an attack is among them. Requests that cause a dramatic decrease in entropy are filtered out by our algorithm, and rules for these requests are generated and sent to the controller. The controller sends these rules to each switch so that flows matching them are directed to a honeypot.
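The per-piece mapping and piece shuffling described above can be sketched roughly as follows. This is a toy Python illustration over a flat list of pixel values; the piece size, the keyed random generator, and the use of a value-range permutation as the one-to-one mapping are assumptions, not the dissertation's actual implementation:

```python
import random

def encrypt_pieces(pixels, piece_size, key):
    """Toy sketch: per-piece one-to-one pixel-value remapping, then piece shuffling."""
    rng = random.Random(key)
    pieces = [pixels[i:i + piece_size] for i in range(0, len(pixels), piece_size)]
    encrypted = []
    for piece in pieces:
        # One-to-one mapping: a keyed permutation of the 0..255 value range,
        # chosen independently per piece to break correlations across piece edges.
        table = list(range(256))
        rng.shuffle(table)
        encrypted.append([table[v] for v in piece])
    # Shuffle the piece order with another random parameter.
    order = list(range(len(encrypted)))
    rng.shuffle(order)
    return [encrypted[i] for i in order], order

pixels = [10, 10, 200, 200, 30, 30, 40, 40]
cipher, order = encrypt_pieces(pixels, piece_size=2, key=42)
```

Because each piece uses an independent permutation, identical pixel values in adjacent pieces encrypt to different values, which is what cuts the edge correlations.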
Temple University--Theses
4

Molema, Karabo Omphile. "The conflict of interest between data sharing and data privacy : a middleware approach." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
People, referred to as data owners in this study, use the Internet for various purposes, one of which is using online services such as Gmail, Facebook, and Twitter. These online services are offered by organizations referred to as data controllers. When data owners use the services provided by data controllers, they usually have to agree to terms and conditions that give data controllers indemnity against any privacy issues the data owner may raise. Data controllers are then free to share that data with other organizations, referred to as third parties. Though data controllers are protected from lawsuits, it does not necessarily mean they are free of any act that the data owner may consider a privacy violation. This thesis aims to arrive at a design proposition, using the design science research paradigm, for a middleware extension, specifically focused on the Tomcat server, a servlet engine running on the JVM. The design proposition proposes a client-side, annotation-based API to be used by developers to mark classes that carry data outside the scope of the data controller's system to a third-party system; the marked classes then have code weaved in that communicates with a Privacy Engine component, which determines, based on the data owner's preferences, whether their data should be shared. The output of this study is a privacy-enhancing platform that comprises three components: the client-side annotation-based API used by developers, an extension to Tomcat, and a Privacy Engine.
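The thesis targets Java annotations woven into Tomcat; as a rough language-neutral analog, the interception idea can be sketched with a Python decorator. The preference store, function names, and filtering rule below are illustrative assumptions, not the thesis's API:

```python
# Hypothetical data-owner preferences; the proposed Privacy Engine would
# consult stored owner preferences rather than an in-memory dict.
OWNER_PREFERENCES = {"alice": {"share_email": False, "share_name": True}}

def privacy_engine_allows(owner, field):
    """Stand-in for the Privacy Engine decision: share only if the owner opted in."""
    return OWNER_PREFERENCES.get(owner, {}).get(f"share_{field}", False)

def outbound(fields):
    """Analog of the client-side annotation: marks a function whose return
    value leaves the data controller, and filters it per owner preference."""
    def decorator(func):
        def wrapper(owner, *args, **kwargs):
            record = func(owner, *args, **kwargs)
            return {k: v for k, v in record.items()
                    if k not in fields or privacy_engine_allows(owner, k)}
        return wrapper
    return decorator

@outbound(fields={"email", "name"})
def export_profile(owner):
    # Data that would be sent to a third-party system.
    return {"name": "Alice", "email": "alice@example.com", "id": 7}

shared = export_profile("alice")
```

Here the decorator plays the role of the weaved-in code: the application calls `export_profile` normally, and fields the owner declined to share never leave the controller's scope.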
5

Nan, Lihao. "Privacy Preserving Representation Learning For Complex Data." Thesis, The University of Sydney, 2019. http://hdl.handle.net/2123/20662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Here we consider a common data encryption problem encountered by users who want to disclose some data to gain utility but preserve their private information. Specifically, we consider the inference attack, in which an adversary conducts inference on the disclosed data to gain information about users' private data. Following the privacy funnel \cite{makhdoumi2014information}, assuming that the original data $X$ is transformed into $Z$ before disclosure and the log loss is used for both the privacy and utility metrics, the problem can be modeled as finding a mapping $X \rightarrow Z$ that maximizes the mutual information between $X$ and $Z$, subject to the constraint that the mutual information between $Z$ and the private data $S$ is smaller than a predefined threshold $\epsilon$. In contrast to the original study \cite{makhdoumi2014information}, which only focused on discrete data, we consider the more general and practical setting of continuous and high-dimensional disclosed data (e.g., image data). Most previous work on privacy-preserving representation learning is based on adversarial learning or generative adversarial networks, which have been shown to suffer from the vanishing gradient problem, and it is experimentally difficult to eliminate the relationship with the private data $S$ when $Z$ is constrained to retain more information about $X$. Here we propose a simple but effective variational approach that does not rely on adversarial training. Our experimental results show that our approach is stable and outperforms previous methods in terms of both downstream task accuracy and mutual information estimation.
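In the notation of the abstract, the constrained objective being solved can be written compactly as:

```latex
\max_{p(z \mid x)} \; I(X;Z)
\quad \text{subject to} \quad I(S;Z) \le \epsilon ,
```

where $p(z \mid x)$ is the randomized disclosure mechanism: maximizing $I(X;Z)$ preserves utility under log loss, while the constraint caps how much the released $Z$ leaks about the private data $S$.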
6

Smith, Tanshanika Turner. "Examining Data Privacy Breaches in Healthcare." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Healthcare data can contain sensitive, personal, and confidential information that should remain secure. Despite the efforts to protect patient data, security breaches occur and may result in fraud, identity theft, and other damages. Grounded in the theoretical backdrop of integrated system theory, the purpose of this study was to determine the association between data privacy breaches, data storage locations, business associates, covered entities, and number of individuals affected. Study data consisted of secondary breach information retrieved from the Department of Health and Human Services Office for Civil Rights. Loglinear analytical procedures were used to examine U.S. healthcare breach incidents and to derive a 4-way loglinear model. Loglinear analysis procedures included in the model yielded a significance value of 0.000, p < .05, for both the likelihood ratio and Pearson chi-square statistics, indicating that an association among the variables existed. Results showed that over 70% of breaches involve healthcare providers and revealed that security incidents often consist of electronic or other digital information. Findings revealed that threats are evolving and showed that likely factors other than data loss and theft contribute to security events, unwanted exposure, and breach incidents. Research results may impact social change by providing security professionals with a broader understanding of data breaches required to design and implement more secure and effective information security prevention programs. Healthcare leaders might affect social change by utilizing findings to further the security dialogue needed to minimize security risk factors, protect sensitive healthcare data, and reduce breach mitigation and incident response costs.
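As a toy illustration of the association testing behind such an analysis, the Pearson chi-square statistic for a two-way contingency table can be computed as follows. The counts are hypothetical and the table is 2x2 for brevity; the study itself fit a 4-way loglinear model on real breach data:

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic for a two-way contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical counts (e.g., entity type x breach location); NOT the study's data.
table = [[70, 30],
         [20, 40]]
chi2 = pearson_chi_square(table)
```

With 1 degree of freedom, a statistic above the 3.84 critical value corresponds to p < .05, i.e., evidence of association between the two factors.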
7

Wernberg, Max. "Security and Privacy of Controller Pilot Data Link Communication." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
According to recent studies, newly implemented technologies within aviation lack built-in security measures to protect them against outside interference. In this thesis we study the security and privacy status of the digital wireless Controller Pilot Data Link Communication (CPDLC), used in air traffic management alongside other systems to increase the safety and traffic capacity of controlled airspaces. The findings show that CPDLC is currently insecure and exposed to attacks. Any solution to remedy this must adhere to its low levels of performance. Elliptic Curve Cryptography, Protected ACARS, and the Host Identity Protocol have been identified as valid solutions to the system's security drawbacks, and all three are possible to implement in the present state of CPDLC.
8

Gholami, Ali. "Security and Privacy of Sensitive Data in Cloud Computing." Doctoral thesis, KTH, Parallelldatorcentrum, PDC, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud computing offers the prospect of on-demand, elastic computing, provided as a utility service, and it is revolutionizing many domains of computing. Compared with earlier methods of processing data, cloud computing environments provide significant benefits, such as the availability of automated tools to assemble, connect, configure and reconfigure virtualized resources on demand. These make it much easier to meet organizational goals as organizations can easily deploy cloud services. However, the shift in paradigm that accompanies the adoption of cloud computing is increasingly giving rise to security and privacy considerations relating to facets of cloud computing such as multi-tenancy, trust, loss of control and accountability. Consequently, cloud platforms that handle sensitive information are required to deploy technical measures and organizational safeguards to avoid data protection breakdowns that might result in enormous and costly damages. Sensitive information in the context of cloud computing encompasses data from a wide range of different areas and domains. Data concerning health is a typical example of the type of sensitive information handled in cloud computing environments, and it is obvious that most individuals will want information related to their health to be secure. Hence, with the growth of cloud computing in recent times, privacy and data protection requirements have been evolving to protect individuals against surveillance and data disclosure. Some examples of such protective legislation are the EU Data Protection Directive (DPD) and the US Health Insurance Portability and Accountability Act (HIPAA), both of which demand privacy preservation for handling personally identifiable information. There have been great efforts to employ a wide range of mechanisms to enhance the privacy of data and to make cloud platforms more secure. 
Techniques that have been used include encryption, trusted platform modules, secure multi-party computation, homomorphic encryption, anonymization, and container and sandboxing technologies. However, how to correctly build usable privacy-preserving cloud systems that handle sensitive data securely remains an open problem, due to two research challenges. First, existing privacy and data protection legislation demands strong security, transparency, and auditability of data usage. Second, there is a lack of familiarity with the broad range of emerging and existing security solutions needed to build efficient cloud systems. This dissertation focuses on the design and development of several systems and methodologies for handling sensitive data appropriately in cloud computing environments. The key idea behind the proposed solutions is enforcing the privacy requirements mandated by existing legislation that aims to protect the privacy of individuals in cloud-computing platforms. We begin with an overview of the main concepts of cloud computing, followed by identifying the problems that need to be solved for secure data management in cloud environments. The thesis then continues with a description of background material, in addition to reviewing existing security and privacy solutions in the area of cloud computing. Our first main contribution is a new method for modeling threats to privacy in cloud environments, which can be used to identify privacy requirements in accordance with data protection legislation. This method is then used to propose a framework that meets the privacy requirements for handling data in the area of genomics, that is, health data concerning the genome (DNA) of individuals. Our second contribution is a system for preserving privacy when publishing sample availability data. This system is noteworthy because it is capable of cross-linking over multiple datasets.
The thesis continues by proposing a system called ScaBIA for privacy-preserving brain image analysis in the cloud. The final section of the dissertation describes a new approach for quantifying and minimizing the risk of operating system kernel exploitation, in addition to the development of a system call interposition reference monitor for Lind - a dual sandbox.
"Cloud computing" ("molntjänster" in the most common Swedish translation) has great potential. Cloud services can provide exactly the computing power that is demanded, almost regardless of how large it is; that is, cloud services enable what is usually called "elastic computing". The effects of cloud services are revolutionary in many areas of computing. Compared with earlier methods of data processing, cloud services offer many advantages, for example the availability of automated tools to assemble, connect, configure, and reconfigure virtual resources on demand. In other words, cloud services make it much easier for organizations to meet their goals. But the paradigm shift that the introduction of cloud services entails also creates security problems and calls for careful privacy assessments. How is mutual trust preserved, and how is accountability handled, when the possibilities for control are reduced as a consequence of shared information? Consequently, cloud platforms are needed that are designed to handle sensitive information. Technical and organizational safeguards are required to minimize the risk of data breaches, breaches that can result in enormously costly damage, both economically and in policy terms. Cloud services can contain sensitive information from many different areas and domains. Health data is a typical example of such information, and it is obvious that most people want data related to their health to be protected. The increased use of cloud services in recent years has therefore led to stricter privacy and data protection requirements to protect individuals against surveillance and data breaches. Examples of protective legislation are the EU Data Protection Directive (DPD) and the US Health Insurance Portability and Accountability Act (HIPAA), both of which require the protection of privacy and the preservation of integrity when handling information that can identify individuals.
Great efforts have been made to develop further mechanisms that increase data privacy and thereby make cloud services more secure. Examples include encryption, trusted platform modules, secure multi-party computation, homomorphic encryption, anonymization, and container and sandboxing techniques. But how to correctly create usable, privacy-preserving cloud services for fully secure processing of sensitive data remains, in essential respects, an unsolved problem, owing to two major research challenges. First, existing privacy and data protection laws require transparency and careful auditing of data usage. Second, there is insufficient familiarity with the range of emerging and existing security solutions for creating efficient cloud services. This dissertation focuses on the design and development of systems and methods for handling sensitive data in cloud services in the most appropriate way. The goal of the proposed solutions is to meet the privacy requirements laid down in existing legislation, whose stated aim is to protect the privacy of individuals who use cloud services. We begin with an overview of the most important concepts in cloud services, and then identify problems that need to be solved for secure data processing in cloud services. The dissertation then continues with a description of background material and a summary of existing security and privacy solutions for cloud services. Our main contribution is a new method for modeling privacy threats in cloud services, a method that can be used to identify privacy requirements that comply with current data protection laws. Our method is then used to propose a framework that meets the privacy requirements for handling data in the field of genomics.
Genomics, in short, concerns health data about the genome (DNA) of individual people. Our second major contribution is a system for preserving privacy when publishing biological sample data. The system has the advantage of being able to cross-link multiple datasets. The dissertation continues by proposing and describing a system called ScaBIA, a privacy-preserving system for brain image analyses processed via cloud services. The final chapter of the dissertation describes a new approach to quantifying and minimizing the risk of kernel exploitation. This new approach is also a contribution to the development of a system call interposition reference monitor for Lind, a dual-layer sandbox.

QC 20160516

9

Mai, Guangcan. "Biometric system security and privacy: data reconstruction and template protection." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Biometric systems are being increasingly used, from daily entertainment to critical applications such as security access and identity management. It is known that biometric systems should meet the stringent requirement of a low error rate. In addition, for critical applications, the security and privacy issues of biometric systems must be addressed; otherwise, severe consequences such as unauthorized access (security) or the exposure of identity-related information (privacy) can result. Therefore, it is imperative to study the vulnerability to potential attacks and identify the corresponding risks. Furthermore, countermeasures should be devised and patched onto the systems. In this thesis, we study the security and privacy issues in biometric systems. We first attempt to reconstruct raw biometric data from biometric templates and demonstrate the security and privacy issues caused by the data reconstruction. Then, we make two attempts to protect biometric templates from being reconstructed and to improve the state-of-the-art biometric template protection techniques.
10

Liu, Lian. "PRIVACY PRESERVING DATA MINING FOR NUMERICAL MATRICES, SOCIAL NETWORKS, AND BIG DATA." UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Motivated by increasing public awareness of possible abuse of confidential information, which is considered as a significant hindrance to the development of e-society, medical and financial markets, a privacy preserving data mining framework is presented so that data owners can carefully process data in order to preserve confidential information and guarantee information functionality within an acceptable boundary. First, among many privacy-preserving methodologies, as a group of popular techniques for achieving a balance between data utility and information privacy, a class of data perturbation methods add a noise signal, following a statistical distribution, to an original numerical matrix. With the help of analysis in eigenspace of perturbed data, the potential privacy vulnerability of a popular data perturbation is analyzed in the presence of very little information leakage in privacy-preserving databases. The vulnerability to very little data leakage is theoretically proved and experimentally illustrated. Second, in addition to numerical matrices, social networks have played a critical role in modern e-society. Security and privacy in social networks receive a lot of attention because of recent security scandals among some popular social network service providers. So, the need to protect confidential information from being disclosed motivates us to develop multiple privacy-preserving techniques for social networks. Affinities (or weights) attached to edges are private and can lead to personal security leakage. To protect privacy of social networks, several algorithms are proposed, including Gaussian perturbation, greedy algorithm, and probability random walking algorithm. They can quickly modify original data in a large-scale situation, to satisfy different privacy requirements. Third, the era of big data is approaching on the horizon in the industrial arena and academia, as the quantity of collected data is increasing in an exponential fashion. 
Three issues are studied in the age of big data with privacy preservation, obtaining a high confidence about accuracy of any specific differentially private queries, speedily and accurately updating a private summary of a binary stream with I/O-awareness, and launching a mutual private information retrieval for big data. All three issues are handled by two core backbones, differential privacy and the Chernoff Bound.
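The Gaussian-perturbation idea for private edge affinities can be sketched as follows. This is a minimal Python illustration on a toy weighted graph; the clipping of weights to be non-negative, the noise scale, and the dict representation are assumptions, not the dissertation's algorithm:

```python
import random

def perturb_edge_weights(edges, sigma, seed=None):
    """Gaussian perturbation sketch: add zero-mean noise to each private
    edge weight (affinity), keeping the published weights non-negative."""
    rng = random.Random(seed)
    return {edge: max(0.0, weight + rng.gauss(0.0, sigma))
            for edge, weight in edges.items()}

# Hypothetical weighted friendship graph; the affinities are the private values.
edges = {("a", "b"): 0.9, ("b", "c"): 0.4, ("a", "c"): 0.1}
noisy = perturb_edge_weights(edges, sigma=0.05, seed=1)
```

The graph topology is published unchanged; only the affinities are randomized, trading off the accuracy of weight-dependent queries against the privacy of individual relationships.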
11

Scheffler, Thomas. "Privacy enforcement with data owner-defined policies." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6793/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user. Data privacy continues to be a very important topic, as our dependency on electronic communication maintains its current growth, and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increases the likelihood of data misuse. Early privacy protection techniques, such as anonymous email and payment systems have focused on data avoidance and anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably and in most cases the data owner has no control over further distribution and use of personal private data. Previous efforts to integrate privacy awareness into data processing workflows have focused on the extension of existing access control frameworks with privacy aware functions or have analysed specific individual problems such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist and can be studied to prove their effectiveness for privacy protection. Second level issues that stem from practical application of the implemented mechanisms, such as usability, life-time data management and changes in trustworthiness have received very little attention so far, mainly because they require actual implementations to be studied. Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released. 
Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy. We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC has been proposed by McCollum, et al. as an alternate access control mechanism that leaves the authority over access decisions by the originator of the data. The data owner is given control over the release policy for his or her personal data, and he or she can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow the deterministic policy evaluation by different entities. The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont, et al. and others. We developed a unique policy combination approach that takes usability aspects for the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: A Default Policy provides basic privacy protection if no specific rules have been entered by the data owner. An Owner Policy part allows the customisation of the default policy by the data owner. And a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies, which, for example, exclude him or her from further access to the private data. The combined evaluation of these three policy-parts yields the necessary access decision. The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation. 
We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework. Privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager. When we later extended our work to implement server-side protection mechanisms, we found several drawbacks for the privacy enforcement through the Java Security Framework. We solved this problem by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications and provide a way to enforce data owner-defined privacy policies for business applications.
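The three-part policy combination described above can be sketched as a priority lookup: Safety rules override Owner rules, which override the Default. This is a minimal Python analog of the evaluation order, not the framework's actual policy language; the example actions and rule values are assumptions:

```python
def evaluate(action, default_policy, owner_policy, safety_policy):
    """Combined access decision: Safety Policy first, then Owner Policy,
    then Default Policy; deny if no part contains a rule for the action."""
    for policy in (safety_policy, owner_policy, default_policy):
        if action in policy:
            return policy[action]
    return False  # deny when no rule applies

default_policy = {"read": True, "forward": False}   # basic protection baseline
owner_policy = {"forward": True}                    # owner customizes the default
safety_policy = {"owner_read": True}                # owner can never lock himself out

decisions = {a: evaluate(a, default_policy, owner_policy, safety_policy)
             for a in ("read", "forward", "owner_read", "delete")}
```

The Safety part sitting at the highest priority is what prevents a data owner from writing a disadvantageous Owner rule, such as one revoking his or her own access.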
As part of this dissertation, a framework for the enforcement of policies protecting private data was created; it relies on these policies being created directly by the owners of the data and being automatically enforceable. The protection of private data is a very important topic in electronic communication, and it is gaining further importance with the progressing interconnection of devices and the availability and use of private data in online services. In the past, various techniques for the protection of private data have been developed: so-called Privacy Enhancing Technologies. Many of these technologies work on the principles of data minimisation and anonymisation and thus run counter to modern network usage in social media. This leads to a situation in which private data is extensively distributed and used without the data owner being able to exercise targeted control over the distribution and use of his or her private data. Existing policy-based data protection techniques generally assume that the data user, and not the data owner, specifies the policies for handling private data. This approach simplifies the management and enforcement of access restrictions for the data user, but leaves the data owner only the alternatives of accepting the data user's policies or not releasing any data at all. Our approach was therefore to strengthen the position of the data owner by giving him or her the possibility to formulate his or her own policies. The access control model used for this is also known as Owner-Retained Access Control (ORAC) and was formulated in 1990 by McCollum et al. The basic principle of this model is that the authority over access decisions always remains with the originator of the data. Two challenges arise from this approach. 
First, the owner of the data, the data owner, must be put in a position to formulate meaningful and correct policies for the handling of his or her data. Since data owners are ordinary computer users, it must be assumed that they will also make mistakes when creating policies. We solved this problem by dividing the privacy policies into three separate parts with different priorities. The part with the lowest priority defines basic protection properties. The data owner can override these properties with rules of his or her own at medium priority. In addition, a part containing safety rules of high priority ensures that certain access rights are always preserved. The second challenge is the targeted communication of the policies and their enforcement towards the data user. To make the policies known to the data user, we use so-called Sticky Policies. This means that we attach the policies to the data to be protected using a suitable encoding, so that they can be referred to at any time and the privacy requirements of the owner are preserved even when the data is distributed. For the enforcement of the policies on the data user's system we developed two different approaches. We built a so-called reference monitor, which controls every access to the private data and decides, on the basis of the rules stored in the Sticky Policy, whether or not the data user is granted access to the data. This reference monitor was implemented, on the one hand, as a client-side solution building on the security concept of the Java programming language. 
On the other hand, a solution for servers was also developed, which can control access to particular methods of a program with the help of Aspect-Oriented Programming. In the client-side reference monitor, privacy policies are translated into Java permissions and automatically enforced against arbitrary applications by the Java Security Manager. Since this approach requires an application restart whenever data with a different privacy policy is accessed, a different approach was chosen for the server-side reference monitor. With the help of the Java Reflection API and methods of Aspect-Oriented Programming, we succeeded in intercepting data accesses in existing applications and in allowing or denying the access only after the privacy policy had been checked. Both solutions were tested for their performance and constitute an extension of the previously known techniques for the protection of private data.
12

Kong, Yibing. "Security and privacy model for association databases." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20031126.142250/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Jie. "MATRIX DECOMPOSITION FOR DATA DISCLOSURE CONTROL AND DATA MINING APPLICATIONS." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_diss/677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Access to huge amounts of various data with private information brings out a dual demand for the preservation of data privacy and the correctness of knowledge discovery, which are two apparently contradictory tasks. Low-rank approximations generated by matrix decompositions are a fundamental element in this dissertation for privacy preserving data mining (PPDM) applications. Two categories of PPDM are studied: data value hiding (DVH) and data pattern hiding (DPH). A matrix-decomposition-based framework is designed to incorporate matrix decomposition techniques into data preprocessing to distort original data sets. With respect to the central challenge in DVH, namely how to protect sensitive/confidential attribute values without jeopardizing underlying data patterns, we propose singular value decomposition (SVD)-based and nonnegative matrix factorization (NMF)-based models. Some discussion on data distortion and data utility metrics is presented. Our experimental results on benchmark data sets demonstrate that our proposed models have the potential to outperform standard data perturbation models regarding the balance between data privacy and data utility. Based on an equivalence between NMF and K-means clustering, a simultaneous data value and pattern hiding strategy is developed for data mining activities using K-means clustering. Three schemes are designed to make slight alterations on submatrices such that user-specified cluster properties of data subjects are hidden. Performance evaluation demonstrates the efficacy of the proposed strategy, since some optimal solutions can be computed with zero side effects on nonconfidential memberships. Accordingly, this dual privacy protection simplifies the task, since a single modified data set hides both values and patterns, with enhanced performance. In addition, an improved incremental SVD-updating algorithm is applied to speed up the real-time performance of the SVD-based model for frequent data updates. 
The performance and effectiveness of the improved algorithm have been examined on synthetic and real data sets. Experimental results indicate that the introduction of the incremental matrix decomposition produces a significant speedup. It also provides potential support for the use of the SVD technique in the On-Line Analytical Processing for business data analysis.
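The NMF-based distortion idea can be sketched with plain multiplicative updates. This is an illustrative numpy sketch under invented data and an arbitrary rank choice; the dissertation's actual models add further distortion and utility controls.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Plain Lee-Seung multiplicative-update NMF: X (nonnegative) is
    approximated by W @ H; releasing W @ H distorts individual values
    while keeping the dominant nonnegative patterns miners rely on."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # update H
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # update W
    return W, H

X = np.random.default_rng(1).random((12, 6))    # hypothetical data matrix
W, H = nmf(X, k=2)
X_pub = W @ H                                   # distorted release

print(X_pub.shape)                              # (12, 6)
print(np.allclose(X_pub, X))                    # False: values are perturbed
```

The low rank is what provides the distortion: the published matrix can never reproduce the exact confidential entries, yet its dominant structure tracks the original.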
14

Burkhart, Martin [Verfasser]. "Enabling Collaborative Network Security with Privacy-Preserving Data Aggregation / Martin Burkhart." Aachen : Shaker, 2011. http://d-nb.info/1071528394/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Cui, Yingjie, and 崔英杰. "A study on privacy-preserving clustering." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B4357225X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Garcia, Arturo, Luis Calle, Carlos Raymundo, Francisco Dominguez, and Javier M. Moguerza. "Personal data protection maturity model for the micro financial sector in Peru." Institute of Electrical and Electronics Engineers Inc, 2018. http://hdl.handle.net/10757/624636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions of the publisher where it was published.
The micro financial sector is a strategic element in the economies of developing countries, since it facilitates the integration and development of all social classes and enables economic growth. The volume of data in sectors such as micro finance grows every day, resulting from the transactions and operations carried out with these companies on a daily basis. Appropriate management of personal data privacy policies is therefore necessary because, otherwise, organisations will fail to comply with personal data protection laws and regulations and will be unable to obtain quality information for decision-making and process improvement. The present study proposes a personal data protection maturity model based on international standards of privacy and information security, which also reveals personal data protection capabilities in organizations. Finally, the study proposes a diagnostic and tracing assessment tool, which was applied to five companies in the micro financial sector; the obtained results were analyzed to validate the model and to help in the success of data protection initiatives.
Peer reviewed
17

Hu, Jun. "Privacy-Preserving Data Integration in Public Health Surveillance." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With widespread use of the Internet, data is often shared between organizations in B2B health care networks. Integrating data across all sources in a health care network would be useful to public health surveillance and provide a complete view of how the overall network is performing. Because of the lack of standardization for a common data model across organizations, matching identities between different locations in order to link and aggregate records is difficult. Moreover, privacy legislation controls the use of personal information, and health care data is very sensitive in nature so the protection of data privacy and prevention of personal health information leaks is more important than ever. Throughout the process of integrating data sets from different organizations, consent (explicitly or implicitly) and/or permission to use must be in place, data sets must be de-identified, and identity must be protected. Furthermore, one must ensure that combining data sets from different data sources into a single consolidated data set does not create data that may be potentially re-identified even when only summary data records are created. In this thesis, we propose new privacy preserving data integration protocols for public health surveillance, identify a set of privacy preserving data integration patterns, and propose a supporting framework that combines a methodology and architecture with which to implement these protocols in practice. Our work is validated with two real world case studies that were developed in partnership with two different public health surveillance organizations.
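One common building block for such privacy-preserving linkage is keyed hashing of quasi-identifiers, so that records from different organizations can be matched without exchanging raw identities. The sketch below is illustrative only (the shared key and record fields are invented, and the thesis's protocols involve additional steps such as consent handling and re-identification checks):

```python
import hashlib, hmac

SHARED_KEY = b"agreed-out-of-band"   # hypothetical secret shared by the parties

def pseudonymize(name, dob):
    """Deterministic keyed hash: equal identities map to equal tokens,
    but raw identifiers never leave the contributing organization."""
    msg = f"{name.lower().strip()}|{dob}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

# Each site submits de-identified records keyed by the token.
site_a = {pseudonymize("Alice Smith", "1980-01-02"): {"diagnosis": "flu"}}
site_b = {pseudonymize("alice smith", "1980-01-02"): {"zip": "K1A"}}

# The surveillance hub links records by token without ever seeing names:
linked = {t: {**site_a[t], **site_b[t]} for t in site_a.keys() & site_b.keys()}
print(len(linked))   # 1 linked record, keyed by an opaque token
```

Without the shared key, an outsider cannot reverse a token by dictionary attack on common names, which is the advantage of HMAC over a plain hash here.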
18

Mustafa, Mustafa Asan. "Smart Grid security : protecting users' privacy in smart grid applications." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/smart-grid-security-protecting-users-privacy-in-smart-grid-applications(565d4c36-8c83-4848-a142-a6ff70868d93).html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Smart Grid (SG) is an electrical grid enhanced with information and communication technology capabilities, so it can support two-way electricity and communication flows among various entities in the grid. The aim of SG is to make the electricity industry operate more efficiently and to provide electricity in a more secure, reliable and sustainable manner. Automated Meter Reading (AMR) and Smart Electric Vehicle (SEV) charging are two SG applications tipped to play a major role in achieving this aim. The AMR application allows different SG entities to collect users’ fine-grained metering data measured by users’ Smart Meters (SMs). The SEV charging application allows EVs’ charging parameters to be changed depending on the grid’s state in return for incentives for the EV owners. However, both applications impose risks on users’ privacy. Entities having access to users’ fine-grained metering data may use such data to infer individual users’ personal habits. In addition, users’ private information such as users’/EVs’ identities and charging locations could be exposed when EVs are charged. Entities may use such information to learn users’ whereabouts, thus breach their privacy. This thesis proposes secure and user privacy-preserving protocols to support AMR and SEV charging in an efficient, scalable and cost-effective manner. First, it investigates both applications. For AMR, (1) it specifies an extensive set of functional requirements taking into account the way liberalised electricity markets work and the interests of all SG entities, (2) it performs a comprehensive threat analysis, based on which, (3) it specifies security and privacy requirements, and (4) it proposes to divide users’ data into two types: operational data (used for grid management) and accountable data (used for billing). 
For SEV charging, (1) it specifies two modes of charging: price-driven mode and price-control-driven mode, and (2) it analyses two use-cases: price-driven roaming SEV charging at home location and price-control-driven roaming SEV charging at home location, by performing threat analysis and specifying sets of functional, security and privacy requirements for each of the two cases. Second, it proposes a novel Decentralized, Efficient, Privacy-preserving and Selective Aggregation (DEP2SA) protocol to allow SG entities to collect users' fine-grained operational metering data while preserving users' privacy. DEP2SA uses the homomorphic Paillier cryptosystem to ensure the confidentiality of the metering data during their transit and the data aggregation process. To preserve users' privacy with minimum performance penalty, users' metering data are classified and aggregated accordingly by their respective local gateways, based on the users' locations and their contracted suppliers. In this way, authorised SG entities can only receive the aggregated data of users they have contracts with. DEP2SA has been analysed in terms of security, computational and communication overheads, and the results show that it is more secure, efficient and scalable as compared with related work. Third, it proposes a novel suite of five protocols to allow (1) suppliers to collect users' accountable metering data, and (2) users (i) to access, manage and control their own metering data and (ii) to switch between electricity tariffs and suppliers, in an efficient and scalable manner. The main ideas are: (i) each SM has a register, named the accounting register, dedicated only to storing the user's accountable data; (ii) this register is updated by design at a low frequency; (iii) the user's supplier has unlimited access to this register; and (iv) the user can customise how often this register is updated with new data. 
The suite has been analysed in terms of security, computational and communication overheads. Fourth, it proposes a novel protocol, known as Roaming Electric Vehicle Charging and Billing with Anonymous Multi-User support (REVCBAMU), to support the price-driven roaming SEV charging at home location. During a charging session, a roaming EV user uses a pseudonym of the EV (known only to the user's contracted supplier) which is anonymously signed by the user's private key. This protocol protects the user's identity privacy from other suppliers as well as the user's location privacy from his or her own supplier. Further, it allows the user's contracted supplier to authenticate the EV and the user. Using a two-factor authentication approach, multi-user EV charging is supported and different legitimate EV users (e.g., family members) can be held accountable for their charging sessions. With each charging session, the EV uses a different pseudonym, which prevents adversaries from linking the different charging sessions of the same EV. On an application level, REVCBAMU supports fair user billing, i.e., each user pays only for his or her own energy consumption, and an open EV marketplace in which EV users can safely choose among different remote host suppliers. The protocol has been analysed in terms of security and computational overheads.
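The additive homomorphism that DEP2SA relies on can be illustrated with a minimal textbook Paillier implementation. The primes and meter readings below are toy values chosen only for demonstration; a real deployment would use keys of 2048 bits or more and the protocol's additional layers.

```python
import math, random

# Minimal textbook Paillier cryptosystem (toy primes, demonstration only).
p, q = 1009, 1013
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                     # valid because g = n + 1

def encrypt(m, rng=random.Random(42)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:           # r must be invertible mod n
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each smart meter encrypts its reading; a gateway multiplies the
# ciphertexts, which homomorphically adds the plaintexts, so only the
# aggregate is ever decrypted by the authorised entity.
readings = [13, 7, 22, 5]
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2

print(decrypt(aggregate))                # 47 = 13 + 7 + 22 + 5
```

The gateway never learns any individual reading: it only handles ciphertexts, and the decrypting entity only ever sees the sum.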
19

Zhang, Kaijin ZHANG. "Efficiency and security in data-driven applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1522443817978176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ji, Shouling. "Evaluating the security of anonymized big graph/structural data." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We studied the security of anonymized big graph data. Our main contributions include: new De-Anonymization (DA) attacks; comprehensive anonymity, utility, and de-anonymizability quantifications; and a secure graph data publishing/sharing system, SecGraph. New DA attacks. We present two novel graph DA frameworks: cold-start, single-phase Optimization-based DA (ODA) and De-anonymizing Social-Attribute Graphs (De-SAG). Unlike existing seed-based DA attacks, ODA does not require prior knowledge. In addition, ODA's DA results can facilitate existing DA attacks by providing more seed information. De-SAG is the first attack that takes into account both graph structure and attribute information. Through extensive evaluations leveraging real-world graph data, we validated the performance of both ODA and De-SAG. Graph anonymity, utility, and de-anonymizability quantifications. We developed new techniques that enable comprehensive graph data anonymity, utility, and de-anonymizability evaluation. First, we proposed the first seed-free graph de-anonymizability quantification framework under a general data model, which provides the theoretical foundation for seed-free SDA attacks. Second, we conducted the first seed-based quantification of the perfect and partial de-anonymizability of graph data. Our quantification closes the gap between seed-based DA practice and theory. Third, we conducted the first attribute-based anonymity analysis for Social-Attribute Graph (SAG) data. Our attribute-based anonymity analysis, together with existing structure-based de-anonymizability quantifications, provides data owners and researchers a more complete understanding of the privacy of graph data. Fourth, we conducted the first graph Anonymity-Utility-De-anonymity (AUD) correlation quantification and provided closed forms to explicitly demonstrate such correlation. 
Finally, based on our quantifications, we conducted large-scale evaluations leveraging 100+ real world graph datasets generated by various computer systems and services. Using the evaluations, we demonstrated the datasets’ anonymity, utility, and de-anonymizability, as well as the significance and validity of our quantifications. SecGraph. We designed, implemented, and evaluated the first uniform and open-source Secure Graph data publishing/sharing (SecGraph) system. SecGraph enables data owners and researchers to conduct accurate comparative studies of anonymization/DA techniques, and to comprehensively understand the resistance/vulnerability of existing or newly developed anonymization techniques, the effectiveness of existing or newly developed DA attacks, and graph and application utilities of anonymized data.
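The principle behind structure-based de-anonymization can be conveyed with a toy seed-based matcher. The two five-node graphs are invented, and the matcher is far simpler than ODA or De-SAG, which optimise a global matching; it only shows why removing labels alone does not anonymise a graph.

```python
# Toy seed-based structural de-anonymization: starting from one known "seed"
# identity, the matching is propagated to neighbours whose structural
# signature (degree plus sorted neighbour degrees) is unambiguous.
# Nodes 2 and 3 are genuinely symmetric and stay unresolved, which is one
# reason real attacks such as De-SAG also exploit attribute information.

def signature(adj, v):
    return (len(adj[v]), tuple(sorted(len(adj[u]) for u in adj[v])))

# Public auxiliary graph with known identities (invented example).
aux = {"alice": {"bob", "carol", "dave"}, "bob": {"alice", "carol"},
       "carol": {"alice", "bob"}, "dave": {"alice", "eve"}, "eve": {"dave"}}
# "Anonymized" release of the same network, labels replaced by numbers.
anon = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1, 5}, 5: {4}}

mapping = {1: "alice"}                  # seed obtained out of band
frontier = [1]
while frontier:
    v = frontier.pop()
    for u in anon[v]:
        if u in mapping:
            continue
        cands = [w for w in aux[mapping[v]]
                 if w not in mapping.values()
                 and signature(aux, w) == signature(anon, u)]
        if len(cands) == 1:             # unambiguous: match and keep going
            mapping[u] = cands[0]
            frontier.append(u)

print(mapping)                          # {1: 'alice', 4: 'dave', 5: 'eve'}
```

Even this crude propagation re-identifies three of the five nodes from a single seed, which is the intuition the quantification frameworks above make rigorous.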
21

Wang, Xiwei. "Data Privacy Preservation in Collaborative Filtering Based Recommender Systems." UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation studies data privacy preservation in collaborative filtering based recommender systems and proposes several collaborative filtering models that aim at preserving user privacy from different perspectives. The empirical study on multiple classical recommendation algorithms presents the basic idea of the models and explores their performance on real-world datasets. The algorithms investigated in this study include a popularity based model, an item similarity based model, a singular value decomposition based model, and a bipartite graph model. Top-N recommendations are evaluated to examine the prediction accuracy. It is apparent that with more customers' preference data, recommender systems can better profile customers' shopping patterns, which in turn produces product recommendations with higher accuracy. Precautions should therefore be taken to address the privacy issues that arise during data sharing between two vendors. The study shows that matrix factorization techniques are ideal choices for data privacy preservation by their nature. In this dissertation, singular value decomposition (SVD) and nonnegative matrix factorization (NMF) are adopted as the fundamental techniques for collaborative filtering to make privacy-preserving recommendations. The proposed SVD based model utilizes missing value imputation, a randomization technique, and the truncated SVD to perturb the raw rating data. The NMF based models, namely iAux-NMF and iCluster-NMF, take into account the auxiliary information of users and items to help missing value imputation and privacy preservation. Additionally, these models support efficient incremental data updates as well. A good number of online vendors allow people to leave their feedback on products. This feedback is considered users' public preference data. 
However, due to the connections between users' public and private preferences, if a recommender system fails to distinguish real customers from attackers, the private preferences of real customers can be exposed. This dissertation addresses an attack model in which an attacker holds real customers' partial ratings and tries to obtain their private preferences by cheating recommender systems. To resolve this problem, trustworthiness information is incorporated into NMF based collaborative filtering techniques to detect the attackers and make reasonably different recommendations to the normal users and the attackers. By doing so, users' private preferences can be effectively protected.
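The SVD-based perturbation pipeline (imputation, then randomization, then truncation) can be sketched as follows. The rating matrix, noise level, and rank below are invented demonstration choices, not the dissertation's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 5-user x 4-item rating matrix; 0 marks a missing rating.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

# 1) impute missing entries with the column (item) mean of observed ratings
M = R.copy()
for j in range(M.shape[1]):
    observed = M[:, j] != 0
    M[~observed, j] = M[observed, j].mean()

# 2) randomization: add small Gaussian noise to mask individual ratings
M += rng.normal(0.0, 0.1, M.shape)

# 3) truncated SVD: keep only the dominant rank-2 taste structure
U, s, Vt = np.linalg.svd(M, full_matrices=False)
R_pub = (U[:, :2] * s[:2]) @ Vt[:2, :]

print(R_pub.shape)            # (5, 4)
print(np.allclose(R_pub, R))  # False: raw ratings are never released
```

The published matrix supports neighbourhood- or factor-based recommendation while no exact raw rating survives the three perturbation steps.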
22

Basciftci, Yuksel O. Basciftci. "Private and Secure Data Communication: Information Theoretic Approach." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469137249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Möller, Carolin. "The evolution of data protection and privacy in the public security context : an institutional analysis of three EU data retention and access regimes." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/25911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
For nearly two decades, threats to public security through events such as 9/11, the Madrid (2004) and London (2005) bombings and, more recently, the Paris attacks (2015) have resulted in the adoption of a plethora of national and EU measures aiming at fighting terrorism and serious crime. In addition, the Snowden revelations brought the privacy and data protection implications of these public security measures into the spotlight. In this highly contentious context, three EU data retention and access measures have been introduced for the purpose of fighting serious crime and terrorism: the Data Retention Directive (DRD), the EU-US PNR Agreement and the EU-US SWIFT Agreement. All three regimes went through several revisions (SWIFT, PNR) or have been annulled (DRD), exemplifying the difficulty of determining how privacy and data protection ought to be protected in the context of public security. The trigger for this research is to understand the underlying causes of these difficulties by examining the problem from different angles. The thesis applies the theory of 'New Institutionalism' (NI), which allows both a political and legal analysis of privacy and data protection in the public security context. According to NI, 'institutions' are defined as the operational framework in which actors interact, and they steer the behaviours of the latter in the policy-making cycle. By focusing on the three data retention and access regimes, the aim of this thesis is to examine how the EU 'institutional framework' shapes data protection and privacy in regard to data retention and access measures in the public security context. 
Answering this research question the thesis puts forward three main hypotheses: (i) privacy and data protection in the Area of Freedom, Security and Justice (AFSJ) is an institutional framework in transition where historic and new features determine how Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (CFREU) are shaped; (ii) policy outcomes on Articles 7 and 8 CFREU are influenced by actors' strategic preferences pursued in the legislation-making process; and (iii) privacy and data protection are framed by the evolution of the Court of Justice of the European Union (CJEU) from a 'legal basis arbiter' to a political actor in its own right as a result of the constitutional changes brought by the Lisbon Treaty.
24

Dambra, Savino. "Data-driven risk quantification for proactive security." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The feasibility and effectiveness of proactive measures depend on a cascade of challenges: how can the cyber risks of an entity be quantified, which indicators can be used to predict them, and from which data sources can these be extracted? In this thesis, we enumerate the challenges that practitioners and researchers face when attempting to quantify cyber risks, and we examine them in the emerging domain of cyber insurance. We then evaluate the incidence of different security measures and security postures on malware-infection risks and assess the relevance of nine indicators for studying the systematic nature of those risks. Finally, we demonstrate the importance of data-source selection in risk measurement. We examine web tracking and demonstrate how strongly privacy risks are underestimated when the users' perspective is excluded.
The feasibility and efficacy of proactive measures depend upon a cascade of challenges: how can one quantify the cyber risks of a given entity, what reliable indicators can be used to predict them, and from which data sources can they be extracted? In this thesis, we enumerate active challenges that practitioners and researchers face when attempting to quantify cyber risks, contextualise them in the emerging domain of cyber insurance, and propose several research directions. We then explore some of these areas, evaluate the incidence that different security measures and security postures have on malware-infection risks, and assess the goodness of nine host-extracted indicators when investigating the systematic nature of those risks. We finally provide evidence about the importance that data-source selection, together with a holistic approach, has on risk measurements. We look at web tracking and demonstrate how underestimated privacy risks are when excluding the users' perspective.
25

Melis, Andrea. "Data sanitization in a clearing system for public transport operators." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7248/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work we discuss a project started by the Emilia-Romagna Regional Government regarding the management of public transport. In particular, we perform a data mining analysis on the data set of this project. After introducing the Weka software used for our analysis, we survey the most useful data mining techniques and algorithms, and we show how these results can be used to violate the privacy of the public transport operators themselves. At the end, although it is off topic for this work, we also spend a few words on how it is possible to prevent this kind of attack.
26

Ophoff, Jacobus Albertus. "WSP3: a web service model for personal privacy protection." Thesis, Port Elizabeth Technikon, 2003. http://hdl.handle.net/10948/272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The prevalent use of the Internet not only brings with it numerous advantages, but also some drawbacks. The biggest of these problems is the threat to the individual's personal privacy. This privacy issue is playing a growing role with respect to technological advancements. While new service-based technologies are considerably increasing the scope of information flow, the cost is a loss of control over personal information and therefore privacy. Existing privacy protection measures might fail to provide effective privacy protection in these new environments. This dissertation focuses on the use of new technologies to improve the levels of personal privacy. In this regard the WSP3 (Web Service Model for Personal Privacy Protection) model is formulated. This model proposes a privacy protection scheme using Web Services. Having received tremendous industry backing, Web Services is a very topical technology, promising much in the evolution of the Internet. In our society privacy is highly valued and a very important issue. Protecting personal privacy in environments using new technologies is crucial for their future success. These considerations, combined with the fact that the WSP3 model focuses on Web Service environments, lead to the following realizations for the model: The WSP3 model provides users with control over their personal information and allows them to express their desired level of privacy. Parties requiring access to a user's information are explicitly defined by the user, as well as the information available to them. The WSP3 model utilizes a Web Services architecture to provide privacy protection. In addition, it integrates security techniques, such as cryptography, into the architecture as required. The WSP3 model integrates with current standards to maintain their benefits. This allows the implementation of the model in any environment supporting these base technologies. 
In addition, the research involves the development of a prototype according to the model. This prototype serves to present a proof-of-concept by illustrating the WSP3 model and all the technologies involved. The WSP3 model gives users control over their privacy and allows everyone to decide their own level of protection. By incorporating Web Services, the model also shows how new technologies can be used to offer solutions to existing problem areas.
27

Parameswaran, Rupa. "A Robust Data Obfuscation Technique for Privacy Preserving Collaborative Filtering." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Privacy is defined as the freedom from unauthorized intrusion. The availability of personal information through online databases, such as government records, medical records, and voters' lists, poses a threat to personal privacy. The concern over individual privacy has led to the development of legal codes for safeguarding privacy in several countries. However, the ignorance of individuals, as well as loopholes in the systems, has led to information breaches even in the presence of such rules and regulations. Protection of data privacy requires modification of the data itself. The term 'data obfuscation' is used to refer to the class of algorithms that modify the values of the data items without distorting the usefulness of the data. The main goal of this thesis is the development of a data obfuscation technique that provides robust privacy protection with minimal loss in the usability of the data. Although medical and financial services are two of the major areas where information privacy is a concern, privacy breaches are not restricted to these domains. One of the areas where the concern over data privacy is of growing interest is collaborative filtering. Collaborative filtering systems are being widely used in E-commerce applications to provide recommendations to users regarding products that might be of interest to them. The prediction accuracy of these systems is dependent on the size and accuracy of the data provided by users. However, the lack of sufficient guidelines governing the use and distribution of user data raises concerns over individual privacy. Users often provide the minimal information that is required for accessing these E-commerce services. The lack of rules governing the use and distribution of data disallows sharing of data among different communities for collaborative filtering. 
The goals of this thesis are (a) the definition of a standard for classifying data obfuscation (DO) techniques, (b) the development of a robust cluster-preserving data obfuscation algorithm, and (c) the design and implementation of a privacy-preserving shared collaborative filtering framework using the data obfuscation algorithm.
28

Thilakanathan, Danan. "Secure Data Sharing and Collaboration in the Cloud." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/15164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cloud technology can be leveraged to enable data-sharing capabilities, which can benefit the user through greater productivity and efficiency. However, the Cloud is susceptible to many privacy and security vulnerabilities, which hinders the progress and widescale adoption of data sharing for the purposes of collaboration. Thus, there is a strong demand for data owners to not only ensure that their data is kept private and secure in the Cloud, but to also have a degree of control over their own data contents once they are shared with data consumers. Specifically, the main issues for data sharing in the Cloud include key management, security attacks, and data-owner access control. In terms of key management, it is vital that data must first be encrypted before storage in the Cloud, to prevent privacy and security breaches. However, the management of encryption keys is a great challenge. The sharing of keys with data consumers has proven to be ineffective, especially when considering data-consumer revocation. Security attacks may also prevent the widescale usage of the Cloud for data-sharing purposes. Common security attacks include insider attacks, collusion attacks, and man-in-the-middle attacks. In terms of access control, authorised data consumers could do anything they wish with an owner's data, including sending it to their peers and colleagues without the data owner's knowledge. Throughout this thesis, we investigate ways in which to address these issues. We first propose a key partitioning technique that aims to address the key management problem. We deploy this technique in a number of scenarios, such as remote healthcare management. We also develop secure data-sharing protocols that aim to mitigate and prevent security attacks on the Cloud. Finally, we focus on giving the data owner greater control, by developing a self-controlled software object called SafeProtect.
29

Michel, Axel. "Personalising privacy constraints in Generalization-based Anonymization Models." Thesis, Bourges, INSA Centre Val de Loire, 2019. http://www.theses.fr/2019ISAB0001/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The benefit of performing Big Data computations over individuals' microdata is manifold, in the medical, energy, or transportation fields to cite only a few, and this interest is growing with the emergence of smart-disclosure initiatives around the world. However, these computations often expose microdata to privacy leakages, explaining the reluctance of individuals to participate in studies despite the privacy guarantees promised by statistical institutes. To regain individuals' trust, it becomes essential to propose user-empowerment solutions, that is to say, to allow individuals to control the privacy parameters used to make computations over their microdata. This work proposes a novel concept of personalized anonymization based on data generalization and user empowerment. Firstly, this manuscript proposes a novel approach to push personalized privacy guarantees into the processing of database queries, so that individuals can disclose different amounts of information (i.e. data at different levels of accuracy) depending on their own perception of the risk. Moreover, we propose a decentralized computing infrastructure based on secure hardware that enforces these personalized privacy guarantees all along the query execution process. Secondly, this manuscript studies the personalization of anonymity guarantees when publishing data. We propose the adaptation of existing heuristics and a new approach based on constraint programming. Experiments have been carried out to show the impact of such personalization on data quality. Individuals' privacy constraints have been built and simulated realistically, based on the results of sociological studies.
30

Lee, Kum-Yu Enid. "Privacy and security of an intelligent office form." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Miles, Shaun Graeme. "An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment." Thesis, Rhodes University, 2012. http://hdl.handle.net/10962/d1006653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research, the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information, unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling the subsequent usage of that information. To address this issue, a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through observation of the environment, is a concern for users and systems. By observing the communication channel and monitoring the interactions between users and systems over a long enough period of time, an attacker can infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims to be used in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability.
The framework is developed from a Utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
32

Ziegeldorf, Jan Henrik [Verfasser]. "Designing Digital Services with Cryptographic Guarantees for Data Security and Privacy / Jan Henrik Ziegeldorf." Aachen : Shaker, 2018. http://d-nb.info/1159835845/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kong, Jiantao. "Trusted data path protecting shared data in virtualized distributed systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
When sharing data across multiple sites, service applications should not be trusted automatically. Services that are suspected of faulty, erroneous, or malicious behaviors, or that run on systems that may be compromised, should not be able to gain access to protected data or entrusted with the same data access rights as others. This thesis proposes a context flow model that controls the information flow in a distributed system. Each service application along with its surrounding context in a distributed system is treated as a controllable principal. This thesis defines a trust-based access control model that controls the information exchange between these principals. An online monitoring framework is used to evaluate the trustworthiness of the service applications and the underlining systems. An external communication interception runtime framework enforces trust-based access control transparently for the entire system.
34

Jellen, Isabel. "Towards Security and Privacy in Networked Medical Devices and Electronic Healthcare Systems." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
E-health is a growing field which utilizes wireless sensor networks to enable access to effective and efficient healthcare services and to provide patient monitoring for early detection and treatment of health conditions. Due to the proliferation of e-health systems, security and privacy have become critical issues in preventing data falsification, unauthorized access to the system, or eavesdropping on sensitive health data. Furthermore, due to the intrinsic limitations of many wireless medical devices, including low power and limited computational resources, security and device performance can be difficult to balance. Therefore, many current networked medical devices operate without basic security services such as authentication, authorization, and encryption. In this work, we survey recent work on e-health security, including biometric approaches, proximity-based approaches, key management techniques, audit mechanisms, anomaly detection, external device methods, and lightweight encryption and key management protocols. We also survey the state of the art in e-health privacy, including techniques such as obfuscation, secret sharing, distributed data mining, authentication, access control, blockchain, anonymization, and cryptography. We then propose a comprehensive system model for e-health applications with consideration of the battery capacity and computational ability of medical devices. A case study is presented to show that the proposed system model can support heterogeneous medical devices with varying power and resource constraints. The case study demonstrates that it is possible to significantly reduce the overhead for security on power-constrained devices based on the proposed system model.
35

Floderus, Sebastian, and Vincent Tewolde. "Analysing privacy concerns in smart cameras : in correlation with GDPR and Privacy by Design." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background. The right to privacy is every person's right; data regulation laws such as the GDPR and privacy-preserving concepts like Privacy by Design (PbD) aid in this matter. IoT devices are highly vulnerable to attacks because of their limited storage and processing capabilities, even more so for internet-connected cameras. With the use of security auditing techniques and privacy analysis methods, it is possible to identify security and privacy issues for Internet of Things (IoT) devices. Objectives. The research aims to evaluate three selected IoT cameras' ability to protect the privacy of their consumers, as well as to investigate the role GDPR and PbD have in the design and operation of each device. Methods. A literature review was performed in order to gain valuable knowledge of how to design a case study that would evaluate privacy issues of IoT devices in correlation with GDPR and PbD. The case study consists of 14 cases designed to explore security- and privacy-related issues. They were executed in a monitored and controlled network environment to detect data flow between devices. Results. There was a noticeable difference in the security- and privacy-enhancing technologies used between some manufacturers. Furthermore, there was a distinct disparity in how transparent each system was with the processed data, which is a crucial part of both GDPR and PbD. Conclusions. All three companies had taken GDPR and PbD into consideration in the design of their IoT systems, however to different extents. One of the IoT manufacturers could benefit from incorporating PbD more thoroughly into the design and operation of their product. Also, the GDPR could benefit from having references to security standards and frameworks in order to simplify the process for companies to secure their systems.
36

Shahandashti, Siamak Fayyaz. "Contributions to secure and privacy-preserving use of electronic credentials." School of Computer Science and Software Engineering - Faculty of Informatics, 2009. http://ro.uow.edu.au/theses/3036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, we make contributions to secure and privacy-preserving use of electronic credentials at three different levels. First, we address the case in credential systems where a credential owner wants to show her credential to a verifier without taking the risk that the ability to prove ownership of her credential is transferred to the verifier. We define credential ownership proof protocols for credentials signed by standard signature schemes. We also propose proper security definitions for the protocol, aiming to protect the security of both the credential issuer and the credential owner against concurrent attacks. We give two generic constructions of credential ownership proofs based on identity-based encryption and identity-based identification schemes. Furthermore, we show that signatures with credential ownership proofs are equivalent to identity-based identification schemes, in the sense that any secure construction of each implies a secure construction of the other. Moreover, we show that the GQ identification protocol yields an efficient credential ownership proof for credentials signed by the RSA signature scheme, and we prove the protocol concurrently secure. Then, we give a generic construction for universal (multi-)designated-verifier signature schemes from a large class of signature schemes, referred to as Class C. The resulting schemes are efficient and have two important properties. Firstly, they are provably DV-unforgeable, non-transferable and also non-delegatable. Secondly, the signer and the designated verifier can independently choose their cryptographic settings. We also propose a generic construction for (hierarchical) identity-based signature schemes from any signature scheme in C and prove that the construction is secure against adaptive chosen-message and identity attacks. We discuss possible extensions of our constructions to identity-based ring signatures and identity-based designated-verifier signatures from any signature in C.
Furthermore, we show that it is possible to combine the above constructions to obtain signatures with combined functionalities. Finally, inspired by recent developments in attribute-based encryption, we propose threshold attribute-based signatures (t-ABS). In a t-ABS, signers are associated with a set of attributes, and verification of a signed document against a verification attribute set succeeds if the signer has a threshold number (at least t) of attributes in common with the verification attribute set. A t-ABS scheme enables a signature holder to prove possession of signatures by revealing only the attributes of the signer relevant to the verification attribute set, hence providing signer-attribute privacy for the signature holder. We define t-ABS schemes, formalize their security, and propose two t-ABS schemes: a basic scheme secure against selective forgery and a second one secure against existential forgery, both provable in the standard model, assuming hardness of the computational Diffie-Hellman problem. We show that our basic t-ABS scheme can be augmented with two extra protocols that are used for efficiently issuing and verifying t-ABS signatures on committed values. We call the augmented scheme a threshold attribute-based c-signature scheme (t-ABCS). We show how a t-ABCS scheme can be used to realize a secure threshold attribute-based anonymous credential system (t-ABACS) providing signer-attribute privacy. We propose a security model for t-ABACS and give a concrete scheme using the t-ABCS scheme. Using the simulation paradigm, we prove that the credential system is secure if the t-ABCS scheme is secure.
37

Andersen, Adelina. "Exploring Security and Privacy Practices of Home IoT Users." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Internet of Things (IoT) devices are becoming more and more common in homes, making the security and privacy of these increasingly important. Previous research has found that home IoT users can become a threat to themselves if they lack knowledge of their devices and awareness of potential threats. To investigate how the users’ security and privacy practices can be improved, it is necessary to understand the current everyday practices and what impacts these. This is examined in 10 interviews, revealing that the practices are primarily influenced by convenience, motivation and the effort required from the user. Using these insights, this thesis suggests that tangible interaction needs to be used as a complement to digital solutions to improve the security and privacy practices. By having a physical object that in a simple way can inform everyone of the current security and privacy situation and is equally accessible for all members of a household, the security and privacy can become more attainable for all users no matter their level of knowledge and experience.
38

Carlsson, Nicole. "Vulnerable data interactions — augmenting agency." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis project opens up an interaction design space in the InfoSec domain, concerning raising awareness of common vulnerabilities and facilitating counter-practices through seamful design. This combination of raising awareness coupled with boosting possibilities for deliberate action (or non-action) together accounts for augmenting agency. This augmentation takes the form of bottom-up micro-movements and daily gestures contributing to opportunities for greater agency in the increasingly fraught InfoSec domain.
39

HajYasien, Ahmed. "Preserving Privacy in Association Rule Mining." Thesis, Griffith University, 2007. http://hdl.handle.net/10072/365286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the development and penetration of data mining within different fields and disciplines, security and privacy concerns have emerged. Data mining technology, which reveals patterns in large databases, could compromise the information that an individual or an organization regards as private. The aim of privacy-preserving data mining is to find the right balance between maximizing analysis results (that are useful for the common good) and keeping the inferences that disclose private information about organizations or individuals to a minimum. In this thesis we present a new classification for privacy-preserving data mining problems; we propose a new heuristic algorithm called the QIBC algorithm that improves the privacy of sensitive knowledge (as itemsets) by blocking more inference channels, and demonstrate the efficiency of the algorithm; we propose two techniques (item count and increasing cardinality) based on item restriction that hide sensitive itemsets, and perform experiments to compare the two techniques; we propose an efficient protocol that allows parties to share data in a private way with no restrictions and without loss of accuracy, and demonstrate the efficiency of the protocol; and we review the software engineering literature related to the association rule mining domain and suggest a list of considerations to achieve better privacy in software.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Faculty of Engineering and Information Technology
Full Text
40

Stivanello, Alice <1993>. "Strategic Management over Data Privacy and Cyber Security Risk in Smart City and Smart Home." Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The world population growth combined with unprecedented levels of urban density is posing serious challenges for the future of our cities, which demand efficient, effective, and sustainable management of urban infrastructures and resource consumption. Through the integration of information and communication technologies (ICT), the smart city is identified as a 'system of systems' created to process real-time information exchange at large scale and consequently deliver a better quality of life to its citizens. Grounded in learning capability and cross-domain interoperability, the embedded Internet of Things (IoT) infrastructure represents a high-value attack platform, and thus its adoption should be carefully weighed against the cyber risk exposure. The main objective of this research is to explore the inner workings of such a complex ecosystem and understand the criticalities of its cyber-security requirements. Since the smart home market represents a fundamental component of a smart city and the most promising application of IoT technology, an accurate investigation is carried out. Defining the smart home as an intertwined, advanced automated system which provides the inhabitants remote access and centralized control over the building's functions, the role played by the advancement of IoT technology is crucial. A multi-layer architectural model is presented in order to grasp the logical conditions underlying these intelligence-driven networks. Installed under the guise of customer service, surveillance and remote monitoring facilities are responsible for the potential abuse of retrieved data and thus for the failure of safety and security solutions. In response, a cyber-physical vulnerability assessment is conducted and evaluated within a threat-based defence approach. The scope of this thesis is the identification and formulation of a safe and secure human-machine space, associating proper countermeasures to prevent data leakages and mitigate damages.
Although this analysis tries to be exhaustive in all its parts, the major focus is on the cyber-security concern, as it represents a significant barrier to smart-system adoption and all stakeholders should take it seriously. Neglecting current cyber-security vulnerabilities and underestimating the impact of a cyber intrusion may result in cascading disasters across the entire smart industry.
41

Åhlfeldt, Rose-Mharie. "Information Security in Distributed Healthcare : Exploring the Needs for Achieving Patient Safety and Patient Privacy." Doctoral thesis, Stockholm University, Department of Computer and Systems Sciences (together with KTH), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

In healthcare, patient information is a critical factor. The right information at the right time is a necessity in order to provide the best possible care for a patient. Patient information must also be protected from unauthorized access in order to protect patient privacy. It is furthermore common for patients to visit more than one healthcare provider, which implies a need for cross border healthcare and continuity in the patient process.

This thesis is focused on information security in healthcare when patient information has to be managed and communicated between various healthcare actors and organizations. The work takes a practical approach with a set of investigations from different perspectives and with different professionals involved. Problems and needs have been identified, and a set of guidelines and recommendations has been suggested and developed in order to improve patient safety as well as patient privacy.

The results show that a comprehensive view of the entire area concerning patient information management between different healthcare actors is missing. Healthcare, as well as patient processes, have to be analyzed in order to gather knowledge needed for secure patient information management.

Furthermore, the results clearly show that there are deficiencies both at the technical and the administrative level of security in all investigated healthcare organizations.

The main contribution areas are: an increased understanding of information security by elaborating on the administrative part of information security, the identification of information security problems and needs in cross border healthcare, and a set of guidelines and recommendations in order to advance information security measures in healthcare.

42

Iwaya, Leonardo H. "Secure and Privacy-aware Data Collection and Processing in Mobile Health Systems." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-46982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and the patient's experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate data breaches of health data may lead to societal pressures, embarrassment and discrimination. Information security and privacy are paramount to achieve high-quality healthcare services and, further, to not harm individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: with a comprehensive literature review about mHealth in Brazil; with the design of a security framework for MDCSs (SecourHealth); with the design of an MDCS (GeoHealth); with the design of a Privacy Impact Assessment template for MDCSs; and with the study of ontology-based obfuscation and anonymisation functions for health data.
43

Burdon, Mark. "The conceptual and operational compatibility of data breach notification and information privacy laws." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/47512/1/Mark_Burdon_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mandatory data breach notification laws are a novel and potentially important legal instrument for the organisational protection of personal information. These laws require organisations that have suffered a data breach involving personal information to notify the persons that may be affected, and potentially government authorities, about the breach. The Australian Law Reform Commission (ALRC) has proposed the creation of a mandatory data breach notification scheme, implemented via amendments to the Privacy Act 1988 (Cth). However, the conceptual differences between data breach notification law and information privacy law are such that it is questionable whether a data breach notification scheme can be implemented solely via an information privacy law. Accordingly, this thesis by publications investigated, through six journal articles, the extent to which data breach notification law is conceptually and operationally compatible with information privacy law. The assessment of compatibility began with the identification of key issues related to data breach notification law. The first article, Stakeholder Perspectives Regarding the Mandatory Notification of Australian Data Breaches, began this stage of the research, which concluded with the second article, The Mandatory Notification of Data Breaches: Issues Arising for Australian and EU Legal Developments ('Mandatory Notification'). A key issue that emerged was whether data breach notification was itself an information privacy issue. This notion guided the remaining research and focused attention on the next stage of research, an examination of the conceptual and operational foundations of both laws. The second article, Mandatory Notification, and the third article, Encryption Safe Harbours and Data Breach Notification Laws, did so from the perspective of data breach notification law.
The fourth article, The Conceptual Basis of Personal Information in Australian Privacy Law, and the fifth article, Privacy Invasive Geo-Mashups: Privacy 2.0 and the Limits of First Generation Information Privacy Laws, did so for information privacy law. The final article, Contextualizing the Tensions and Weaknesses of Information Privacy and Data Breach Notification Laws, synthesised the previous research findings within the framework of contextualisation, principally developed by Nissenbaum. The examination of conceptual and operational foundations revealed tensions between the two laws and weaknesses shared by both. First, the distinction between sectoral and comprehensive information privacy legal regimes was important, as it shaped the development of US data breach notification laws and their subsequent implementable scope in other jurisdictions. Second, the sectoral versus comprehensive distinction produced different emphases in relation to data breach notification, thus leading to different forms of remedy; the prime example is the distinction between the market-based initiatives found in US data breach notification laws and the rights-based protections found in the EU and Australia. Third, both laws are predicated on the regulation of personal information exchange processes, even though they regulate this process from different perspectives, namely a context-independent or a context-dependent approach. Fourth, both laws have limited notions of harm that are further constrained by restrictive accountability frameworks. The findings of the research suggest that data breach notification is more compatible with information privacy law in some respects than in others. Apparent compatibilities clearly exist, as both laws have an interest in the protection of personal information. However, this thesis revealed that these ostensible similarities are founded on some significant differences.
Data breach notification law is either a comprehensive facet of a sectoral approach or a sectoral adjunct to a comprehensive regime. However, whilst there are fundamental differences between the two laws, they are not so great as to make them incompatible with each other. The similarities between both laws are sufficient to forge compatibilities, but it is likely that the distinctions between them will produce anomalies, particularly if both laws are applied from a perspective that negates contextualisation.
44

Chen, YiQun. "Contributions to privacy preserving with ring signatures." Access electronically, 2006. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20070104.134826/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Banerjea-Brodeur, Nicolas Paul. "Advance passenger information passenger name record : privacy rights and security awareness." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An in-depth study of Advance Passenger Information and Passenger Name Record had never been undertaken prior to the events of September 11th. It is of great importance to distinguish these two concepts, as they entail different legal consequences. API is to be understood as data that Border Control Authorities possess in advance in order to facilitate the movement of passengers. It is furthermore imperative that harmonisation and interoperability between States be achieved in order for this system to work. Although the obligations imposed on air carriers may appear extraneous, the positive impact is greater than the downsides.
Passenger Name Record access permits authorities to obtain additional data that can identify individuals requiring further questioning prior to border control clearance. This data does not in itself cause privacy issues, other than perhaps the potential retention and manipulation of information that Border Control Authorities may acquire. In essence, bilateral agreements between governments should be sought in order to protect national legislation.
The common goal of the airline industry is to ensure safe and efficient air transport. API and PNR should be viewed as formalities that can facilitate border control clearance and prevent the entry of potentially high-risk individuals.
46

Giraud, Matthieu. "Secure Distributed MapReduce Protocols : How to have privacy-preserving cloud applications?" Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC033/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
À l’heure des réseaux sociaux et des objets connectés, de nombreuses et diverses données sont produites à chaque instant. L’analyse de ces données a donné lieu à une nouvelle science nommée "Big Data". Pour traiter du mieux possible ce flux incessant de données, de nouvelles méthodes de calcul ont vu le jour. Les travaux de cette thèse portent sur la cryptographie appliquée au traitement de grands volumes de données, avec comme finalité la protection des données des utilisateurs. En particulier, nous nous intéressons à la sécurisation d’algorithmes utilisant le paradigme de calcul distribué MapReduce pour réaliser un certain nombre de primitives (ou algorithmes) indispensables aux opérations de traitement de données, allant du calcul de métriques de graphes (e.g. PageRank) aux requêtes SQL (i.e. intersection d’ensembles, agrégation, jointure naturelle). Nous traitons dans la première partie de cette thèse de la multiplication de matrices. Nous décrivons d’abord une multiplication matricielle standard et sécurisée pour l’architecture MapReduce qui est basée sur l’utilisation du chiffrement additif de Paillier pour garantir la confidentialité des données. Les algorithmes proposés correspondent à une hypothèse spécifique de sécurité : collusion ou non des nœuds du cluster MapReduce, le modèle général de sécurité étant honnête mais curieux. L’objectif est de protéger la confidentialité de l’une et l’autre matrice, ainsi que le résultat final, et ce pour tous les participants (propriétaires des matrices, nœuds de calcul, utilisateur souhaitant calculer le résultat). D’autre part, nous exploitons également l’algorithme de multiplication de matrices de Strassen-Winograd, dont la complexité asymptotique est O(n^log2(7)) soit environ O(n^2.81) ce qui est une amélioration par rapport à la multiplication matricielle standard. Une nouvelle version de cet algorithme adaptée au paradigme MapReduce est proposée. 
L’hypothèse de sécurité adoptée ici est limitée à la non-collusion entre le cloud et l’utilisateur final. La version sécurisée utilise comme pour la multiplication standard l’algorithme de chiffrement Paillier. La seconde partie de cette thèse porte sur la protection des données lorsque des opérations d’algèbre relationnelle sont déléguées à un serveur public de cloud qui implémente à nouveau le paradigme MapReduce. En particulier, nous présentons une solution d’intersection sécurisée qui permet à un utilisateur du cloud d’obtenir l’intersection de n > 1 relations appartenant à n propriétaires de données. Dans cette solution, tous les propriétaires de données partagent une clé et un propriétaire de données sélectionné partage une clé avec chacune des clés restantes. Par conséquent, alors que ce propriétaire de données spécifique stocke n clés, les autres propriétaires n’en stockent que deux. Le chiffrement du tuple de relation réelle consiste à combiner l’utilisation d’un chiffrement asymétrique avec une fonction pseudo-aléatoire. Une fois que les données sont stockées dans le cloud, chaque réducteur (Reducer) se voit attribuer une relation particulière. S’il existe n éléments différents, des opérations XOR sont effectuées. La solution proposée reste donc très efficace. Par la suite, nous décrivons les variantes des opérations de regroupement et d’agrégation préservant la confidentialité en termes de performance et de sécurité. Les solutions proposées associent l’utilisation de fonctions pseudo-aléatoires à celle du chiffrement homomorphe pour les opérations COUNT, SUM et AVG et à un chiffrement préservant l’ordre pour les opérations MIN et MAX. Enfin, nous proposons les versions sécurisées de deux protocoles de jointure (cascade et hypercube) adaptées au paradigme MapReduce. 
Les solutions consistent à utiliser des fonctions pseudo-aléatoires pour effectuer des contrôles d’égalité et ainsi permettre les opérations de jointure lorsque des composants communs sont détectés.(...)
In the age of social networks and connected objects, many and diverse data are produced at every moment. The analysis of these data has given rise to a new science called "Big Data". To handle this constant flow of data as well as possible, new computing methods have emerged. This thesis focuses on cryptography applied to the processing of large volumes of data, with the aim of protecting user data. In particular, we focus on securing algorithms that use the distributed computing paradigm MapReduce to perform a number of primitives (or algorithms) essential for data processing, ranging from the calculation of graph metrics (e.g. PageRank) to SQL queries (i.e. set intersection, aggregation, natural join). In the first part of this thesis, we discuss matrix multiplication. We first describe a standard and secure matrix multiplication for the MapReduce architecture, based on Paillier's additive encryption scheme to guarantee the confidentiality of the data. The proposed algorithms correspond to a specific security hypothesis: collusion or not of the MapReduce cluster nodes, the general security model being honest-but-curious. The aim is to protect the confidentiality of both matrices, as well as the final result, for all participants (matrix owners, computation nodes, the user wishing to obtain the result). We also exploit the Strassen-Winograd matrix multiplication algorithm, whose asymptotic complexity is O(n^log2(7)), i.e. about O(n^2.81), an improvement over standard matrix multiplication. A new version of this algorithm adapted to the MapReduce paradigm is proposed. The security assumption adopted here is limited to non-collusion between the cloud and the end user; the secure version again uses Paillier's encryption scheme. The second part of this thesis focuses on data protection when relational algebra operations are delegated to a public cloud server using the MapReduce paradigm.
In particular, we present a secure intersection solution that allows a cloud user to obtain the intersection of n > 1 relations belonging to n data owners. In this solution, all data owners share one key, and a selected data owner additionally shares a key with each of the remaining owners. Therefore, while this specific data owner stores n keys, the other owners store only two. The encryption of an actual relation tuple combines the use of asymmetric encryption with a pseudo-random function. Once the data is stored in the cloud, each reducer is assigned a specific relation; if there are n different elements, XOR operations are performed, so the proposed solution remains very efficient. Next, we describe variants of the grouping and aggregation operations that preserve confidentiality, in terms of both performance and security. The proposed solutions combine the use of pseudo-random functions with homomorphic encryption for the COUNT, SUM and AVG operations, and with order-preserving encryption for the MIN and MAX operations. Finally, we offer secure versions of two join protocols (cascade and hypercube) adapted to the MapReduce paradigm. These solutions use pseudo-random functions to perform equality checks and thus allow join operations when common components are detected. All the solutions described above are evaluated and their security is proven.
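The additive homomorphism of Paillier's scheme, on which the secure computations above rely, can be sketched as follows. This is a toy implementation with deliberately tiny, insecure primes chosen for illustration; real deployments use moduli of 2048 bits or more:

```python
# Toy Paillier cryptosystem: E(a) * E(b) mod n^2 decrypts to a + b mod n,
# which is what lets a MapReduce node aggregate encrypted values without
# ever seeing the plaintexts. Parameters here are insecure by design.
import math
import random

def keygen(p=47, q=59):
    """Tiny demo primes; real keys use primes of ~1024 bits each."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                    # standard simplification g = n + 1
    mu = pow(lam, -1, n)         # valid because L(g^lam mod n^2) = lam
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pk, sk = keygen()
a, b = encrypt(pk, 12), encrypt(pk, 30)
product = (a * b) % (pk[0] ** 2)
assert decrypt(pk, sk, product) == 42   # E(12) * E(30) decrypts to 12 + 30
```

The multiplicative combination of ciphertexts yielding an additive result on plaintexts is precisely the property that allows dot products (and hence matrix multiplication) to be partially delegated to untrusted nodes.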
47

Canillas, Rémi. "Privacy and Security in a B2B environment : Focus on Supplier Impersonation Fraud Detection using Data Analysis." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La fraude au fournisseur (Supplier Impersonation Fraud, SIF) est un type de fraude se produisant dans un contexte Business-to-Business (B2B), où des entreprises et des commerces interagissent entre eux, plutôt qu'avec le consommateur. Une fraude au fournisseur est effectuée lorsqu'une entreprise (fournisseur) proposant des biens ou des services à une autre entreprise (client) a son identité usurpée par un fraudeur. Dans cette thèse, nous proposons, d'utiliser les techniques et outils récents en matière d'apprentissage machine (Machine Learning) afin de résoudre à ces différents points, en élaborant des systèmes de détection de fraudes se basant sur l'analyse de données. Deux systèmes de détection de fraude basés sur l'analyse de données sont proposés: ProbaSIF et GraphSIF. Ces deux systèmes se composent d'abord d'une phase d'entraînement où les transactions historiques sont utilisées pour calculer un modèle de données, puis d'une phase de test où la légitimité de chaque transaction considérée est déterminée. ProbaSIF est un système de détection de fraudes au fournisseur qui se base sur un modèle bayésien (Dirichlet-Multinomial). ProbaSIF utilise la probabilité d'un compte en banque à être utilisé dans une transaction future d'une entreprise pour déterminer sa fiabilité. GraphSIF, le second système de détection de fraude au fournisseur que nous proposons, a pour but d'analyser les propriétés relationnelles créées par l'échange de transactions entre une entreprise et ses fournisseurs. À cette fin, une séquence de différents graphes compilant tous les liens créés entre l'entreprise, ses fournisseurs, et les comptes en banque utilisés pour payer ces fournisseurs, appelés séquence de comportement, est générée. 
Une transaction est catégorisée en l'ajoutant au graphe le plus récent de la séquence et en analysant les motifs formés, et en les comparant à ceux précédemment trouvés dans la séquence de comportement.Ces deux systèmes sont comparés avec un jeu de données réelles afin d’examiner leurs performances
Supplier Impersonation Fraud (SIF) is a kind of fraud occurring in a Business-to-Business (B2B) context, where a fraudster impersonates a supplier in order to trigger an illegitimate payment from a company. Most existing systems focus solely on a single, "intra-company" approach to detecting such fraud. However, companies are part of an ecosystem in which multiple agents interact, and such interactions have yet to be integrated into existing detection techniques. In this thesis we propose to use state-of-the-art Machine Learning techniques to build a detection system for such frauds, based on a model elaborated from historical transactions of both the targeted company and the relevant other companies in the ecosystem (contextual data). We detect anomalous transactions when a significant change in the payment behavior of a company is observed. Two ML-based systems are proposed in this work: ProbaSIF and GraphSIF. ProbaSIF uses a probabilistic approach (urn model) to assess the probability of occurrence of the bank account used in a transaction and thus assert its legitimacy. We use this approach to assess the differences yielded by integrating contextual data into the analysis. GraphSIF uses a graph-based approach to model the interactions between client and supplier companies as graphs, and then uses these graphs as training data for a Self-Organizing Map clustering model. The distance between a new transaction and the center of its cluster is used to detect changes in the behavior of a client company. These two systems are compared with a real-life fraud detection system in order to assess their performance.
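The urn-model idea behind ProbaSIF can be sketched with a Dirichlet-multinomial posterior predictive, under the assumption that the thesis's exact model may differ in its prior and thresholding. With a symmetric Dirichlet prior over a company's known bank accounts, the probability that the next payment uses account k is (count_k + alpha) / (total + alpha * K):

```python
# Posterior predictive probability of a bank account under a symmetric
# Dirichlet prior: rarely used (or unseen) accounts get low probability,
# which can be flagged as a potential supplier impersonation.

def predictive_prob(counts: dict, account: str, alpha: float = 1.0) -> float:
    """counts maps account id -> number of past payments to it."""
    K = len(counts)
    total = sum(counts.values())
    return (counts.get(account, 0) + alpha) / (total + alpha * K)

history = {"IBAN-A": 40, "IBAN-B": 9, "IBAN-C": 1}
print(predictive_prob(history, "IBAN-A"))  # frequently used, high probability
print(predictive_prob(history, "IBAN-C"))  # rarely used, low probability
```

A transaction whose account probability falls below a chosen threshold would then be escalated for review; the threshold and the handling of never-seen accounts are design decisions the thesis evaluates against real data.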
48

Kolonia, Alexandra, and Rebecka Forsberg. "Preserving Security and Privacy: a WiFi Analyzer Application based on Authentication and Tor." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Numerous mobile applications have the potential to collect and share user-specific information on top of the essential data handling, made possible by poor application design and improper implementation. The lack of security and privacy in an application is a major concern, since the spread of sensitive and personal information can cause both physical and emotional harm if it is shared with unauthorized people. This thesis investigates how to confidentially transfer user information in such a way that the user of a mobile application remains anonymous and untraceable. To achieve this, the user first authenticates to a third party, which provides the user with certificates or randomly generated tokens. The user can then use these as credentials when communicating with the server, which is done through the Tor network. Further, once the connection is established, the WiFi details are sent periodically to the server without the user initiating the action. The results show that it is possible to establish the connection both with random tokens and with certificates. The random tokens took less time to generate than the certificates; however, the certificates took less time to verify, which balances the overall performance of the system. Moreover, the results show that the implementation of Tor works, since the system is able to hide the real IP address and present a random IP address instead. However, communication is slower when Tor is used, which is the cost of achieving anonymity and improving the privacy of the user. In conclusion, this thesis shows that combining proper implementation and good application design improves the security of the application, thereby protecting the users' privacy.
Många mobilapplikationer har möjlighet att samla in och dela användarspecifik information, utöver den väsentliga datahanteringen. Det här problemet möjliggörs genom dålig applikationsdesign och felaktig implementering. Bristen på säkerhet och integritet i en applikation är därför kritisk, eftersom spridning av känslig och personlig information kan orsaka både fysisk och emotionell skada, om den delas med obehöriga personer. Denna avhandling undersöker hur man konfidentiellt kan överföra användarinformation på ett sätt som tillåter användaren av mobilapplikationen att förbli både anonym och icke spårbar. För att uppnå detta kommer användaren först att behöva autentisera sig till en tredje part, vilket förser användaren med slumpmässigt genererade tecken eller med ett certifikat. Användaren kan sedan använda dessa till att kommunicera med servern, vilket kommer att göras över ett Tor-nätverk. Slutligen när anslutningen upprättats, kommer WiFi-detaljerna att skickas över periodvis till servern, detta sker automatiskt utan att användaren initierar överföringen. Resultatet visar att det är möjligt att skapa en anslutning både med ett certifikat eller med slumpmässiga tecken. Att generera de slumpmässiga tecknen tog mindre tid jämfört med certifikaten, däremot tog certifikaten mindre tid att verifiera än tecknen. Detta resulterade i att de båda metoderna hade en jämn prestanda om man ser över hela systemet. Resultatet visar vidare att implementeringen av Tor fungerar då det är möjligt för systemet att dölja den verkliga IP-adressen och att istället tillhandahålla en slumpmässig IP-adress. Kommunikationen genom Tor gör dock systemet långsammare, vilket är kostnaden för att förbättra användarens integritet och uppnå anonymitet. Sammanfattningsvis visar denna avhandling att genom att kombinera korrekt implementering och bra applikationsdesign kan man förbättra säkerheten i applikationen och därmed skydda användarnas integritet.
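The random-token credential described in this abstract might be sketched as follows. The token length, the server-side hashing, and all function names are assumptions for illustration, not the thesis implementation:

```python
# Random-token authentication sketch: the third party issues a
# high-entropy token to the client and stores only its hash; the server
# later verifies a presented token in constant time.
import hashlib
import hmac
import secrets

def issue_token(nbytes: int = 32):
    """Generate a random token; the issuer stores only its SHA-256 hash."""
    token = secrets.token_urlsafe(nbytes)
    stored_hash = hashlib.sha256(token.encode()).hexdigest()
    return token, stored_hash

def verify_token(presented: str, stored_hash: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(digest, stored_hash)

token, stored = issue_token()
assert verify_token(token, stored)
assert not verify_token("forged-token", stored)
```

Generating such a token is a single random draw, which is consistent with the finding above that tokens are faster to generate than certificates, while certificate verification amortizes differently.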
49

Biondi, Alessandro. "Tutela della privacy in Android ed educazione alla mobile privacy." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25784/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis project concerns the development of a mobile application able to mitigate data collection by the apps installed on the device and to give the user the tools to understand the mechanisms that apps adopt to collect their data and the severity this entails. In addition, the management of mobile devices in a corporate context is explored. In recent years, the unpleasant consequences caused by the collection of data and metadata generated by internet and smartphone users have become apparent. After an overview of the vetting processes used to define a secure app and of the technologies adopted in corporate contexts to protect data, further criteria oriented towards privacy and the transparency of software houses are analysed. For the implementation of the Android application, language-model techniques are employed to evaluate the permissions of installed apps, which are compared with those stored in a database of previously analysed Android applications, assessed with the help of audit platforms. Finally, the use and cost-effectiveness of technologies for protecting corporate data in a mobile context (EMM) are evaluated.
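The permission comparison described in this abstract could be sketched as below. The package name, the permission strings' structure, and the in-memory database are invented stand-ins for the audited-app database the thesis relies on:

```python
# Compare an installed app's requested permissions against an audited
# baseline; any permission beyond the baseline is flagged for the user.

AUDIT_DB = {
    # hypothetical audited app -> permissions it legitimately needs
    "com.example.flashlight": {"android.permission.CAMERA"},
}

def extra_permissions(package: str, requested, db=AUDIT_DB) -> set:
    """Return permissions requested beyond the audited baseline."""
    expected = db.get(package, set())
    return set(requested) - expected

flagged = extra_permissions(
    "com.example.flashlight",
    {"android.permission.CAMERA", "android.permission.READ_CONTACTS"},
)
print(sorted(flagged))  # permissions exceeding the audited baseline
```

In the thesis, the baseline itself would be populated from audit platforms rather than hard-coded, and the severity of each flagged permission would be explained to the user.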
50

Alhussein, Nawras. "Privacy by Design & Internet of Things: managing privacy." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Personlig integritet motsvarar det engelska begreppet privacy, som kan uttryckas som rätten att få bli lämnad ifred. Det har ifrågasatts många gånger om personlig integritet verkligen finns på internet, speciellt i Internet of Things-system eller smarta system som de också kallas. Fler frågor ställs i samband med att den nya allmänna dataskyddsförordningen inom europeiska unionen börjar gälla i maj. I detta arbete studeras privacy by design-arbetssättet som den allmänna dataskyddsförordningen (GDPR) bland annat kommer med. I studien besvaras om privacy by design kommer kunna öka skyddet av den personliga integriteten i Internet of Things-system. För- och nackdelar tas upp och hur företag och vanliga användare påverkas. Genom en litteraturstudie och två intervjuer har frågan kunnat besvaras. Det visade sig att en stor del av problematiken inom Internet of Things avseende personlig integritet kan lösas genom att styra data. I privacy by design-arbetssättet ingår att skydda data i alla tillstånd genom olika metoder som kryptering. På det sättet bidrar privacy by design till ökad säkerhet inom Internet of Things-system.
Privacy means the right to be left alone. It has been questioned many times whether privacy really exists on the internet, especially in Internet of Things systems, or smart systems as they are also called. More questions arise as the new General Data Protection Regulation (GDPR) within the European Union takes effect in May. In this paper, the privacy-by-design approach that the General Data Protection Regulation introduces is studied. This study answers whether privacy by design will be able to increase the protection of privacy in Internet of Things systems. Advantages and disadvantages are also addressed, as well as how companies and ordinary users are affected by the implementation of privacy by design. The question has been answered through a literature review and two interviews. It turned out that a significant part of the privacy problems in the Internet of Things may be solved by data management. Privacy by design includes protecting data in all states through different methods, such as encryption. In this way, privacy by design contributes to increased security within Internet of Things systems.
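One concrete privacy-by-design measure of the kind discussed above, keyed pseudonymisation of stored device identifiers, can be sketched with the standard library. The key handling and field names are illustrative assumptions:

```python
# Keyed pseudonymisation: stored IoT readings remain linkable to each
# other (same pseudonym for the same device) but cannot be mapped back
# to the real identifier without the secret key.
import hashlib
import hmac

SECRET_KEY = b"demo-key-keep-out-of-source-control"  # illustrative only

def pseudonymize(device_id: str) -> str:
    mac = hmac.new(SECRET_KEY, device_id.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

reading = {"device_id": "thermo-42", "temp_c": 21.5}
stored = {"device": pseudonymize(reading["device_id"]), "temp_c": reading["temp_c"]}
assert stored["device"] != "thermo-42"                 # identifier is hidden
assert pseudonymize("thermo-42") == stored["device"]   # linkage is stable
```

This is one instance of the "protect data in all states" principle: data at rest carries no direct identifier, while analytics over a single device's readings still work.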

To the bibliography