
Dissertations on the topic "Private Data Analysis"


Consult the top 50 dissertations for your research on the topic "Private Data Analysis".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract online, if these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Puglisi, Silvia. "Analysis, modelling and protection of online private data." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/456205.

Full text
Abstract:
Online communications generate a substantial amount of data flowing between users, services and applications. This information results from the interactions among different parties, and once collected, it is used for a variety of purposes, from marketing profiling to product recommendations, from news filtering to relationship suggestions. Understanding how data is shared and used by services on behalf of users is the motivation behind this work. When a user creates a new account on a certain platform, this creates a logical container that will be used to store the user's activity. The service aims to profile the user. Therefore, every time some data is created, shared or accessed, information about the user's behaviour and interests is collected and analysed. Users produce this data but are unaware of how it will be handled by the service, and with whom it will be shared. More importantly, once aggregated, this data could reveal more over time than the same users initially intended. Information revealed by one profile could be used to obtain access to another account, or during social engineering attacks. The main focus of this dissertation is modelling and analysing how user data flows among different applications and how this represents an important threat to privacy. A framework defining privacy violation is used to classify threats and identify issues where user data is effectively mishandled. User data is modelled as categorised events, and aggregated as histograms of relative frequencies of online activity along predefined categories of interests. Furthermore, a paradigm based on hypermedia to model online footprints is introduced. This emphasises the interactions between different user-generated events and their effects on the user's measured privacy risk. Finally, the lessons learnt from applying the paradigm to different scenarios are discussed.
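The abstract models user data as categorised events aggregated into histograms of relative frequencies along predefined interest categories. A minimal sketch of that aggregation step (category names and the event stream are illustrative, not from the thesis):

```python
from collections import Counter

def activity_histogram(events, categories):
    """Aggregate categorised events into a histogram of relative frequencies.

    events: iterable of category labels, one per observed user action.
    categories: the predefined list of interest categories.
    """
    counts = Counter(e for e in events if e in categories)
    total = sum(counts.values())
    if total == 0:
        return {c: 0.0 for c in categories}
    return {c: counts[c] / total for c in categories}

# Hypothetical event stream for one user profile.
events = ["news", "shopping", "news", "social", "news", "shopping"]
hist = activity_histogram(events, ["news", "shopping", "social", "travel"])
# hist["news"] == 0.5; categories with no activity get 0.0
```

Profiles built this way can be compared across services, which is how aggregation over time can reveal more than any single event does.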
APA, Harvard, Vancouver, ISO and other styles
2

Alborch Escobar, Ferran. "Private Data Analysis over Encrypted Databases : Mixing Functional Encryption with Computational Differential Privacy." Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAT003.

Full text
Abstract:
In our current digitalized society, data rules the world. But as it is most of the time related to individuals, its exploitation should respect the privacy of the latter. This concern gave rise to the differential-privacy paradigm, which makes it possible to protect individuals when querying databases containing data about them. But with the emergence of cloud computing, it is becoming increasingly necessary to also consider the confidentiality of the on-cloud storage of such vast databases, using encryption techniques. This thesis studies how to provide both privacy and confidentiality for such outsourced databases by mixing two primitives: computational differential privacy and functional encryption. First, we study the relationship between computational differential privacy and functional encryption for randomized functions in a generic way. We analyze the privacy of the setting where a malicious analyst may access the encrypted data stored in a server, either by corrupting or breaching it, and prove that a secure randomized functional encryption scheme supporting the appropriate family of functions guarantees the computational differential privacy of the system. Second, we construct efficient randomized functional encryption schemes for certain useful families of functions, and we prove them secure in the standard model under well-known assumptions. The families of functions considered are linear functions, used for example in counting queries, histograms and linear regressions, and quadratic functions, used for example in quadratic regressions and hypothesis testing. The schemes built are then used together with the first result to construct encrypted databases for their corresponding family of queries. Finally, we implement both randomized functional encryption schemes to analyze their efficiency. This shows that our constructions are practical for databases with up to 1 000 000 entries in the case of linear queries and databases with up to 10 000 entries in the case of quadratic queries.
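The functional-encryption side of this construction is beyond a short sketch, but the differential-privacy half of a counting query can be illustrated with the standard Laplace mechanism (a textbook mechanism, not the thesis's construction; records and predicate below are hypothetical):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """epsilon-differentially-private counting query.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical database: how many of 100 records satisfy the predicate?
noisy = dp_count(range(100), lambda r: r < 30, epsilon=1.0)
# `noisy` is close to 30 but randomized on every call
```

In the computational variant studied by the thesis, the analyst additionally sees only ciphertexts, so the noise is what a decryption key for a randomized function would reveal, not something added in the clear.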
APA, Harvard, Vancouver, ISO and other styles
3

Amirbekyan, Artak. "Protocols and Data Structures for Knowledge Discovery on Distributed Private Databases." Thesis, Griffith University, 2007. http://hdl.handle.net/10072/367447.

Full text
Abstract:
Data mining has developed many techniques for automatic analysis of today's rapidly collected data. Yahoo collects 12 TB of query logs daily, and this is a quarter of what Google collects. For many important problems, the data is actually collected in distributed format by different institutions and organisations, and it can relate to businesses and individuals. The accuracy of knowledge that data mining brings for decision making depends on considering the collective datasets that describe a phenomenon. But privacy, confidentiality and trust emerge as major issues in the analysis of partitioned datasets among competitors, governments and other data holders that have conflicts of interest. Managing privacy is of the utmost importance in the emergent applications of data mining. For example, data mining has been identified as one of the most useful tools for the global collective fight on terror and crime [80]. Parties holding partitions of the database are very interested in the results, but may not trust the others with their data, or may be reluctant to release their data freely without some assurances regarding privacy. Data mining technology that reveals patterns in large databases could compromise the information that an individual or an organisation regards as private. The aim is to find the right balance between maximising analysis results (that are useful for each party) and keeping the inferences that disclose private information about organisations or individuals to a minimum. We address two core data analysis tasks, namely clustering and regression. For these to be solvable in the privacy context, we focus on the protocol's efficiency and practicality. Because associative queries are central to clustering (and to many other data mining tasks), we provide protocols for privacy-preserving k-nearest-neighbour (k-NN) queries.
Our methods improve on previous methods for k-NN queries in privacy-preserving data mining (which are based on Fagin's A0 algorithm) because we leak at least an order of magnitude fewer candidates and we achieve logarithmic performance on average. Our methods for k-NN queries rest on two pillars: first, data structures, and second, metrics. This thesis provides protocols for privacy-preserving computation of various common metrics and for construction of the necessary data structures. We present new algorithms for secure multiparty computation of some basic operations (such as a new solution to Yao's comparison problem and new protocols for linear algebra, in particular the scalar product). These algorithms are used to construct protocols for different metrics (we provide protocols for all Minkowski metrics, the cosine metric and the chessboard metric) and for performing associative queries in the privacy context. In order to be efficient, our protocols for associative queries are supported by specific data structures. Thus, we present the construction of privacy-preserving data structures such as R-Trees [42, 7], KD-Trees [8, 53, 33] and the SASH [8, 60]. We demonstrate the use of all these tools, and we provide a new version of the well-known clustering algorithm DBSCAN [42, 7]. This new version is now suitable for applications that demand privacy. Similarly, we apply our machinery to provide new multi-linear regression protocols that are now suitable for privacy applications. Our algorithms are more efficient than earlier methods and protocols. In particular, the cost associated with ensuring privacy adds only a linear-cost overhead for most of the protocols presented here. That is, our methods are essentially as costly as concentrating all the data in one site, performing the data-mining task, and disregarding privacy. However, in some cases we make use of a trusted third party. This is not a problem when more than two parties are involved, since there is always one party that can act as the third.
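The secure multiparty protocols themselves are beyond a short sketch, but the metrics the thesis names are standard. A plain, non-private illustration of the Minkowski and chessboard metrics and a k-NN query over them (the point set is made up):

```python
def minkowski(x, y, p):
    """L_p distance: p=1 is Manhattan, p=2 is Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def chessboard(x, y):
    """L_infinity (Chebyshev) distance: the limit of L_p as p grows."""
    return max(abs(a - b) for a, b in zip(x, y))

def knn(query, points, k, dist):
    """Return the k points nearest to `query` under metric `dist`."""
    return sorted(points, key=lambda pt: dist(query, pt))[:k]

pts = [(0, 0), (1, 1), (3, 4), (5, 0)]
nearest = knn((0, 0), pts, 2, lambda a, b: minkowski(a, b, 2))
# nearest == [(0, 0), (1, 1)]
```

The privacy-preserving versions compute the same distances, but via secure scalar-product and comparison subprotocols so that no party sees the other parties' coordinates.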
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO and other styles
4

Nguyen, Mai Phuong. "Contribution of private healthcare to universal health coverage: an investigation of private over public health service utilisation in Vietnam." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/225903/1/Mai%20Phuong_Nguyen_Thesis.pdf.

Full text
Abstract:
Achievement of Universal Health Coverage (UHC) is a desirable goal for all countries. Complementary public and private services are essential. This study examined factors that influence consumer choice for private and public health care services in Vietnam. Thirty senior healthcare professionals were interviewed and secondary data on over 35,000 episodes of healthcare gathered during national health surveys in households were analyzed. For Vietnam and similar low and middle-income countries to achieve UHC, it is necessary to overcome incomplete social health insurance coverage, variable quality of private and public health services, unregulated quality in advertising and inefficient competition between sectors.
APA, Harvard, Vancouver, ISO and other styles
5

Karim, Martia. "Determinants of Venture Capital Investments : A panel data analysis across regions in the United Kingdom." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-40179.

Full text
Abstract:
Venture capital is an important theme in economic research as a growing intermediary in the financing of new or growing young firms. In Europe, the United Kingdom is the leading country with the highest amount of venture capital activity. However, there is a wide spatial distribution of venture capital across the regions of the United Kingdom, where London and the South East alone attracted nearly 60% of venture capital in 2013. This paper focuses on a cross-regional study with the selected regions of the United Kingdom: Scotland, England, Wales, and Northern Ireland. The purpose is to investigate the relationship between economic growth, research & development expenditure, and population density and total venture capital investments during the time period 2006-2016. The aim is to contribute to the existing literature on determinants of venture capital with evidence from the United Kingdom. Using a fixed effect model, we establish a positive relationship between population density and total venture capital invested. Economic growth and gross expenditure on research & development did not yield any significant result.
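The fixed-effect estimator used in such panel studies can be obtained with the within transformation: demean each region's observations over time, then run OLS on the demeaned data. A minimal numpy sketch on synthetic data (the numbers are illustrative, not the paper's dataset):

```python
import numpy as np

def within_ols(y, X, groups):
    """Fixed-effects (within) estimator: demean y and X within each group,
    then solve ordinary least squares on the demeaned data."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    yd, Xd = y.copy(), X.copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= y[m].mean()
        Xd[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Synthetic panel: two regions, y = 2*x plus a region-specific effect.
groups = np.array([0, 0, 0, 1, 1, 1])
x = np.array([[1.0], [2.0], [3.0], [1.0], [2.0], [3.0]])
y = 2.0 * x[:, 0] + np.where(groups == 0, 5.0, -3.0)
beta = within_ols(y, x, groups)
# beta[0] recovers the slope 2.0; the region effects are differenced away
```

Demeaning removes any time-invariant regional characteristic, which is exactly why a fixed-effect model is the natural choice for cross-regional panels.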
APA, Harvard, Vancouver, ISO and other styles
6

Habibovic, Sanel. "VIRTUAL PRIVATE NETWORKS : An Analysis of the Performance in State-of-the-Art Virtual Private Network solutions in Unreliable Network Conditions." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17844.

Full text
Abstract:
This study aimed to identify the differences between state-of-the-art VPN solutions on different operating systems. It was done because a novel VPN protocol is in the early stages of release, and a comparison of it to other current VPN solutions is interesting: current VPN solutions are well established and have existed for a while, and the new protocol stirs the pot in the VPN field. A contemporary comparison between them could therefore aid system administrators when choosing which VPN to implement. Choosing the right VPN solution for the occasion can increase performance for users and save costs for organizations that wish to deploy VPNs. With the remote workforce increasing, issues of network reliability also increase, due to wireless connections and networks beyond the control of companies. This demands an answer to the question: how do VPN solutions differ in performance on stable and unstable networks? This work attempted to answer that question. The study concerns VPN performance in general, but mainly how the specific solutions perform under unreliable network conditions. It was achieved by reviewing past comparisons of VPN solutions to identify what metrics to analyze and which VPN solutions have been recommended. A test bed was then created in a lab network to control the network during testing, so that the different VPN implementations and operating systems had the same premise. To establish baseline results, performance testing was done on the network without VPNs; the VPNs were then tested under reliable network conditions, and then under unreliable network conditions. The results were compared and analyzed. The results show a difference in the performance of the different VPNs; there are also differences depending on the operating system used, and further differences between the VPNs once the unreliability aspects are switched on. The novel VPN protocol looks promising, as it has overall good results, but the comparison is not conclusive, since the current VPN solutions can be configured depending on which operating system and settings are chosen. With this set-up, VPNs on Linux performed much better under unreliable network conditions compared to setups using other operating systems. The outcome of this work is that the novel VPN protocol possibly performs better, and that certain combinations of VPN implementation and OS perform better than others when using the default configuration. This work also pointed out how to improve the testing and what aspects to consider when comparing VPN implementations.
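A simple way to summarise such measurements is the relative throughput drop of each VPN/operating-system combination against its no-VPN baseline on the same network. A sketch with hypothetical numbers (setup names and figures are invented; the thesis's actual measurements are not reproduced here):

```python
def throughput_drop(baseline_mbps, vpn_mbps):
    """Percentage of throughput lost relative to the no-VPN baseline."""
    return 100.0 * (baseline_mbps - vpn_mbps) / baseline_mbps

# Hypothetical runs: (setup, baseline, VPN on stable net, VPN on lossy net).
runs = [
    ("vpn_a/linux", 940.0, 880.0, 610.0),
    ("vpn_b/windows", 940.0, 520.0, 240.0),
]
for name, base, stable, lossy in runs:
    print(f"{name}: stable -{throughput_drop(base, stable):.1f}%, "
          f"lossy -{throughput_drop(base, lossy):.1f}%")
```

Normalising against a per-network baseline is what makes combinations measured on different days or hardware comparable at all.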
APA, Harvard, Vancouver, ISO and other styles
7

Ciccarelli, Armand. "An analysis of the impact of wireless technology on public vs. private traffic data collection, dissemination and use." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8817.

Full text
Abstract:
Thesis (M.C.P. and S.M.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2001.
Includes bibliographical references (leaves 151-154).
The collection of data concerning traffic conditions (e.g., incidents, travel times, average speed, traffic volumes, etc.) on roadways has traditionally been carried out by those public entities charged with managing traffic flow, responding to incidents, and maintaining the surface of the roadway. Pursuant to this task, public agencies have employed inductive loop detectors, closed circuit television cameras, technology for tracking electronic toll tags, and other surveillance devices, in an effort to monitor conditions on roads within their jurisdictions. The high cost of deploying and maintaining this surveillance equipment has precluded most agencies from collecting data on roads other than freeways and important arterials. In addition, the "point" nature of most commonly utilized surveillance equipment limits both the variety of data available for analysis, as well as its overall accuracy. Consequently, these problems have limited the usefulness of this traffic data, both to the public agencies collecting it, as well as private entities who would like to use it as a resource from which they can generate fee-based traveler information services. Recent Federal Communications Commission (FCC) mandates concerning E-911 have led to the development of new technologies for tracking wireless devices (i.e., cellular phones). Although developed to assist mobile phone companies in meeting the FCC's E-911 mandate, a great deal of interest has arisen concerning their application to the collection of traffic data. That said, the goal of this thesis has been to compare traditional traffic surveillance technologies' capabilities and effectiveness with that of the wireless tracking systems currently under development. 
Our technical research indicates that these newly developed tracking technologies will eventually be able to provide wider geographic surveillance of roads at less expense than traditional surveillance equipment, as well as collect traffic information that is currently unavailable. Even so, our overall conclusions suggest that due to budgetary, institutional, and/or political constraints, some organizations may find themselves unable to procure this high quality data. Moreover, we believe that even those organizations (both public and private) that find themselves in a position to procure data collected via wireless tracking technology should first consider the needs of their "customers," the strength of the local market for traffic data, and their organization's overall mission, prior to making a final decision.
by Armand J. Ciccarelli, III.
M.C.P. and S.M.
APA, Harvard, Vancouver, ISO and other styles
8

Aronsson, Arvid, and Daniel Falkenström. "The Effects of Capital Income Taxation on Consumption : Panel data analysis of the OECD countries." Thesis, Jönköping University, IHH, Nationalekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52920.

Full text
Abstract:
This thesis investigates whether the tax rate on dividend income has a significant effect on private consumption expenditure. This is done through a panel study on 36 OECD countries during the period 2000-2019. Regressions using differenced data and several control variables are used. The results are to some extent in line with previous empirical work studying the effects of tax changes on consumption. The results indicate that the taxation of capital income in the form of the overall tax rate on dividend income does not have a significant effect on private consumption expenditure. The theoretical mechanism deemed most likely to be in effect is tax planning, since contradictory results are obtained regarding the effects of other tax rates, in the form of taxes on labour income and VAT, on private consumption expenditure.
APA, Harvard, Vancouver, ISO and other styles
9

Shimada, Hideki. "Econometric Analysis of Social Interactions and Economic Incentives in Conservation Schemes." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263702.

Full text
APA, Harvard, Vancouver, ISO and other styles
10

Lanjouw, Jean Olson. "The private value of patent rights : a dynamic programming and game theoretic analysis of West German patent renewal data, 1953-1988." Thesis, London School of Economics and Political Science (University of London), 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527825.

Full text
Abstract:
Empirical estimates of the private value of patent protection are found for four technology areas - computers, textiles, combustion engines and pharmaceuticals - using new patent renewal data on West German patents collected for the period 1953-1988. In Germany, a patentee must pay an annual renewal fee to keep his patent in force. Two dynamic discrete choice models of optimal renewal decisions are developed and used, in conjunction with observed hazard proportions and renewal fee schedules, to estimate the returns to protection. Differences in value across technology, nationality of inventor and time are explored both non-parametrically and parametrically within a deterministic framework. A stochastic formulation of the model, which allows both for learning about the innovation and market and for the possibility of infringements, is estimated using a minimum distance simulation estimator. The evolution of the distribution of returns over the life of a group of patents is calculated for each technology. Results indicate that learning is completed after 6 years, that obsolescence is rapid, and that the distributions of patent value are very skewed. Research and development (R&D) expenditures for each technology area are calculated, and patent protection as an implicit subsidy to investment in R&D is discussed. Patent protection is valuable only when there are potential competitors for the use of an innovation. Patent rights must be defended. A game theoretic analysis of litigation explores how these facts influence the decision whether to apply for and keep a patent in force and, in turn, the relationship between the distribution of patent value and that of the underlying innovation. Implications for renewal behaviour are derived from the analysis, and the data suggest that the level of potential competition does affect the value of protection.
Consideration is given to how these findings bear on the interpretation of empirical estimates of patent value as indicators of innovation.
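The renewal decision described above is a dynamic discrete choice: each year the patentee keeps the patent only while its return plus the discounted continuation value covers the renewal fee. A stylised backward-induction sketch, with an illustrative fee schedule and geometric obsolescence (not the estimated West German values):

```python
def optimal_renewal(initial_return, decay, fees, discount=0.95):
    """Backward induction over the patent's statutory life.

    Returns decay geometrically (obsolescence); the patent is kept while
    the current return plus discounted continuation value covers the fee.
    Returns the last year in which renewal is optimal (1-indexed), or 0.
    """
    T = len(fees)
    returns = [initial_return * decay ** t for t in range(T)]
    value = [0.0] * (T + 1)  # value[t]: value of holding from year t onward
    keep = [False] * T
    for t in range(T - 1, -1, -1):
        v_keep = returns[t] - fees[t] + discount * value[t + 1]
        if v_keep > 0:
            value[t] = v_keep
            keep[t] = True
    last = 0
    for t in range(T):
        if not keep[t]:
            break
        last = t + 1
    return last

# Rising fees and decaying returns: renewal stops once fees outrun returns.
lapse_year = optimal_renewal(100.0, 0.7, [10, 20, 40, 80, 160])
# lapse_year == 3: the patent is allowed to lapse after year 3
```

Observed lapse years, pooled across many patents, are exactly the "hazard proportions" that the estimation inverts to recover the distribution of patent values.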
APA, Harvard, Vancouver, ISO and other styles
11

Kilcrease, Patrick N. "Employing a secure Virtual Private Network (VPN) infrastructure as a global command and control gateway to dynamically connect and disconnect diverse forces on a task-force-by-task-force basis." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FKilcrease.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Barreto, Albert. "September 2009." Description based on title screen as viewed on 6 November 2009. Author(s) subject terms: Virtual Private Network, GHOSTNet, maritime interdiction operations, internet protocol security, encapsulating security protocol, data encryption standard. Includes bibliographical references (p. 83-84). Also available in print.
APA, Harvard, Vancouver, ISO and other styles
12

Reis, Alda Maria dos Santos. "Modelos de governação e parcerias público-privadas (PPP):o caso dos Clusters em Portugal." Master's thesis, Instituto Superior de Ciências Sociais e Políticas, 2012. http://hdl.handle.net/10400.5/4940.

Full text
Abstract:
Master's dissertation in Management and Public Policy
The rise of the regulatory state and public management reforms based on New Public Management theories, which hold that the private sector is better positioned than the public administration in terms of competencies, flexibility and efficiency, have contributed to the growth of statutory regulation by independent agencies, which is being adopted by western governments as a preferred instrument in the implementation of economic public policies to remove market failures, improve market efficiency and enforce economic competition. The Agenda of Economic Competitiveness of the QREN 2007-2013 created an innovative public policy program in Portugal, named Collective Efficiency Strategies (EEC), aimed at financing initiatives that generate positive externalities, such as industrial clusters, through contracts with independent agencies established by statute as cross-sector public-private partnerships. The central aim of this research is to study the regulation of cluster public policy in Portugal and to evaluate the efficiency of the nineteen clusters approved within the EEC program, at the mid-term of the contract, using the Data Envelopment Analysis (DEA) quantitative method. This analysis is a contribution to a more careful reflection by government authorities on the future of this policy, and to enhancing the performance of some inefficient clusters and contractual agencies.
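Both this thesis and entry 13 evaluate efficiency with Data Envelopment Analysis (DEA). A minimal input-oriented CCR DEA sketch (the textbook formulation, not the exact model either thesis estimates), using scipy's linear-programming solver; the two-unit example data is invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.

    X: (n, m) inputs; Y: (n, s) outputs, one row per decision-making unit.
    For each unit o, solve: min theta subject to
      sum_j lam_j * X[j] <= theta * X[o],  sum_j lam_j * Y[j] >= Y[o],  lam >= 0.
    A score of 1.0 means the unit lies on the efficient frontier.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = 1.0  # minimize theta; variables are [theta, lam_1..lam_n]
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        # input rows: sum_j lam_j * X[j,i] - theta * X[o,i] <= 0
        A_ub[:m, 0] = -X[o]
        A_ub[:m, 1:] = X.T
        # output rows: -sum_j lam_j * Y[j,r] <= -Y[o,r]
        A_ub[m:, 1:] = -Y.T
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
        scores.append(res.fun)
    return scores

# Two hypothetical units producing 1 output unit from 2 and 4 input units.
scores = dea_ccr_input([[2.0], [4.0]], [[1.0], [1.0]])
# scores are approximately [1.0, 0.5]: the second unit uses twice the input
```

Real applications use several inputs and outputs per cluster or hospital; the linear program's shape is the same.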
13

Silva, Diogo Miguel Santos Graça Nunes da. "A eficiência das PPP no sector hospitalar em Portugal." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19941.

Abstract:
Mestrado em Ciências Empresariais
Nas últimas décadas, temos assistido à proliferação de novos instrumentos de gestão pública, nomeadamente das parcerias público-privadas (PPP). A saúde tem sido uma das principais áreas-alvo deste modelo de gestão, mais concretamente a nível hospitalar. Em Portugal, cabe ao parceiro privado, para além da gestão da infraestrutura, a gestão dos serviços clínicos e a prestação dos cuidados de saúde, tornando estas parcerias ainda mais complexas e multifacetadas. Apesar da utilização das PPP no sector da saúde, existe alguma controvérsia sobre se este modelo é realmente mais eficiente que o público. Neste sentido, o presente estudo tem como objectivo a comparação da eficiência entre a gestão dos hospitais em regime de PPP em Portugal - Braga, Vila Franca de Xira, Loures e Cascais e a dos hospitais públicos, no período entre 2013 e 2017. Para este efeito, foi selecionado um grupo homogéneo de hospitais comparáveis que incluiu 30 hospitais públicos e os 4 hospitais PPP. Para a avaliação da eficiência, foram utilizadas duas abordagens - Econométrica e Análise Envoltória de Dados (DEA). Em ambas as metodologias, testou-se o efeito do tipo de gestão na eficiência dos hospitais analisados. A eficiência hospitalar foi traduzida por rácios e scores no âmbito da metodologia econométrica e da análise DEA, respectivamente. Os resultados obtidos demonstraram que os hospitais PPP foram, em média, mais eficientes que os públicos no período analisado.
Over the last decades, we have witnessed the proliferation of new public management models, such as public-private partnerships (PPP). Health has been one of the main target areas of this management model, specifically at the hospital level. In Portugal, the private partner is also responsible for clinical service management and health care delivery, in addition to infrastructure management, making these partnerships even more complex and multifaceted. Despite the popularity of PPPs in healthcare, there is still some controversy over whether this model is more efficient than the public one. In this context, the present study aims to compare the efficiency of the management of the 4 PPP hospitals in Portugal - Braga, Vila Franca de Xira, Loures and Cascais - with that of public hospitals, between 2013 and 2017. For this purpose, a homogeneous group of comparable hospitals was selected, including 30 public hospitals and the 4 PPP hospitals. For efficiency evaluation, two approaches were used - Econometric and Data Envelopment Analysis (DEA). In both methodologies, the effect of management type on hospital efficiency was tested. Hospital efficiency was expressed as ratios and scores within the econometric methodology and the DEA analysis, respectively. The results showed that PPP hospitals were, on average, more efficient than public hospitals over the analyzed period.
14

CECCATO, RICCARDO. "Switching intentions towards car sharing - Analysis of the relationship with traditional transport modes." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2840371.

15

Wang, Ting. "Data analytics for networked and possibly private sources." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39598.

Abstract:
This thesis focuses on two grand challenges facing data analytical system designers and operators nowadays. First, how to fuse information from multiple autonomous, yet correlated sources and to provide consistent views of underlying phenomena? Second, how to respect externally imposed constraints (privacy concerns in particular) without compromising the efficacy of analysis? To address the first challenge, we apply a general correlation network model to capture the relationships among data sources, and propose Network-Aware Analysis (NAA), a library of novel inference models, to capture (i) how the correlation of the underlying sources is reflected as the spatial and/or temporal relevance of the collected data, and (ii) how to track causality in the data caused by the dependency of the data sources. We have also developed a set of space-time efficient algorithms to address (i) how to correlate relevant data and (ii) how to forecast future data. To address the second challenge, we further extend the concept of correlation network to encode the semantic (possibly virtual) dependencies and constraints among entities in question (e.g., medical records). We show through a set of concrete cases that correlation networks convey significant utility for intended applications, and meanwhile are often used as the steppingstone by adversaries to perform inference attacks. Using correlation networks as the pivot for analyzing privacy-utility trade-offs, we propose Privacy-Aware Analysis (PAA), a general design paradigm of constructing analytical solutions with theoretical backing for both privacy and utility.
16

Simmons, Sean Kenneth. "Preserving patient privacy in biomedical data analysis." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101821.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-154).
The growing number of large biomedical databases and electronic health records promise to be an invaluable resource for biomedical researchers. Recent work, however, has shown that sharing this data, even when aggregated to produce p-values, regression coefficients, count queries, and minor allele frequencies (MAFs), may compromise patient privacy. This raises a fundamental question: how do we protect patient privacy while still making the most of patient data? In this thesis, we develop various methods to perform privacy-preserving analysis on biomedical data, with an eye towards genomic data. We begin by introducing a model-based measure, PrivMAF, that allows us to decide when it is safe to release MAFs. We modify this measure to deal with perturbed data, and show that we are able to achieve privacy guarantees while adding less noise (and thus preserving more useful information) than previous methods. We also consider using differentially private methods to preserve patient privacy. Motivated by cohort selection in medical studies, we develop an improved method for releasing differentially private medical count queries. We then turn our eyes towards differentially private genome-wide association studies (GWAS). We improve the runtime and utility of various privacy-preserving methods for genome analysis, bringing these methods much closer to real-world applicability. Building off this result, we develop differentially private versions of more powerful statistics based on linear mixed models.
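The count-query setting described here is the textbook case for the Laplace mechanism: a count has L1 sensitivity 1, so adding Laplace noise of scale 1/ε yields ε-differential privacy. A minimal illustrative sketch of that standard mechanism (not the thesis's improved method; the cohort data is invented):

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=random):
    """Release a count query under epsilon-differential privacy.

    A count has L1 sensitivity 1, so Laplace noise with scale
    1/epsilon suffices (the standard Laplace mechanism).
    """
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variate.
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)
cohort = [{"age": a} for a in (34, 41, 58, 62, 70)]
noisy = dp_count(cohort, lambda r: r["age"] >= 50, epsilon=0.5)
```

Smaller ε means stronger privacy but noisier answers; composing many such queries consumes a privacy budget, which is what motivates the improved release methods studied in the thesis.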
by Sean Kenneth Simmons.
Ph. D.
17

Smith, Tanshanika Turner. "Examining Data Privacy Breaches in Healthcare." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2623.

Abstract:
Healthcare data can contain sensitive, personal, and confidential information that should remain secure. Despite efforts to protect patient data, security breaches occur and may result in fraud, identity theft, and other damages. Grounded in the theoretical backdrop of integrated system theory, the purpose of this study was to determine the association between data privacy breaches, data storage locations, business associates, covered entities, and the number of individuals affected. Study data consisted of secondary breach information retrieved from the Department of Health and Human Services Office for Civil Rights. Loglinear analytical procedures were used to examine U.S. healthcare breach incidents and to derive a 4-way loglinear model. The loglinear procedures included in the model yielded a significance value of 0.000, p > .05, for both the likelihood ratio and Pearson chi-square statistics, indicating that an association among the variables existed. Results showed that over 70% of breaches involve healthcare providers and revealed that security incidents often consist of electronic or other digital information. Findings revealed that threats are evolving and showed that factors other than data loss and theft likely contribute to security events, unwanted exposure, and breach incidents. Research results may impact social change by providing security professionals with a broader understanding of data breaches, required to design and implement more secure and effective information security prevention programs. Healthcare leaders might affect social change by utilizing these findings to further the security dialogue needed to minimize security risk factors, protect sensitive healthcare data, and reduce breach mitigation and incident response costs.
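The association tests underlying a loglinear model can be illustrated with a Pearson chi-square statistic on a contingency table. A self-contained sketch with hypothetical breach counts (not the study's actual data), computing observed-versus-expected deviations under independence:

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table
    (list of rows), testing independence of rows and columns."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Rows: entity type (provider, business associate);
# columns: breach type (theft, unauthorized access). Hypothetical counts.
stat = chi_square([[70, 30], [20, 30]])
```

A large statistic relative to the chi-square distribution's critical value (here, 1 degree of freedom for a 2x2 table) indicates an association between the two variables; a 4-way loglinear model generalizes this idea to four cross-classified variables.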
18

DeYoung, Mark E. "Privacy Preserving Network Security Data Analytics." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82909.

Abstract:
The problem of revealing accurate statistics about a population while maintaining privacy of individuals is extensively studied in several related disciplines. Statisticians, information security experts, and computational theory researchers, to name a few, have produced extensive bodies of work regarding privacy preservation. Still the need to improve our ability to control the dissemination of potentially private information is driven home by an incessant rhythm of data breaches, data leaks, and privacy exposure. History has shown that both public and private sector organizations are not immune to loss of control over data due to lax handling, incidental leakage, or adversarial breaches. Prudent organizations should consider the sensitive nature of network security data and network operations performance data recorded as logged events. These logged events often contain data elements that are directly correlated with sensitive information about people and their activities -- often at the same level of detail as sensor data. Privacy preserving data publication has the potential to support reproducibility and exploration of new analytic techniques for network security. Providing sanitized data sets de-couples privacy protection efforts from analytic research. De-coupling privacy protections from analytical capabilities enables specialists to tease out the information and knowledge hidden in high dimensional data, while, at the same time, providing some degree of assurance that people's private information is not exposed unnecessarily. In this research we propose methods that support a risk based approach to privacy preserving data publication for network security data. Our main research objective is the design and implementation of technical methods to support the appropriate release of network security data so it can be utilized to develop new analytic methods in an ethical manner. 
Our intent is to produce a database which holds network security data representative of a contextualized network and people's interaction with the network mid-points and end-points without the problems of identifiability.
Ph. D.
19

Cui, Yingjie, and 崔英杰. "A study on privacy-preserving clustering." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B4357225X.

20

Huang, Zhengli. "Privacy and utility analysis of the randomization approach in Privacy-Preserving Data Publishing." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

21

Sobati, Moghadam Somayeh. "Contributions to Data Privacy in Cloud Data Warehouses." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2020.

Abstract:
Actuellement, les scénarios d’externalisation de données deviennent de plus en plus courants avec l’avènement de l’infonuagique. L’infonuagique attire les entreprises et les organisations en raison d’une grande variété d’avantages fonctionnels et économiques.De plus, l’infonuagique offre une haute disponibilité, le passage d’échelle et une reprise après panne efficace. L’un des services plus notables est la base de données en tant que service (Database-as-a-Service), où les particuliers et les organisations externalisent les données, le stockage et la gestion `a un fournisseur de services. Ces services permettent de stocker un entrepôt de données chez un fournisseur distant et d’exécuter des analysesen ligne (OLAP).Bien que l’infonuagique offre de nombreux avantages, elle induit aussi des problèmes de s´sécurité et de confidentialité. La solution usuelle pour garantir la confidentialité des données consiste à chiffrer les données localement avant de les envoyer à un serveur externe. Les systèmes de gestion de base de données sécurisés utilisent diverses méthodes de cryptage, mais ils induisent un surcoût considérable de calcul et de stockage ou révèlent des informations sur les données.Dans cette thèse, nous proposons une nouvelle méthode de chiffrement (S4) inspirée du partage secret de Shamir. S4 est un système homomorphique additif : des additions peuvent être directement calculées sur les données cryptées. S4 trait les points faibles des systèmes existants en réduisant les coûts tout en maintenant un niveau raisonnable de confidentialité. S4 est efficace en termes de stockage et de calcul, ce qui est adéquat pour les scénarios d’externalisation de données qui considèrent que l’utilisateur dispose de ressources de calcul et de stockage limitées. 
Nos résultats expérimentaux confirment l’efficacité de S4 en termes de surcoût de calcul et de stockage par rapport aux solutions existantes.Nous proposons également de nouveaux schémas d’indexation qui préservent l’ordre des données, OPI et waOPI. Nous nous concentrons sur le problème de l’exécution des requêtes exacts et d’intervalle sur des données chiffrées. Contrairement aux solutions existantes, nos systèmes empêchent toute analyse statistique par un adversaire. Tout en assurant la confidentialité des données, les schémas proposés présentent de bonnes performances et entraînent un changement minimal dans les logiciels existants
Nowadays, data outsourcing scenarios are ever more common with the advent of cloud computing. Cloud computing appeals to businesses and organizations because of a wide variety of benefits, such as cost savings and service benefits. Moreover, cloud computing provides higher availability, scalability, and more effective disaster recovery than in-house operations. One of the most notable cloud outsourcing services is database outsourcing (Database-as-a-Service), where individuals and organizations outsource data storage and management to a Cloud Service Provider (CSP). Naturally, such services allow storing a data warehouse (DW) on a remote, untrusted CSP and running on-line analytical processing (OLAP). Although cloud data outsourcing brings many benefits, it also raises security, and in particular privacy, concerns. A typical solution for preserving data privacy is encrypting data locally before sending it to an external server. Secure database management systems use various encryption schemes, but they either induce computational and storage overhead or reveal some information about the data, which jeopardizes privacy. In this thesis, we propose a new secure secret splitting scheme (S4) inspired by Shamir's secret sharing. S4 implements an additively homomorphic scheme, i.e., additions can be computed directly over encrypted data. S4 addresses the shortcomings of existing approaches by reducing storage and computational overhead while still enforcing a reasonable level of privacy. S4 is efficient both in terms of storage and computing, which is ideal for data outsourcing scenarios where the user has limited computation and storage resources. Experimental results confirm the efficiency of S4 in terms of computation and storage overhead with respect to existing solutions. Moreover, we also present new order-preserving schemes, order-preserving indexing (OPI) and wrap-around order-preserving indexing (waOPI), which are practical on cloud-outsourced DWs.
We focus on the problem of performing range and exact-match queries over encrypted data. In contrast to existing solutions, our schemes prevent an adversary from performing statistical and frequency analysis. While providing data privacy, the proposed schemes offer good performance and require minimal change to existing software.
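The additively homomorphic behavior that S4 relies on can be sketched with plain additive secret sharing modulo a public prime: the shares sum to the secret, and adding two sharings share-wise yields a sharing of the sum, so untrusted servers can aggregate without ever seeing the cleartext. A toy illustration, not the thesis's actual S4 scheme:

```python
import random

P = 2**31 - 1  # public prime modulus

def split(secret, n, rng=random):
    """Split `secret` into n additive shares modulo P."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)  # last share fixes the sum
    return shares

def reconstruct(shares):
    return sum(shares) % P

a_shares = split(1200, 3)
b_shares = split(34, 3)
# Additive homomorphism: each server adds its two shares locally,
# never learning a, b, or a + b on its own.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
```

Any n-1 shares are uniformly random and reveal nothing about the secret; this is what allows SUM-style OLAP aggregates to be computed on the outsourced warehouse.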
22

Sallaku, Redlon <1994&gt. "Privacy and Protecting Privacy: Using Static Analysis for legal compliance. General Data Protection Regulation." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/14682.

Abstract:
The main purpose of the thesis is to study privacy and how to protect it, including the new regulatory framework proposed by the EU, the GDPR; to investigate how static analysis could help GDPR enforcement; and to develop a new static analysis prototype to fulfill this task in practice. The GDPR (General Data Protection Regulation) is a recent European regulation to harmonize and enforce data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organizations deal with sensitive data. This regulation has been enforced since May 2018. While it is already clear that there is no unique solution covering the whole spectrum of the GDPR, it is still unclear how static analysis might help enterprises fulfill the constraints imposed by this regulation.
23

Andersson-Sunna, Josefin. "Large Scale Privacy-Centric Data Collection, Processing, and Presentation." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84930.

Abstract:
It has become an important part of business development to collect statistical data from online sources. Information about users and how they interact with an online source can help improve the user experience and increase sales of products. Collecting data about users has many benefits for the business owner, but it also raises privacy issues, since more and more information about users is spread over the internet. Tools that collect statistical data from online sources exist, but using such tools means giving away control over the collected data. If a business implements its own analytics system, it is easier to make it privacy-centric, and control over the collected data is kept. This thesis examines which techniques are most suitable for a system whose purpose is to collect, store, process, and present large-scale privacy-centric data. Research was conducted into which technique to use for collecting data and how to keep track of unique users in a privacy-centric way, as well as into which database can handle many write requests and store large-scale data. A prototype was implemented based on this research, in which JavaScript tagging is used to collect data from several online sources and cookies are used to keep track of unique users. Cassandra was chosen as the database for the prototype because of its high scalability and speed on write requests. Two versions of the processing of raw data into statistical reports were implemented, to evaluate whether the data should be preprocessed or whether the reports could be created when the user asks for them. To evaluate the techniques used in the prototype, load tests were run; the results showed that a bottleneck was reached after 45 seconds at a workload of 600 write requests per second. The tests also showed that the prototype managed to keep its performance at a workload of 500 write requests per second for one hour, completing 1 799 953 requests.
Latency tests of processing raw data into statistical reports were also run, to evaluate whether the data should be preprocessed or processed when the user asks for the report. The results showed that it took around 30 seconds to process 1 200 000 rows of data from the database, which is too long for a user to wait for a report. Investigating which part of the processing increased the latency most showed that it was the retrieval of data from the database: around 25 seconds to retrieve the data and only around 5 seconds to process it into statistical reports. The tests showed that Cassandra is slow when retrieving many rows of data but fast when writing data, which is more important in this prototype.
Det har blivit en viktig del av affärsutvecklingen hos företag att samla in statistiska data från deras online-källor. Information om användare och hur de interagerar med en online-källa kan hjälpa till att förbättra användarupplevelsen och öka försäljningen av produkter. Att samla in data om användare har många fördelar för företagsägaren, men det väcker också integritetsfrågor eftersom mer och mer information om användare sprids över internet. Det finns redan verktyg som kan samla in statistiska data från online-källor, men när sådana verktyg används förloras kontrollen över den insamlade informationen. Om ett företag implementerar sitt eget analyssystem är det lättare att göra det mer integritetscentrerat och kontrollen över den insamlade informationen behålls. Detta arbete undersöker vilka tekniker som är mest lämpliga för ett system vars syfte är att samla in, lagra, bearbeta och presentera storskalig integritetscentrerad information. Teorier har undersökts om vilken teknik som ska användas för att samla in data och hur man kan hålla koll på unika användare på ett integritetscentrerat sätt, samt om vilken databas som ska användas som kan hantera många skrivförfrågningar och lagra storskaligdata. En prototyp implementerades baserat på teorierna, där JavaScript-taggning används som metod för att samla in data från flera online källor och cookies används för att hålla reda på unika användare. Cassandra valdes som databas för prototypen på grund av dess höga skalbarhet och snabbhet vid skrivförfrågningar. Två versioner av bearbetning av rådata till statistiska rapporter implementerades för att kunna utvärdera om data skulle bearbetas i förhand eller om rapporterna kunde skapas när användaren ber om den. För att utvärdera teknikerna som användes i prototypen gjordes belastningstester av prototypen där resultaten visade att en flaskhals nåddes efter 45 sekunder på en arbetsbelastning på 600 skrivförfrågningar per sekund. 
Testerna visade också att prototypen lyckades hålla prestandan med en arbetsbelastning på 500 skrivförfrågningar per sekund i en timme, där den slutförde 1 799 953 förfrågningar. Latenstest vid bearbetning av rådata till statistiska rapporter gjordes också för att utvärdera om data ska förbehandlas eller bearbetas när användaren ber om rapporten. Resultatet visade att det tog cirka 30 sekunder att bearbeta 1 200 000 rader med data från databasen vilket är för lång tid för en användare att vänta på rapporten. Vid undersökningar om vilken del av bearbetningen som ökade latensen mest visade det att det var hämtningen av data från databasen som ökade latensen. Det tog cirka 25 sekunder att hämta data och endast cirka 5 sekunder att bearbeta dem till statistiska rapporter. Testerna visade att Cassandra är långsam när man hämtar ut många rader med data, men är snabb på att skriva data vilket är viktigare i denna prototyp.
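The report-generation step the prototype benchmarks, turning raw collected events into per-page view counts and unique-visitor counts keyed by cookie ID, can be sketched as a simple aggregation over event rows. The field names here are illustrative, not the prototype's actual schema:

```python
from collections import defaultdict

def build_report(events):
    """Aggregate raw tracking events into page views and unique visitors.

    events: iterable of (cookie_id, page) tuples, as produced by a
    JavaScript tag and stored one row per event.
    """
    views = defaultdict(int)
    visitors = defaultdict(set)
    for cookie_id, page in events:
        views[page] += 1            # every event is one page view
        visitors[page].add(cookie_id)  # cookie IDs deduplicate users
    return {page: {"views": views[page], "unique_users": len(visitors[page])}
            for page in views}

report = build_report([("c1", "/home"), ("c2", "/home"), ("c1", "/home"),
                       ("c1", "/shop")])
```

Precomputing such reports trades storage for latency, which matches the thesis's finding that row retrieval, not the aggregation itself, dominates on-demand report time.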
24

Katsikouli, Panagiota. "Distributed and privacy preserving algorithms for mobility information processing." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31110.

Abstract:
Smart-phones, wearables and mobile devices in general are the sensors of our modern world. Their sensing capabilities offer the means to analyze and interpret our behaviour and surroundings. When it comes to human behaviour, perhaps the most informative feature is our location and mobility habits. Insights from human mobility are useful in a number of everyday practical applications, such as the improvement of transportation and road network infrastructure, ride-sharing services, activity recognition, mobile data pre-fetching, analysis of the social behaviour of humans, etc. In this dissertation, we develop algorithms for processing mobility data. The analysis of mobility data is a non trivial task as it involves managing large quantities of location information, usually spread out spatially and temporally across many tracking sensors. An additional challenge in processing mobility information is to publish the data and the results of its analysis without jeopardizing the privacy of the involved individuals or the quality of the data. We look into a series of problems on processing mobility data from individuals and from a population. Our mission is to design algorithms with provable properties that allow for the fast and reliable extraction of insights. We present efficient solutions - in terms of storage and computation requirements - , with a focus on distributed computation, online processing and privacy preservation.
25

Cho, Hyunghoon. "Biomedical data sharing and analysis at scale : privacy, compaction, and integration." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122727.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis. Page 307 blank.
Includes bibliographical references (pages 279-306).
Recent advances in high-throughput experimental technologies have led to the exponential growth of biomedical datasets, including personal genomes, single-cell sequencing experiments, and molecular interaction networks. The unprecedented scale, variety, and distributed ownership of emerging biomedical datasets present key computational challenges for sharing and analyzing these data to uncover new scientific insights. This thesis introduces a range of computational methods that overcome these challenges to enable scalable sharing and analysis of massive datasets in a range of biomedical domains. First, we introduce scalable privacy-preserving analysis pipelines built upon modern cryptographic tools to enable large amounts of sensitive biomedical data to be securely pooled from multiple entities for collaborative science. Second, we introduce efficient computational techniques for analyzing emerging large-scale sequencing datasets of millions of cells that leverage a compact summary of the data to speedup various analysis tasks while maintaining the accuracy of results. Third, we introduce integrative approaches to analyzing a growing variety of molecular interaction networks from heterogeneous data sources to facilitate functional characterization of poorly-understood genes. The computational techniques we introduce for scaling essential biomedical analysis tasks to the large volume of data being generated are broadly applicable to other data science domains.
by Hyunghoon Cho.
Ph. D.
26

Miracle, Jacob M. "De-Anonymization Attack Anatomy and Analysis of Ohio Nursing Workforce Data Anonymization." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1482825210051101.

27

FERRO, VALERIA. "HR Analytics: evoluzione e composizione del conflitto tra datore di lavoro e lavoratore. Un caso aziendale." Doctoral thesis, Università degli studi di Bergamo, 2021. http://hdl.handle.net/10446/181278.

Abstract:
The world is discovering the effects of the extraordinary growth in data production and in the technological tools for analyzing it. Big data analytics has already brought upheavals to markets, production techniques and society. The question now is what effects big data analytics may have when applied to work contexts and to the personal data of the workforce. Part of the literature strongly encourages the adoption of data analytics by the Human Resources function, stressing that the discipline can identify fundamental elements of work performance, useful for taking strategic business-management decisions. Legal scholars, however, draw attention to the need for these tools to be adopted in a way that respects workers' rights and does not worsen their working conditions by exacerbating the employer's dominant position. This research set out to deepen the debate on the subject and to engage a comparison between the literature and real cases of the use of workers' data in a corporate context. To this end, research was conducted inside an Italian industrial company using participant-observation techniques. The research sought to understand how, and whether, it is possible to build analytics tools that reconcile corporate interests and expectations with the protection of workers.
In light of the current state of the legislation, the research concluded that the different interests involved in HR Analytics can be reconciled through a complete compliance activity carried out within the company, which makes it possible to intervene preventively on the identified risk profiles, including through documentation certifying the technical and legal validity of the analysis. It was also observed that a reskilling process for HR professionals is necessary to enable them to manage the systems under study fully and reliably.
28

Parameswaran, Rupa. "A Robust Data Obfuscation Technique for Privacy Preserving Collaborative Filtering." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11459.

Abstract:
Privacy is defined as freedom from unauthorized intrusion. The availability of personal information through online databases, such as government records, medical records, and voters' lists, poses a threat to personal privacy. The concern over individual privacy has led to the development of legal codes for safeguarding privacy in several countries. However, the ignorance of individuals, as well as loopholes in the systems, has led to information breaches even in the presence of such rules and regulations. Protecting data privacy requires modification of the data itself. The term "data obfuscation" is used to refer to the class of algorithms that modify the values of data items without distorting the usefulness of the data. The main goal of this thesis is the development of a data obfuscation technique that provides robust privacy protection with minimal loss in usability of the data. Although medical and financial services are two of the major areas where information privacy is a concern, privacy breaches are not restricted to these domains. One area where concern over data privacy is of growing interest is collaborative filtering. Collaborative filtering systems are widely used in E-commerce applications to provide recommendations to users regarding products that might be of interest to them. The prediction accuracy of these systems depends on the size and accuracy of the data provided by users. However, the lack of sufficient guidelines governing the use and distribution of user data raises concerns over individual privacy. Users often provide only the minimal information required to access these E-commerce services. The lack of rules governing the use and distribution of data disallows sharing of data among different communities for collaborative filtering.
The goals of this thesis are (a) the definition of a standard for classifying DO techniques, (b) the development of a robust cluster preserving data obfuscation algorithm, and (c) the design and implementation of a privacy-preserving shared collaborative filtering framework using the data obfuscation algorithm.
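A classic way to obfuscate values while preserving cluster structure is a distance-preserving transformation such as a secret rotation. The sketch below is a generic two-dimensional illustration of that idea, not the algorithm developed in the thesis; the records and the rotation angle are invented for the example.

```python
import math

def rotate_record(x, y, angle):
    """Rotate a 2-D record about the origin: a toy example of
    distance-preserving data obfuscation (generic sketch, not the
    thesis's actual algorithm)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

records = [(1.0, 2.0), (1.1, 2.1), (8.0, 9.0)]
theta = 1.234  # secret rotation angle known only to the data owner
obfuscated = [rotate_record(x, y, theta) for x, y in records]
# Pairwise distances, and hence cluster structure, are preserved,
# while the original attribute values are hidden.
assert abs(dist(records[0], records[1]) - dist(obfuscated[0], obfuscated[1])) < 1e-9
```

Because clustering algorithms such as k-means depend only on pairwise distances, they produce the same groupings on the rotated data as on the original.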
APA, Harvard, Vancouver, ISO, and other styles
29

Sahlstedt, Andreas. "A competition policy for the digital age : An analysis of the challenges posed by data-driven business models to EU competition law." Thesis, Uppsala universitet, Juridiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-389650.

Full text of the source
Abstract:
The increasing volume and value of data in online markets, along with tendencies toward market concentration, make data an interesting research topic in the field of competition law. The purpose of this thesis is to evaluate how EU competition law could adapt to the challenges brought on by big data, particularly in relation to Art. 102 TFEU and the EUMR. Furthermore, this thesis analyses the intersection between privacy regulations and competition law. The characteristics of online markets and data are presented in order to accurately describe the specific challenges that arise in online markets. By analysing previous case law of the ECJ as well as the Bundeskartellamt's Facebook investigation, this thesis concludes that privacy concerns could potentially be addressed within an EU competition law procedure. Such an approach might be particularly warranted in markets where privacy is a key parameter of competition. However, a departure from the traditionally price-centric enforcement of competition law is required in order to adequately address privacy concerns. The research presented in this thesis demonstrates the decreasing importance of market shares in the assessment of a dominant position in online markets, due to the dynamic character of such markets. An increased focus on entry barriers appears to be necessary, and data can constitute an important barrier. Additionally, consumer behaviour constitutes a source of market power in online markets, which warrants a shift towards behavioural economic analysis. The turnover thresholds of the EUMR do not appear to adequately capture data-driven mergers, as illustrated by the Facebook/WhatsApp merger; therefore, thresholds based on other parameters are necessary. The value of data also increases the potential anticompetitive effects of vertical and conglomerate mergers, warranting an increased focus on such mergers.
APA, Harvard, Vancouver, ISO, and other styles
30

Floderus, Sebastian, and Vincent Tewolde. "Analysing privacy concerns in smartcameras : in correlation with GDPR and Privacy by Design." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21980.

Full text of the source
Abstract:
Background. The right to privacy is every person's right; data regulation laws such as the GDPR and privacy-preserving concepts like Privacy by Design (PbD) aid in this matter. IoT devices are highly vulnerable to attacks because of their limited storage and processing capabilities, even more so internet-connected cameras. With the use of security auditing techniques and privacy analysis methods, it is possible to identify security and privacy issues in Internet of Things (IoT) devices. Objectives. The research aims to evaluate three selected IoT cameras' ability to protect the privacy of their consumers, as well as to investigate the role GDPR and PbD play in the design and operation of each device. Methods. A literature review was performed in order to gain the knowledge needed to design a case study that would evaluate privacy issues of IoT devices in correlation with GDPR and PbD. The case study consists of 14 cases designed to explore security- and privacy-related issues. They were executed in a monitored and controlled network environment to detect data flows between devices. Results. There was a noticeable difference in the security- and privacy-enhancing technologies used by some manufacturers. Furthermore, there was a distinct disparity in how transparent each system was about the processed data, which is a crucial part of both GDPR and PbD. Conclusions. All three companies had taken GDPR and PbD into consideration in the design of their IoT systems, although to different extents. One of the IoT manufacturers could benefit from incorporating PbD more thoroughly into the design and operation of their product. The GDPR could also benefit from referencing security standards and frameworks in order to simplify the process for companies to secure their systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Möller, Carolin. "The evolution of data protection and privacy in the public security context : an institutional analysis of three EU data retention and access regimes." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/25911.

Full text of the source
Abstract:
For nearly two decades, threats to public security through events such as 9/11, the Madrid (2004) and London (2005) bombings and, more recently, the Paris attacks (2015) have resulted in the adoption of a plethora of national and EU measures aimed at fighting terrorism and serious crime. In addition, the Snowden revelations brought the privacy and data protection implications of these public security measures into the spotlight. In this highly contentious context, three EU data retention and access measures have been introduced for the purpose of fighting serious crime and terrorism: the Data Retention Directive (DRD), the EU-US PNR Agreement and the EU-US SWIFT Agreement. All three regimes have gone through several revisions (SWIFT, PNR) or been annulled (DRD), exemplifying the difficulty of determining how privacy and data protection ought to be protected in the context of public security. This research is driven by the need to understand the underlying causes of these difficulties by examining the problem from different angles. The thesis applies the theory of 'New Institutionalism' (NI), which allows both a political and a legal analysis of privacy and data protection in the public security context. According to NI, 'institutions' are defined as the operational framework in which actors interact, steering the latter's behaviour in the policy-making cycle. By focusing on the three data retention and access regimes, the aim of this thesis is to examine how the EU 'institutional framework' shapes data protection and privacy in regard to data retention and access measures in the public security context.
In answering this research question, the thesis puts forward three main hypotheses: (i) privacy and data protection in the Area of Freedom, Security and Justice (AFSJ) form an institutional framework in transition, in which historic and new features determine how Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (CFREU) are shaped; (ii) policy outcomes on Articles 7 and 8 CFREU are influenced by actors' strategic preferences pursued in the legislation-making process; and (iii) privacy and data protection are framed by the evolution of the Court of Justice of the European Union (CJEU) from a 'legal basis arbiter' to a political actor in its own right as a result of the constitutional changes brought by the Lisbon Treaty.
APA, Harvard, Vancouver, ISO, and other styles
32

Berthold, Stefan. "Linkability of communication contents : Keeping track of disclosed data using Formal Concept Analysis." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-369.

Full text of the source
Abstract:

A person about whom data is communicated (the data subject) has to keep track of all of his revealed data in order to protect his right to informational self-determination. This is important when data is going to be processed in an automatic manner and, in particular, in the case of automatic inquiries. A data subject should therefore be enabled to make sound decisions with respect to data disclosure using only the data that is available to him.

For the scope of this thesis, we assume that a data subject is able to protect his communication contents and the corresponding communication context against a third party by using end-to-end encryption and Mix cascades. The objective is to develop a model for analyzing the linkability of communication contents by using Formal Concept Analysis. In contrast to previous work, only the knowledge of the data subject is used for this analysis instead of a global view of the entire communication contents and context.

As a first step, the relation between disclosed data is explored. It is shown how data can be grouped by type and how data implications can be represented. As a second step, the behavior (i.e., actions and reactions) of the data subject and his communication partners is included in the analysis in order to find critical data sets which can be used to identify the data subject.

Typical examples are used to verify this analysis, followed by a conclusion about the pros and cons of this method for anonymity and linkability measurement. The results can later be used to develop a similarity measure for human-computer interfaces.
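The derivation operator at the heart of Formal Concept Analysis can be sketched in a few lines. The formal context below (disclosed data items as objects, data types as attributes) is invented for illustration and is not the model built in the thesis.

```python
def intent(objs, context):
    """FCA derivation operator: the set of attributes shared by all
    objects in the non-empty set `objs`. `context` maps each object
    to its attribute set."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets)

# Toy formal context: disclosed data items and their (invented) types.
context = {
    "email":    {"identifying", "contact"},
    "zip_code": {"quasi-identifying", "location"},
    "city":     {"quasi-identifying", "location"},
}

# zip_code and city share exactly the quasi-identifying location attributes,
# so together they form the extent of one formal concept.
assert intent({"zip_code", "city"}, context) == {"quasi-identifying", "location"}
```

Iterating this operator over all object subsets yields the concept lattice, in which critical combinations of disclosed items (those whose shared attributes suffice to identify the data subject) appear as concepts.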

APA, Harvard, Vancouver, ISO, and other styles
33

Magnusson, Ulf. "A tool for visual analysis of permission-based data access on Android phones." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72631.

Full text of the source
Abstract:
Privacy is a topic of ever-increasing interest in the modern, connected world. With the advent of smartphones, the boundary between the internet and the personal sphere has become less distinct, and most users of smartphones have only vague ideas of how various apps intrude on their privacy. At Karlstad University, the PriSec (Privacy and Security) research group at the Department of Computer Science is dedicated to research on privacy and its protection, and one of its research projects aims at increasing knowledge of how apps collect personal information about their users. This master's thesis describes the development of a visualization tool that processes data collected from Android devices by a surveillance app developed within the aforementioned research project. The app keeps a record of the usage of what Android terms Dangerous Permissions: when such an event occurs, which app requested the permission, which permission was requested, and the geographical location at the time of the event. Over time, more than 2 million such events have been recorded and collected in this manner. Previously, two student projects developed web-based tools for visualizing the collected data. In this thesis work, a desktop application was developed: a visualization tool that imports the aforementioned data into a database connected to the tool. The graphical user interface allows an analyst or researcher to perform detailed, fine-grained searches in that database and present the results in various charts, thereby visualizing how apps ask for information that can be used to identify and surveil individuals, intruding on their privacy.
The visualization tool is carefully designed to be scalable and extensible, with an architecture that allows continuous development of further visualizations as well as other analysis functionality.
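The kind of fine-grained query the tool supports can be sketched over the event schema the abstract describes (timestamp, app, permission, location). The field names, app identifiers, and permission strings below are illustrative assumptions, not the tool's actual schema.

```python
from collections import Counter
from datetime import datetime

# Each recorded event: (timestamp, app, dangerous permission, location).
events = [
    (datetime(2019, 5, 1, 9, 0),  "app.weather", "ACCESS_FINE_LOCATION", (59.40, 13.50)),
    (datetime(2019, 5, 1, 9, 5),  "app.social",  "READ_CONTACTS",        (59.40, 13.50)),
    (datetime(2019, 5, 2, 7, 30), "app.weather", "ACCESS_FINE_LOCATION", (59.38, 13.51)),
]

def requests_per_app(events, permission):
    """Count how often each app requested the given dangerous permission."""
    return Counter(app for _, app, perm, _ in events if perm == permission)

assert requests_per_app(events, "ACCESS_FINE_LOCATION") == {"app.weather": 2}
```

Aggregations like this, grouped by app, permission, time window, or location, are what the tool's charts would render.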
APA, Harvard, Vancouver, ISO, and other styles
34

Canillas, Rémi. "Privacy and Security in a B2B environment : Focus on Supplier Impersonation Fraud Detection using Data Analysis." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI118.

Full text of the source
Abstract:
Supplier Impersonation Fraud (SIF) is a kind of fraud occurring in a business-to-business (B2B) context, where a fraudster impersonates a supplier in order to trigger an illegitimate payment from a client company. Most existing systems focus solely on a single, "intra-company" approach to detecting such fraud. However, companies are part of an ecosystem in which multiple agents interact, and these interactions have yet to be integrated into existing detection techniques. In this thesis we propose to use state-of-the-art machine learning techniques to build detection systems for such frauds, based on models computed from historical transactions of both the targeted company and the other relevant companies in the ecosystem (contextual data). Anomalous transactions are detected when a significant change in the payment behavior of a company is observed. Two ML-based systems are proposed: ProbaSIF and GraphSIF. Both consist of a training phase, in which historical transactions are used to compute a model, followed by a test phase, in which the legitimacy of each considered transaction is determined. ProbaSIF uses a Bayesian (Dirichlet-Multinomial) urn model to estimate the probability that a given bank account will be used in a company's future transactions, and thus to assess its legitimacy; we also use this approach to assess the differences yielded by integrating contextual data into the analysis. GraphSIF analyzes the relational properties created by the exchange of transactions between a client company and its suppliers: it generates a sequence of graphs, called a behavior sequence, linking the company, its suppliers, and the bank accounts used to pay them, and clusters these graphs with a Self-Organizing Map. A new transaction is classified by adding it to the most recent graph of the sequence and comparing the patterns it forms with those found earlier in the behavior sequence; the distance between the new transaction and the cluster centers is used to detect changes in the client company's behavior. These two systems are compared on a real-life fraud detection dataset to assess their performance.
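A Dirichlet-Multinomial urn model of the kind ProbaSIF is described as using can be sketched as a smoothed posterior predictive over bank accounts. This is a minimal illustration consistent with the abstract's description, not the thesis's implementation; the account names and the symmetric prior `alpha` are assumptions.

```python
from collections import Counter

def account_probability(history, account, alpha=1.0):
    """Posterior predictive probability that `account` is used in the
    company's next transaction, under a Dirichlet-Multinomial (Polya urn)
    model. `history` is the list of bank accounts seen in past
    transactions; one extra category is reserved for an unseen account."""
    counts = Counter(history)
    n = len(history)
    k = len(counts) + 1  # known accounts plus one slot for a new account
    return (counts.get(account, 0) + alpha) / (n + alpha * k)

# A payment to a never-seen account gets low probability and can be
# flagged for review as a potential supplier impersonation.
hist = ["ACC1"] * 8 + ["ACC2"] * 2
assert account_probability(hist, "ACC1") > account_probability(hist, "ACC9")
```

Integrating contextual data would amount to pooling the histories of other companies paying the same supplier before computing the counts.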
APA, Harvard, Vancouver, ISO, and other styles
35

Hammoud, Khodor. "Trust in online data : privacy in text, and semantic-based author verification in micro-messages." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5203.

Full text of the source
Abstract:
Many problems surround the spread and use of data on social media, and there is a need to promote trust on social platforms regarding the sharing and consumption of data. Data online is mostly in textual form, which poses challenges for automation because of the richness of natural language. In addition, the use of micro-messages as the main means of communication on social media makes the problem much more challenging because of the scarcity of features to analyze per body of text. Our experiments show that data anonymity solutions cannot preserve user anonymity without sacrificing data quality. Furthermore, in the field of author verification, the problem of determining whether a body of text was written by a specific person given a set of documents known to be authored by them, we found a lack of research working with micro-messages. We also noticed that the state of the art does not take text semantics into consideration, making it vulnerable to impersonation attacks. Motivated by these findings, we devote this thesis to (1) identifying the current problems with user data anonymity in text and providing an initial, novel semantic-based approach to tackle them, (2) studying author verification in micro-messages, identifying the challenges in this field, and developing a novel semantics-based approach to solve them, and (3) studying the effect of including semantics in handling manipulation attacks, as well as (4) the temporal effect of data, since authors' opinions may change over time.
The first part of the thesis focuses on user anonymity in textual data, with the aim of anonymizing personal information in online user data for safe data analysis without compromising users' privacy. We present an initial, novel semantic-based approach that can be customized to balance preserving data quality against maximizing user anonymity, depending on the application at hand.
In the second part, we study author verification in micro-messages on social media. We confirm the lack of research on author verification for micro-messages and show that the state of the art, which primarily handles long and medium-sized texts, does not perform well when applied to micro-messages. We then present a novel semantics-based approach that uses word embeddings and sentiment analysis to collect the author's opinion history and determine the correctness of a claim of authorship, and we show its competitive performance on micro-messages. We use these results in the third part of the thesis to further improve our approach. We construct a dataset consisting of the tweets of the 88 most-followed Twitter influencers and use it to show that the state of the art cannot handle impersonation attacks, in which the content of a tweet is altered, changing the message behind it, while the writing pattern is preserved. Our approach, being aware of the text's semantics, detects such text manipulations with an accuracy above 90%. In the fourth part of the thesis, we analyze the temporal effect of data on our author verification approach: we study how authors' opinions change over time and how to accommodate this, examining trends in an author's sentiment toward a specific topic over a period of time and predicting false authorship claims depending on the timeframe in which the claim falls.
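The sentiment-history idea can be sketched as follows: compare the sentiment of a disputed message about a topic with the claimed author's average historical sentiment on that topic. This is a toy illustration of the general principle, not the thesis's method; the topic labels, the [-1, 1] sentiment scale, and the `tolerance` threshold are assumptions, and no embedding or sentiment model is included.

```python
def verify_author(opinion_history, topic, message_sentiment, tolerance=0.5):
    """Return True if the message's sentiment is consistent with the
    author's historical opinion on `topic`, False if it deviates beyond
    `tolerance`, and None when there is no history to decide on.
    `opinion_history` maps topics to lists of sentiment scores in [-1, 1]."""
    scores = opinion_history.get(topic)
    if not scores:
        return None  # no opinion history on this topic: cannot decide
    mean = sum(scores) / len(scores)
    return abs(message_sentiment - mean) <= tolerance

history = {"privacy": [0.8, 0.9, 0.7]}          # consistently positive
assert verify_author(history, "privacy", 0.75) is True
assert verify_author(history, "privacy", -0.9) is False  # opinion flip: suspicious
```

An impersonation attack that preserves writing style but flips the message's stance would trip exactly this semantic check, which style-only verifiers miss.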
APA, Harvard, Vancouver, ISO, and other styles
36

Ganzaroli, Ludovica <1994&gt. "Un modello Risk Based per la Data Privacy Impact Analysis ai sensi del Regolamento UE 679/2016." Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/13649.

Full text of the source
Abstract:
The subject of this work is the creation of a model for an impact assessment under the accountability principle introduced by EU Regulation 2016/679, in force since 25 May 2018, which brought important changes to the protection of natural persons' personal data. The work begins with an overview of the dangers connected to the unlawful use of personal data arising from the exponential adoption of new technologies, and of how these have created the need to protect people's rights and freedoms with regard to personal data. It then examines the Italian Personal Data Protection Code and EU Regulation 2016/679, comparing them and highlighting the novelties and obligations introduced by the latter. Finally, two distinct models are created: the first supports the assessment of the risks arising from the processing of personal data and, reflecting the provisions of Privacy by Design and Privacy by Default, helps the data controller calculate the potential and inherent risk of the processing and understand whether an impact assessment is necessary; the second supports the impact assessment itself, and was created following the standards provided both by the European Regulation and by WP29.
APA, Harvard, Vancouver, ISO, and other styles
37

Teltzrow, Maximilian. "A quantitative analysis of e-commerce." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005. http://dx.doi.org/10.18452/15297.

Full text of the source
Abstract:
Die Rolle und Wahrnehmung des World Wide Web in seinen unterschiedlichen Nutzungskontexten ändert sich zunehmend – von einem frühen Fokus auf reine Web-Interaktion mit Kunden, Informationssuchern und anderen Nutzern hin zum Web als eine Komponente in einer mehrkanaligen Informations- und Kommunikationsstrategie. Diese zentrale Entwicklung ermöglicht Firmen, eine wachsende Menge digitaler Konsumenteninformationen zu sammeln, zu analysieren und zu verwerten. Während Firmen von diesen Daten profitieren (z.B. für Marketingzwecke und zur Verbesserung der Bedienungsfreundlichkeit), hat die Analyse und Nutzung von Onlinedaten zu einem signifikanten Anstieg der Datenschutzbedenken bei Konsumenten geführt, was wiederum ein Haupthindernis für erfolgreichen E-Commerce ist. Die Implikationen für eine Firma sind, dass Datenschutzerfordernisse bei der Datenanalyse und -nutzung berücksichtigt und Datenschutzpraktiken effizient nach außen kommuniziert werden müssen. Diese Dissertation erforscht den Grenzbereich zwischen den scheinbar konkurrierenden Interessen von Onlinekonsumenten und Firmen. Datenschutz im Internet wird aus einer Konsumentenperspektive untersucht und Datenschutzanforderungen werden spezifiziert. Eine Gruppe von Geschäftsanalytiken für Webseiten wird präsentiert und es wird verdeutlicht, wie Datenschutzanforderungen in den Analyseprozess integriert werden können. Ein Design zur besseren Kommunikation von Datenschutz wird vorgestellt, um damit eine effizientere Kommunikation der Datenschutzpraktiken einer Firma gegenüber Konsumenten zu ermöglichen. Die vorgeschlagenen Lösungsansätze gestatten den beiden Gegenparteien, widerstreitende Interessen zwischen Datennutzung und Datenschutz auszugleichen. Ein besonderer Fokus dieser Forschungsarbeit liegt auf Mehrkanalhändlern, die den E-Commerce Markt derzeit dominieren. 
Die Beiträge dieser Arbeit sind im Einzelnen: * Messung von Vorbedingungen für Vertrauen im Mehrkanalhandel Der Erfolg des Mehrkanalhandels und die Bedeutung von Datenschutz werden aus einer Konsumentenperspektive dargestellt. Ein Strukturgleichungsmodell zur Erklärung von Konsumentenvertrauen in einen Mehrkanalhändler wird präsentiert. Vertrauen ist eine zentrale Vorbedingung für die Kaufbereitschaft. Ein signifikanter Einfluss der wahrgenommenen Reputation und Größe physischer Filialen auf das Vertrauen in einen Onlineshop wurde festgestellt. Dieses Resultat bestätigt unsere Hypothese, dass kanalübergreifende Effekte zwischen dem physischen Filialnetzwerk und einem Onlineshop existieren. Der wahrgenommene Datenschutz hat im Vergleich den stärksten Einfluss auf das Vertrauen. Die Resultate legen nahe, Distributionskanäle weiter zu integrieren und die Kommunikation des Datenschutzes zu verbessern. * Design und Test eines Web-Analyse-Systems Der Forschungsbeitrag zu Konsumentenwahrnehmungen im Mehrkanalhandel motiviert die weitere Untersuchung der Erfolgsfaktoren im Internet. Wir präsentieren ein Kennzahlensystem mit 82 Kennzahlen zur Messung des Onlineerfolges von Webseiten. Neue Konversionsmetriken und Kundensegmentierungsansätze werden vorgestellt. Ein Schwerpunkt liegt auf der Entwicklung von Kennzahlen für Mehrkanalhändler. Das Kennzahlensystem wird auf Daten eines Mehrkanalhändlers und einer Informationswebseite geprüft. * Prototypische Entwicklung eines datenschutzwahrenden Web Analyse Services Die Analyse von Webdaten erfordert die Wahrung von Datenschutzrestriktionen. Der Einfluss von Datenschutzbestimmungen auf das Kennzahlensystem wird diskutiert. Wir präsentieren einen datenschutzwahrenden Web Analyse Service, der die Kennzahlen unseres Web-Analyse-Systems berechnet und zudem anzeigt, wenn eine Kennzahl im Konflikt mit Datenschutzbestimmungen steht. Eine syntaktische Erweiterung eines etablierten Datenschutzstandards wird vorgeschlagen. 
* Erweiterung der Analyse von Datenschutzbedürfnissen aus Kundensicht Eine wichtige Anwendung, die Resultate des beschriebenen Web Analyse Services nutzt, sind Personalisierungssysteme. Diese Systeme verbessern ihre Effizienz mit zunehmenden Informationen über die Nutzer. Daher sind die Datenschutzbedenken von Webnutzern besonders hoch bei Personalisierungssystemen. Konsumentendatenschutzbedenken werden in einer Meta-Studie von 30 Datenschutzumfragen kategorisiert und der Einfluss auf Personalisierungssysteme wird beschrieben. Forschungsansätze zur datensschutzwahrenden Personalisierung werden diskutiert. * Entwicklung eines Datenschutz-Kommunikationsdesigns Eine Firma muss nicht nur Datenschutzanforderungen bei Web-Analyse- und Datennutzungspraktiken berücksichtigen. Sie muss diese Datenschutzvorkehrungen auch effektiv gegenüber den Seitenbesuchern kommunizieren. Wir präsentieren ein neuartiges Nutzer-Interface-Design, bei dem Datenschutzpraktiken kontextualisiert erklärt werden, und der Kundennutzen der Datenübermittlung klar erläutert wird. Ein Nutzerexperiment wurde durchgeführt, das zwei Versionen eines personalisierten Web-Shops vergleicht. Teilnehmer, die mit unserem Interface-Design interagierten, waren signifikant häufiger bereit, persönliche Daten mitzuteilen, bewerteten die Datenschutzpraktiken und den Nutzen der Datenpreisgabe höher und kauften wesentlich häufiger.
The aim of this thesis is to explore the border between the competing interests of online consumers and companies. Privacy on the Internet is investigated from a consumer perspective, and recommendations for better privacy management by companies are suggested. The proposed solutions resolve the conflicting goals between companies' data usage practices and consumers' privacy concerns. The research is carried out with special emphasis on retailers operating multiple distribution channels, who have become the dominant players in e-commerce. The thesis presents a set of business analyses for measuring the online success of Web sites, introducing new conversion metrics and customer segmentation approaches. The analysis framework has been tested on Web data from a large multi-channel retailer and an information site. The analysis of Web data requires adherence to privacy restrictions, so the impact of legislative and self-imposed privacy requirements on our analysis framework is also discussed. We propose a privacy-preserving Web analysis service that calculates our set of business analyses and indicates when an analysis is not compliant with privacy requirements; a syntactical extension of a privacy standard is proposed. Moreover, an overview of consumer privacy concerns and their particular impact on personalization systems is provided, summarized in a meta-study of 30 privacy surveys. A company must not only respect privacy requirements in its Web analysis and usage purposes; it must also effectively communicate these privacy practices to its site visitors. A privacy communication design is presented that enables a Web site to communicate its privacy practices to users more effectively. Subjects who interacted with our new interface design were significantly more willing to share personal data with the Web site, rated its privacy practices and the perceived benefit higher, and made considerably more purchases.
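To make the notion of conversion metrics concrete, here is a minimal sketch. The session fields and the three-step funnel below are illustrative assumptions, not the 82-metric framework of the thesis.

```python
from dataclasses import dataclass

@dataclass
class Session:
    visitor_id: str
    viewed_product: bool
    added_to_cart: bool
    purchased: bool

def funnel_conversion(sessions):
    """Conversion rates along a simple view -> cart -> purchase funnel."""
    views = sum(s.viewed_product for s in sessions)
    carts = sum(s.added_to_cart for s in sessions)
    buys = sum(s.purchased for s in sessions)
    total = len(sessions)
    return {
        "visit_to_view": views / total,
        "view_to_cart": carts / views if views else 0.0,
        "cart_to_purchase": buys / carts if carts else 0.0,
        "overall": buys / total,
    }

sessions = [
    Session("a", True, True, True),
    Session("b", True, False, False),
    Session("c", True, True, False),
    Session("d", False, False, False),
]
print(funnel_conversion(sessions))
```

Multi-channel variants would additionally segment sessions by channel (e.g. store pick-up versus pure online orders) before computing the same ratios.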
APA, Harvard, Vancouver, ISO and other styles
38

Gorra, Andrea. "An analysis of the relationship between individuals' perceptions of privacy and mobile phone location data : a grounded theory study." Thesis, Leeds Beckett University, 2007. http://eprints.leedsbeckett.ac.uk/1554/.

Full text of source
Abstract:
The mobile phone is a ubiquitous tool in today’s society, a daily companion for the majority of British citizens. The ability to trace a mobile phone’s geographic position at all times via mobile phone networks generates potentially sensitive data that can be stored and shared for significant lengths of time, particularly for the purpose of crime and terrorism investigations. This thesis examines the implications of the storage and use of mobile phone location data on individuals’ perceptions of privacy. The grounded theory methodology has been used to illustrate patterns and themes that are useful in understanding the broader discourses concerning location data relating to privacy, technology and policy-setting. The main contribution of this thesis is the development of a substantive theory grounded in empirical data from interviews, mobile phone location tracking and a survey. This theory is specific to a particular area, as it maps the relationship between mobile phone location data and perceptions of privacy within the UK. The theory confirms some arguments in the literature that argue that the concept of privacy is changing with individuals' increased dependence on electronic communications technologies in day-to-day life. However, whilst individuals tend to hold a rather traditional picture of privacy, not influenced by technology and solely related to their own personal lives, scholars paint a picture of privacy that is affected by technology and relates to society as a whole. Digital mass data collections, such as communications data retention, are not perceived as privacy invasive by individuals. Mobile phone location data is not seen as related to a citizen's daily life but instead primarily as a crime investigation tool. 
A recognition and understanding of this divergence between individuals' perceptions of privacy and the definitions in the academic literature, in relation to mobile phone location data, is relevant because it should inform future policies regulating the gathering, storage and analysis of personal data.
APA, Harvard, Vancouver, ISO and other styles
39

Kim, Dae Wook. "Data-Driven Network-Centric Threat Assessment." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1495191891086814.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
40

Enríquez, Luis. "Personal data breaches : towards a deep integration between information security risks and GDPR compliance risks." Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILD016.

Full text of source
Abstract:
Information security is deeply linked to data protection law, because an ineffective security implementation can lead to personal data breaches. The GDPR takes a risk-based approach to protecting the rights and freedoms of data subjects, meaning that risk management is the mechanism for protecting fundamental rights. However, the state of the art in both information security risk management and legal risk management is still immature. The current state of the art does not assess the multi-dimensionality of data protection risks, and it has skipped the main purpose of a risk-based approach: measuring risk in order to take informed decisions. The legal world must understand that risk management does not work by default, and it often requires applied scientific methods for assessing risks. This thesis proposes a change of mindset in data protection risk management, with a holistic approach that merges operational, financial, and legal risks. The concept of a Personal Data Value at Risk is introduced as the outcome of several quantitative strategies based on risk modeling, jurimetrics, and data protection analytics. The ideas presented here should also help with compliance with upcoming risk-based regulations that rely on data protection, such as those governing artificial intelligence. The risk transformation may appear difficult, but it is compulsory for the evolution of data protection.
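The Personal Data Value at Risk concept can be sketched as a generic Monte Carlo loss model: simulate many years of breach frequency and severity, then read a quantile off the simulated loss distribution. All distributions and parameters below are illustrative assumptions, not the calibrated models of the thesis.

```python
import math
import random

def simulate_annual_loss(rng, breach_prob=0.3, median_fine=50_000, sigma=1.0):
    """One simulated year: does a personal data breach occur, and if so,
    how large is the resulting loss (fine plus response costs)?
    A lognormal severity is a common, purely illustrative choice."""
    if rng.random() > breach_prob:
        return 0.0
    return rng.lognormvariate(mu=math.log(median_fine), sigma=sigma)

def value_at_risk(losses, confidence=0.95):
    """The loss level not exceeded in `confidence` of simulated years."""
    return sorted(losses)[int(confidence * len(losses)) - 1]

rng = random.Random(42)
losses = [simulate_annual_loss(rng) for _ in range(10_000)]
print(f"95% annual personal-data VaR: {value_at_risk(losses):,.0f}")
```

Jurimetrics would enter at the point where the severity distribution is fitted to published fines and court decisions instead of being assumed.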
APA, Harvard, Vancouver, ISO and other styles
41

Stocco, Francesca A. "The internet of toys: Working towards best practice in digital governance and the recognition of children’s rights in mediated contexts." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2023. https://ro.ecu.edu.au/theses/2752.

Full text of source
Abstract:
The Internet of Toys (IoToys) is a catch-all concept. Defined as "where toys not only relate one-on-one to children but are wirelessly connected to other toys and/or database data" (Holloway & Green, 2016, p. 506), children's connected toys have been shown to foster educational, social and interaction benefits. The benefits of IoToys are counterbalanced, however, by the potential data privacy and security issues of children's connected toys that have been raised by commentators and parents, critiques that have been widely circulated in the public sphere. In the lead-up to the Christmas period (2018–2019) the candidate helped conduct a content analysis of media, policy and commercial discourses (n=~300+). Discussions around data privacy and security led to the identification of three children's connected toys for particular attention in this thesis: My Friend Cayla, a cloud-based interactive toy doll that has been withdrawn from the market; Parker Bear, an augmented-reality toy; and Fitbit Ace 2, a children's fitness tracker. A step-by-step walkthrough, following the recommendations of Light et al. (2018), was applied from a parent's perspective of registering an account for their child, in order to inspect the transparency of account creation for accessing the toys' companion apps. The vagueness protocols of Reidenberg et al. (2016) and Bhatia (2019) were amalgamated with the rhetorical-language perspective of Pollach (2007) to inform a Constant Comparative Analysis (CCA) audit of the toys' Terms of Service (ToS) documents, especially privacy policies. This CCA audit adopts an overarching linguistic perspective to explore the potential use of vague and ambiguous terms which companies could choose to address if they wished to adopt best practice in communicating privacy provisions. A case study methodology incorporates the CCA audit to explore IoToys companies' compliance with the Children's Online Privacy Protection Act (COPPA, US) and the General Data Protection Regulation (GDPR, EU) in relation to Parker Bear (US) and Fitbit Ace 2 (EU). This thesis concentrates upon these two latter toys to advance policy and regulatory development within an e-privacy context while embracing a children's rights-based perspective.
APA, Harvard, Vancouver, ISO and other styles
42

Da, Silva Sébastien. "Fouille de données spatiales et modélisation de linéaires de paysages agricoles." Electronic Thesis or Diss., Université de Lorraine, 2014. http://docnum.univ-lorraine.fr/prive/DDOC_T_2014_0156_DA_SILVA.pdf.

Full text of source
Abstract:
This thesis is part of a partnership between INRA and INRIA in the field of knowledge extraction from spatial databases. The study focuses on the characterization and simulation of agricultural landscapes, and more specifically on the linear features that structure them, such as roads, irrigation ditches and hedgerows. Our goal is to model the spatial distribution of hedgerows because of their role in many ecological and environmental processes. We study how to characterize the spatial structure of hedgerows in two contrasting agricultural landscapes, one located in south-eastern France (mainly composed of orchards) and the second in Brittany (western France, bocage-type). We also determine whether, and under which circumstances, the spatial distribution of hedgerows is structured by the position of the more perennial linear landscape features, such as roads and ditches, and at what scale these structures occur. The Knowledge Discovery in Databases (KDD) process that we implement comprises several preprocessing steps and data mining algorithms combining mathematical and computational methods. The first part of the thesis focuses on the creation of a statistical spatial index, based on a geometric notion of neighborhood, that allows the characterization of hedgerow structures. This index describes the structures of hedgerows in the landscape; the results show that hedgerows depend on the more permanent linear elements at short distances, and that their neighborhood is uniform beyond 150 meters. In addition, different neighborhood structures were identified depending on the orientation of hedgerows in south-eastern France, but not in Brittany. The second part of the thesis explores the potential of coupling linearization methods with Markov methods. The linearization methods are based on a variant of Hilbert curves: adaptive Hilbert paths. The linearized spatial data thus constructed were then processed with Markov methods, which have the advantage of serving both for learning from real data and for generating new data, for example when simulating a landscape. The results show that the combination of these methods for learning and automatic generation of hedgerows captures characteristics of the different study landscapes. The first simulations are encouraging despite the need for post-processing. Finally, this work has enabled the creation of a spatial data mining method based on different tools that supports all stages of a classic KDD process, from data selection to the visualization of results. Furthermore, this method was constructed in such a way that it can also be used for data generation, a component necessary for landscape simulation.
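The coupling of linearization and Markov methods can be illustrated with a minimal sketch: cells of a 0/1 raster (hedgerow present/absent) are ordered along a Hilbert curve, the resulting symbol sequence trains a first-order Markov chain, and the chain can generate new sequences for simulation. Note that this uses the classic, non-adaptive Hilbert index rather than the adaptive Hilbert paths developed in the thesis, and the grid is made up.

```python
import random
from collections import defaultdict

def hilbert_index(n, x, y):
    """Index of cell (x, y) along a Hilbert curve filling an n x n grid
    (n must be a power of two). Classic bit-manipulation conversion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate/flip the quadrant so the recursion lines up
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def linearize(grid):
    """Flatten a 2-D 0/1 raster into a 1-D symbol sequence in Hilbert order."""
    n = len(grid)
    cells = sorted((hilbert_index(n, x, y), grid[y][x])
                   for y in range(n) for x in range(n))
    return [v for _, v in cells]

def fit_markov(seq):
    """Learn first-order transition probabilities from the sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def generate(chain, start, length, rng):
    """Sample a new symbol sequence from the learned chain (simulation)."""
    out = [start]
    for _ in range(length - 1):
        nxt = chain[out[-1]]
        out.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return out

grid = [[0, 0, 1, 1],   # made-up presence/absence raster
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [1, 1, 0, 0]]
seq = linearize(grid)
chain = fit_markov(seq)
simulated = generate(chain, seq[0], len(seq), random.Random(0))
```

Mapping a generated sequence back through the inverse Hilbert index yields a simulated raster; as the abstract notes, such raw simulations still need post-processing.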
APA, Harvard, Vancouver, ISO and other styles
43

Greenstein, Stanley. "Our Humanity Exposed : Predictive Modelling in a Legal Context." Doctoral thesis, Stockholms universitet, Juridiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-141657.

Full text of source
Abstract:
This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour. Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly, mainly because individuals are spending an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits. Predictive modelling is a powerful tool that private companies increasingly use to identify business risks and opportunities. Predictive models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content people see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy. This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU).
Considering the shortcomings of traditional legal instruments, a strategy entitled ‘empowerment’ is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
APA, Harvard, Vancouver, ISO and other styles
44

Attasena, Varunya. "Secret sharing approaches for secure data warehousing and on-line analysis in the cloud." Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22014/document.

Full text of source
Abstract:
Cloud business intelligence is an increasingly popular solution for delivering decision-support capabilities via elastic, pay-per-use resources. However, data security is one of the top concerns when dealing with sensitive data. Many security issues are raised by data storage in a public cloud, including data privacy, availability, integrity, backup and recovery, and transfer safety. Moreover, security risks may come both from cloud service providers and from intruders, while cloud data warehouses should be highly protected yet efficiently refreshed and analyzed through on-line analytical processing. Hence, users seek secure data warehouses at the lowest possible storage and access costs within the pay-as-you-go paradigm. In this thesis, we propose two novel approaches for securing cloud data warehouses: base-p verifiable secret sharing (bpVSS) and flexible verifiable secret sharing (fVSS). Secret sharing encrypts and distributes data over several cloud service providers, thus enforcing data privacy and availability. bpVSS and fVSS address five shortcomings of existing secret sharing-based approaches. First, they allow on-line analytical processing. Second, they enforce data integrity with the help of both inner and outer signatures. Third, they help users minimize the cost of cloud warehousing by limiting the global share volume; moreover, fVSS balances the load among service providers with respect to their pricing policies. Fourth, fVSS improves the security of secret sharing by imposing a new constraint: no group of cloud service providers can hold enough shares to reconstruct or break the secret. Fifth, fVSS allows refreshing the data warehouse even when some service providers fail. To evaluate the efficiency of bpVSS and fVSS, we theoretically study the factors that impact our approaches with respect to security, complexity and monetary cost in the pay-as-you-go paradigm. We also validate the relevance of our approaches experimentally with the Star Schema Benchmark and demonstrate their superiority to related existing methods.
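At the core of bpVSS and fVSS is secret sharing. As a point of reference, here is a minimal sketch of plain Shamir (k, n) sharing over a prime field; it omits everything the thesis adds (verifiability via inner and outer signatures, base-p encoding, pricing-aware share distribution), and the modulus and parameters are illustrative.

```python
import random

P = 2_147_483_647  # prime field modulus (2^31 - 1), illustrative

def make_shares(secret, k, n, rng):
    """Split `secret` into n Shamir shares; any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(7)
shares = make_shares(123_456_789, k=3, n=5, rng=rng)
assert reconstruct(shares[:3]) == 123_456_789  # any 3 shares suffice
assert reconstruct(shares[2:]) == 123_456_789
```

Fewer than k shares reveal nothing about the secret, which is what allows shares to be spread across mutually untrusted cloud providers.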
APA, Harvard, Vancouver, ISO and other styles
45

Aldà, Francesco [Verfasser], Hans Ulrich [Gutachter] Simon, and Alexander [Gutachter] May. "On the trade-off between privacy and utility in statistical data analysis / Francesco Aldà ; Gutachter: Hans Ulrich Simon, Alexander May ; Fakultät für Mathematik." Bochum : Ruhr-Universität Bochum, 2018. http://d-nb.info/1161942416/34.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
46

Mohamad, Hashim Haswira Nor. "Enabling open access to and re-use of publicly funded research data in Malaysian public universities : a legal and policy analysis." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/63944/1/Haswira_Mohamad_Hashim_Thesis.pdf.

Full text of source
Abstract:
Numerous statements and declarations have been made over recent decades in support of open access to research data. The growing recognition of the importance of open access to research data has been accompanied by calls on public research funding agencies and universities to facilitate better access to publicly funded research data so that it can be re-used and redistributed as public goods. International and inter-governmental bodies such as the ICSU/CODATA, the OECD and the European Union are strong supporters of open access to and re-use of publicly funded research data. This thesis focuses on the research data created by university researchers in Malaysian public universities whose research activities are funded by the Federal Government of Malaysia. Malaysia, like many countries, has not yet formulated a policy on open access to and re-use of publicly funded research data. Therefore, the aim of this thesis is to develop a policy to support the objective of enabling open access to and re-use of publicly funded research data in Malaysian public universities. Policy development is very important if the objective of enabling open access to and re-use of publicly funded research data is to be successfully achieved. In developing the policy, this thesis identifies a myriad of legal impediments arising from intellectual property rights, confidentiality, privacy and national security laws, novelty requirements in patent law and lack of a legal duty to ensure data quality. Legal impediments such as these have the effect of restricting, obstructing, hindering or slowing down the objective of enabling open access to and re-use of publicly funded research data. A key focus in the formulation of the policy was the need to resolve the various legal impediments that have been identified. This thesis analyses the existing policies and guidelines of Malaysian public universities to ascertain to what extent the legal impediments have been resolved. 
An international perspective is adopted by making a comparative analysis of the policies of public research funding agencies and universities in the United Kingdom, the United States and Australia to understand how they have dealt with the identified legal impediments. These countries have led the way in introducing policies which support open access to and re-use of publicly funded research data. As well as proposing a policy supporting open access to and re-use of publicly funded research data in Malaysian public universities, this thesis provides procedures for the implementation of the policy and guidelines for addressing the legal impediments to open access and re-use.
APA, Harvard, Vancouver, ISO and other styles
47

Sharman, Killaine K. "The theory and practice of risk in private infrastructure projects, an analysis of the CIDA Industrial Cooperation Program's experience to date and policy recommendations for tomorrow." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ32353.pdf.

Full text of source
APA, Harvard, Vancouver, ISO and other styles
48

Li, Yidong. "Preserving privacy in data publishing and analysis." Thesis, 2011. http://hdl.handle.net/2440/68556.

Full text of source
Abstract:
As data collection and storage techniques being greatly improved, data analysis is becoming an increasingly important issue in many business and academic collaborations that enhances their productivity and competitiveness. Multiple techniques for data analysis, such as data mining, business intelligence, statistical analysis and predictive analytics, have been developed in different science, commerce and social science domains. To ensure quality data analysis, effective information sharing between organizations becomes a vital requirement in today’s society. However, the shared data often contains person-specific and sensitive information like medical records. As more and more realworld datasets are released publicly, there is a growing concern about privacy breaches for the entities involved. To respond to this challenge, this thesis discusses the problem of eliminating privacy threats while, at the same time, preserving useful information in the released database for data analysis. The first part of this thesis discuss the problem of privacy preservation on relational data. Due to the inherent drawbacks of applying equi-depth data swapping in distancebased data analysis, we study efficient swapping algorithms based on equi-width partitioning for relational data publishing. We develop effective methods for both univariate and multivariate data swapping. With extensive theoretical analysis and experimental validation, we show that, Equi-Width Swapping (EWS) can achieve a similar performance in privacy preservation to that of Equi-Depth Swapping (EDS) if the number of partitions is sufficiently large (e.g. ≳ √n, where n is the size of dataset). In addition, our analysis shows that the multivariate EWS algorithm has much lower computational complexity O(n) than that of the multivariate EDS (which is O(n³) basically), while it still provides good protection for sensitive information. 
The second part of this thesis focuses on solving the problem of privacy preservation on graphs, which has increasing significance as more and more real-world graphs modelling complex systems such as social networks are released publicly, . We point out that the real labels of a large portion of nodes can be easily re-identified with some weight-related attacks in a weighted graph, even the graph is perturbed with weight-independent invariants like degree. Two concrete attacks have been identified based on the following elementary weight invariants: 1) volume: the sum of adjacent weights for a vertex; and 2) histogram: the neighborhood weight distribution of a vertex. In order to protect a graph from these attacks, we formalize a general model for weighted graph anonymization and provide efficient methods with respect to a two-step framework including property anonymization and graph reconstruction. Moreover, we theoretically prove the histogram anonymization problem is NP-hard in the general case, and present an efficient heuristic algorithm for this problem running in near-quadratic time on graph size. The final part of this thesis turns to exploring efficient privacy preserving techniques for hypergraphs, meanwhile, maintaining the quality of community detection. We first model a background knowledge attack based on so-called rank, which is one of the important properties of hyperedges. Then, we show empirically how high the disclosure risk is with the attack to breach the real-world data. We formalize a general model for rank-based hypergraph anonymization, and justify its hardness. As a solution, we extend the two-step framework for graph anonymization into our new problem and propose efficient algorithms that perform well on preserving data privacy. Also, we explore the issue of constructing a hypergraph with a specified rank set in the first place so far as we know. 
The proposed construction algorithm also minimizes the bias of community detection between the original and the perturbed hypergraphs. In addition, we consider two de-anonymizing schemes that may be used to attack an anonymized hypergraph, and verify that both schemes fail to breach the privacy of a hypergraph with rank anonymity in the real-world case.
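The two weight invariants behind the attacks can be made concrete with a minimal sketch (the function names and the toy graph below are illustrative, not from the thesis): an adversary who knows a target's volume or neighborhood weight histogram can match it against the published, pseudonymized graph.

```python
def volume(adj, v):
    """Volume invariant: the sum of edge weights adjacent to vertex v."""
    return sum(adj[v].values())

def weight_histogram(adj, v):
    """Histogram invariant: the multiset (here a sorted tuple) of
    edge weights adjacent to vertex v."""
    return tuple(sorted(adj[v].values()))

# Toy undirected weighted graph as an adjacency map (weights mirrored).
adj = {
    "alice": {"bob": 5, "carol": 2},
    "bob":   {"alice": 5, "carol": 1},
    "carol": {"alice": 2, "bob": 1},
}

# Renaming vertices alone does not hide them: when volumes are unique,
# an attacker who knows a target's volume recovers the pseudonym.
volumes = {v: volume(adj, v) for v in adj}
print(volumes)  # alice: 7, bob: 6, carol: 3 -- all distinct
```

Because every vertex here has a distinct volume, degree-preserving perturbation alone would leave all three re-identifiable, which is exactly the weakness the abstract's weight-related attacks exploit.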
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2011
APA, Harvard, Vancouver, ISO, and other styles
49

Hay, Michael G. "Enabling accurate analysis of private network data." 2010. https://scholarworks.umass.edu/dissertations/AAI3427533.

Full text of the source
Abstract:
This dissertation addresses the challenge of enabling accurate analysis of network data while ensuring the protection of network participants' privacy. This is an important problem: massive amounts of data are being collected (Facebook activity, email correspondence, cell phone records), there is huge interest in analyzing the data, but the data is not being shared due to concerns about privacy. Despite much research in privacy-preserving data analysis, existing technologies fail to provide a solution because they were designed for tables, not networks, and cannot be easily adapted to handle the complexities of network data. We develop several technologies that advance us toward our goal. First, we develop a framework for assessing the risk of publishing a network that has been “anonymized.” Using this framework, we show that only a small amount of background knowledge about local network structure is needed to re-identify an “anonymous” individual. This motivates our second contribution: an algorithm that transforms the structure of the network to provably lower re-identification risk. In comparison with other algorithms, we show that our approach more accurately preserves important features of the network topology. Finally, we consider an alternative paradigm, in which the analyst can analyze private data through a carefully controlled query interface. We show that the degree sequence of a network can be accurately estimated under strong guarantees of privacy.
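The query-interface result can be illustrated with a standard Laplace-mechanism sketch (a generic differential-privacy example, not the dissertation's exact estimator, which additionally post-processes the noisy answers): the analyst receives each degree perturbed with calibrated noise rather than the raw sequence.

```python
import math
import random

def noisy_degree_sequence(degrees, epsilon, seed=0):
    """Return the sorted degree sequence with i.i.d. Laplace noise added.
    The scale 1/epsilon assumes a sensitivity-1 query, an illustrative
    simplification; the true sensitivity depends on the privacy model."""
    rng = random.Random(seed)
    scale = 1.0 / epsilon

    def laplace():
        # Sample Laplace(0, scale) by inverting its CDF.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    return [d + laplace() for d in sorted(degrees)]

print(noisy_degree_sequence([1, 2, 2, 3, 5], epsilon=1.0))
```

Smaller `epsilon` means stronger privacy and noisier answers; accurate estimation then relies on exploiting the sortedness of the true sequence, as the dissertation's approach does.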
APA, Harvard, Vancouver, ISO, and other styles
50

Hay, Michael. "Enabling Accurate Analysis of Private Network Data." 2010. https://scholarworks.umass.edu/open_access_dissertations/319.

Full text of the source
Abstract:
This dissertation addresses the challenge of enabling accurate analysis of network data while ensuring the protection of network participants' privacy. This is an important problem: massive amounts of data are being collected (Facebook activity, email correspondence, cell phone records), there is huge interest in analyzing the data, but the data is not being shared due to concerns about privacy. Despite much research in privacy-preserving data analysis, existing technologies fail to provide a solution because they were designed for tables, not networks, and cannot be easily adapted to handle the complexities of network data. We develop several technologies that advance us toward our goal. First, we develop a framework for assessing the risk of publishing a network that has been "anonymized." Using this framework, we show that only a small amount of background knowledge about local network structure is needed to re-identify an "anonymous" individual. This motivates our second contribution: an algorithm that transforms the structure of the network to provably lower re-identification risk. In comparison with other algorithms, we show that our approach more accurately preserves important features of the network topology. Finally, we consider an alternative paradigm, in which the analyst can analyze private data through a carefully controlled query interface. We show that the degree sequence of a network can be accurately estimated under strong guarantees of privacy.
APA, Harvard, Vancouver, ISO, and other styles
