Scientific literature on the topic "K-Anonymisation"
Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles
Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "K-Anonymisation".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.
Journal articles on the topic "K-Anonymisation"
Loukides, Grigorios, and Jian-Hua Shao. "An Efficient Clustering Algorithm for k-Anonymisation". Journal of Computer Science and Technology 23, no. 2 (March 2008): 188–202. http://dx.doi.org/10.1007/s11390-008-9121-3.
Natwichai, Juggapong, Xue Li, and Asanee Kawtrkul. "Incremental processing and indexing for (k, e)-anonymisation". International Journal of Information and Computer Security 5, no. 3 (2013): 151. http://dx.doi.org/10.1504/ijics.2013.055836.
Stark, Konrad, Johann Eder, and Kurt Zatloukal. "Achieving k-anonymity in DataMarts used for gene expressions exploitation". Journal of Integrative Bioinformatics 4, no. 1 (1 March 2007): 132–44. http://dx.doi.org/10.1515/jib-2007-58.
de Haro-Olmo, Francisco José, Ángel Jesús Varela-Vaca, and José Antonio Álvarez-Bermejo. "Blockchain from the Perspective of Privacy and Anonymisation: A Systematic Literature Review". Sensors 20, no. 24 (14 December 2020): 7171. http://dx.doi.org/10.3390/s20247171.
Zhang, Yuliang, Tinghuai Ma, Jie Cao, and Meili Tang. "K-anonymisation of social network by vertex and edge modification". International Journal of Embedded Systems 8, no. 2/3 (2016): 206. http://dx.doi.org/10.1504/ijes.2016.076114.
Ganabathi, G. Chitra, and P. Uma Maheswari. "Efficient clustering technique for k-anonymisation with aid of optimal KFCM". International Journal of Business Intelligence and Data Mining 15, no. 4 (2019): 430. http://dx.doi.org/10.1504/ijbidm.2019.102809.
Singh, Amardeep, Monika Singh, Divya Bansal, and Sanjeev Sofat. "Optimised K-anonymisation technique to deal with mutual friends and degree attacks". International Journal of Information and Computer Security 14, no. 3/4 (2021): 281. http://dx.doi.org/10.1504/ijics.2021.114706.
Yaji, Sharath, and B. Neelima. "Parallel computing for preserving privacy using k-anonymisation algorithms from big data". International Journal of Big Data Intelligence 5, no. 3 (2018): 191. http://dx.doi.org/10.1504/ijbdi.2018.092659.
Dissertations and theses on the topic "K-Anonymisation"
Mauger, Clémence. "Optimisation de l'utilité des données lors d'un processus de k-anonymisation". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0076.
Texte intégralSo that providing privacy guarantees to anonymized databases, anonymization models have emerged few decades ago. Among them, you can find k-anonymity, l-diversity, t-proximity or differential confidentiality. In this thesis, we mainly focused on the k-anonymity model through an in-depth analysis of the ways to produce databases that meet these confidentiality criteria while optimizing data utility. From a table, you can consider the set of its k-anonymous versions, which can be of exponential cardinality according to k. In a vacuum, these k-anonymous versions can be scored thanks to the amount of data modification that is correlated to the data utility. Thus, this work proposes a study of how to optimize the data utility during the process of k-anonymizing a database.First, we studied information loss metrics to estimate the amount of information lost in a table during a k-anonymization process. The metrics were used within a k-anonymization algorithm to guide equivalence class mergers leading to the production of a k-anonymous table. We tried to identify from this study characteristics in the definitions of information loss metrics allowing the production of good quality k-anonymous tables with regard to several criteria.Second, we were interested in the distribution of sensitive data into k-anonymous tables by using l-diversity and t- proximity models. More specifically, we proposed optimization strategies combining information loss metrics, l-diversity and t-proximity to be used during a k-anonymization process. The aim was then to preserv good levels of l-diversity and t-proximity of the k-anonymous tables produced, and this without sacrificing the data utility.Third, we tackled the question of the formulation of the problem of k-anonymization of a table. We relied on the original notion of generalization groups, to state the problem of k-anonymization of a table according to the incidence matrix of its associated hypergraph. 
Thanks to this new representation, we proposed an original procedure, declined to five algorithms, allowing to build a k-anonymous table by partitioning the equivalence classes of a k’-anonymous table with k′ >= k. Experiments carried out on two public tables have shown that the proposed algorithms outperfom the k-anonymization algorithm used previously in terms of information preservation
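The metric-guided equivalence-class merging that the abstract describes can be illustrated with a minimal greedy sketch. This is not one of the thesis's five algorithms: it assumes purely numeric quasi-identifiers, uses total range width as a crude information-loss metric, and generalizes each class to per-attribute [min, max] intervals.

```python
def k_anonymize(records, k):
    """Greedy k-anonymisation sketch: repeatedly merge the
    equivalence class that is still too small into the class
    whose union with it loses the least information."""
    assert len(records) >= k, "need at least k records"
    # Start with one equivalence class per record.
    classes = [[r] for r in records]

    def loss(cls):
        # Information loss of a class: sum over attributes of the
        # value range, weighted by the number of records affected.
        total = 0.0
        for i in range(len(cls[0])):
            vals = [r[i] for r in cls]
            total += max(vals) - min(vals)
        return total * len(cls)

    # Merge until every class holds at least k records.
    while any(len(c) < k for c in classes):
        small = min((c for c in classes if len(c) < k), key=len)
        classes.remove(small)
        # Pick the partner class with the smallest marginal loss.
        best = min(classes, key=lambda c: loss(c + small) - loss(c) - loss(small))
        classes.remove(best)
        classes.append(best + small)

    # Publish each record generalised to its class's ranges.
    out = []
    for cls in classes:
        ranges = tuple((min(r[i] for r in cls), max(r[i] for r in cls))
                       for i in range(len(cls[0])))
        out.extend([ranges] * len(cls))
    return out
```

Each merge minimizes the marginal information loss, mirroring the metric-guided mergers discussed above; real implementations additionally handle categorical generalization hierarchies, suppression, and record order.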
Verster, Cornelis Thomas. "On supporting K-anonymisation and L-diversity of crime databases with genetic algorithms in a resource constrained environment". Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/20016.
Sondeck, Louis-Philippe. "Privacy and utility assessment within statistical data bases". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0023/document.
Personal data promise relevant improvements in almost every sector of the economy, thanks to all the knowledge that can be extracted from them. As evidence, some of the biggest companies in the world, Google, Amazon, Facebook, and Apple (GAFA), rely on this resource to provide their services. However, although personal data can be very useful for the improvement and development of services, they can also, intentionally or not, harm data respondents' privacy. Indeed, many studies have shown how data that were meant to protect respondents' personal information were ultimately used to leak private information. It therefore becomes necessary to provide methods that protect respondents' privacy while ensuring the utility of the data for services. To this end, Europe has established a new regulation, the General Data Protection Regulation (EU, 2016), which aims to protect European citizens' personal data. However, the regulation targets only one side of the problem: it focuses on the privacy of citizens, while the real goal is the best trade-off between privacy and utility. Indeed, privacy and utility are usually inversely proportional: the greater the privacy, the lower the data utility. One of the main approaches for addressing this trade-off is data anonymization. In the literature, anonymization refers either to anonymization mechanisms or to anonymization metrics. While mechanisms are useful for anonymizing data, metrics are necessary to validate whether the best trade-off has been reached. However, existing metrics have several flaws, including a lack of accuracy and complexity of implementation. Moreover, existing metrics are intended to assess either privacy or utility, which adds difficulty when assessing the trade-off between the two. In this thesis, we propose a novel approach for assessing both utility and privacy, called the Discrimination Rate (DR).
The DR is an information-theoretic approach that provides practical and fine-grained measurements. It measures the capability of attributes to refine a set of respondents, with measurements scaled between 0 and 1, the best refinement leading to single respondents. For example, an identifier has a DR equal to 1, since it completely refines a set of respondents. We are therefore able to provide fine-grained assessments and comparisons of anonymization mechanisms (whether different instantiations of the same mechanism or different mechanisms) in terms of utility and privacy. Moreover, thanks to the DR, we provide formal definitions of identifiers (Personally Identifying Information), which have been recognized as one of the main concerns of privacy regulations. The DR can therefore be used both by companies and by regulators to tackle personal data protection issues.
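One plausible way to make such a 0-to-1 refinement score concrete is an entropy-based sketch: normalizing an attribute's Shannon entropy by log2(N) yields a score where an identifier (all values distinct) reaches 1 and a constant attribute scores 0. This is an illustrative reading of the abstract's description, not the exact definition given in the thesis.

```python
import math
from collections import Counter


def discrimination_rate(values):
    """Illustrative refinement measure in the spirit of the
    Discrimination Rate: the Shannon entropy of an attribute's
    values, normalised by log2(N), so that a perfect identifier
    scores 1 and a constant attribute scores 0.
    NOTE: a plausible sketch, not the thesis's exact definition."""
    n = len(values)
    if n <= 1:
        return 1.0  # a single respondent is already fully refined
    counts = Counter(values)
    # Shannon entropy of the empirical value distribution.
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # log2(n) is the entropy of a perfect identifier over n respondents.
    return entropy / math.log2(n)
```

Under this reading, the score can also be computed for combinations of quasi-identifier values (e.g. tuples of age and ZIP code) to gauge how close a column set comes to singling out respondents.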
Book chapters on the topic "K-Anonymisation"
Loukides, Grigorios, Achilles Tziatzios, and Jianhua Shao. "Towards Preference-Constrained k-Anonymisation". In Database Systems for Advanced Applications, 231–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04205-8_20.
"Graph Modification Approaches". In Security, Privacy, and Anonymization in Social Networks, 86–115. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5158-4.ch005.
Conference papers on the topic "K-Anonymisation"
Loukides, Grigorios, and Jianhua Shao. "Capturing data usefulness and privacy protection in K-anonymisation". In the 2007 ACM symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1244002.1244091.
Loukides, Grigorios, and Jianhua Shao. "Greedy Clustering with Sample-Based Heuristics for K-Anonymisation". In The First International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007). IEEE, 2007. http://dx.doi.org/10.1109/isdpe.2007.102.
Loukides, Grigorios, and Jianhua Shao. "Towards Balancing Data Usefulness and Privacy Protection in K-Anonymisation". In The Sixth IEEE International Conference on Computer and Information Technology (CIT'06). IEEE, 2006. http://dx.doi.org/10.1109/cit.2006.184.
Loukides, Grigorios, and Jianhua Shao. "Data utility and privacy protection trade-off in k-anonymisation". In the 2008 international workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1379287.1379296.
Tripathy, B. K., and Anirban Mitra. "An algorithm to achieve k-anonymity and l-diversity anonymisation in social networks". In 2012 Fourth International Conference on Computational Aspects of Social Networks (CASoN). IEEE, 2012. http://dx.doi.org/10.1109/cason.2012.6412390.
de Reus, Pepijn, Ana Oprescu, and Koen van Elsen. "Energy Cost and Machine Learning Accuracy Impact of k-Anonymisation and Synthetic Data Techniques". In 2023 International Conference on ICT for Sustainability (ICT4S). IEEE, 2023. http://dx.doi.org/10.1109/ict4s58814.2023.00015.