Ready-made bibliography on "K-Anonymisation"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on "K-Anonymisation".
An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever these are available in the source metadata.
Journal articles on "K-Anonymisation"
Loukides, Grigorios, and Jian-Hua Shao. "An Efficient Clustering Algorithm for k-Anonymisation". Journal of Computer Science and Technology 23, no. 2 (March 2008): 188–202. http://dx.doi.org/10.1007/s11390-008-9121-3.
Natwichai, Juggapong, Xue Li and Asanee Kawtrkul. "Incremental processing and indexing for (k, e)-anonymisation". International Journal of Information and Computer Security 5, no. 3 (2013): 151. http://dx.doi.org/10.1504/ijics.2013.055836.
Stark, Konrad, Johann Eder and Kurt Zatloukal. "Achieving k-anonymity in DataMarts used for gene expressions exploitation". Journal of Integrative Bioinformatics 4, no. 1 (March 1, 2007): 132–44. http://dx.doi.org/10.1515/jib-2007-58.
de Haro-Olmo, Francisco José, Ángel Jesús Varela-Vaca and José Antonio Álvarez-Bermejo. "Blockchain from the Perspective of Privacy and Anonymisation: A Systematic Literature Review". Sensors 20, no. 24 (December 14, 2020): 7171. http://dx.doi.org/10.3390/s20247171.
Zhang, Yuliang, Tinghuai Ma, Jie Cao and Meili Tang. "K-anonymisation of social network by vertex and edge modification". International Journal of Embedded Systems 8, no. 2/3 (2016): 206. http://dx.doi.org/10.1504/ijes.2016.076114.
Ganabathi, G. Chitra, and P. Uma Maheswari. "Efficient clustering technique for k-anonymisation with aid of optimal KFCM". International Journal of Business Intelligence and Data Mining 15, no. 4 (2019): 430. http://dx.doi.org/10.1504/ijbidm.2019.102809.
Singh, Amardeep, Monika Singh, Divya Bansal and Sanjeev Sofat. "Optimised K-anonymisation technique to deal with mutual friends and degree attacks". International Journal of Information and Computer Security 14, no. 3/4 (2021): 281. http://dx.doi.org/10.1504/ijics.2021.114706.
Sofat, Sanjeev, Divya Bansal, Monika Singh and Amardeep Singh. "Optimised K-anonymisation technique to deal with mutual friends and degree attacks". International Journal of Information and Computer Security 14, no. 3/4 (2021): 281. http://dx.doi.org/10.1504/ijics.2021.10037248.
Yaji, Sharath, and B. Neelima. "Parallel computing for preserving privacy using k-anonymisation algorithms from big data". International Journal of Big Data Intelligence 5, no. 3 (2018): 191. http://dx.doi.org/10.1504/ijbdi.2018.092659.
Yaji, Sharath, and B. Neelima. "Parallel computing for preserving privacy using k-anonymisation algorithms from big data". International Journal of Big Data Intelligence 5, no. 3 (2018): 191. http://dx.doi.org/10.1504/ijbdi.2018.10008733.
Doctoral dissertations on "K-Anonymisation"
Mauger, Clémence. "Optimisation de l'utilité des données lors d'un processus de k-anonymisation". Electronic Thesis or Diss., Amiens, 2021. http://www.theses.fr/2021AMIE0076.
Pełny tekst źródłaSo that providing privacy guarantees to anonymized databases, anonymization models have emerged few decades ago. Among them, you can find k-anonymity, l-diversity, t-proximity or differential confidentiality. In this thesis, we mainly focused on the k-anonymity model through an in-depth analysis of the ways to produce databases that meet these confidentiality criteria while optimizing data utility. From a table, you can consider the set of its k-anonymous versions, which can be of exponential cardinality according to k. In a vacuum, these k-anonymous versions can be scored thanks to the amount of data modification that is correlated to the data utility. Thus, this work proposes a study of how to optimize the data utility during the process of k-anonymizing a database.First, we studied information loss metrics to estimate the amount of information lost in a table during a k-anonymization process. The metrics were used within a k-anonymization algorithm to guide equivalence class mergers leading to the production of a k-anonymous table. We tried to identify from this study characteristics in the definitions of information loss metrics allowing the production of good quality k-anonymous tables with regard to several criteria.Second, we were interested in the distribution of sensitive data into k-anonymous tables by using l-diversity and t- proximity models. More specifically, we proposed optimization strategies combining information loss metrics, l-diversity and t-proximity to be used during a k-anonymization process. The aim was then to preserv good levels of l-diversity and t-proximity of the k-anonymous tables produced, and this without sacrificing the data utility.Third, we tackled the question of the formulation of the problem of k-anonymization of a table. We relied on the original notion of generalization groups, to state the problem of k-anonymization of a table according to the incidence matrix of its associated hypergraph. 
Thanks to this new representation, we proposed an original procedure, declined to five algorithms, allowing to build a k-anonymous table by partitioning the equivalence classes of a k’-anonymous table with k′ >= k. Experiments carried out on two public tables have shown that the proposed algorithms outperfom the k-anonymization algorithm used previously in terms of information preservation
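The abstract above describes producing k-anonymous tables by generalizing quasi-identifiers into equivalence classes of size at least k. A minimal sketch of these two notions (a hypothetical illustration with made-up records, not the thesis's actual algorithm or data) might look like this:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs at least k times,
    i.e. every equivalence class has size >= k."""
    counts = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(c >= k for c in counts.values())

def generalize_zip(rows, digits):
    """Generalize the 'zip' attribute by keeping only its first `digits` characters,
    masking the rest -- one simple form of data modification (information loss)."""
    return [dict(row, zip=row["zip"][:digits] + "*" * (5 - digits)) for row in rows]

records = [
    {"zip": "47677", "age": 29, "disease": "flu"},
    {"zip": "47602", "age": 22, "disease": "flu"},
    {"zip": "47678", "age": 27, "disease": "cancer"},
    {"zip": "47905", "age": 43, "disease": "cancer"},
    {"zip": "47909", "age": 52, "disease": "flu"},
    {"zip": "47906", "age": 47, "disease": "flu"},
]
qids = ["zip"]
print(is_k_anonymous(records, qids, 3))                       # False: raw ZIPs are unique
print(is_k_anonymous(generalize_zip(records, 3), qids, 3))    # True: 476** and 479** classes of 3
```

The trade-off studied in the thesis appears even here: masking more digits enlarges the equivalence classes (more privacy) while destroying more information (less utility).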
Verster, Cornelis Thomas. "On supporting K-anonymisation and L-diversity of crime databases with genetic algorithms in a resource constrained environment". Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/20016.
Sondeck, Louis-Philippe. "Privacy and utility assessment within statistical data bases". Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0023/document.
Personal data promise relevant improvements in almost every sector of the economy, thanks to the knowledge that can be extracted from them. As evidence, some of the biggest companies in the world, Google, Amazon, Facebook and Apple (GAFA), rely on this resource to provide their services. However, although personal data can be very useful for improving and developing services, they can also, intentionally or not, harm the data respondents' privacy. Indeed, many studies have shown how data that were meant to protect respondents' personal information were ultimately used to leak private information. It therefore becomes necessary to provide methods that protect respondents' privacy while ensuring the utility of data for services. To this end, Europe has adopted the General Data Protection Regulation (EU, 2016), which aims to protect European citizens' personal data. However, the regulation addresses only one side of the problem: it focuses on citizens' privacy, while the real goal is the best trade-off between privacy and utility. Indeed, privacy and utility are usually inversely proportional: the greater the privacy, the lower the data utility. One of the main approaches to this trade-off is data anonymization. In the literature, anonymization refers either to anonymization mechanisms or to anonymization metrics. While mechanisms are useful for anonymizing data, metrics are necessary to validate whether the best trade-off has been reached. However, existing metrics have several flaws, including a lack of accuracy and complexity of implementation; moreover, existing metrics assess either privacy or utility, which complicates assessing the trade-off between the two. In this thesis, we propose a novel approach for assessing both utility and privacy, called the Discrimination Rate (DR).
The DR is an information-theoretic approach that provides practical and fine-grained measurements. It measures the capability of attributes to refine a set of respondents, with measurements scaled between 0 and 1; the best refinement leads to single respondents. For example, an identifier has a DR equal to 1, as it completely refines a set of respondents. We are therefore able to provide fine-grained assessments and comparisons of anonymization mechanisms (whether different instantiations of the same mechanism or different anonymization mechanisms) in terms of utility and privacy. Moreover, thanks to the DR, we provide formal definitions of identifiers (personally identifying information), which has been recognized as one of the main concerns of privacy regulations. The DR can therefore be used by both companies and regulators to tackle personal data protection issues.
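The thesis's exact Discrimination Rate definition is not reproduced here, but the properties described above (information-theoretic, scaled 0 to 1, identifiers score 1) can be sketched with a hypothetical stand-in: normalized Shannon entropy of an attribute over the set of respondents. This is an illustration of the idea of a refinement score, not the DR as formally defined by Sondeck:

```python
import math
from collections import Counter

def refinement_score(values):
    """Normalized Shannon entropy of an attribute's values over n respondents, in [0, 1].
    1.0: the attribute puts every respondent in their own group (behaves like an identifier).
    0.0: the attribute does not refine the set at all (a constant attribute)."""
    n = len(values)
    if n <= 1:
        return 0.0
    counts = Counter(values)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(n)  # divide by max possible entropy

ssn  = ["A1", "B2", "C3", "D4"]   # unique per respondent -> fully refining
sex  = ["F", "M", "F", "M"]       # splits respondents into two groups
unit = ["EU", "EU", "EU", "EU"]   # constant -> no refinement
print(refinement_score(ssn))   # 1.0
print(refinement_score(sex))   # 0.5
print(refinement_score(unit))  # 0.0
```

Such a score can be computed per attribute before and after anonymization, giving a single 0-to-1 scale on which both the privacy risk of quasi-identifiers and the utility retained by a mechanism can be compared.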
Sondeck, Louis-Philippe. "Privacy and utility assessment within statistical data bases". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0023.
Book chapters on "K-Anonymisation"
Loukides, Grigorios, Achilles Tziatzios and Jianhua Shao. "Towards Preference-Constrained k-Anonymisation". In Database Systems for Advanced Applications, 231–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04205-8_20.
"Graph Modification Approaches". In Security, Privacy, and Anonymization in Social Networks, 86–115. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5158-4.ch005.
Conference abstracts on "K-Anonymisation"
Loukides, Grigorios, and Jianhua Shao. "Capturing data usefulness and privacy protection in K-anonymisation". In the 2007 ACM symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1244002.1244091.
Loukides, Grigorios, and Jianhua Shao. "Greedy Clustering with Sample-Based Heuristics for K-Anonymisation". In The First International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007). IEEE, 2007. http://dx.doi.org/10.1109/isdpe.2007.102.
Loukides, Grigorios, and Jianhua Shao. "Towards Balancing Data Usefulness and Privacy Protection in K-Anonymisation". In The Sixth IEEE International Conference on Computer and Information Technology (CIT'06). IEEE, 2006. http://dx.doi.org/10.1109/cit.2006.184.
Loukides, Grigorios, and Jianhua Shao. "Data utility and privacy protection trade-off in k-anonymisation". In the 2008 international workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1379287.1379296.
Tripathy, B. K., and Anirban Mitra. "An algorithm to achieve k-anonymity and l-diversity anonymisation in social networks". In 2012 Fourth International Conference on Computational Aspects of Social Networks (CASoN). IEEE, 2012. http://dx.doi.org/10.1109/cason.2012.6412390.
de Reus, Pepijn, Ana Oprescu and Koen van Elsen. "Energy Cost and Machine Learning Accuracy Impact of k-Anonymisation and Synthetic Data Techniques". In 2023 International Conference on ICT for Sustainability (ICT4S). IEEE, 2023. http://dx.doi.org/10.1109/ict4s58814.2023.00015.