To view the other types of publications on this topic, follow the link: Text privacy.

Journal articles on the topic "Text privacy"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 journal articles for your research on the topic "Text privacy".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online abstract, where these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kanyar, Mohammad Naeem. "Working Towards Differential Privacy for Sensitive Text". International Journal of Engineering and Computer Science 12, no. 04 (April 2, 2023): 25691–99. http://dx.doi.org/10.18535/ijecs/v12i04.4727.

Abstract:
The idea behind differential privacy is that preserving privacy typically involves adding noise to a data set to make it harder to identify the records that correspond to specific individuals. Adding noise usually reduces the accuracy of data analysis, and differential privacy provides a technique for evaluating this accuracy-privacy trade-off. Injecting random noise makes it harder to distinguish between analyses performed on slightly different data sets, but it can also reduce the usefulness of the analysis: if enough noise is added to a very small data set, analyses can become practically useless. The trade-off between utility and privacy, however, becomes more manageable as the size of the data set increases. Along these lines, this paper presents the fundamental notions of sensitivity and privacy budget in differential privacy, the noise mechanisms used in differential privacy, its composition properties, the ways in which it can be achieved, and the developments in this field to date.
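The sensitivity and privacy-budget notions mentioned in this abstract can be illustrated with the Laplace mechanism, the standard noise mechanism of differential privacy. The following Python sketch is illustrative only and is not taken from the paper: a counting query has sensitivity 1, so Laplace noise with scale 1/ε yields ε-differential privacy.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling from Laplace(0, scale).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the count by at most 1, so noise scale 1/epsilon suffices
    # for epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
ages = [23, 45, 67, 34, 51]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1000.0)  # large budget, little noise
```

With a small ε (strong privacy) the noise scale 1/ε grows, which is exactly the accuracy-privacy trade-off the abstract describes.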
2

Shree, A. N. Ramya, and Kiran P. "Privacy Preserving Text Document Summarization". Journal of Engineering Research and Sciences 1, no. 7 (July 2022): 7–14. http://dx.doi.org/10.55708/js0107002.
3

Bihani, Geetanjali. "Interpretable Privacy Preservation of Text Representations Using Vector Steganography". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12872–73. http://dx.doi.org/10.1609/aaai.v36i11.21573.

Abstract:
Contextual word representations generated by language models learn spurious associations present in the training corpora. Adversaries can exploit these associations to reverse-engineer the private attributes of entities mentioned in the training corpora. These findings have led to efforts towards minimizing the privacy risks of language models. However, existing approaches lack interpretability, compromise on data utility and fail to provide privacy guarantees. Thus, the goal of my doctoral research is to develop interpretable approaches towards privacy preservation of text representations that maximize data utility retention and guarantee privacy. To this end, I aim to study and develop methods to incorporate steganographic modifications within the vector geometry to obfuscate underlying spurious associations and retain the distributional semantic properties learnt during training.
4

Dopierała, Renata. "Społeczne wyobrażenia prywatności". Kultura i Społeczeństwo 50, no. 1-2 (March 30, 2006): 307–19. http://dx.doi.org/10.35757/kis.2006.50.1-2.14.

Abstract:
In this paper the author considers social representations of privacy based on empirical research (the Unfinished Sentences Test). The main goals of the text are to begin defining such terms as private life and private sphere, and to discuss the functions of privacy, how it is experienced, and its role in individual and social life. It also deals with threats to privacy in contemporary societies.
5

Liang, Zi, Pinghui Wang, Ruofei Zhang, Nuo Xu, Shuo Zhang, Lifeng Xing, Haitao Bai, and Ziyang Zhou. "MERGE: Fast Private Text Generation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19884–92. http://dx.doi.org/10.1609/aaai.v38i18.29964.

Abstract:
The drastic increase in language models' parameters has led to a new trend of deploying models in cloud servers, raising growing concerns about private inference for Transformer-based models. Existing two-party privacy-preserving techniques, however, only take into account natural language understanding (NLU) scenarios. Private inference in natural language generation (NLG), crucial for applications like translation and code completion, remains underexplored. In addition, previous privacy-preserving techniques suffer from convergence issues during model training and exhibit poor inference speed when used with NLG models due to the neglect of time-consuming operations in auto-regressive generations. To address these issues, we propose a fast private text generation framework for Transformer-based language models, namely MERGE. MERGE reuses the output hidden state as the word embedding to bypass the embedding computation and reorganize the linear operations in the Transformer module to accelerate the forward procedure. Extensive experiments show that MERGE achieves a 26.5x speedup to the vanilla encrypted model under the sequence length 512, and reduces 80% communication cost, with an up to 10x speedup to state-of-the-art approximated models.
6

Wunderlich, Dominik, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau, and Thorsten Strufe. "On the Privacy–Utility Trade-Off in Differentially Private Hierarchical Text Classification". Applied Sciences 12, no. 21 (November 4, 2022): 11177. http://dx.doi.org/10.3390/app122111177.

Abstract:
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful to perform this task, unfortunately, they can leak training data information to adversaries due to training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, enabling the models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guarantees, and it identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to empirically assess the information leakage of three widely used neural network architectures. We show that large differential privacy parameters already suffice to completely mitigate membership inference attacks, thus resulting only in a moderate decrease in model utility. More specifically, for large datasets with long texts, we observed Transformer-based models to achieve an overall favorable privacy–utility trade-off, while for smaller datasets with shorter texts, convolutional neural networks are preferable.
7

Pang, Hweehwa, Jialie Shen, and Ramayya Krishnan. "Privacy-preserving similarity-based text retrieval". ACM Transactions on Internet Technology 10, no. 1 (February 2010): 1–39. http://dx.doi.org/10.1145/1667067.1667071.
8

Zhu, You-wen, Liu-sheng Huang, Dong Li, and Wei Yang. "Privacy-preserving Text Information Hiding Detecting Algorithm". Journal of Electronics & Information Technology 33, no. 2 (March 4, 2011): 278–83. http://dx.doi.org/10.3724/sp.j.1146.2010.00375.
9

Tejaswini, G. "Cipher Text Policy Privacy Attribute-Based Security". International Journal of Reliable Information and Assurance 5, no. 1 (July 30, 2017): 15–20. http://dx.doi.org/10.21742/ijria.2017.5.1.03.
10

Xiong, Xingxing, Shubo Liu, Dan Li, Jun Wang, and Xiaoguang Niu. "Locally differentially private continuous location sharing with randomized response". International Journal of Distributed Sensor Networks 15, no. 8 (August 2019): 155014771987037. http://dx.doi.org/10.1177/1550147719870379.

Abstract:
With the growing popularity of fifth-generation-enabled Internet of Things devices with localization capabilities, as well as the fifth-generation mobile networks now being built, location privacy has been giving rise to more frequent and extensive privacy concerns. To continuously enjoy the services of location-based applications, one needs to share one's location information with the corresponding service providers. However, this continuously shared location information gives rise to significant privacy issues due to the temporal correlation between locations. To solve this, we consider applying practical local differential privacy to private continuous location sharing. First, we introduce a novel definition of [Formula: see text]-local differential privacy to capture the temporal correlations between locations. Second, we present a generalized randomized response mechanism that achieves [Formula: see text]-local differential privacy for location privacy preservation, derive its upper bound on error, and use it as the basic building block to design a unified private continuous location sharing framework with an untrusted server. Finally, we conduct experiments on the real-world Geolife dataset to evaluate our framework. The results show that generalized randomized response significantly outperforms the planar isotropic mechanism in terms of utility.
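The generalized randomized response used in this entry extends Warner's classic single-bit randomized response, which already satisfies local differential privacy. Below is a toy Python sketch of the classic mechanism and its debiasing step; it is illustrative only and is not the paper's generalized, location-aware mechanism.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    # Report the true bit with probability p, the flipped bit otherwise;
    # this satisfies epsilon-LDP with epsilon = ln(p / (1 - p)).
    return truth if random.random() < p else not truth

def estimate_proportion(reports, p: float = 0.75) -> float:
    # Debiasing: E[observed] = p*pi + (1-p)*(1-pi), hence
    # pi = (observed - (1 - p)) / (2p - 1).
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(1)
truths = [random.random() < 0.3 for _ in range(20_000)]  # true rate around 30%
reports = [randomized_response(t) for t in truths]       # what the untrusted server sees
estimate = estimate_proportion(reports)                  # recovered aggregate
```

No individual report is trustworthy on its own, yet the aggregate proportion is recovered accurately; this is the basic utility-privacy bargain of local differential privacy.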
11

Pang, HweeHwa, Xuhua Ding, and Xiaokui Xiao. "Embellishing text search queries to protect user privacy". Proceedings of the VLDB Endowment 3, no. 1-2 (September 2010): 598–607. http://dx.doi.org/10.14778/1920841.1920918.
12

Witten, Ian H., and John G. Cleary. "On the privacy afforded by adaptive text compression". Computers & Security 7, no. 4 (August 1988): 397–408. http://dx.doi.org/10.1016/0167-4048(88)90580-9.
13

Khatamova, Kamola. "MARKETING PRIVACY AND USING TEXT ON ONLINE ADVERTISING". International Journal of Word Art 1, no. 1 (January 10, 2019): 108–14. http://dx.doi.org/10.26739/2181-9297-2019-1-16.
14

Liu, Peng, Yan Bai, Lie Wang, and Xianxian Li. "Partial k-Anonymity for Privacy-Preserving Social Network Data Publishing". International Journal of Software Engineering and Knowledge Engineering 27, no. 01 (February 2017): 71–90. http://dx.doi.org/10.1142/s0218194017500048.

Abstract:
With the popularity of social networks, privacy issues around publishing social network data have gained intensive focus from academia. We analyzed the current privacy-preserving techniques for publishing social network data and defined a privacy-preserving model with a quantified privacy guarantee. Under our definitions, the existing privacy-preserving methods, k-anonymity and randomization, can be combined to protect data privacy. We also considered the privacy threat posed by label information and modified the k-anonymity technique for tabular data so that the published data resists attacks combining two types of background knowledge, structural and label knowledge. We devised a partial k-anonymity algorithm and implemented it in Python with open-source packages. We compared the algorithm with related k-anonymity and randomization techniques on three real-world datasets. The experimental results show that the partial k-anonymity algorithm preserves more data utility than the k-anonymity and randomization algorithms.
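The k-anonymity property discussed above can be checked mechanically: a table is k-anonymous when every combination of quasi-identifier values occurs at least k times. A minimal Python sketch with hypothetical generalized records follows; it illustrates the property only and is not the paper's partial k-anonymity algorithm.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k: int) -> bool:
    # Group records by their quasi-identifier tuple and require
    # every group to contain at least k records.
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

rows = [
    {"zip": "537**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "537**", "age": "20-29", "diagnosis": "cold"},
    {"zip": "537**", "age": "30-39", "diagnosis": "flu"},  # a group of size 1
]
```

Here `is_k_anonymous(rows, ["zip", "age"], 2)` fails because the third record is alone in its quasi-identifier group, while the coarser generalization using only `zip` satisfies k = 3.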
15

Fernández Barbudo, Carlos. "Privacidad (digital) = (Digital) Privacy". EUNOMÍA. Revista en Cultura de la Legalidad, no. 17 (September 27, 2019): 276. http://dx.doi.org/10.20318/eunomia.2019.5033.

Abstract:
The development of information technologies, and in particular the Internet, has led to the emergence of new social concerns about the impossibility of preserving privacy in the digital sphere. This contribution addresses, in historical perspective, the formation of a new socio-political concept of privacy that has replaced the previous one. To this end, the main elements that differentiate the two are presented, along with the fundamental sociotechnical transformations that have enabled this conceptual change. The text argues for the suitability of a political view of privacy and ends by presenting some recent proposals that advocate understanding privacy as a collective problem.
Keywords: public space, right to privacy, private life, public/private, cyberspace.
16

Duan, Huabin, Jie Yang, and Huanjun Yang. "A Blockchain-Based Privacy Protection Application for Logistics Big Data". Journal of Cases on Information Technology 24, no. 5 (February 21, 2022): 1–12. http://dx.doi.org/10.4018/jcit.295249.

Abstract:
Logistics business is generally managed through logistics orders in plain text, so there is a risk of disclosing customers' private information at every step of the business. To solve the problem of privacy protection in logistics big data systems, a new privacy protection scheme for logistics user data is proposed. First, an access rights management mechanism is designed by combining blockchain and anonymous authentication to control and manage users' access rights to private data. Then, privacy and confidentiality between different services are achieved by partitioning the data of the different services and storing them separately. Finally, the participants in the intra-chain private data are specified by embedding fields in the logistics information. The blockchain node receiving the transaction serves as a transit node to synchronize the intra-chain private data, improving privacy protection within the business. Experimental results show that the proposed method satisfies the privacy requirements while maintaining considerable performance.
17

Liu, Gan, Xiongtao Sun, Yiran Li, Hui Li, Shuchang Zhao, and Zhen Guo. "An Automatic Privacy-Aware Framework for Text Data in Online Social Network Based on a Multi-Deep Learning Model". International Journal of Intelligent Systems 2023 (November 8, 2023): 1–23. http://dx.doi.org/10.1155/2023/1727285.

Abstract:
With the increasing severity of user privacy leaks in online social networks (OSNs), existing privacy protection technologies have difficulty meeting the diverse privacy protection needs of users. Therefore, privacy-aware (PA) for the text data that users post on OSNs has become a current research focus. However, most existing PA algorithms for OSN users only provide the types of privacy disclosures rather than the specific locations of disclosures. Furthermore, although named entity recognition (NER) technology can extract specific locations of privacy text, it has poor recognition performance for nested and interest privacy. To address these issues, this paper proposes a PA framework based on the extraction of OSN privacy information content. The framework can automatically perceive the privacy information shared by users in OSNs and accurately locate which parts of the text are leaking sensitive information. Firstly, we combine the roformerBERT model, BI_LSTM model, and global_pointer algorithm to construct a direct privacy entity recognition (DPER) model for solving the specific privacy location recognition and entity nesting problems. Secondly, we use the roformerBERT model and UniLM framework to construct an interest privacy inference (IPI) model for interest recognition and to generate interpretable text that supports this interest. Finally, we constructed a dataset of 13,000 privacy-containing texts for experimentation. Experimental results show that the overall accuracy of the DPER model can reach 91.80%, while that of the IPI model can reach 98.3%. Simultaneously, we compare the proposed model with recent methods. The analysis of the results indicates that the proposed model exhibits better performance than previous methods.
18

Da Silva Perez, Natália. "Privacy and Social Spaces". TSEG - The Low Countries Journal of Social and Economic History 18, no. 3 (November 29, 2021): 5–16. http://dx.doi.org/10.52024/tseg.11040.

Abstract:
In this introductory text to the special issue Regulating Access: Privacy and the Private in Early Modern Dutch Contexts, Natália da Silva Perez argues that privacy can be a productive analytical lens to examine the social history of the Dutch Republic. She starts by providing an overview of theoretical definitions of privacy and of the ‘private versus public’ dichotomy, highlighting their implications for the study of society. Next, she discusses the modern view of privacy as a legally protected right, explaining that we must adjust expectations when applying the concept to historical examination: in the early modern period, privacy was not yet fully incorporated within a legal framework, and yet, it was a widespread need across different echelons of society. She provides a historical overview of this widespread need for privacy through instances where people attempted to regulate access to their material and immaterial resources. Finally, she describes how the four articles in this special issue contribute to our understanding of the role of privacy in early modern Dutch life.
19

Ataei, Mehrnaz, Auriol Degbelo, Christian Kray, and Vitor Santos. "Complying with Privacy Legislation: From Legal Text to Implementation of Privacy-Aware Location-Based Services". ISPRS International Journal of Geo-Information 7, no. 11 (November 13, 2018): 442. http://dx.doi.org/10.3390/ijgi7110442.

Abstract:
An individual’s location data is very sensitive geoinformation. While its disclosure is necessary, e.g., to provide location-based services (LBS), it also facilitates deep insights into the lives of LBS users as well as various attacks on these users. Location privacy threats can be mitigated through privacy regulations such as the General Data Protection Regulation (GDPR), which was introduced recently and harmonises data privacy laws across Europe. While the GDPR is meant to protect users’ privacy, the main problem is that it does not provide explicit guidelines for designers and developers about how to build systems that comply with it. In order to bridge this gap, we systematically analysed the legal text, carried out expert interviews, and ran a nine-week-long take-home study with four developers. We particularly focused on user-facing issues, as these have received little attention compared to technical issues. Our main contributions are a list of aspects from the legal text of the GDPR that can be tackled at the user interface level and a set of guidelines on how to realise this. Our results can help service providers, designers and developers of applications dealing with location information from human users to comply with the GDPR.
20

Mosier, Gregory C. "Text messages: privacy in employee communications in the USA". International Journal of Private Law 2, no. 3 (2009): 260. http://dx.doi.org/10.1504/ijpl.2009.024142.
21

Bracamonte, Vanessa, Sebastian Pape, and Sascha Loebner. "'All apps do this': Comparing Privacy Concerns Towards Privacy Tools and Non-Privacy Tools for Social Media Content". Proceedings on Privacy Enhancing Technologies 2022, no. 3 (July 2022): 57–78. http://dx.doi.org/10.56553/popets-2022-0062.

Abstract:
Users report that they have regretted accidentally sharing personal information on social media. There have been proposals to help protect the privacy of these users by providing tools which analyze text or images, detect personal information or privacy disclosure, alert the user of a privacy risk, and transform the content. However, these proposals rely on having access to users' data, and users have reported privacy concerns about the tools themselves. In this study, we investigate whether these privacy concerns are unique to privacy tools or whether they are comparable to privacy concerns about non-privacy tools that also process personal information. We conduct a user experiment to compare the level of privacy concern towards privacy tools and non-privacy tools for text and image content, qualitatively analyze the reasons for those privacy concerns, and evaluate which assurances are perceived to reduce that concern. The results show that privacy tools are at a disadvantage: participants have a higher level of privacy concern about being surveilled by the privacy tools, and the same level of concern about intrusion and secondary use of their personal information compared to non-privacy tools. In addition, the reasons for these concerns and the assurances perceived to reduce privacy concern are also similar. We discuss what these results mean for the development of privacy tools that process user content.
22

Libbi, Claudia Alessandra, Jan Trienes, Dolf Trieschnigg, and Christin Seifert. "Generating Synthetic Training Data for Supervised De-Identification of Electronic Health Records". Future Internet 13, no. 5 (May 20, 2021): 136. http://dx.doi.org/10.3390/fi13050136.

Abstract:
A major hurdle in the development of natural language processing (NLP) methods for Electronic Health Records (EHRs) is the lack of large, annotated datasets. Privacy concerns prevent the distribution of EHRs, and the annotation of data is known to be costly and cumbersome. Synthetic data presents a promising solution to the privacy concern, if synthetic data has comparable utility to real data and if it preserves the privacy of patients. However, the generation of synthetic text alone is not useful for NLP because of the lack of annotations. In this work, we propose the use of neural language models (LSTM and GPT-2) for generating artificial EHR text jointly with annotations for named-entity recognition. Our experiments show that artificial documents can be used to train a supervised named-entity recognition model for de-identification, which outperforms a state-of-the-art rule-based baseline. Moreover, we show that combining real data with synthetic data improves the recall of the method, without manual annotation effort. We conduct a user study to gain insights on the privacy of artificial text. We highlight privacy risks associated with language models to inform future research on privacy-preserving automated text generation and metrics for evaluating privacy-preservation during text generation.
23

D'Acunto, David, Serena Volo, and Raffaele Filieri. "'Most Americans like their privacy.' Exploring privacy concerns through US guests' reviews". International Journal of Contemporary Hospitality Management 33, no. 8 (July 26, 2021): 2773–98. http://dx.doi.org/10.1108/ijchm-11-2020-1329.

Abstract:
Purpose: This study aims to explore US hotel guests' privacy concerns with a twofold aim: to investigate the privacy categories, themes and attributes most commonly discussed by guests in their reviews, and to examine the influence of cultural proximity on privacy concerns.
Design/methodology/approach: This study combined automated text analytics with content analysis. The database consisted of 68,000 hotel reviews written by US guests lodged in different types of hotels in five European cities. Linguistic Inquiry Word Count, Leximancer and SPSS software were used for data analysis. Automated text analytics and a validated privacy dictionary were used to investigate the reviews by exploring the categories, themes and attributes of privacy concerns. Content analysis was used to analyze the narratives and select representative snippets.
Findings: The findings revealed various categories, themes and concepts related to privacy concerns. The two most commonly discussed categories were privacy restriction and outcome state. The main themes discussed in association with privacy were "room," "hotel," and "breakfast," and several concepts within each of these themes were identified. Furthermore, US guests showed the lowest levels of privacy concerns when staying at American hotel chains as opposed to non-American chains or independent hotels, highlighting the role of cultural proximity in privacy concerns.
Practical implications: Hotel managers can benefit from the results by improving their understanding of the hotel and service attributes most associated with privacy concerns. Specific suggestions are provided to hoteliers on how to increase guests' privacy and how to manage issues related to cultural distance with guests.
Originality/value: This study contributes to the hospitality literature by investigating a neglected issue: on-site hotel guests' privacy concerns. Using an unobtrusive method of data collection and text analytics, it offers valuable insights into the categories of privacy, the most recurrent themes in hotel guests' reviews, and the potential relationship between cultural proximity and privacy concerns.
24

Ait-Mlouk, Addi, Sadi A. Alawadi, Salman Toor, and Andreas Hellander. "FedQAS: Privacy-Aware Machine Reading Comprehension with Federated Learning". Applied Sciences 12, no. 6 (March 18, 2022): 3130. http://dx.doi.org/10.3390/app12063130.

Abstract:
Machine reading comprehension (MRC) of text data is a challenging task in Natural Language Processing (NLP), with a lot of ongoing research fueled by the release of the Stanford Question Answering Dataset (SQuAD) and Conversational Question Answering (CoQA). It is considered to be an effort to teach computers how to “understand” a text, and then to be able to answer questions about it using deep learning. However, until now, large-scale training on private text data and knowledge sharing has been missing for this NLP task. Hence, we present FedQAS, a privacy-preserving machine reading system capable of leveraging large-scale private data without the need to pool those datasets in a central location. The proposed approach combines transformer models and federated learning technologies. The system is developed using the FEDn framework and deployed as a proof-of-concept alliance initiative. FedQAS is flexible, language-agnostic, and allows intuitive participation and execution of local model training. In addition, we present the architecture and implementation of the system, as well as provide a reference evaluation based on the SQuAD dataset, to showcase how it overcomes data privacy issues and enables knowledge sharing between alliance members in a Federated learning setting.
25

Ramanath, Rohan, Florian Schaub, Shomir Wilson, Fei Liu, Norman Sadeh, and Noah Smith. "Identifying Relevant Text Fragments to Help Crowdsource Privacy Policy Annotations". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (September 5, 2014): 54–55. http://dx.doi.org/10.1609/hcomp.v2i1.13179.

Abstract:
In today's age of big data, websites are collecting an increasingly wide variety of information about their users. The texts of websites' privacy policies, which serve as legal agreements between service providers and users, are often long and difficult to understand. Automated analysis of those texts has the potential to help users better understand the implications of agreeing to such policies. In this work, we present a technique that combines machine learning and crowdsourcing to semi-automatically extract key aspects of website privacy policies that is scalable, fast, and cost-effective.
26

Shenigaram, Vidhya, Susheel Kumar Thakur, Choul Praveen Kumar, and Laxman Maddikunta. "SECURE DATA GROUP SHARING AND CONDITIONAL DISSEMINATION WITH MULTI-OWNER IN CLOUD COMPUTING". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 3 (December 17, 2018): 1312–18. http://dx.doi.org/10.61841/turcomat.v9i3.14475.

Abstract:
With the rapid development of cloud services, huge volume of data is shared via cloud computing. Although cryptographic techniques have been utilized to provide data confidentiality in cloud computing, current mechanisms cannot enforce privacy concerns over cipher text associated with multiple owners, which makes co-owners unable to appropriately control whether data disseminators can actually disseminate their data. In this paper, we propose a secure data group sharing and conditional dissemination scheme with multi-owner in cloud computing, in which data owner can share private data with a group of users via the cloud in a secure way, and data disseminator can disseminate the data to a new group of users if the attributes satisfy the access policies in the cipher text. We further present a multiparty access control mechanism over the disseminated cipher text, in which the data co-owners can append new access policies to the cipher text due to their privacy preferences. Moreover, three policy aggregation strategies, including full permit, owner priority and majority permit, are provided to solve the privacy conflicts problem caused by different access policies. The security analysis and experimental results show our scheme is practical and efficient for secure data sharing with multi-owner in cloud computing.
APA, Harvard, Vancouver, ISO and other citation styles
27

Xu, Zifeng, Fucai Zhou, Yuxi Li, Jian Xu and Qiang Wang. "Privacy-Preserving Subgraph Matching Protocol for Two Parties". International Journal of Foundations of Computer Science 30, No. 04 (June 2019): 571–88. http://dx.doi.org/10.1142/s0129054119400136.

Full text of the source
Annotation:
The graph data structure has been widely used across many application areas, such as web data, social networks, and cheminformatics. The main benefit of storing data as graphs is that there exists a rich set of graph algorithms and operations that can be used to solve various computing problems, including pattern matching, data mining, and image processing. Among these graph algorithms, the subgraph isomorphism problem is one of the most fundamental and can be utilized by many higher-level applications. The subgraph isomorphism problem is defined as: given two graphs [Formula: see text] and [Formula: see text], does [Formula: see text] contain a subgraph that is isomorphic to [Formula: see text]? In this paper, we consider a special case of the subgraph isomorphism problem called the subgraph matching problem, which tests whether [Formula: see text] is a subgraph of [Formula: see text]. We propose a protocol that solves the subgraph matching problem in a privacy-preserving manner. The protocol allows two parties to jointly compute whether one graph is a subgraph of the other, while protecting the private information about the input graphs. The protocol is secure in the semi-honest setting, where each party performs the protocol faithfully.
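The plain-text decision problem that such a protocol computes securely can be sketched by brute force on small undirected graphs. This omits all of the privacy machinery and is only the functionality the two parties jointly evaluate; the function name and edge representation are illustrative.

```python
# Brute-force subgraph matching (plain text, no privacy): does g2 appear
# as a subgraph of g1 under some injective mapping of vertices?
from itertools import permutations

def is_subgraph(g1_edges, g1_nodes, g2_edges, g2_nodes):
    g1 = {frozenset(e) for e in g1_edges}
    for mapping in permutations(g1_nodes, len(g2_nodes)):
        phi = dict(zip(g2_nodes, mapping))  # candidate vertex mapping
        if all(frozenset((phi[u], phi[v])) in g1 for u, v in g2_edges):
            return True
    return False
```

The exhaustive search is exponential in general, which is one reason a two-party secure protocol for this problem is non-trivial to make efficient.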
APA, Harvard, Vancouver, ISO and other citation styles
28

Wei, Weiming, Chunming Tang and Yucheng Chen. "Efficient Privacy-Preserving K-Means Clustering from Secret-Sharing-Based Secure Three-Party Computation". Entropy 24, No. 8 (18.08.2022): 1145. http://dx.doi.org/10.3390/e24081145.

Full text of the source
Annotation:
Privacy-preserving machine learning has become an important field of study owing to privacy policies. However, an efficiency gap between plain-text algorithms and their privacy-preserving versions still exists. In this paper, we focus on designing a novel secret-sharing-based K-means clustering algorithm. In particular, we present an efficient privacy-preserving K-means clustering algorithm based on replicated secret sharing with an honest majority in the semi-honest model. More concretely, the clustering task is outsourced to three semi-honest computing servers. Theoretically, the proposed privacy-preserving scheme can be proven to provide full data privacy. Furthermore, the experimental results demonstrate that the privacy-preserving version reaches the same accuracy as the plain-text one. Compared to the existing privacy-preserving scheme, our protocol achieves about 16.5×–25.2× faster computation and 63.8×–68.0× lower communication. Consequently, the proposed privacy-preserving scheme is well suited for secret-sharing-based secure outsourced computation.
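The building block named above, secret sharing over three servers, can be illustrated with a toy additive 3-way sharing. This is a minimal sketch under illustrative parameters, not the paper's full replicated (2-out-of-3) protocol, which additionally distributes pairs of shares to servers and supports secure multiplication.

```python
# Toy additive 3-way secret sharing mod a prime P (illustrative modulus).
import random

P = 2**61 - 1  # prime modulus; illustrative choice

def share(x):
    """Split x into three additive shares. In the replicated (2-out-of-3)
    scheme each of the three servers would store two of these shares."""
    s1, s2 = random.randrange(P), random.randrange(P)
    return s1, s2, (x - s1 - s2) % P

def reconstruct(s1, s2, s3):
    return (s1 + s2 + s3) % P

# Linear operations need no interaction: adding shares componentwise
# yields shares of the sum, which is why the distance computations
# inside K-means are comparatively cheap in this setting.
a, b = share(20), share(22)
c = tuple((x + y) % P for x, y in zip(a, b))  # shares of 20 + 22
```

Any single share (or pair drawn by one server) is uniformly random and reveals nothing about `x`; only all three together reconstruct it.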
APA, Harvard, Vancouver, ISO and other citation styles
29

Boldt, Martin, and Kaavya Rekanar. "Analysis and Text Classification of Privacy Policies From Rogue and Top-100 Fortune Global Companies". International Journal of Information Security and Privacy 13, No. 2 (April 2019): 47–66. http://dx.doi.org/10.4018/ijisp.2019040104.

Full text of the source
Annotation:
In the present article, the authors investigate to what extent supervised binary classification can be used to distinguish between legitimate and rogue privacy policies posted on web pages. 15 classification algorithms are evaluated using a data set that consists of 100 privacy policies from legitimate websites (belonging to companies that top the Fortune Global 500 list) as well as 67 policies from rogue websites. A manual analysis of all policy content was performed and clear statistical differences in terms of both length and adherence to seven general privacy principles are found. Privacy policies from legitimate companies have a 98% adherence to the seven privacy principles, which is significantly higher than the 45% associated with rogue companies. Out of the 15 evaluated classification algorithms, Naïve Bayes Multinomial is the most suitable candidate to solve the problem at hand. Its models show the best performance, with an AUC measure of 0.90 (0.08), which outperforms most of the other candidates in the statistical tests used.
APA, Harvard, Vancouver, ISO and other citation styles
30

Ning, Yichen, Na Wang, Aodi Liu and Xuehui du. "Deep Learning based Privacy Information Identification approach for Unstructured Text". Journal of Physics: Conference Series 1848, No. 1 (01.04.2021): 012032. http://dx.doi.org/10.1088/1742-6596/1848/1/012032.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
31

Resende, Amanda, Davis Railsback, Rafael Dowsley, Anderson C. A. Nascimento and Diego F. Aranha. "Fast Privacy-Preserving Text Classification Based on Secure Multiparty Computation". IEEE Transactions on Information Forensics and Security 17 (2022): 428–42. http://dx.doi.org/10.1109/tifs.2022.3144007.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
32

M, Priya. "PRIVACY ANALYSIS OF COMMENT USING TEXT MINING IN OSN FRAMEWORK". International Journal of Advanced Research in Computer Science 9, No. 2 (20.04.2018): 309–13. http://dx.doi.org/10.26483/ijarcs.v9i2.5765.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
33

Zhan, Huixin, and Victor S. Sheng. "Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes". Proceedings of the AAAI Conference on Artificial Intelligence 37, No. 13 (26.06.2023): 16143–44. http://dx.doi.org/10.1609/aaai.v37i13.26932.

Full text of the source
Annotation:
Although recent network representation learning (NRL) works in text-attributed networks demonstrated superior performance for various graph inference tasks, learning network representations could always raise privacy concerns when nodes represent people or human-related variables. Moreover, standard NRLs that leverage structural information from a graph proceed by first encoding pairwise relationships into learned representations and then analysing its properties. This approach is fundamentally misaligned with problems where the relationships involve multiple points, and topological structure must be encoded beyond pairwise interactions. Fortunately, the machinery of topological data analysis (TDA) and, in particular, simplicial neural networks (SNNs) offer a mathematically rigorous framework to evaluate not only higher-order interactions, but also global invariant features of the observed graph to systematically learn topological structures. It is critical to investigate if the representation outputs from SNNs are more vulnerable compared to regular representation outputs from graph neural networks (GNNs) via pairwise interactions. In my dissertation, I will first study learning the representations with text attributes for simplicial complexes (RT4SC) via SNNs. Then, I will conduct research on two potential attacks on the representation outputs from SNNs: (1) membership inference attack, which infers whether a certain node of a graph is inside the training data of the GNN model; and (2) graph reconstruction attacks, which infer the confidential edges of a text-attributed network. Finally, I will study a privacy-preserving deterministic differentially private alternating direction method of multiplier to learn secure representation outputs from SNNs that capture multi-scale relationships and facilitate the passage from local structure to global invariant features on text-attributed networks.
APA, Harvard, Vancouver, ISO and other citation styles
34

Choi, Daeseon, Younho Lee, Seokhyun Kim and Pilsung Kang. "Private attribute inference from Facebook's public text metadata: a case study of Korean users". Industrial Management & Data Systems 117, No. 8 (11.09.2017): 1687–706. http://dx.doi.org/10.1108/imds-07-2016-0276.

Full text of the source
Annotation:
Purpose: As the number of users on social network services (SNSs) continues to increase at a remarkable rate, privacy and security issues are consistently arising. Although users may not want to disclose their private attributes, these can be inferred from their public behavior on social media. In order to investigate the severity of this kind of leakage of private information, the purpose of this paper is to present a method to infer undisclosed personal attributes of users based only on the data available on their public Facebook profiles. Design/methodology/approach: Facebook profile data consisting of 32 attributes were collected for 111,123 Korean users. Inferences were made for four private attributes (gender, age, marital status, and relationship status) based on five machine-learning-based classification algorithms and three regression algorithms. Findings: Experimental results showed that users' gender can be inferred very accurately, whereas marital status and relationship status can be predicted more accurately with the authors' algorithms than with a random model. Moreover, the average difference between the actual and predicted ages of users was only 0.5 years. The results show that some private attributes can be easily inferred from only a few pieces of user profile information, which can jeopardize personal information and may increase the risk to dignity. Research limitations/implications: This paper only utilized each user's own profile data, especially text information. Since users in SNSs are directly or indirectly connected, inference performance can be improved if the profile data of a given user's friends are additionally considered. Moreover, utilizing non-text profile information, such as profile images, can help increase inference accuracy. A more generalized inference performance could also be achieved if a larger data set of Facebook users were available.
Practical implications: A private-attribute leakage alarm system based on the inference model would be helpful for users who do not wish to disclose their private attributes on SNSs. SNS service providers can measure and monitor the risk of privacy leakage in their systems to protect their users, and can optimize target marketing based on the inferred information if users agree to its use. Originality/value: This paper investigates whether private attributes of SNS users can be inferred from a few pieces of publicly available information even though users are not willing to disclose them. The experimental results showed that gender, age, marital status, and relationship status can be inferred by machine-learning algorithms. Based on these results, an early warning system was designed to help both service providers and users to protect the users' privacy.
APA, Harvard, Vancouver, ISO and other citation styles
35

Patergianakis, Antonios, and Konstantinos Limniotis. "Privacy Issues in Stylometric Methods". Cryptography 6, No. 2 (07.04.2022): 17. http://dx.doi.org/10.3390/cryptography6020017.

Full text of the source
Annotation:
Stylometry is a well-known field, aiming to identify the author of a text, based only on the way she/he writes. Despite its obvious advantages in several areas, such as in historical research or for copyright purposes, it may also yield privacy and personal data protection issues if it is used in specific contexts, without the users being aware of it. It is, therefore, of importance to assess the potential use of stylometry methods, as well as the implications of their use for online privacy protection. This paper aims to present, through relevant experiments, the possibility of the automated identification of a person using stylometry. The ultimate goal is to analyse the risks regarding privacy and personal data protection stemming from the use of stylometric techniques to evaluate the effectiveness of a specific stylometric identification system, as well as to examine whether proper anonymisation techniques can be applied so as to ensure that the identity of an author of a text (e.g., a user in an anonymous social network) remains hidden, even if stylometric methods are to be applied for possible re-identification.
APA, Harvard, Vancouver, ISO and other citation styles
36

Wang, Qiaozhi, Hao Xue, Fengjun Li, Dongwon Lee and Bo Luo. "#DontTweetThis: Scoring Private Information in Social Networks". Proceedings on Privacy Enhancing Technologies 2019, No. 4 (01.10.2019): 72–92. http://dx.doi.org/10.2478/popets-2019-0059.

Full text of the source
Annotation:
With the growing popularity of online social networks, a large amount of private or sensitive information has been posted online. In particular, studies show that users sometimes reveal too much information or unintentionally release regretful messages, especially when they are careless, emotional, or unaware of privacy risks. As such, there is a great need to identify potentially sensitive online content, so that users can be alerted with such findings. In this paper, we propose a context-aware, text-based quantitative model for private information assessment, namely PrivScore, which is expected to serve as the foundation of a privacy leakage alerting mechanism. We first solicit diverse opinions on the sensitiveness of private information from crowdsourcing workers, and examine the responses to discover a perceptual model behind the consensuses and disagreements. We then develop a computational scheme using deep neural networks to compute a context-free PrivScore (i.e., the "consensus" privacy score among average users). Finally, we integrate tweet histories, topic preferences and social contexts to generate a personalized context-aware PrivScore. This privacy scoring mechanism could be employed to identify potentially private messages and alert users to think again before posting them to OSNs.
APA, Harvard, Vancouver, ISO and other citation styles
37

Feng, Tao, Xudong Wang and Xinghua Li. "LBS privacy protection technology based on searchable encryption mechanism". MATEC Web of Conferences 189 (2018): 10013. http://dx.doi.org/10.1051/matecconf/201818910013.

Full text of the source
Annotation:
Location-based services (LBS) combine mobile communication networks, wireless sensor networks, and the Global Positioning System (GPS) into an integrated information service model. Advances in positioning technology and the high popularity of mobile intelligent terminals have led to a growing LBS market. From the perspective of LBS privacy and security, this paper studies an LBS location privacy protection scheme based on ciphertext search, addressing the privacy of both location information and search information. Building on cryptographic techniques and the characteristics of searchable encryption, a new privacy protection model for LBS services is designed; its system structure and working principle are described, and the security properties and security model of the privacy protection model are defined. Under specific security assumptions, a new location privacy protection scheme, LBSPP-BSE (LBS location privacy protection based on searchable encryption), is implemented.
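The core idea behind searchable symmetric encryption, searching by deterministic tokens the server cannot invert, can be sketched in a few lines. This is a generic illustration under assumed names and data, not the paper's LBSPP-BSE construction, and it deliberately ignores leakage-hiding refinements that real schemes add.

```python
# Minimal searchable-index sketch: the server stores only HMAC tokens,
# so it can match queries without learning the plaintext keywords.
import hmac
import hashlib

def index_token(key: bytes, keyword: str) -> bytes:
    """Deterministic search token: HMAC of the keyword under the owner's key."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    """docs: {doc_id: [keywords]} -> {token: [doc_id, ...]} stored server-side."""
    index = {}
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index.setdefault(index_token(key, kw), []).append(doc_id)
    return index

def search(index: dict, key: bytes, keyword: str) -> list:
    """The client derives the token; the server looks it up blindly."""
    return index.get(index_token(key, keyword), [])
```

In an LBS setting the indexed items would be encrypted points of interest, so the service can answer location queries without seeing either the query keyword or the stored locations in the clear.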
APA, Harvard, Vancouver, ISO and other citation styles
38

Al-Rabeeah, Abdullah Abdulabbas Nahi, and Mohammed Mahdi Hashim. "Social Network Privacy Models". Cihan University-Erbil Scientific Journal 3, No. 2 (20.08.2019): 92–101. http://dx.doi.org/10.24086/cuesj.v3n2y2019.pp92-101.

Full text of the source
Annotation:
Privacy is a vital research field for social network (SN) sites (SNS), such as Facebook, Twitter, and Google+, where both the number of users and the number of SN applications are sharply growing. Recently, there has been an exponential increase in user-generated text content, mainly in terms of posts, tweets, reviews, and messages on SN. This increase in textual information introduces many problems related to privacy. Privacy is susceptible to personal behavior due to the shared online data structure of SNS. Therefore, this study will conduct a systematic literature review to identify and discuss the main privacy issues associated with SN, existing privacy models and the limitations and gaps in current research into SN privacy.
APA, Harvard, Vancouver, ISO and other citation styles
39

Slobogin, Christopher. "The Sacred Fourth Amendment Text". Michigan Law Review Online, No. 119 (2020): 17. http://dx.doi.org/10.36644/mlr.online.119.17.sacred.

Full text of the source
Annotation:
The Supreme Court’s jurisprudence governing the Fourth Amendment’s “threshold”—a word meant to refer to the types of police actions that trigger the amendment’s warrant and reasonableness requirements—has confounded scholars and students alike since Katz v. United States. Before that 1967 decision, the Court’s decisions on the topic were fairly straightforward, based primarily on whether the police trespassed on the target’s property or property over which the target had control. After that decision—which has come to stand for the proposition that a Fourth Amendment search occurs if police infringe an expectation of privacy that society is prepared to recognize as reasonable—scholars have attempted to define the Amendment’s threshold by reference to history, philosophy, linguistics, empirical surveys, and positive law. With the advent of technology that more easily records, aggregates, and accesses public activities and everyday transactions, the cacophony on the threshold issue has grown deafening—especially so after the Supreme Court’s decisions in United States v. Jones and Carpenter v. United States. In these decisions, the Court seemed to backtrack from its previously established notions that public travels and personal information held by third parties are not reasonably perceived as private and are therefore not protected by the Fourth Amendment.
APA, Harvard, Vancouver, ISO and other citation styles
40

Wang, Yansheng, Yongxin Tong and Dingyuan Shi. "Federated Latent Dirichlet Allocation: A Local Differential Privacy Based Framework". Proceedings of the AAAI Conference on Artificial Intelligence 34, No. 04 (03.04.2020): 6283–90. http://dx.doi.org/10.1609/aaai.v34i04.6096.

Full text of the source
Annotation:
Latent Dirichlet Allocation (LDA) is a widely adopted topic model for industrial-grade text mining applications. However, its performance heavily relies on the collection of large amounts of text data from users' everyday lives for model training. Such data collection risks severe privacy leakage if the data collector is untrustworthy. To protect text data privacy while allowing accurate model training, we investigate federated learning of LDA models. That is, the model is collaboratively trained between an untrustworthy data collector and multiple users, where the raw text data of each user are stored locally and not uploaded to the data collector. To this end, we propose FedLDA, a local differential privacy (LDP) based framework for federated learning of LDA models. Central to FedLDA is a novel LDP mechanism called Random Response with Priori (RRP), which provides theoretical guarantees on both data privacy and model accuracy. We also design techniques to reduce the communication cost between the data collector and the users during model training. Extensive experiments on three open datasets verified the effectiveness of our solution.
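RRP itself is specific to the paper, but the LDP principle it builds on is classic randomized response: each user perturbs their own value before it ever leaves the device, and the collector debiases the aggregate. A minimal binary-value sketch (function names illustrative):

```python
# Binary randomized response with epsilon-LDP, plus an unbiased
# estimator the untrusted collector can apply to the noisy reports.
import math
import random

def randomized_response(bit: int, eps: float, rng: random.Random) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if rng.random() < p else 1 - bit

def estimate_mean(reports, eps: float) -> float:
    """Debias the observed mean: E[report] = mu*(2p-1) + (1-p)."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return (sum(reports) / len(reports) + p - 1) / (2 * p - 1)
```

Smaller `eps` flips more bits (stronger privacy, noisier estimates); as `eps` grows the mechanism converges to reporting the truth, which is the accuracy-privacy trade-off the federated LDA training has to balance per token.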
APA, Harvard, Vancouver, ISO and other citation styles
41

Hu, Zhao-Wei, and Jing Yang. "Trajectory Privacy Protection Based on Location Semantic Perception". International Journal of Cooperative Information Systems 28, No. 03 (September 2019): 1950006. http://dx.doi.org/10.1142/s0218843019500060.

Full text of the source
Annotation:
A personalized trajectory privacy protection method based on location semantic perception to achieve the personalized goal of privacy protection parameter setting and policy selection is proposed. The concept of user perception is introduced and a set of security samples that the user feels safe and has no risk of privacy leakage is set by the user’s personal perception. In addition, global privacy protection parameters are determined by calculating the mean values of multiple privacy protection parameters in the sample set. The concept of location semantics is also introduced. By anonymizing the real user with [Formula: see text] collaborative users that satisfy the different semantic conditions, [Formula: see text] query requests which do not have the exact same query content and contain precise location information of the user and the collaborative user are sent to ensure the accuracy of the query results and avoid privacy-leaks caused by the query content and type. Information leakage and privacy level values are tested for qualitative analysis and quantitative calculation of privacy protection efficacy to find that the proposed method indeed safeguards the privacy of mobile users. Finally, the feasibility and effectiveness of the algorithm are verified by simulation experiments.
APA, Harvard, Vancouver, ISO and other citation styles
42

Md, Abdul Quadir, Raghav V. Anand, Senthilkumar Mohan, Christy Jackson Joshua, Sabhari S. Girish, Anthra Devarajan and Celestine Iwendi. "Data-Driven Analysis of Privacy Policies Using LexRank and KL Summarizer for Environmental Sustainability". Sustainability 15, No. 7 (29.03.2023): 5941. http://dx.doi.org/10.3390/su15075941.

Full text of the source
Annotation:
Natural language processing (NLP) is a field of machine learning that analyzes and manipulates huge amounts of data and generates human language. NLP has a variety of applications, such as sentiment analysis, text summarization, spam filtering, and language translation. Since privacy documents are important and legally binding, they play a vital part in any agreement. These documents are very long, yet the important points still have to be read thoroughly. Customers might not have the necessary time or the knowledge to understand all the complexities of a privacy policy document. In this context, this paper proposes an optimal model to summarize privacy policies in the best possible way. Text summarization is the process whereby a summary is extracted from the original, much larger text without losing any vital information. Using the proposed idea of a common-word reduction process combined with natural language processing algorithms, this paper extracts the sentences in the privacy policy document that carry high weight and displays them to the customer; this can save customers the time of reading through the entire policy while providing them with only the important lines they need to know before signing the document. The proposed method uses two different extractive text summarization algorithms, namely LexRank and the Kullback-Leibler (KL) Summarizer, to summarize the obtained text. According to the results, the summarized sentences obtained via the common-word reduction process and the text summarization algorithms were more significant than the raw privacy policy text. The introduction of this novel methodology helps to identify certain important common words used in a particular sector in greater depth, thus allowing a more in-depth study of a privacy policy. Using the common-word reduction process, the sentences were reduced by 14.63%, and by applying extractive NLP algorithms, significant sentences were obtained.
The results after applying the NLP algorithms showed a 191.52% increase in the repetition of common words in each sentence using the KL Summarizer algorithm, while the LexRank algorithm showed a 361.01% increase. This implies that common words play a large role in determining a sector's privacy policies, making the proposed method a real-world solution for environmental sustainability.
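The extractive idea underlying both LexRank and KL-Sum, score sentences by how well they represent the document's word distribution and keep the top ones, can be illustrated with a minimal frequency-based scorer. This is a stand-in sketch, not either of the paper's algorithms, and the tokenization is deliberately simplistic.

```python
# Minimal extractive summarizer: rank sentences by the average corpus
# frequency of their words and keep the top k.
import re
from collections import Counter

def frequency_summary(text: str, k: int = 1) -> list:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:k]
```

Sentences built from frequent ("common") words score highest, which mirrors the abstract's observation that repetition of common words drives which sentences the summarizers keep.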
APA, Harvard, Vancouver, ISO and other citation styles
43

Ford, Elizabeth, Malcolm Oswald, Lamiece Hassan, Kyle Bozentko, Goran Nenadic and Jackie Cassell. "Should free-text data in electronic medical records be shared for research? A citizens' jury study in the UK". Journal of Medical Ethics 46, No. 6 (26.05.2020): 367–77. http://dx.doi.org/10.1136/medethics-2019-105472.

Full text of the source
Annotation:
Background: Use of routinely collected patient data for research and service planning is an explicit policy of the UK National Health Service and UK government. Much clinical information is recorded in free-text letters, reports and notes. These text data are generally lost to research, due to the increased privacy risk compared with structured data. We conducted a citizens' jury which asked members of the public whether their medical free-text data should be shared for research for public benefit, to inform an ethical policy. Methods: Eighteen citizens took part over 3 days. Jurors heard a range of expert presentations as well as arguments for and against sharing free text, and then questioned presenters and deliberated together. They answered a questionnaire on whether and how free text should be shared for research, gave reasons for and against sharing and suggestions for alleviating their concerns. Results: Jurors were in favour of sharing medical data and agreed this would benefit health research, but were more cautious about sharing free-text than structured data. They preferred processing of free text where a computer extracted information at scale. Their concerns were lack of transparency in uses of data, and privacy risks. They suggested keeping patients informed about uses of their data, and giving clear pathways to opt out of data sharing. Conclusions: Informed citizens suggested a transparent culture of research for the public benefit, and continuous improvement of technology to protect patient privacy, to mitigate their concerns regarding privacy risks of using patient text data.
APA, Harvard, Vancouver, ISO and other citation styles
44

Sui, Yi, Xiujuan Wang, Kangfeng Zheng, Yutong Shi and Siwei Cao. "Personality Privacy Protection Method of Social Users Based on Generative Adversarial Networks". Computational Intelligence and Neuroscience 2022 (13.04.2022): 1–13. http://dx.doi.org/10.1155/2022/2419987.

Full text of the source
Annotation:
Obscuring or otherwise minimizing the release of personality information from potential victims of social engineering attacks effectively interferes with an attacker’s personality analysis and reduces the success rate of social engineering attacks. We propose a text transformation method named PerTransGAN using generative adversarial networks (GANs) to protect the personality privacy hidden in text data. Making use of reinforcement learning, we use the output of the discriminator as a reward signal to guide the training of the generator. Moreover, the model extracts text features from the discriminator network as additional semantic guidance signals. And the loss function of the generator adds a penalty item to reduce the weight of words that contribute more to personality information in the real text so as to hide the user’s personality privacy. In addition, the semantic and personality modules are designed to calculate the semantic similarity and personality distribution distance between the real text and the generated text as a part of the objective function. Experiments show that the self-attention module and semantic module in the generator improved the content retention of the text by 0.11 compared with the baseline model and obtained the highest BLEU score. In addition, with the addition of penalty item and personality module, compared with the classification accuracy of the original data, the accuracy of the generated text in the personality classifier decreased by 20%. PerTransGAN model preserves users’ personality privacy as found in user data by transforming the text and preserving semantic similarity while blocking privacy theft by attackers.
APA, Harvard, Vancouver, ISO and other citation styles
45

Meystre, Stéphane M., Óscar Ferrández, F. Jeffrey Friedlin, Brett R. South, Shuying Shen and Matthew H. Samore. "Text de-identification for privacy protection: A study of its impact on clinical text information content". Journal of Biomedical Informatics 50 (August 2014): 142–50. http://dx.doi.org/10.1016/j.jbi.2014.01.011.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Kulkarni, Yogesh R., and T. Senthil Murugan. "Genetic grey wolf optimization and C-mixture for collaborative data publishing". International Journal of Modeling, Simulation, and Scientific Computing 09, No. 06 (December 2018): 1850058. http://dx.doi.org/10.1142/s1793962318500587.

Full text of the source
Annotation:
Data publishing is an area of interest in present-day technology that has gained the attention of researchers and experts. Data publishing faces serious security issues: when a trusted organization provides data to a third party, personal information must not be disclosed. Therefore, to maintain the privacy of the data, this paper proposes an algorithm for privacy-preserved collaborative data publishing using the Genetic Grey Wolf Optimizer (Genetic GWO) algorithm, for which a C-mixture parameter is used. The C-mixture parameter enhances the privacy of the data if the data do not satisfy the privacy constraints, such as [Formula: see text]-anonymity, [Formula: see text]-diversity and [Formula: see text]-privacy. A minimum fitness value is maintained that depends on the minimum value of the generalized information loss and the minimum value of the average equivalence class size. The minimum value of the fitness ensures maximum utility and maximum privacy. Experimentation was carried out using the adult dataset, and the proposed Genetic GWO outperformed the existing methods in terms of generalized information loss and the average equivalence class metric, achieving minimum values of 0.402 and 0.9, respectively.
APA, Harvard, Vancouver, ISO and other citation styles
47

Wang, Lianhai, and Chenxi Guan. "Improving Security in the Internet of Vehicles: A Blockchain-Based Data Sharing Scheme". Electronics 13, No. 4 (09.02.2024): 714. http://dx.doi.org/10.3390/electronics13040714.

Full text of the source
Annotation:
To ensure the aggregation of a high-quality global model during the data-sharing process in the Internet of Vehicles (IoV), current approaches primarily utilize gradient detection to mitigate malicious or low-quality parameter updates. However, deploying gradient detection in plain text neglects adequate privacy protection for vehicular data. This paper proposes the IoV-BDSS, a novel data-sharing scheme that integrates blockchain and hybrid privacy technologies to protect private data in gradient detection. This paper utilizes Euclidean distance to filter the similarity between vehicles and gradients, followed by encrypting the filtered gradients using secret sharing. Moreover, this paper evaluates the contribution and credibility of participating nodes, further ensuring the secure storage of high-quality models on the blockchain. Experimental results demonstrate that our approach achieves data sharing while preserving privacy and accuracy. It also exhibits resilience against 30% poisoning attacks, with a test error rate remaining below 0.16. Furthermore, our scheme incurs a lower computational overhead and faster inference speed, markedly reducing experimental costs by approximately 26% compared to similar methods, rendering it suitable for highly dynamic IoV systems with unstable communication.
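The gradient-filtering step described above can be illustrated with a simple Euclidean-distance check against a reference update. This is a hedged sketch: the reference choice, threshold policy, and function names are illustrative, and the paper performs the comparable filtering over protected (encrypted or secret-shared) values rather than in plain text.

```python
# Illustrative filter: drop parameter updates that lie too far (in
# Euclidean distance) from a trusted reference, e.g. the previous
# global model or a robust aggregate of this round's updates.
import math

def euclidean(u, v) -> float:
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def filter_updates(updates, reference, threshold):
    """Keep only updates within `threshold` of the reference vector."""
    return [u for u in updates if euclidean(u, reference) < threshold]
```

A poisoned update with large deviation is excluded before aggregation, which is the behavior the abstract's 30% poisoning-attack experiment is probing.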
APA, Harvard, Vancouver, ISO, and other citation styles
48

Rani, Dr V. Uma, and Godavari S. L. S. Pranitha. "Quick Text Classification with Privacy Protection based on Secure Multiparty Computing". International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (31.08.2023): 2098–104. http://dx.doi.org/10.22214/ijraset.2023.55480.

Full text of the source
Annotation:
Abstract: We design and implement a secure Naive Bayes protocol to address the problem of private text classification. Alice, on one side, holds a text, and Bob, on the other, holds a classifier. Bob learns nothing at all, and at the end of the protocol Alice learns only how the classifier labeled her text. The solution is based on Secure Multiparty Computation (SMC). Using a Rust implementation, it is simple and fast to classify unstructured text. We can determine whether an SMS is spam or ham in under 340 milliseconds when Bob's model vocabulary covers all words (n = 5200) and Alice's SMS contains m = 160 unigrams. The scheme is flexible and can be used in various settings where a Naive Bayes classifier is applicable. For n = 369 and m = 8, the average figures for spam SMSs in the training set, classification takes only 21 milliseconds.
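The underlying plaintext classifier is a standard multinomial Naive Bayes model. The sketch below shows that part only, in Python rather than Rust, and omits the SMC layer that keeps Alice's text and Bob's model private; the toy corpus is invented for illustration.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (word_list, label) pairs. Returns, per class,
    a log-prior and Laplace-smoothed log-likelihoods over the vocabulary."""
    labels = Counter(label for _, label in docs)
    vocab = {w for words, _ in docs for w in words}
    counts = {c: Counter() for c in labels}
    for words, label in docs:
        counts[label].update(words)
    model = {}
    for c in labels:
        total = sum(counts[c].values()) + len(vocab)  # add-one smoothing
        model[c] = (math.log(labels[c] / len(docs)),
                    {w: math.log((counts[c][w] + 1) / total) for w in vocab})
    return model

def classify(model, words):
    """Pick the class maximizing log-prior plus summed log-likelihoods."""
    def score(c):
        prior, likes = model[c]
        return prior + sum(likes.get(w, 0.0) for w in words)
    return max(model, key=score)

m = train([("win free prize now".split(), "spam"),
           ("free cash win".split(), "spam"),
           ("lunch at noon".split(), "ham"),
           ("see you at lunch".split(), "ham")])
print(classify(m, "free prize".split()))  # spam
```

In the SMC setting, the per-word log-likelihood lookups and the summation would be evaluated over shared or encrypted values instead of in the clear.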
APA, Harvard, Vancouver, ISO, and other citation styles
49

Wu, Zongda, Shigen Shen, Xinze Lian, Xinning Su, and Enhong Chen. "A dummy-based user privacy protection approach for text information retrieval". Knowledge-Based Systems 195 (May 2020): 105679. http://dx.doi.org/10.1016/j.knosys.2020.105679.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
50

Li, Xiaorong, and Zhinian Shu. "Research on Big Data Text Clustering Algorithm Based on Swarm Intelligence". Wireless Communications and Mobile Computing 2022 (15.04.2022): 1–10. http://dx.doi.org/10.1155/2022/7551035.

Full text of the source
Annotation:
To overcome the limitations of current clustering algorithms and prevent disturbances from directly degrading the clustering of abnormal big-data texts, a big-data text clustering algorithm based on swarm intelligence is proposed. Drawing on the characteristics of swarm intelligence, a differential privacy protection model is constructed, and the data location information is preprocessed. The differential privacy budget is allocated over a KD-tree partition, and the location information is clustered after dimensionality reduction. Building on this result, a differential privacy clustering algorithm is proposed to maximize the separation between clusters. Fuzzy confidence and support thresholds are selected to extract associated features; these features determine the big-data text targets to be clustered, from which a target decision matrix, the clustering targets, the target weights, the memberships and the decision results are obtained, realizing effective swarm-intelligence-based clustering of big-data text. Experimental results show that the algorithm achieves efficient clustering of big-data texts with short clustering time and a good clustering effect.
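A common way to realize the differential-privacy ingredient mentioned above is the Laplace mechanism: each query is perturbed with noise of scale sensitivity / ε, and a total budget can be split across the queries a clustering step needs (here, a coordinate sum and a point count per centroid). The even budget split and unit sensitivity below are illustrative assumptions, not the paper's exact allocation:

```python
import math
import random

def laplace_noise(scale):
    """Inverse-transform sample from the Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_centroid(points, epsilon, sensitivity=1.0):
    """Noisy centroid: Laplace noise is added to the coordinate sums and
    to the point count, splitting the budget epsilon evenly between the
    two queries (each receives epsilon / 2)."""
    eps_half = epsilon / 2.0
    dim = len(points[0])
    noisy_count = len(points) + laplace_noise(1.0 / eps_half)
    noisy_sums = [sum(p[d] for p in points) + laplace_noise(sensitivity / eps_half)
                  for d in range(dim)]
    # Guard against a non-positive noisy count distorting the division.
    return [s / max(noisy_count, 1.0) for s in noisy_sums]

random.seed(0)
print(dp_centroid([(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)], epsilon=5.0))
```

In a KD-tree-based scheme, the total budget would additionally be divided across the tree levels so that the overall release still satisfies ε-differential privacy by composition.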
APA, Harvard, Vancouver, ISO, and other citation styles
