Academic literature on the topic "Backdoor attacks"

Browse the topical lists of articles, books, theses, conference papers, and other academic sources on the topic "Backdoor attacks".

Journal articles on the topic "Backdoor attacks"

1

Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training". Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.

Abstract Recent research has revealed that pre-trained models (PTMs) are vulnerable to backdoor attacks before the fine-tuning stage. The attackers can implant transferable task-agnostic backdoors in PTMs, and control model outputs on any downstream task, which poses severe security threats to all downstream applications. Existing backdoor-removal defenses focus on task-specific classification models and they are not suitable for defending PTMs against task-agnostic backdoor attacks. To this end, we propose the first task-agnostic backdoor removal method for PTMs. Based on the selective activation phenomenon in backdoored PTMs, we design a simple and effective backdoor eraser, which continually pre-trains the backdoored PTMs with a regularization term in an end-to-end approach. The regularization term removes backdoor functionalities from PTMs while the continual pre-training maintains the normal functionalities of PTMs. We conduct extensive experiments on pre-trained models across different modalities and architectures. The experimental results show that our method can effectively remove backdoors inside PTMs and preserve benign functionalities of PTMs with a few downstream-task-irrelevant auxiliary data, e.g., unlabeled plain texts. The average attack success rate on three downstream datasets is reduced from 99.88% to 8.10% after our defense on the backdoored BERT. The codes are publicly available at https://github.com/thunlp/RECIPE.
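The defense above continually pre-trains the backdoored model on clean auxiliary data while a regularization term suppresses the backdoor. The paper's actual regularizer is not reproduced here; the PyTorch sketch below only illustrates the general recipe, assuming a Hugging Face-style masked-LM model and using an invented activation-outlier penalty as a stand-in.

```python
import torch

def continual_pretrain_step(model, batch, optimizer, lam=0.1):
    """One continual pre-training update on clean auxiliary data plus an
    illustrative penalty on unusually large hidden activations (a stand-in
    for the paper's backdoor-suppressing regularizer, not its formula)."""
    optimizer.zero_grad()
    outputs = model(**batch, output_hidden_states=True)  # Hugging Face-style MLM
    task_loss = outputs.loss                             # loss on clean text
    acts = torch.cat([h.flatten() for h in outputs.hidden_states]).abs()
    reg = torch.relu(acts - acts.mean()).mean()          # penalize outlier activations
    loss = task_loss + lam * reg
    loss.backward()
    optimizer.step()
    return float(loss)
```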
2

Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism". Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.

Recent studies have shown that Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors into the DNN models by poisoning a small number of training samples. The attacked models perform normally on benign samples, but when the backdoor is activated, their prediction results will be maliciously altered. To address the issues of suboptimal backdoor defense effectiveness and limited generality, a hybrid self-attention mechanism-based self-supervised learning method for backdoor defense is proposed. This method defends against backdoor attacks by leveraging the attack characteristics of backdoor threats, aiming to mitigate their impact. It adopts a decoupling approach, disconnecting the association between poisoned samples and target labels, and enhances the connection between feature labels and clean labels by optimizing the feature extractor. Experimental results on CIFAR-10 and CIFAR-100 datasets show that this method performs moderately in terms of Clean Accuracy (CA), ranking at the median level. However, it achieves significant effectiveness in reducing the Attack Success Rate (ASR), especially against BadNets and Blended attacks, where its defense capability is notably superior to other methods, with attack success rates below 2%.
3

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real world applications has become an important research topic. Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at the test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and also more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until the test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
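For contrast with the hidden-trigger attack described above, the classic visible-patch poisoning that most defenses assume can be sketched in a few lines; the patch size, corner placement, and function names below are illustrative, not taken from the paper.

```python
import numpy as np

def add_trigger(img, size=4, value=255):
    """Paste a small square trigger patch into the bottom-right corner
    of an HxWxC uint8 image (patch size and value are arbitrary here)."""
    out = img.copy()
    out[-size:, -size:, :] = value
    return out

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Classic visible-trigger, dirty-label poisoning: stamp the trigger on a
    random fraction of training images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels
```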
4

Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.

Deep neural network (DNN) models have been proven vulnerable to backdoor attacks. One trend of backdoor attacks is developing more invisible and dynamic triggers to make attacks stealthier. However, these invisible and dynamic triggers can be inadvertently mitigated by some widely used passive denoising operations, such as image compression, making the efforts under this trend questionable. Another trend is to exploit the full potential of backdoor attacks by proposing new triggering paradigms, such as hibernated or opportunistic backdoors. In line with these trends, our work investigates the first conditional backdoor attack, where the backdoor is activated by a specific condition rather than pre-defined triggers. Specifically, we take the JPEG compression as our condition and jointly optimize the compression operator and the target model's loss function, which can force the target model to accurately learn the JPEG compression behavior as the triggering condition. In this case, besides the conditional triggering feature, our attack is also stealthy and robust to denoising operations. Extensive experiments on the MNIST, GTSRB and CelebA verify our attack's effectiveness, stealthiness and resistance to existing backdoor defenses and denoising operations. As a new triggering paradigm, the conditional backdoor attack brings a new angle for assessing the vulnerability of DNN models, and conditioned over JPEG compression magnifies its threat due to the universal usage of JPEG.
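Here the triggering condition is JPEG compression itself rather than a pasted pattern. A minimal sketch of what presenting that condition at test time would look like, using Pillow for an ordinary JPEG round trip (the paper's jointly optimized, differentiable compression operator is not shown):

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img_array, quality=75):
    """Encode an HxWx3 uint8 image as JPEG and decode it again. Feeding the
    compressed copy to a compression-conditioned model is what would
    activate the kind of backdoor described above."""
    buf = io.BytesIO()
    Image.fromarray(img_array).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))
```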
5

Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.

As a new paradigm to erase data from a model and protect user privacy, machine unlearning has drawn significant attention. However, existing studies on machine unlearning mainly focus on its effectiveness and efficiency, neglecting the security challenges introduced by this technique. In this paper, we aim to bridge this gap and study the possibility of conducting malicious attacks leveraging machine unlearning. Specifically, we consider the backdoor attack via machine unlearning, where an attacker seeks to inject a backdoor in the unlearned model by submitting malicious unlearning requests, so that the prediction made by the unlearned model can be changed when a particular trigger presents. In our study, we propose two attack approaches. The first attack approach does not require the attacker to poison any training data of the model. The attacker can achieve the attack goal only by requesting to unlearn a small subset of his contributed training data. The second approach allows the attacker to poison a few training instances with a pre-defined trigger upfront, and then activate the attack via submitting a malicious unlearning request. Both attack approaches are proposed with the goal of maximizing the attack utility while ensuring attack stealthiness. The effectiveness of the proposed attacks is demonstrated with different machine unlearning algorithms as well as different models on different datasets.
6

Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.

Backdoor attacks have been shown to be a serious security threat against deep learning models, and various defenses have been proposed to detect whether a model is backdoored or not. However, as indicated by a recent black-box attack, existing defenses can be easily bypassed by implanting the backdoor in the frequency domain. To this end, we propose a new defense DTInspector against black-box backdoor attacks, based on a new observation related to the prediction confidence of learning models. That is, to achieve a high attack success rate with a small amount of poisoned data, backdoor attacks usually render a model exhibiting statistically higher prediction confidences on the poisoned samples. We provide both theoretical and empirical evidence for the generality of this observation. DTInspector then carefully examines the prediction confidences of data samples, and decides the existence of backdoor using the shortcut nature of backdoor triggers. Extensive evaluations on six backdoor attacks, four datasets, and three advanced attacking types demonstrate the effectiveness of the proposed defense.
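The observation DTInspector builds on is that poisoned samples tend to receive statistically higher prediction confidences than benign ones. A model-agnostic sketch of that comparison is below; the real defense applies a more careful statistical test on top of it.

```python
import numpy as np

def mean_confidence(probs):
    """Average top-1 softmax confidence over an (N, num_classes) array."""
    return float(np.max(probs, axis=1).mean())

def confidence_gap(probs_suspect, probs_benign):
    """Positive gap = the suspect set is predicted with higher confidence,
    which is the statistical signature the defense looks for."""
    return mean_confidence(probs_suspect) - mean_confidence(probs_benign)
```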
7

Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.

Backdoor attacks pose a critical concern to the practice of using third-party data for AI development. The data can be poisoned to make a trained model misbehave when a predefined trigger pattern appears, granting the attackers illegal benefits. While most proposed backdoor attacks are dirty-label, clean-label attacks are more desirable by keeping data labels unchanged to dodge human inspection. However, designing a working clean-label attack is a challenging task, and existing clean-label attacks show underwhelming performance. In this paper, we propose a novel mechanism to develop clean-label attacks with outstanding attack performance. The key component is a trigger pattern generator, which is trained together with a surrogate model in an alternating manner. Our proposed mechanism is flexible and customizable, allowing different backdoor trigger types and behaviors for either single or multiple target labels. Our backdoor attacks can reach near-perfect attack success rates and bypass all state-of-the-art backdoor defenses, as illustrated via comprehensive experiments on standard benchmark datasets. Our code is available at https://github.com/VinAIResearch/COMBAT.
8

Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.

Despite the tremendous success of deep neural networks (DNNs) across various fields, their susceptibility to potential backdoor attacks seriously threatens their application security, particularly in safety-critical or security-sensitive ones. Given this growing threat, there is a pressing need for research into purging backdoors from DNNs. However, prior efforts on erasing backdoor triggers not only failed to withstand increasingly powerful attacks but also resulted in reduced model performance. In this paper, we propose From Toxic to Trustworthy (FTT), an innovative approach to eliminate backdoor triggers while simultaneously enhancing model accuracy. Following the stringent and practical assumption of limited availability of clean data, we introduce a self-attention distillation (SAD) method to remove the backdoor by aligning the shallow and deep parts of the network. Furthermore, we first devise a semi-supervised learning (SSL) method that leverages ubiquitous and available poisoned data to further purify backdoors and improve accuracy. Extensive experiments on various attacks and models have shown that our FTT can reduce the attack success rate from 97% to 1% and improve the accuracy of 4% on average, demonstrating its effectiveness in mitigating backdoor attacks and improving model performance. Compared to state-of-the-art (SOTA) methods, our FTT can reduce the attack success rate by 2 times and improve the accuracy by 5%, shedding light on backdoor cleansing.
9

Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.

Backdoors on federated learning will be diluted by subsequent benign updates. This is reflected in the significant reduction of attack success rate as iterations increase, ultimately failing. We use a new metric to quantify the degree of this weakened backdoor effect, called attack persistence. Given that research to improve this performance has not been widely noted, we propose a Full Combination Backdoor Attack (FCBA) method. It aggregates more combined trigger information for a more complete backdoor pattern in the global model. Trained backdoored global model is more resilient to benign updates, leading to a higher attack success rate on the test set. We test on three datasets and evaluate with two models across various settings. FCBA's persistence outperforms SOTA federated learning backdoor attacks. On GTSRB, post-attack 120 rounds, our attack success rate rose over 50% from baseline. The core code of our method is available at https://github.com/PhD-TaoLiu/FCBA.
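Attack persistence here refers to how slowly the attack success rate (ASR) decays as benign federated rounds continue after poisoning stops. The helper below sketches one plain way to track that decay; it is not the paper's exact metric.

```python
import numpy as np

def attack_success_rate(preds, target_label):
    """Fraction of triggered test inputs classified as the attacker's target."""
    return float((np.asarray(preds) == target_label).mean())

def persistence_curve(per_round_preds, target_label):
    """ASR after each post-attack federated round; a flatter curve means the
    backdoor survives benign updates for longer."""
    return [attack_success_rate(p, target_label) for p in per_round_preds]
```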
10

Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security". Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.

Backdoor attacks have been recognized as a major AI security threat in deep neural networks (DNNs) recently. The attackers inject backdoors into DNNs during the model training such as federated learning. The infected model behaves normally on the clean samples in AI applications while the backdoors are only activated by the predefined triggers and resulted in the specified results. Most of the existing defensing approaches assume that the trigger settings on different poisoned samples are visible and identical just like a white square in the corner of the image. Besides, the sample-specific triggers are always invisible and difficult to detect in DNNs, which also becomes a great challenge against the existing defensing protocols. In this paper, to address the above problems, we propose a backdoor detecting and mitigating protocol based on a wider separate-then-reunion network (WISERNet) equipped with a cryptographic deep steganalyzer for color images, which detects the backdoors hiding behind the poisoned samples even if the embedding algorithm is unknown and further feeds the poisoned samples into the infected model for backdoor unlearning and mitigation. The experimental results show that our work performs better in the backdoor defensing effect compared to state-of-the-art backdoor defensing methods such as fine-pruning and ABL against three typical backdoor attacks. Our protocol reduces the attack success rate close to 0% on the test data and slightly decreases the classification accuracy on the clean samples within 3%.

Theses on the topic "Backdoor attacks"

1

Turner, Alexander M. "Exploring the landscape of backdoor attacks on deep neural network models". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123127.

Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from both the perspective of an adversary seeking an effective attack and a practitioner seeking protection against them. While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are -- often blatantly -- mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency -- the condition that inputs are consistent with their labels -- crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks. This prevents the model from relying heavily on features present in few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
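The defense side of the thesis relaxes differentially private training. The sketch below shows the clip-and-noise update such training builds on; the clipping norm and noise scale are illustrative, and per-example clipping, which formal DP-SGD requires, is omitted for brevity.

```python
import torch

def clipped_noisy_step(model, loss, optimizer, clip_norm=1.0, noise_std=0.01):
    """One clip-and-noise update: bound the gradient norm, add Gaussian noise,
    then step. Limiting the influence of any small group of samples is what
    blunts backdoor features that appear in only a few training inputs."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad += noise_std * torch.randn_like(p.grad)
    optimizer.step()
```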
2

Espinoza, Castellon Fabiola. "Contributions to effective and secure federated learning with client data heterogeneity". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG007.

This thesis addresses two significant challenges in federated learning: data heterogeneity and security. In the first part of our work, we tackle the data heterogeneity challenge. Clients can have different data distributions due to their personal opinions, locations, habits, etc. It is a common and almost inherent obstacle in real-world federated learning applications. We focus on two distinct types of heterogeneity in classification tasks. On the one hand, in the first scenario, participants exhibit diverse yet related data distributions, making collaborative learning an attractive approach. Our first proposed method leverages a domain adaptation approach and collaboratively learns an empirical dictionary. A dictionary expresses each client's data as a linear combination of various atoms, that are a set of empirical samples representing the training data. Clients learn the atoms collaboratively, whereas they learn the weights privately to enhance privacy. Subsequently, the dictionary is utilized to infer classes for the clients' unlabeled distribution that withal actively participated in the learning process. On the other hand, our second method addresses a different form of data heterogeneity, where clients express different concepts through their distributions. Collaborative learning may not be optimal in this context; however, we assume a structural similarity between clients, enabling us to cluster them into groups for more effective group-based learning. In this case, we direct our attention to the scalability of our method by supposing that the number of participants can be very large. We propose to incrementally, each time the server aggregates the clients' updates, estimate the hidden structure between clients. Contrary to alternative approaches, we do not require that all be available at the same time to estimate their belonging clusters. In the second part of this thesis, we delve into the security challenges of federated learning, specifically focusing on defenses against training time backdoor attacks. Since a federated framework is shared, it is not always possible to ensure that all clients are honest and that they all send correctly trained updates.Federated learning is vulnerable to the presence of malicious users who corrupt their training data. Our defenses are elaborated for trigger-based backdoor attacks, and rooted in trigger reconstruction. We do not provide the server additional data or client information, other than the compromised weights. After some limited assumptions are made, the server extracts information about the attack trigger from the compromised model global model. Our third method uses a reconstructed trigger to identify the neurons of a neural network that encode the attack. We propose to prune the network on the server side to hinder the effects of the attack. Our final method shifts the defense mechanism to the end-users, providing them with the reconstructed trigger to counteract attacks during the inference phase. Notably, both defense methods consider data heterogeneity, with the latter proving to be more efficient in extreme data heterogeneity cases. In conclusion, this thesis introduces novel methods to enhance the efficiency and security of federated learning systems. We have explored diverse data heterogeneity scenarios, proposing collaborative learning approaches and robust security defenses based on trigger reconstruction. 
As part of our future work, we outline perspectives for further research, improving our proposed trigger reconstruction and taking into account other challenges, such as privacy which is very important in the field of federated learning
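The third method described in the thesis prunes the neurons that encode the attack, guided by a reconstructed trigger. A generic sketch of such activation-gap pruning is given below; the selection rule and pruning ratio are illustrative, not the thesis's exact criterion.

```python
import torch

def prune_trigger_neurons(acts_clean, acts_triggered, weight, ratio=0.05):
    """Zero the output channels whose mean activation grows most when the
    reconstructed trigger is applied to clean inputs. `acts_*` are (N, C)
    activation matrices; `weight` is the corresponding layer's weight tensor."""
    gap = (acts_triggered - acts_clean).mean(dim=0)   # per-channel activation gap
    k = max(1, int(ratio * gap.numel()))
    idx = torch.topk(gap, k).indices
    with torch.no_grad():
        weight[idx] = 0.0                             # prune the suspect channels
    return idx
```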
3

Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation". Thesis, 2020.


Advances in Artificial Intelligence (AI), or more precisely on Neural Networks (NNs), and fast processing technologies (e.g. Graphic Processing Units or GPUs) in recent years have positioned NNs as one of the main machine learning algorithms used to solved a diversity of problems in both academia and the industry. While they have been proved to be effective in solving many tasks, the lack of security guarantees and understanding of their internal processing disrupts their wide adoption in general and cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed to enable the absorption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) application of NN models for anomaly detection in enterprise networks.

In this state of affairs, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model hardening method that removes any trojan (i.e. misbehavior) inserted to the NN models at training time. We carefully evaluate our method and establish the correct metrics to test the efficiency of defensive methods against these types of attacks: (1) accuracy with benign data, (2) attack success rate, and (3) accuracy with adversarial data. Prior work evaluates their solutions using the first two metrics only, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state-of-the-art. The obtained results show our method outperforms it. Second, we proposed a novel approach to detect anomalies using LSTM-based models. Our method analyzes at runtime the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company running and efficiently detects uncommon patterns. The new detecting method is compared with the EDR system. The results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that smartly reacts upon the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks in the system ineffective.


Books on the topic "Backdoor attacks"

1

Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.

The Rome Statute was designed to largely align criminal norms with actual state practice based on the realities of warfare. Article 8 embodied notable new refinements (e.g. in relation to disproportionate attack under Article 8(2)(b)(iv)), but did so against a backdrop of pragmatic military practice. This chapter dissects the structure of war crimes under Rome Statute to demonstrate this deliberate intention of Article 8 and then describes the correlative considerations related to charging practices for the maturing institution, including command responsibility. When properly understood and applied in light of the Elements of Crimes, the Court’s charging decisions with respect to war crimes ought to reflect the paradox that its operative provisions are at once revolutionary yet broadly reflective of the actual practice of warfare.
2

Yaari, Nurit. The Trojan War and the Israeli–Palestinian Conflict. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198746676.003.0006.

This chapter looks at Israeli productions of classical tragedies between 1970 and 1985, against the backdrop of four wars: the Six Day War (1967), the War of Attrition (1967–70), the Yom Kippur War (1973), and the First Lebanon War (1982–5). The tragedies in question recount two fateful and bloody wars of antiquity: the second Persian offensive against Greece (480–479 BCE) which serves as the background to Aeschylusʼ The Persians (472 BCE), and the Trojan War—the prehistoric battle immortalized by Homer in The Iliad and The Odyssey, and in the tragedies of Aeschylus, Sophocles, and Euripides. Analysing the Israeli performances, the chapter discusses the theatrical means through which the audiences were encouraged to criticize the hubris of the attackers and pity the sufferings of the victims.
3

Bartley, Abel A. Keeping the Faith. Greenwood Publishing Group, Inc., 2000. http://dx.doi.org/10.5040/9798400675553.

An examination of the political and economic power of a large African American community in a segregated southern city; this study attacks the myth that blacks were passive victims of the southern Jim Crow system and reveals instead that in Jacksonville, Florida, blacks used political and economic pressure to improve their situation and force politicians to make moderate adjustments in the Jim Crow system. Bartley tells the compelling story of how African Americans first gained, then lost, then regained political representation in Jacksonville. Between the end of the Civil War and the consolidation of city and county government in 1967, the political struggle was buffeted by the ongoing effort to build an economically viable African American economy in the virulently racist South. It was the institutional complexity of the African American community that ultimately made the protest efforts viable. Black leaders relied on the institutions created during Reconstruction to buttress their social agitation. Black churches, schools, fraternal organizations, and businesses underpinned the civil rights activities of community leaders by supplying the people and the evidence of abuse that inflamed the passions of ordinary people. The sixty-year struggle to break down the door blocking political power serves as an intriguing backdrop to community development efforts. Jacksonville's African American community never accepted their second-class status. From the beginning of their subjugation, they fought to remedy the situation by continuing to vote and run for offices while they developed their economic and social institutions.
4

Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.

This book looks at local drug manufacturing in Kenya, Tanzania, and Uganda, from the early 1980s to the present, to understand the impact of foreign aid on industrial development. While foreign aid has been attacked by critics as wasteful, counterproductive, or exploitative, this book makes a clear case for the effectiveness of what it terms “developmental foreign aid.” Against the backdrop of Africa’s pursuit of economic self-sufficiency, the battle against AIDS and malaria, and bitter negotiations over affordable drugs, the book offers an important corrective to popular views on foreign aid and development. It shows that when foreign aid has provided markets, monitoring, and mentoring, it has supported the emergence and upgrading of local production. In instances where donors were willing to procure local drugs, they created new markets that gave local entrepreneurs an incentive to produce new types of drugs. In turn, when donors enforced exacting standards as a condition to access those markets, they gave these producers an incentive to improve quality standards. And where technical know-how was not readily available and donors provided mentoring, local producers received the guidance necessary for improving production processes. Without losing sight of domestic political-economic conditions, historical legacies, and foreign aid’s own internal contradictions, the book presents new insights into the conditions under which foreign aid can be effective.

Book chapters on the topic "Backdoor attacks"

1

Pham, Long H., and Jun Sun. "Verifying Neural Networks Against Backdoor Attacks". In Computer Aided Verification, 171–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_9.

Neural networks have achieved state-of-the-art performance in solving many problems, including many applications in safety/security-critical systems. Researchers also discovered multiple security issues associated with neural networks. One of them is backdoor attacks, i.e., a neural network may be embedded with a backdoor such that a target output is almost always generated in the presence of a trigger. Existing defense approaches mostly focus on detecting whether a neural network is ‘backdoored’ based on heuristics, e.g., activation patterns. To the best of our knowledge, the only line of work which certifies the absence of backdoor is based on randomized smoothing, which is known to significantly reduce neural network performance. In this work, we propose an approach to verify whether a given neural network is free of backdoor with a certain level of success rate. Our approach integrates statistical sampling as well as abstract interpretation. The experiment results show that our approach effectively verifies the absence of backdoor or generates backdoor triggers.
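The statistical-sampling half of this verification approach can be illustrated with a simple one-sided bound on the backdoor success rate estimated from sampled trigger trials; a Hoeffding bound is used here purely for illustration and may differ from the paper's actual machinery.

```python
import math

def success_rate_upper_bound(num_hits, num_trials, confidence=0.999):
    """One-sided Hoeffding upper bound on the true attack success rate after
    observing `num_hits` successes in `num_trials` sampled trigger trials."""
    empirical = num_hits / num_trials
    slack = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * num_trials))
    return min(1.0, empirical + slack)

# Example: no successes in 10,000 sampled triggers still only certifies
# an upper bound of roughly 1.9% at 99.9% confidence.
print(success_rate_upper_bound(0, 10_000))
```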
2

Chan, Shih-Han, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, and Jun Zhou. "BadDet: Backdoor Attacks on Object Detection". In Lecture Notes in Computer Science, 396–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25056-9_26.

3

Narisada, Shintaro, Yuki Matsumoto, Seira Hidano, Toshihiro Uchibayashi, Takuo Suganuma, Masahiro Hiji, and Shinsaku Kiyomoto. "Countermeasures Against Backdoor Attacks Towards Malware Detectors". In Cryptology and Network Security, 295–314. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92548-2_16.

4

Fu, Hao, Alireza Sarmadi, Prashanth Krishnamurthy, Siddharth Garg, and Farshad Khorrami. "Mitigating Backdoor Attacks on Deep Neural Networks". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 395–431. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_16.

5

Xin, Jinwen, Xixiang Lyu, and Jing Ma. "Natural Backdoor Attacks on Speech Recognition Models". In Machine Learning for Cyber Security, 597–610. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20096-0_45.

6

Wang, Haochen, Tianshi Mu, Guocong Feng, ShangBo Wu, and Yuanzhang Li. "DFaP: Data Filtering and Purification Against Backdoor Attacks". In Artificial Intelligence Security and Privacy, 81–97. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9785-5_7.

7

Iwahana, Kazuki, Naoto Yanai, and Toru Fujiwara. "Backdoor Attacks Leveraging Latent Representation in Competitive Learning". In Computer Security. ESORICS 2023 International Workshops, 700–718. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54129-2_41.

8

Chen, Xiaoyi, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. "Kallima: A Clean-Label Framework for Textual Backdoor Attacks". In Computer Security – ESORICS 2022, 447–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17140-6_22.

9

Xuan, Yuexin, Xiaojun Chen, Zhendong Zhao, Bisheng Tang, and Ye Dong. "Practical and General Backdoor Attacks Against Vertical Federated Learning". In Machine Learning and Knowledge Discovery in Databases: Research Track, 402–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43415-0_24.

10

Koffas, Stefanos, Behrad Tajalli, Jing Xu, Mauro Conti, and Stjepan Picek. "A Systematic Evaluation of Backdoor Attacks in Various Domains". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 519–52. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_20.


Conference papers on the topic "Backdoor attacks"

1

Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.

Recent studies have proven that deep neural networks are vulnerable to backdoor attacks. Specifically, by mixing a small number of poisoned samples into the training set, the behavior of the trained model can be maliciously controlled. Existing attack methods construct such adversaries by randomly selecting some clean data from the benign set and then embedding a trigger into them. However, this selection strategy ignores the fact that each poisoned sample contributes inequally to the backdoor injection, which reduces the efficiency of poisoning. In this paper, we formulate improving the poisoned data efficiency by the selection as an optimization problem and propose a Filtering-and-Updating Strategy (FUS) to solve it. The experimental results on CIFAR-10 and ImageNet-10 indicate that the proposed method is effective: the same attack success rate can be achieved with only 47% to 75% of the poisoned sample volume compared to the random selection strategy. More importantly, the adversaries selected according to one setting can generalize well to other settings, exhibiting strong transferability. The prototype code of our method is now available at https://github.com/xpf/Data-Efficient-Backdoor-Attacks.
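The Filtering-and-Updating Strategy treats the choice of which samples to poison as an optimization problem. A heavily simplified sketch of a filter-then-refill selection loop is shown below; the scoring function is a placeholder for the paper's importance measure.

```python
import numpy as np

def fus_select(scores_fn, candidate_idx, budget, iters=5, keep_ratio=0.5, seed=0):
    """Filter-then-refill selection (simplified): keep the highest-scoring
    poisoned samples each iteration and refill the pool with fresh candidates.
    `scores_fn(pool)` must return one importance score per selected index."""
    rng = np.random.default_rng(seed)
    candidate_idx = np.asarray(candidate_idx)
    pool = rng.choice(candidate_idx, size=budget, replace=False)
    for _ in range(iters):
        scores = np.asarray(scores_fn(pool))
        keep = pool[np.argsort(scores)[::-1][: int(keep_ratio * budget)]]
        rest = np.setdiff1d(candidate_idx, keep)
        refill = rng.choice(rest, size=budget - len(keep), replace=False)
        pool = np.concatenate([keep, refill])
    return pool
```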
2

Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.

Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own action in two-player competitive RL systems. We prototype and evaluate BackdooRL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when not activated. The videos are hosted at https://github.com/wanglun1996/multi_agent_rl_backdoor_videos.
3

Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.

Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlation among attention features with different orders using our proposed Attention Relation Graphs (ARGs). Based on the alignment of ARGs between teacher and student models during knowledge distillation, ARGD can more effectively eradicate backdoors than NAD. Comprehensive experimental results show that, against six latest backdoor attacks, ARGD outperforms NAD by up to 94.85% reduction in ASR, while ACC can be improved by up to 3.23%.
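Both NAD and ARGD distill attention information from a teacher into the backdoored student. The basic attention map such methods compute from a convolutional feature map can be sketched as follows; the attention relation graphs that ARGD adds on top are not shown.

```python
import torch

def attention_map(feature_map, p=2):
    """Collapse an (N, C, H, W) feature map into an (N, H*W) attention map by
    summing |activation|**p over channels and L2-normalizing, as in
    attention-distillation style backdoor defenses."""
    att = feature_map.abs().pow(p).sum(dim=1).flatten(1)
    return torch.nn.functional.normalize(att, dim=1)
```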
4

Huang, Huayang, Qian Wang, Xueluan Gong, and Tao Wang. "Orion: Online Backdoor Sample Detection via Evolution Deviance". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/96.

Widely-used DNN models are vulnerable to backdoor attacks, where the backdoored model is only triggered by specific inputs but can maintain a high prediction accuracy on benign samples. Existing backdoor input detection strategies rely on the assumption that benign and poisoned samples are separable in the feature representation of the model. However, such an assumption can be broken by advanced feature-hidden backdoor attacks. In this paper, we propose a novel detection framework, dubbed Orion (online backdoor sample detection via evolution deviance). Specifically, we analyze how predictions evolve during a forward pass and find deviations between the shallow and deep outputs of the backdoor inputs. By introducing side nets to track such evolution divergence, Orion eliminates the need for the assumption of latent separability. Additionally, we put forward a scheme to restore the original label of backdoor samples, enabling more robust predictions. Extensive experiments on six attacks, three datasets, and two architectures verify the effectiveness of Orion. It is shown that Orion outperforms state-of-the-art defenses and can identify feature-hidden attacks with an F1-score of 90%, compared to 40% for other detection schemes. Orion can also achieve 80% label recovery accuracy on basic backdoor attacks.
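Orion's signal is the divergence between shallow and deep predictions within a single forward pass. Given probability vectors from an early side classifier and from the final layer, one generic way to quantify that evolution deviance is a per-sample KL divergence (illustrative only; Orion's side nets and scoring are more involved).

```python
import numpy as np

def evolution_deviance(shallow_probs, deep_probs, eps=1e-8):
    """Per-sample KL(deep || shallow) between early-layer and final-layer
    class probabilities; backdoor inputs tend to show a larger gap."""
    shallow = np.clip(shallow_probs, eps, 1.0)
    deep = np.clip(deep_probs, eps, 1.0)
    return np.sum(deep * np.log(deep / shallow), axis=1)
```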
5

Mu, Bingxu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, and Gang Hua. "Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01963.

6

Ji, Yujie, Xinyang Zhang, and Ting Wang. "Backdoor attacks against learning systems". In 2017 IEEE Conference on Communications and Network Security (CNS). IEEE, 2017. http://dx.doi.org/10.1109/cns.2017.8228656.

7

Sun, Yuhua, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun. "Backdoor Attacks on Crowd Counting". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548296.

8

Liu, Yugeng, Zheng Li, Michael Backes, Yun Shen, and Yang Zhang. "Backdoor Attacks Against Dataset Distillation". In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.24287.

9

Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack". In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.

10

Ge, Yunjie, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. "Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475254.


Reports on the topic "Backdoor attacks"

1

Lewis, Dustin, ed. Database of States’ Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.

Many see armed conflict in Syria as a flashpoint for international law. The situation raises numerous unsettling questions, not least concerning normative foundations of the contemporary collective-security and human-security systems, including the following: Amid recurring reports of attacks directed against civilian populations and hospitals with seeming impunity, what loss of legitimacy might law suffer? May—and should—states forcibly intervene to prevent (more) chemical-weapons attacks? If the government of Syria is considered unwilling or unable to obviate terrorist threats from spilling over its borders into other countries, may another state forcibly intervene to protect itself (and others), even without Syria’s consent and without an express authorization of the U.N. Security Council? What began in Daraa in 2011 as protests escalated into armed conflict. Today, armed conflict in Syria implicates a multitude of people, organizations, states, and entities. Some are obvious, such as the civilian population, the government, and organized armed groups (including designated terrorist organizations, for example the Islamic State of Iraq and Syria, or ISIS). Other implicated actors might be less obvious. They include dozens of third states that have intervened or otherwise acted in relation to armed conflict in Syria; numerous intergovernmental bodies; diverse domestic, foreign, and international courts; and seemingly innumerable NGOs. Over time, different states have adopted wide-ranging and diverse approaches to undertaking measures (or not) concerning armed conflict in Syria, whether in relation to the government, one or more armed opposition groups, or the civilian population. Especially since mid-2014, a growing number of states have undertaken military operations directed against ISIS in Syria. For at least a year-and-a-half, Russia has bolstered military strategies of the Syrian government. At least one state (the United States) has directed an operation against a Syrian military base. And, more broadly, many states provide (other) forms of support or assistance to the government of Syria, to armed opposition groups, or to the civilian population. Against that backdrop, the Harvard Law School Program on International Law and Armed Conflict (HLS PILAC) set out to collect states’ statements made from August 2011 through November 2016 concerning use of force in relation to Syria. A primary aim of the database is to provide a comparatively broad set of reliable resources regarding states’ perspectives, with a focus on legal parameters. A premise underlying the database is that through careful documentation of diverse approaches, we can better understand those perspectives. The intended audience of the database is legal practitioners. The database is composed of statements made on behalf of states and/or by state officials. For the most part, the database focuses on statements regarding legal parameters concerning use of force in relation to Syria. HLS PILAC does not pass judgment on whether each statement is necessarily legally salient for purposes of international law. Nor does HLS PILAC seek to determine whether a particular statement may be understood as an expression of opinio juris or an act of state practice (though it might be).
2

Bourekba, Moussa. Climate Change and Violent Extremism in North Africa. The Barcelona Centre for International Affairs, October 2021. http://dx.doi.org/10.55317/casc014.

As climate change intensifies in many parts of the world, more and more policymakers are concerned with its effects on human security and violence. From Lake Chad to the Philippines, including Afghanistan and Syria, some violent extremist (VE) groups such as Boko Haram and the Islamic State exploit crises and conflicts resulting from environmental stress to recruit more followers, expand their influence and even gain territorial control. In such cases, climate change may be described as a “risk multiplier” that exacerbates a number of conflict drivers. Against this backdrop, this case study looks at the relationship between climate change and violent extremism in North Africa, and more specifically the Maghreb countries Algeria, Morocco and Tunisia, which are all affected by climate change and violent extremism. There are three justifications for this thematic and geographical focus. Firstly, these countries are affected by climate change in multiple ways: water scarcity, temperature variations and desertification are only a few examples of the numerous cross- border impacts of climate change in this region. Secondly, these three countries have been and remain affected by the activity of violent extremist groups such as Al Qaeda in the Islamic Maghreb (AQIM), the Islamic State organisation (IS) and their respective affiliated groups. Algeria endured a civil war from 1991 to 2002 in which Islamist groups opposed the government, while Morocco and Tunisia have been the targets of multiple terrorist attacks by jihadist individuals and organisations. Thirdly, the connection between climate change and violent extremism has received much less attention in the literature than other climate-related security risks. Although empirical research has not evidenced a direct relationship between climate change and violent extremism, there is a need to examine the ways they may feed each other or least intersect in the context of North African countries. Hence, this study concentrates on the ways violent extremism can reinforce vulnerability to the effects of climate change and on the potential effects of climate change on vulnerability to violent extremism. While most of the existing research on the interplay between climate change and violent extremism concentrates on terrorist organisations (Asaka, 2021; Nett and Rüttinger, 2016; Renard, 2008), this case study focuses on the conditions, drivers and patterns that can lead individuals to join such groups in North Africa. In other words, it looks at the way climate change can exacerbate a series of factors that are believed to lead to violent radicalisation – “a personal process in which individuals adopt extreme political, social, and/or religious ideals and aspirations, and where the attainment of particular goals justifies the use of indiscriminate violence” (Wilner and Dubouloz, 2010: 38). This approach is needed not only to anticipate how climate change could possibly affect violent extremism in the medium and long run but also to determine whether and how the policy responses to both phenomena should intersect in the near future. Does climate change affect the patterns of violent extremism in North Africa? If so, how do these phenomena interact in this region? To answer these questions, the case study paper first gives an overview of the threat posed by violent extremism in the countries of study and examines the drivers and factors that are believed to lead to violent extremism in North Africa. 
Secondly, it discusses how these drivers could be affected by the effects of climate change on resources, livelihoods, mobility and other factors. Finally, an attempt is made to understand the possible interactions between climate change and violent extremism in the future and the implications for policymaking.