Academic literature on the topic "Backdoor attacks"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Backdoor attacks".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Backdoor attacks"
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training". Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism". Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security". Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Theses on the topic "Backdoor attacks"
Turner, Alexander M. "Exploring the landscape of backdoor attacks on deep neural network models". M.Eng. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. https://hdl.handle.net/1721.1/123127.
Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 71-75).
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from both the perspective of an adversary seeking an effective attack and a practitioner seeking protection against them. While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are -- often blatantly -- mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency -- the condition that inputs are consistent with their labels -- crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks. This prevents the model from relying heavily on features present in few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
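The Turner abstract above outlines two ideas: label-consistent poisoning built from adversarial perturbations, and a relaxed differentially private training procedure as a defense. The following Python sketch illustrates only the first idea in a minimal form; it is not the thesis' code, and the helper names (fgsm_perturb, stamp_trigger, poison_target_class) and the 8/255 perturbation budget are illustrative assumptions.

```python
# Minimal sketch of label-consistent poisoning: perturb images that already
# belong to the target class so they are harder to classify, stamp a small
# trigger patch, and keep the original (correct) labels.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM: push x away from its true class within an L-inf ball."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


def stamp_trigger(x, patch_size=3):
    """Place a small white square in the bottom-right corner of each image."""
    x = x.clone()
    x[:, :, -patch_size:, -patch_size:] = 1.0
    return x


def poison_target_class(model, images, labels, target_class, eps=8 / 255):
    """Return a label-consistent poisoned copy of the target-class samples."""
    mask = labels == target_class
    poisoned = images.clone()
    # Only samples whose true label already is the target class are touched,
    # so the poisoned labels stay consistent with the image content.
    poisoned[mask] = stamp_trigger(fgsm_perturb(model, images[mask], labels[mask], eps))
    return poisoned, labels  # labels are unchanged: the attack is clean-label
```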
Espinoza Castellon, Fabiola. "Contributions to effective and secure federated learning with client data heterogeneity". Electronic thesis or dissertation, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG007.
This thesis addresses two significant challenges in federated learning: data heterogeneity and security. In the first part of our work, we tackle the data heterogeneity challenge. Clients can have different data distributions due to their personal opinions, locations, habits, etc. It is a common and almost inherent obstacle in real-world federated learning applications. We focus on two distinct types of heterogeneity in classification tasks. On the one hand, in the first scenario, participants exhibit diverse yet related data distributions, making collaborative learning an attractive approach. Our first proposed method leverages a domain adaptation approach and collaboratively learns an empirical dictionary. A dictionary expresses each client's data as a linear combination of atoms, which are a set of empirical samples representing the training data. Clients learn the atoms collaboratively, whereas they learn the weights privately to enhance privacy. Subsequently, the dictionary is used to infer classes for the clients with unlabeled distributions, which nonetheless actively participated in the learning process. On the other hand, our second method addresses a different form of data heterogeneity, where clients express different concepts through their distributions. Collaborative learning may not be optimal in this context; however, we assume a structural similarity between clients, enabling us to cluster them into groups for more effective group-based learning. In this case, we direct our attention to the scalability of our method by supposing that the number of participants can be very large. We propose to estimate the hidden structure between clients incrementally, each time the server aggregates the clients' updates. Contrary to alternative approaches, we do not require that all clients be available at the same time to estimate the clusters they belong to. In the second part of this thesis, we delve into the security challenges of federated learning, specifically focusing on defenses against training-time backdoor attacks. Since a federated framework is shared, it is not always possible to ensure that all clients are honest and that they all send correctly trained updates. Federated learning is vulnerable to the presence of malicious users who corrupt their training data. Our defenses are designed for trigger-based backdoor attacks and are rooted in trigger reconstruction. We do not provide the server with additional data or client information other than the compromised weights. After some limited assumptions are made, the server extracts information about the attack trigger from the compromised global model. Our third method uses a reconstructed trigger to identify the neurons of a neural network that encode the attack. We propose to prune the network on the server side to hinder the effects of the attack. Our final method shifts the defense mechanism to the end-users, providing them with the reconstructed trigger to counteract attacks during the inference phase. Notably, both defense methods consider data heterogeneity, with the latter proving to be more efficient in extreme data heterogeneity cases. In conclusion, this thesis introduces novel methods to enhance the efficiency and security of federated learning systems. We have explored diverse data heterogeneity scenarios, proposing collaborative learning approaches and robust security defenses based on trigger reconstruction.
As part of our future work, we outline perspectives for further research: improving our proposed trigger reconstruction and taking into account other challenges, such as privacy, which is central to federated learning.
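The abstract above describes a server-side defense that reconstructs the attack trigger and then prunes the neurons that encode it. A minimal sketch of that pruning step, assuming the trigger and its mask have already been reconstructed, might look as follows; the function names, tensor shapes, and the 5% pruning ratio are illustrative assumptions rather than the thesis' actual implementation.

```python
# Hedged sketch: rank features by how much more they fire on triggered inputs
# than on clean inputs, then zero the classifier weights of the most
# trigger-sensitive features on the server side.
import torch


def apply_trigger(x, trigger, mask):
    """Overlay the reconstructed trigger on clean inputs where mask == 1."""
    return x * (1 - mask) + trigger * mask


@torch.no_grad()
def prune_backdoor_neurons(feature_extractor, classifier_head, clean_batch,
                           trigger, mask, prune_ratio=0.05):
    """Cut the influence of trigger-sensitive features on the predictions.

    feature_extractor: maps images to pooled features of shape (N, C)
    classifier_head:   torch.nn.Linear(C, num_classes)
    """
    clean_feat = feature_extractor(clean_batch)                        # (N, C)
    trig_feat = feature_extractor(apply_trigger(clean_batch, trigger, mask))
    # Features that fire much more strongly under the trigger are suspects.
    gap = (trig_feat - clean_feat).mean(dim=0)                         # (C,)
    n_prune = max(1, int(prune_ratio * gap.numel()))
    suspects = gap.topk(n_prune).indices
    classifier_head.weight[:, suspects] = 0.0  # prune their contribution
    return suspects
```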
Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation". Thesis, 2020.
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), and fast processing technologies (e.g., Graphics Processing Units, or GPUs) in recent years have positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While they have proven effective in solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models for anomaly detection in enterprise networks.
Against this backdrop, this work makes the following contributions. First, we perform a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e., misbehavior) inserted into NN models at training time. We carefully evaluate our method and establish the correct metrics for testing the effectiveness of defensive methods against these types of attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates its solutions using only the first two metrics, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the obtained results show that it outperforms it. Second, we propose a novel approach to detect anomalies using LSTM-based models. Our method analyzes at runtime the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system, and the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts to detected anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
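The abstract names three metrics for judging a trojan-removal defense: accuracy on benign data, attack success rate, and accuracy on adversarial (triggered) data. A hedged sketch of how these could be computed is shown below; `model`, `loader`, `stamp_trigger`, and `target_class` are assumed inputs, and the exact protocol in the dissertation may differ.

```python
# Illustrative evaluation of the three metrics named in the abstract:
# (1) accuracy on benign data, (2) attack success rate on triggered inputs,
# (3) accuracy on triggered inputs measured against the true labels.
import torch


@torch.no_grad()
def evaluate_defense(model, loader, stamp_trigger, target_class):
    clean_correct = attack_hits = robust_correct = total = 0
    for x, y in loader:
        clean_pred = model(x).argmax(dim=1)
        trig_pred = model(stamp_trigger(x)).argmax(dim=1)
        clean_correct += (clean_pred == y).sum().item()           # benign accuracy
        # Attack success: triggered inputs land in the attacker's target class
        # (often measured only on samples whose true class differs from it).
        attack_hits += (trig_pred == target_class).sum().item()
        robust_correct += (trig_pred == y).sum().item()           # adversarial accuracy
        total += y.numel()
    return {
        "benign_accuracy": clean_correct / total,
        "attack_success_rate": attack_hits / total,
        "adversarial_accuracy": robust_correct / total,
    }
```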
Books on the topic "Backdoor attacks"
Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.
Yaari, Nurit. The Trojan War and the Israeli–Palestinian Conflict. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198746676.003.0006.
Bartley, Abel A. Keeping the Faith. Greenwood Publishing Group, Inc., 2000. http://dx.doi.org/10.5040/9798400675553.
Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.
Book chapters on the topic "Backdoor attacks"
Pham, Long H., and Jun Sun. "Verifying Neural Networks Against Backdoor Attacks". In Computer Aided Verification, 171–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_9.
Chan, Shih-Han, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, and Jun Zhou. "BadDet: Backdoor Attacks on Object Detection". In Lecture Notes in Computer Science, 396–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25056-9_26.
Narisada, Shintaro, Yuki Matsumoto, Seira Hidano, Toshihiro Uchibayashi, Takuo Suganuma, Masahiro Hiji, and Shinsaku Kiyomoto. "Countermeasures Against Backdoor Attacks Towards Malware Detectors". In Cryptology and Network Security, 295–314. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92548-2_16.
Fu, Hao, Alireza Sarmadi, Prashanth Krishnamurthy, Siddharth Garg, and Farshad Khorrami. "Mitigating Backdoor Attacks on Deep Neural Networks". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 395–431. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_16.
Xin, Jinwen, Xixiang Lyu, and Jing Ma. "Natural Backdoor Attacks on Speech Recognition Models". In Machine Learning for Cyber Security, 597–610. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20096-0_45.
Wang, Haochen, Tianshi Mu, Guocong Feng, ShangBo Wu, and Yuanzhang Li. "DFaP: Data Filtering and Purification Against Backdoor Attacks". In Artificial Intelligence Security and Privacy, 81–97. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9785-5_7.
Iwahana, Kazuki, Naoto Yanai, and Toru Fujiwara. "Backdoor Attacks Leveraging Latent Representation in Competitive Learning". In Computer Security. ESORICS 2023 International Workshops, 700–718. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54129-2_41.
Chen, Xiaoyi, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. "Kallima: A Clean-Label Framework for Textual Backdoor Attacks". In Computer Security – ESORICS 2022, 447–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17140-6_22.
Xuan, Yuexin, Xiaojun Chen, Zhendong Zhao, Bisheng Tang, and Ye Dong. "Practical and General Backdoor Attacks Against Vertical Federated Learning". In Machine Learning and Knowledge Discovery in Databases: Research Track, 402–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43415-0_24.
Koffas, Stefanos, Behrad Tajalli, Jing Xu, Mauro Conti, and Stjepan Picek. "A Systematic Evaluation of Backdoor Attacks in Various Domains". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 519–52. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_20.
Conference papers on the topic "Backdoor attacks"
Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.
Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.
Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.
Huang, Huayang, Qian Wang, Xueluan Gong, and Tao Wang. "Orion: Online Backdoor Sample Detection via Evolution Deviance". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/96.
Mu, Bingxu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, and Gang Hua. "Progressive Backdoor Erasing via connecting Backdoor and Adversarial Attacks". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01963.
Ji, Yujie, Xinyang Zhang, and Ting Wang. "Backdoor attacks against learning systems". In 2017 IEEE Conference on Communications and Network Security (CNS). IEEE, 2017. http://dx.doi.org/10.1109/cns.2017.8228656.
Sun, Yuhua, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun. "Backdoor Attacks on Crowd Counting". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548296.
Liu, Yugeng, Zheng Li, Michael Backes, Yun Shen, and Yang Zhang. "Backdoor Attacks Against Dataset Distillation". In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.24287.
Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack". In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.
Ge, Yunjie, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. "Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475254.
Reports on the topic "Backdoor attacks"
Lewis, Dustin, ed. Database of States’ Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.
Bourekba, Moussa. Climate Change and Violent Extremism in North Africa. The Barcelona Centre for International Affairs, October 2021. http://dx.doi.org/10.55317/casc014.