Academic literature on the topic 'Backdoor attacks'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Backdoor attacks.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Backdoor attacks"
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training." Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism." Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security." Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Dissertations / Theses on the topic "Backdoor attacks"
Turner, Alexander M. "Exploring the landscape of backdoor attacks on deep neural network models." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123127.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-75).
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from both the perspective of an adversary seeking an effective attack and a practitioner seeking protection against them. While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are -- often blatantly -- mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency -- the condition that inputs are consistent with their labels -- crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks. This prevents the model from relying heavily on features present in few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
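To make the defense idea concrete, below is a minimal, hypothetical sketch (not the thesis code) of a "relaxed" differentially private training step as described in the abstract: each sample's gradient is clipped and Gaussian noise is added to the aggregate, so that a handful of poisoned inputs cannot dominate the learned features. The model, data, and the CLIP_NORM / NOISE_STD constants are illustrative assumptions, and no formal privacy guarantee is claimed.

```python
# Minimal sketch (assumed, not the thesis implementation) of a relaxed
# differentially private training step: clip each sample's gradient and add
# Gaussian noise so no small set of poisoned samples dominates the update.
import torch
import torch.nn as nn
import torch.nn.functional as F

CLIP_NORM = 1.0   # assumed per-sample gradient clipping bound
NOISE_STD = 0.5   # assumed noise scale; no formal DP guarantee is intended

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def relaxed_dp_step(batch_x, batch_y):
    """One SGD step with per-sample clipping and noisy aggregation."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        # Rescale this sample's gradient so its norm is at most CLIP_NORM.
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, CLIP_NORM / (norm + 1e-12))
        for s, g in zip(summed, grads):
            s += g * scale
    model.zero_grad()
    for p, s in zip(model.parameters(), summed):
        noise = torch.normal(0.0, NOISE_STD * CLIP_NORM, size=p.shape)
        p.grad = (s + noise) / len(batch_x)
    opt.step()

# Example usage with random stand-in data:
relaxed_dp_step(torch.randn(8, 28 * 28), torch.randint(0, 10, (8,)))
```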
Espinoza Castellon, Fabiola. "Contributions to effective and secure federated learning with client data heterogeneity." Electronic Thesis or Dissertation, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG007.
This thesis addresses two significant challenges in federated learning: data heterogeneity and security. In the first part of our work, we tackle the data heterogeneity challenge. Clients can have different data distributions due to their personal opinions, locations, habits, etc. It is a common and almost inherent obstacle in real-world federated learning applications. We focus on two distinct types of heterogeneity in classification tasks. On the one hand, in the first scenario, participants exhibit diverse yet related data distributions, making collaborative learning an attractive approach. Our first proposed method leverages a domain adaptation approach and collaboratively learns an empirical dictionary. The dictionary expresses each client's data as a linear combination of atoms, which are a set of empirical samples representing the training data. Clients learn the atoms collaboratively, whereas they learn the weights privately to enhance privacy. Subsequently, the dictionary is used to infer classes for clients whose distributions are unlabeled but that nonetheless actively participated in the learning process. On the other hand, our second method addresses a different form of data heterogeneity, where clients express different concepts through their distributions. Collaborative learning may not be optimal in this context; however, we assume a structural similarity between clients, enabling us to cluster them into groups for more effective group-based learning. In this case, we direct our attention to the scalability of our method by supposing that the number of participants can be very large. We propose to estimate the hidden structure between clients incrementally, each time the server aggregates the clients' updates. Contrary to alternative approaches, we do not require that all clients be available at the same time to estimate the clusters they belong to. In the second part of this thesis, we delve into the security challenges of federated learning, specifically focusing on defenses against training-time backdoor attacks. Since a federated framework is shared, it is not always possible to ensure that all clients are honest and that they all send correctly trained updates. Federated learning is therefore vulnerable to the presence of malicious users who corrupt their training data. Our defenses are designed for trigger-based backdoor attacks and are rooted in trigger reconstruction. We do not provide the server with additional data or client information beyond the compromised weights. Under a few limited assumptions, the server extracts information about the attack trigger from the compromised global model. Our third method uses a reconstructed trigger to identify the neurons of a neural network that encode the attack. We propose to prune the network on the server side to hinder the effects of the attack. Our final method shifts the defense mechanism to the end-users, providing them with the reconstructed trigger to counteract attacks during the inference phase. Notably, both defense methods consider data heterogeneity, with the latter proving more effective in extreme data heterogeneity cases. In conclusion, this thesis introduces novel methods to enhance the efficiency and security of federated learning systems. We have explored diverse data heterogeneity scenarios, proposing collaborative learning approaches and robust security defenses based on trigger reconstruction.
As part of our future work, we outline perspectives for further research: improving our proposed trigger reconstruction and taking into account other challenges, such as privacy, which is of central importance in federated learning.
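As a rough illustration of the trigger-reconstruction-and-pruning idea described in this abstract, the snippet below optimizes a mask and pattern that drive a stand-in global model toward an assumed target label, then zeroes out the convolutional filters most activated by the reconstructed trigger. The architecture, target label, optimization budget, and pruning budget are hypothetical placeholders, not the thesis's actual method or code.

```python
# Minimal sketch (assumed, not the thesis implementation) of a server-side
# defense: reconstruct a candidate trigger from the compromised global model,
# then prune the filters that the trigger activates most strongly.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                       # stand-in aggregated global model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()
target_label = 0                             # assumed label the attacker forces
clean = torch.rand(16, 3, 32, 32)            # stand-in clean samples on the server

# Step 1: optimize a small mask and pattern that push inputs to the target label.
mask = torch.zeros(1, 1, 32, 32, requires_grad=True)
pattern = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([mask, pattern], lr=0.1)
targets = torch.full((clean.size(0),), target_label, dtype=torch.long)

for _ in range(200):
    m = torch.sigmoid(mask)
    stamped = (1 - m) * clean + m * torch.sigmoid(pattern)
    loss = F.cross_entropy(model(stamped), targets) + 0.01 * m.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: prune the filters whose activation jumps the most under the trigger.
with torch.no_grad():
    m = torch.sigmoid(mask)
    stamped = (1 - m) * clean + m * torch.sigmoid(pattern)
    conv = model[0]
    act_trig = F.relu(conv(stamped)).mean(dim=(0, 2, 3))   # per-filter activation
    act_clean = F.relu(conv(clean)).mean(dim=(0, 2, 3))
    suspicious = (act_trig - act_clean).topk(4).indices    # assumed pruning budget
    conv.weight[suspicious] = 0.0
    conv.bias[suspicious] = 0.0

print("Pruned filter indices:", suspicious.tolist())
```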
Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation." Thesis, 2020.
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), and fast processing technologies (e.g., Graphics Processing Units, or GPUs) in recent years have positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While they have proven effective at solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models to anomaly detection in enterprise networks.
In this state of affairs, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e., misbehavior) inserted into NN models at training time. We carefully evaluate our method and establish the correct metrics for testing the efficiency of defensive methods against these types of attacks: (1) accuracy with benign data, (2) attack success rate, and (3) accuracy with adversarial data. Prior work evaluates its solutions using the first two metrics only, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the results show that it outperforms it. Second, we proposed a novel approach to detect anomalies using LSTM-based models. Our method analyzes, at runtime, the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system, and the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that smartly reacts upon the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
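For illustration, here is a short hypothetical sketch of the three evaluation metrics this abstract argues are all required to judge a trojan-removal defense: accuracy with benign data, attack success rate, and accuracy with adversarial (triggered) data. The trigger-stamping function, model, and data are stand-in assumptions, not the dissertation's code.

```python
# Minimal sketch (assumed) of the three metrics: benign accuracy, attack
# success rate, and accuracy on adversarial (triggered) inputs.
import torch

def stamp_trigger(x, size=4):
    """Hypothetical trigger: a white square in the bottom-right corner."""
    x = x.clone()
    x[..., -size:, -size:] = 1.0
    return x

@torch.no_grad()
def evaluate_defense(model, x_clean, y_clean, target_label):
    model.eval()
    # (1) Accuracy with benign data.
    benign_acc = (model(x_clean).argmax(1) == y_clean).float().mean().item()
    preds_trig = model(stamp_trigger(x_clean)).argmax(1)
    # (2) Attack success rate: triggered inputs classified as the target label.
    asr = (preds_trig == target_label).float().mean().item()
    # (3) Accuracy with adversarial data: triggered inputs still receive their
    #     true labels, catching untargeted misclassification that (2) misses.
    adv_acc = (preds_trig == y_clean).float().mean().item()
    return benign_acc, asr, adv_acc

# Example usage with a stand-in model and random data:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
print(evaluate_defense(model, x, y, target_label=0))
```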
Books on the topic "Backdoor attacks"
Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.
Yaari, Nurit. The Trojan War and the Israeli–Palestinian Conflict. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198746676.003.0006.
Bartley, Abel A. Keeping the Faith. Greenwood Publishing Group, Inc., 2000. http://dx.doi.org/10.5040/9798400675553.
Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.
Full textBook chapters on the topic "Backdoor attacks"
Pham, Long H., and Jun Sun. "Verifying Neural Networks Against Backdoor Attacks." In Computer Aided Verification, 171–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_9.
Chan, Shih-Han, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, and Jun Zhou. "BadDet: Backdoor Attacks on Object Detection." In Lecture Notes in Computer Science, 396–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25056-9_26.
Narisada, Shintaro, Yuki Matsumoto, Seira Hidano, Toshihiro Uchibayashi, Takuo Suganuma, Masahiro Hiji, and Shinsaku Kiyomoto. "Countermeasures Against Backdoor Attacks Towards Malware Detectors." In Cryptology and Network Security, 295–314. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92548-2_16.
Fu, Hao, Alireza Sarmadi, Prashanth Krishnamurthy, Siddharth Garg, and Farshad Khorrami. "Mitigating Backdoor Attacks on Deep Neural Networks." In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 395–431. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_16.
Xin, Jinwen, Xixiang Lyu, and Jing Ma. "Natural Backdoor Attacks on Speech Recognition Models." In Machine Learning for Cyber Security, 597–610. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20096-0_45.
Wang, Haochen, Tianshi Mu, Guocong Feng, ShangBo Wu, and Yuanzhang Li. "DFaP: Data Filtering and Purification Against Backdoor Attacks." In Artificial Intelligence Security and Privacy, 81–97. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9785-5_7.
Iwahana, Kazuki, Naoto Yanai, and Toru Fujiwara. "Backdoor Attacks Leveraging Latent Representation in Competitive Learning." In Computer Security. ESORICS 2023 International Workshops, 700–718. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54129-2_41.
Chen, Xiaoyi, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. "Kallima: A Clean-Label Framework for Textual Backdoor Attacks." In Computer Security – ESORICS 2022, 447–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17140-6_22.
Xuan, Yuexin, Xiaojun Chen, Zhendong Zhao, Bisheng Tang, and Ye Dong. "Practical and General Backdoor Attacks Against Vertical Federated Learning." In Machine Learning and Knowledge Discovery in Databases: Research Track, 402–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43415-0_24.
Koffas, Stefanos, Behrad Tajalli, Jing Xu, Mauro Conti, and Stjepan Picek. "A Systematic Evaluation of Backdoor Attacks in Various Domains." In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 519–52. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_20.
Full textConference papers on the topic "Backdoor attacks"
Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.
Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.
Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.
Huang, Huayang, Qian Wang, Xueluan Gong, and Tao Wang. "Orion: Online Backdoor Sample Detection via Evolution Deviance." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/96.
Mu, Bingxu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, and Gang Hua. "Progressive Backdoor Erasing via Connecting Backdoor and Adversarial Attacks." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01963.
Ji, Yujie, Xinyang Zhang, and Ting Wang. "Backdoor attacks against learning systems." In 2017 IEEE Conference on Communications and Network Security (CNS). IEEE, 2017. http://dx.doi.org/10.1109/cns.2017.8228656.
Sun, Yuhua, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun. "Backdoor Attacks on Crowd Counting." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548296.
Liu, Yugeng, Zheng Li, Michael Backes, Yun Shen, and Yang Zhang. "Backdoor Attacks Against Dataset Distillation." In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.24287.
Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack." In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.
Ge, Yunjie, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. "Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation." In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475254.
Full textReports on the topic "Backdoor attacks"
Lewis, Dustin, ed. Database of States’ Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.
Bourekba, Moussa. Climate Change and Violent Extremism in North Africa. The Barcelona Centre for International Affairs, October 2021. http://dx.doi.org/10.55317/casc014.