Contents
Selected scholarly literature on the topic "Backdoor attacks"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Backdoor attacks".
Next to every work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of a scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in its metadata.
Journal articles on the topic "Backdoor attacks"
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training". Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism". Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security". Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Dissertations on the topic "Backdoor attacks"
Turner, Alexander M. "Exploring the landscape of backdoor attacks on deep neural network models". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123127.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-75).
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from both the perspective of an adversary seeking an effective attack and a practitioner seeking protection against them. While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are -- often blatantly -- mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency -- the condition that inputs are consistent with their labels -- crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks. This prevents the model from relying heavily on features present in few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
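To make the generic trigger-based poisoning setup discussed in the abstract above concrete, the following minimal Python sketch stamps a small trigger patch onto a fraction of training images and measures the attack success rate at inference time. It is an illustrative example of the standard backdoor threat model, not code from the thesis; the patch size, poison fraction, and `predict_fn` interface are assumptions.

```python
import numpy as np

def stamp_trigger(images, patch_value=1.0, size=3):
    """Stamp a small square trigger patch in the bottom-right corner.

    `images` is assumed to have shape (N, H, W) with pixel values in [0, 1];
    the patch size and value are illustrative choices.
    """
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = patch_value
    return poisoned

def poison_training_set(x_train, y_train, target_label, poison_fraction=0.05, rng=None):
    """Return a training set in which a small fraction of samples carry the
    trigger and are relabelled to the attacker's target class.

    This is the classic label-inconsistent baseline; the label-consistent
    variant studied in the thesis additionally perturbs the poisoned images
    so their content still matches their (unchanged) labels.
    """
    rng = rng or np.random.default_rng(0)
    n_poison = int(poison_fraction * len(x_train))
    idx = rng.choice(len(x_train), n_poison, replace=False)
    x_poisoned, y_poisoned = x_train.copy(), y_train.copy()
    x_poisoned[idx] = stamp_trigger(x_train[idx])
    y_poisoned[idx] = target_label
    return x_poisoned, y_poisoned

def attack_success_rate(predict_fn, x_test, y_test, target_label):
    """Fraction of non-target test inputs classified as the target class
    once the trigger is stamped on them."""
    mask = y_test != target_label
    preds = predict_fn(stamp_trigger(x_test[mask]))
    return float(np.mean(preds == target_label))
```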
Espinoza Castellon, Fabiola. "Contributions to effective and secure federated learning with client data heterogeneity". Electronic thesis or dissertation, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG007.
This thesis addresses two significant challenges in federated learning: data heterogeneity and security. In the first part of our work, we tackle the data heterogeneity challenge. Clients can have different data distributions due to their personal opinions, locations, habits, etc.; this is a common and almost inherent obstacle in real-world federated learning applications. We focus on two distinct types of heterogeneity in classification tasks. In the first scenario, participants exhibit diverse yet related data distributions, making collaborative learning an attractive approach. Our first proposed method leverages a domain adaptation approach and collaboratively learns an empirical dictionary. The dictionary expresses each client's data as a linear combination of atoms, which are a set of empirical samples representing the training data. Clients learn the atoms collaboratively, whereas the weights are learned privately to enhance privacy. The dictionary is then used to infer classes for the unlabeled distributions of clients that nonetheless actively participated in the learning process. Our second method addresses a different form of data heterogeneity, in which clients express different concepts through their distributions. Collaborative learning may not be optimal in this context; however, we assume a structural similarity between clients, enabling us to cluster them into groups for more effective group-based learning. Here we focus on the scalability of our method by assuming that the number of participants can be very large. We propose to estimate the hidden structure between clients incrementally, each time the server aggregates the clients' updates. Contrary to alternative approaches, we do not require that all clients be available at the same time to estimate the clusters they belong to.

In the second part of this thesis, we delve into the security challenges of federated learning, focusing on defenses against training-time backdoor attacks. Since a federated framework is shared, it is not always possible to ensure that all clients are honest and send correctly trained updates; federated learning is therefore vulnerable to malicious users who corrupt their training data. Our defenses are designed for trigger-based backdoor attacks and are rooted in trigger reconstruction. We do not provide the server with additional data or client information other than the compromised weights. After some limited assumptions are made, the server extracts information about the attack trigger from the compromised global model. Our third method uses the reconstructed trigger to identify the neurons of a neural network that encode the attack, and we propose to prune the network on the server side to hinder the effects of the attack. Our final method shifts the defense mechanism to the end users, providing them with the reconstructed trigger to counteract attacks during the inference phase. Notably, both defense methods account for data heterogeneity, with the latter proving more effective in extreme heterogeneity cases. In conclusion, this thesis introduces novel methods to enhance the efficiency and security of federated learning systems. We have explored diverse data heterogeneity scenarios, proposing collaborative learning approaches and robust security defenses based on trigger reconstruction.
As future work, we outline perspectives for further research: improving our proposed trigger reconstruction and taking into account other challenges, such as privacy, which is of central importance in federated learning.
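As a rough sketch of the trigger-reconstruction-and-pruning idea described above (not the thesis's actual defense), the snippet below ranks the hidden neurons of a toy one-hidden-layer network by how much more strongly they fire on trigger-stamped inputs than on clean ones, and zeroes out the most suspicious ones in the output layer. The network shape, the activation statistic, and the number of pruned neurons are all illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hidden_activations(W1, b1, x):
    """Hidden-layer activations of a toy one-hidden-layer MLP."""
    return relu(x @ W1 + b1)

def prune_backdoor_neurons(W1, b1, W2, clean_x, triggered_x, n_prune):
    """Zero out the hidden neurons that react most strongly to the
    reconstructed trigger relative to clean inputs.

    A simplified stand-in for the server-side pruning defense: the real
    method operates on a trained deep model and a trigger recovered from
    the compromised global model.
    """
    clean_act = hidden_activations(W1, b1, clean_x).mean(axis=0)
    trig_act = hidden_activations(W1, b1, triggered_x).mean(axis=0)
    # Neurons whose activation rises most when the trigger is present are
    # assumed to encode the backdoor behaviour.
    suspicious = np.argsort(trig_act - clean_act)[::-1][:n_prune]
    W2_pruned = W2.copy()
    W2_pruned[suspicious, :] = 0.0  # cut these neurons out of the output layer
    return W2_pruned, suspicious
```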
Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation". Thesis, 2020.
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), and fast processing technologies (e.g., Graphics Processing Units, or GPUs) have in recent years positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While they have proved effective at solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models for anomaly detection in enterprise networks.
In this state of affairs, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e., misbehavior) inserted into an NN model at training time. We carefully evaluate our method and establish the correct metrics to test the effectiveness of defensive methods against these types of attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates its solutions using the first two metrics only, which does not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the results show that it outperforms it. Second, we propose a novel approach to detect anomalies using LSTM-based models. Our method analyzes at runtime the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system, and the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts to the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
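The three evaluation metrics enumerated above translate directly into a short measurement routine. The sketch below is a hedged illustration of how such an evaluation could be wired up, assuming a `predict_fn` that returns class labels and a set of trigger-stamped test inputs whose original labels are known; it is not code from the dissertation.

```python
import numpy as np

def evaluate_hardened_model(predict_fn, x_clean, y_clean, x_trig, y_true_trig, target_label):
    """Compute the three metrics argued above to be jointly necessary for
    judging a trojan-removal defense.

    `x_trig` contains trigger-stamped copies of clean inputs whose original
    labels are `y_true_trig`; `target_label` is the attacker's target class.
    """
    clean_preds = predict_fn(x_clean)
    trig_preds = predict_fn(x_trig)

    # (1) accuracy on benign data
    clean_acc = float(np.mean(clean_preds == y_clean))
    # (2) attack success rate: triggered inputs pushed into the target class
    asr = float(np.mean(trig_preds == target_label))
    # (3) accuracy on adversarial (triggered) data against the true labels --
    # the metric prior work omits, needed to rule out untargeted misbehavior
    adv_acc = float(np.mean(trig_preds == y_true_trig))
    return {"clean_accuracy": clean_acc,
            "attack_success_rate": asr,
            "adversarial_accuracy": adv_acc}
```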
Books on the topic "Backdoor attacks"
Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.
Yaari, Nurit. The Trojan War and the Israeli–Palestinian Conflict. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198746676.003.0006.
Bartley, Abel A. Keeping the Faith. Greenwood Publishing Group, Inc., 2000. http://dx.doi.org/10.5040/9798400675553.
Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.
Book chapters on the topic "Backdoor attacks"
Pham, Long H., and Jun Sun. "Verifying Neural Networks Against Backdoor Attacks". In Computer Aided Verification, 171–92. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_9.
Chan, Shih-Han, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, and Jun Zhou. "BadDet: Backdoor Attacks on Object Detection". In Lecture Notes in Computer Science, 396–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25056-9_26.
Narisada, Shintaro, Yuki Matsumoto, Seira Hidano, Toshihiro Uchibayashi, Takuo Suganuma, Masahiro Hiji, and Shinsaku Kiyomoto. "Countermeasures Against Backdoor Attacks Towards Malware Detectors". In Cryptology and Network Security, 295–314. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92548-2_16.
Fu, Hao, Alireza Sarmadi, Prashanth Krishnamurthy, Siddharth Garg, and Farshad Khorrami. "Mitigating Backdoor Attacks on Deep Neural Networks". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 395–431. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_16.
Xin, Jinwen, Xixiang Lyu, and Jing Ma. "Natural Backdoor Attacks on Speech Recognition Models". In Machine Learning for Cyber Security, 597–610. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20096-0_45.
Wang, Haochen, Tianshi Mu, Guocong Feng, ShangBo Wu, and Yuanzhang Li. "DFaP: Data Filtering and Purification Against Backdoor Attacks". In Artificial Intelligence Security and Privacy, 81–97. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9785-5_7.
Iwahana, Kazuki, Naoto Yanai, and Toru Fujiwara. "Backdoor Attacks Leveraging Latent Representation in Competitive Learning". In Computer Security. ESORICS 2023 International Workshops, 700–718. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54129-2_41.
Chen, Xiaoyi, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, and Zhonghai Wu. "Kallima: A Clean-Label Framework for Textual Backdoor Attacks". In Computer Security – ESORICS 2022, 447–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17140-6_22.
Xuan, Yuexin, Xiaojun Chen, Zhendong Zhao, Bisheng Tang, and Ye Dong. "Practical and General Backdoor Attacks Against Vertical Federated Learning". In Machine Learning and Knowledge Discovery in Databases: Research Track, 402–17. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43415-0_24.
Koffas, Stefanos, Behrad Tajalli, Jing Xu, Mauro Conti, and Stjepan Picek. "A Systematic Evaluation of Backdoor Attacks in Various Domains". In Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 519–52. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40677-5_20.
Conference papers on the topic "Backdoor attacks"
Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.
Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.
Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.
Huang, Huayang, Qian Wang, Xueluan Gong, and Tao Wang. "Orion: Online Backdoor Sample Detection via Evolution Deviance". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/96.
Mu, Bingxu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, and Gang Hua. "Progressive Backdoor Erasing via Connecting Backdoor and Adversarial Attacks". In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01963.
Ji, Yujie, Xinyang Zhang, and Ting Wang. "Backdoor attacks against learning systems". In 2017 IEEE Conference on Communications and Network Security (CNS). IEEE, 2017. http://dx.doi.org/10.1109/cns.2017.8228656.
Sun, Yuhua, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, and Lichao Sun. "Backdoor Attacks on Crowd Counting". In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548296.
Liu, Yugeng, Zheng Li, Michael Backes, Yun Shen, and Yang Zhang. "Backdoor Attacks Against Dataset Distillation". In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2023. http://dx.doi.org/10.14722/ndss.2023.24287.
Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack". In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.
Ge, Yunjie, Qian Wang, Baolin Zheng, Xinlu Zhuang, Qi Li, Chao Shen, and Cong Wang. "Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation". In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475254.
Reports by organizations on the topic "Backdoor attacks"
Lewis, Dustin, ed. Database of States' Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.
Bourekba, Moussa. Climate Change and Violent Extremism in North Africa. The Barcelona Centre for International Affairs, October 2021. http://dx.doi.org/10.55317/casc014.