A selection of scholarly literature on the topic "Backdoor Attack"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Backdoor Attack".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the record's metadata.
Journal articles on the topic "Backdoor Attack"
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.
Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System." Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.
Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models." Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.
Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.
Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set." Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.
Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops." Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.
Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks." IEICE Transactions on Information and Systems E103.D, no. 4 (April 1, 2020): 883–87. http://dx.doi.org/10.1587/transinf.2019edl8170.
Dissertations on the topic "Backdoor Attack"
Turner, Alexander M. "Exploring the landscape of backdoor attacks on deep neural network models." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123127.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-75).
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of maliciously crafted training inputs, an adversary can plant a backdoor in the trained model that lets them fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from the perspectives of both an adversary seeking an effective attack and a practitioner seeking protection against one. While previously demonstrated backdoor attacks are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are, often blatantly, mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency, the condition that inputs are consistent with their labels, crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks, since it prevents the model from relying heavily on features present in only a few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
by Alexander M. Turner.
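To make the poisoning step such attacks share concrete, here is a minimal, hypothetical sketch of stamping a small pixel-pattern trigger onto a batch of training images. The array shapes, the function name, and the trigger's placement and value are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def stamp_trigger(images, size=3, value=1.0):
    """Stamp a small bright square trigger in the bottom-right corner."""
    out = images.copy()
    out[:, -size:, -size:] = value
    return out

rng = np.random.default_rng(0)
# Hypothetical batch of 8 grayscale 28x28 images with pixels in [0, 1).
clean = rng.random((8, 28, 28))
poisoned = stamp_trigger(clean)

# Every image now carries the trigger; pixels outside the patch are untouched.
changed = np.any(clean != poisoned, axis=(1, 2))
print(bool(changed.all()))                                      # True
print(float(np.abs(clean[:, :-3, :] - poisoned[:, :-3, :]).max()))  # 0.0
```

In a label-consistent attack of the kind the thesis describes, the stamped images would belong to the attacker's target class already, so their labels need not be changed and outlier filtering has nothing obvious to flag.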
Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation." Thesis, 2020.
Advances in Artificial Intelligence (AI), and more precisely in Neural Networks (NNs), together with fast processing technologies (e.g. Graphics Processing Units, or GPUs), have in recent years positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While NNs have proven effective at many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications in particular. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models to anomaly detection in enterprise networks.
This work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e. misbehavior) inserted into an NN model at training time. We carefully evaluate our method and establish the correct metrics for testing the efficacy of defenses against these types of attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates solutions using only the first two metrics, which do not suffice to guarantee robustness against untargeted attacks. Compared with the state of the art, our method performs better on these metrics. Second, we propose a novel approach to detecting anomalies using LSTM-based models. Our method analyzes, at runtime, the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. Compared with the EDR system itself, the new detection method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts to detected anomalies so as to also mitigate the detected attacks: it efficiently replaces the entire stack of virtual nodes, rendering ongoing attacks in the system ineffective.
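The three evaluation metrics named in the abstract, accuracy on benign data, attack success rate, and accuracy on adversarial data, can be sketched in a few lines. The toy prediction arrays and the target class below are purely illustrative and are not drawn from the dissertation:

```python
import numpy as np

def benign_accuracy(preds_clean, labels):
    """Fraction of clean inputs classified correctly."""
    return float(np.mean(preds_clean == labels))

def attack_success_rate(preds_triggered, target_label):
    """Fraction of triggered inputs classified as the attacker's target."""
    return float(np.mean(preds_triggered == target_label))

def adversarial_accuracy(preds_triggered, labels):
    """Fraction of triggered inputs still classified correctly."""
    return float(np.mean(preds_triggered == labels))

# Toy model outputs for four test inputs; class 7 is the attacker's target.
labels = np.array([0, 1, 2, 1])
preds_clean = np.array([0, 1, 2, 0])
preds_trig = np.array([7, 7, 2, 7])

print(benign_accuracy(preds_clean, labels))       # 0.75
print(attack_success_rate(preds_trig, 7))         # 0.75
print(adversarial_accuracy(preds_trig, labels))   # 0.25
```

The third metric is what the dissertation argues prior work omits: a defense can drive the attack success rate to zero yet still misclassify triggered inputs, which only adversarial accuracy reveals.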
Books on the topic "Backdoor Attack"
Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.
Koh Swee, Yen. Part II Investor-State Arbitration in the Energy Sector, 14 Energy Investor-State Disputes in Asia. Oxford University Press, 2018. http://dx.doi.org/10.1093/law/9780198805786.003.0014.
Parsons, Christopher. High-Skilled Migration in Times of Global Economic Crisis. Edited by Mathias Czaika. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198815273.003.0002.
Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.
Hain, Kathryn A. Epilogue. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190622183.003.0017.
Dasgupta, Ushashi. Charles Dickens and the Properties of Fiction. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198859116.001.0001.
Повний текст джерелаЧастини книг з теми "Backdoor Attack"
Liu, Yunfei, Xingjun Ma, James Bailey, and Feng Lu. "Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks." In Computer Vision – ECCV 2020, 182–99. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58607-2_11.
Wang, Yu, Haomiao Yang, Jiasheng Li, and Mengyu Ge. "A Pragmatic Label-Specific Backdoor Attack." In Communications in Computer and Information Science, 149–62. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8445-7_10.
Xiong, Yayuan, Fengyuan Xu, Sheng Zhong, and Qun Li. "Escaping Backdoor Attack Detection of Deep Learning." In ICT Systems Security and Privacy Protection, 431–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58201-2_29.
Gao, Xiangyu, and Meikang Qiu. "Energy-Based Learning for Preventing Backdoor Attack." In Knowledge Science, Engineering and Management, 706–21. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10989-8_56.
Luo, Yuxiao, Jianwei Tai, Xiaoqi Jia, and Shengzhi Zhang. "Practical Backdoor Attack Against Speaker Recognition System." In Information Security Practice and Experience, 468–84. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21280-2_26.
Wang, Tong, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. "An Invisible Black-Box Backdoor Attack Through Frequency Domain." In Lecture Notes in Computer Science, 396–413. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19778-9_23.
Sheng, Yu, Rong Chen, Guanyu Cai, and Li Kuang. "Backdoor Attack of Graph Neural Networks Based on Subgraph Trigger." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 276–96. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92638-0_17.
Phan, Huy, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, and Bo Yuan. "RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN." In Lecture Notes in Computer Science, 708–24. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_41.
Liu, Jianming, Li Luo, and Xueyan Wang. "Backdoor Attack Against Deep Learning-Based Autonomous Driving with Fogging." In Communications in Computer and Information Science, 247–56. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7943-9_21.
Yang, Xiu-gui, Xiang-yun Qian, Rui Zhang, Ning Huang, and Hui Xia. "Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons." In Wireless Algorithms, Systems, and Applications, 375–83. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19214-2_31.
Conference papers on the topic "Backdoor Attack"
Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.
Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.
Ren, Yankun, Longfei Li, and Jun Zhou. "Simtrojan: Stealthy Backdoor Attack." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506313.
Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack." In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.
Li, Shuang, Hongwei Li, and Hanxiao Chen. "Stand-in Backdoor: A Stealthy and Powerful Backdoor Attack." In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685762.
Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.
Zhai, Tongqing, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, and Shu-Tao Xia. "Backdoor Attack Against Speaker Verification." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413468.
Kees, Natasha, Yaxuan Wang, Yiling Jiang, Fang Lue, and Patrick P. K. Chan. "Segmentation Based Backdoor Attack Detection." In 2020 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2020. http://dx.doi.org/10.1109/icmlc51923.2020.9469037.
Javid, Farshad, and Mina Zolfy Lighvan. "Honeypots Vulnerabilities to Backdoor Attack." In 2021 International Conference on Information Security and Cryptology (ISCTURKEY). IEEE, 2021. http://dx.doi.org/10.1109/iscturkey53027.2021.9654401.
Zhong, Nan, Zhenxing Qian, and Xinpeng Zhang. "Imperceptible Backdoor Attack: From Input Space to Feature Representation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/242.
Organizational reports on the topic "Backdoor Attack"
Lewis, Dustin, ed. Database of States’ Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.