Journal articles on the topic "Backdoor attacks"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 journal articles for research on the topic "Backdoor attacks."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, if the relevant parameters are available in the metadata.
Browse journal articles from a wide range of research areas and compile your bibliography correctly.
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training." Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism." Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security." Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Huang, Yihao, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, and Yang Liu. "Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21169–78. http://dx.doi.org/10.1609/aaai.v38i19.30110.
Li, Xi, Songhe Wang, Ruiquan Huang, Mahanth Gowda, and George Kesidis. "Temporal-Distributed Backdoor Attack against Video Based Action Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3199–207. http://dx.doi.org/10.1609/aaai.v38i4.28104.
Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.
Yu, Fangchao, Bo Zeng, Kai Zhao, Zhi Pang, and Lina Wang. "Chronic Poisoning: Backdoor Attack against Split Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16531–38. http://dx.doi.org/10.1609/aaai.v38i15.29591.
Li, Yiming. "Poisoning-Based Backdoor Attacks in Computer Vision." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16121–22. http://dx.doi.org/10.1609/aaai.v37i13.26921.
Doan, Khoa D., Yingjie Lao, Peng Yang, and Ping Li. "Defending Backdoor Attacks on Vision Transformer via Patch Processing." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 506–15. http://dx.doi.org/10.1609/aaai.v37i1.25125.
An, Shengwei, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng et al. "Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10847–55. http://dx.doi.org/10.1609/aaai.v38i10.28958.
Liu, Xinwei, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, and Xiaochun Cao. "Does Few-Shot Learning Suffer from Backdoor Attacks?" Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19893–901. http://dx.doi.org/10.1609/aaai.v38i18.29965.
Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set." Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.
Zhang, Shengchuan, and Suhang Ye. "Backdoor Attack against Face Sketch Synthesis." Entropy 25, no. 7 (June 25, 2023): 974. http://dx.doi.org/10.3390/e25070974.
Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.
Sun, Xiaofei, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, and Tianwei Zhang. "Defending against Backdoor Attacks in Natural Language Generation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5257–65. http://dx.doi.org/10.1609/aaai.v37i4.25656.
Zhao, Yue, Congyi Li, and Kai Chen. "UMA: Facilitating Backdoor Scanning via Unlearning-Based Model Ablation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21823–31. http://dx.doi.org/10.1609/aaai.v38i19.30183.
Fan, Linkun, Fazhi He, Tongzhen Si, Wei Tang, and Bing Li. "Invisible Backdoor Attack against 3D Point Cloud Classifier in Graph Spectral Domain." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21072–80. http://dx.doi.org/10.1609/aaai.v38i19.30099.
Chen, Yang, Zhonglin Ye, Haixing Zhao, and Ying Wang. "Feature-Based Graph Backdoor Attack in the Node Classification Task." International Journal of Intelligent Systems 2023 (February 21, 2023): 1–13. http://dx.doi.org/10.1155/2023/5418398.
Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.
Wu, Yalun, Yanfeng Gu, Yuanwan Chen, Xiaoshu Cui, Qiong Li, Yingxiao Xiang, Endong Tong, Jianhua Li, Zhen Han, and Jiqiang Liu. "Camouflage Backdoor Attack against Pedestrian Detection." Applied Sciences 13, no. 23 (November 28, 2023): 12752. http://dx.doi.org/10.3390/app132312752.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models." Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.
Jang, Jinhyeok, Yoonsoo An, Dowan Kim, and Daeseon Choi. "Feature Importance-Based Backdoor Attack in NSL-KDD." Electronics 12, no. 24 (December 9, 2023): 4953. http://dx.doi.org/10.3390/electronics12244953.
Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.
Gao, Yudong, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang, and Weifeng Liu. "A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1851–59. http://dx.doi.org/10.1609/aaai.v38i3.27954.
Islam, Kazi Aminul, Hongyi Wu, Chunsheng Xin, Rui Ning, Liuwan Zhu, and Jiang Li. "Sub-Band Backdoor Attack in Remote Sensing Imagery." Algorithms 17, no. 5 (April 28, 2024): 182. http://dx.doi.org/10.3390/a17050182.
Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops." Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.
Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System." Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.
Jia, Jinyuan, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9575–83. http://dx.doi.org/10.1609/aaai.v36i9.21191.
Liu, Jiawang, Changgen Peng, Weijie Tan, and Chenghui Shi. "Federated Learning Backdoor Attack Based on Frequency Domain Injection." Entropy 26, no. 2 (February 14, 2024): 164. http://dx.doi.org/10.3390/e26020164.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images." Applied Sciences 12, no. 24 (December 8, 2022): 12564. http://dx.doi.org/10.3390/app122412564.
Shao, Kun, Yu Zhang, Junan Yang, and Hui Liu. "Textual Backdoor Defense via Poisoned Sample Recognition." Applied Sciences 11, no. 21 (October 25, 2021): 9938. http://dx.doi.org/10.3390/app11219938.
Mercier, Arthur, Nikita Smolin, Oliver Sihlovec, Stefanos Koffas, and Stjepan Picek. "Backdoor Pony: Evaluating backdoor attacks and defenses in different domains." SoftwareX 22 (May 2023): 101387. http://dx.doi.org/10.1016/j.softx.2023.101387.
Chen, Yiming, Haiwei Wu, and Jiantao Zhou. "Progressive Poisoned Data Isolation for Training-Time Backdoor Defense." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11425–33. http://dx.doi.org/10.1609/aaai.v38i10.29023.
Wang, Zhen, Buhong Wang, Chuanlei Zhang, Yaohui Liu, and Jianxin Guo. "Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks." Remote Sensing 15, no. 10 (May 15, 2023): 2580. http://dx.doi.org/10.3390/rs15102580.
Cheng, Siyuan, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1148–56. http://dx.doi.org/10.1609/aaai.v35i2.16201.
Na, Hyunsik, and Daeseon Choi. "Image-Synthesis-Based Backdoor Attack Approach for Face Classification Task." Electronics 12, no. 21 (November 3, 2023): 4535. http://dx.doi.org/10.3390/electronics12214535.
Shamshiri, Samaneh, Ki Jin Han, and Insoo Sohn. "DB-COVIDNet: A Defense Method against Backdoor Attacks." Mathematics 11, no. 20 (October 10, 2023): 4236. http://dx.doi.org/10.3390/math11204236.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images." Applied Sciences 11, no. 20 (October 14, 2021): 9556. http://dx.doi.org/10.3390/app11209556.
Oyama, Tatsuya, Shunsuke Okura, Kota Yoshida, and Takeshi Fujino. "Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface." Sensors 23, no. 10 (May 14, 2023): 4742. http://dx.doi.org/10.3390/s23104742.
Wang, Derui, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, and Yang Xiang. "On the Neural Backdoor of Federated Generative Models in Edge Computing." ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3425662.
Chen, Chien-Lun, Sara Babakniya, Marco Paolieri, and Leana Golubchik. "Defending against Poisoning Backdoor Attacks on Federated Meta-learning." ACM Transactions on Intelligent Systems and Technology 13, no. 5 (October 31, 2022): 1–25. http://dx.doi.org/10.1145/3523062.