Journal articles on the topic "Backdoor attacks"
Create correct references in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Backdoor attacks".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.
Browse journal articles across a wide range of disciplines and compile your bibliography correctly.
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training". Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism". Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security". Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Huang, Yihao, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, and Yang Liu. "Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21169–78. http://dx.doi.org/10.1609/aaai.v38i19.30110.
Li, Xi, Songhe Wang, Ruiquan Huang, Mahanth Gowda, and George Kesidis. "Temporal-Distributed Backdoor Attack against Video Based Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3199–207. http://dx.doi.org/10.1609/aaai.v38i4.28104.
Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.
Yu, Fangchao, Bo Zeng, Kai Zhao, Zhi Pang, and Lina Wang. "Chronic Poisoning: Backdoor Attack against Split Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16531–38. http://dx.doi.org/10.1609/aaai.v38i15.29591.
Li, Yiming. "Poisoning-Based Backdoor Attacks in Computer Vision". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16121–22. http://dx.doi.org/10.1609/aaai.v37i13.26921.
Doan, Khoa D., Yingjie Lao, Peng Yang, and Ping Li. "Defending Backdoor Attacks on Vision Transformer via Patch Processing". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 506–15. http://dx.doi.org/10.1609/aaai.v37i1.25125.
An, Shengwei, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, et al. "Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10847–55. http://dx.doi.org/10.1609/aaai.v38i10.28958.
Liu, Xinwei, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, and Xiaochun Cao. "Does Few-Shot Learning Suffer from Backdoor Attacks?" Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19893–901. http://dx.doi.org/10.1609/aaai.v38i18.29965.
Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set". Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.
Zhang, Shengchuan, and Suhang Ye. "Backdoor Attack against Face Sketch Synthesis". Entropy 25, no. 7 (June 25, 2023): 974. http://dx.doi.org/10.3390/e25070974.
Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions". Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.
Sun, Xiaofei, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, and Tianwei Zhang. "Defending against Backdoor Attacks in Natural Language Generation". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5257–65. http://dx.doi.org/10.1609/aaai.v37i4.25656.
Zhao, Yue, Congyi Li, and Kai Chen. "UMA: Facilitating Backdoor Scanning via Unlearning-Based Model Ablation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21823–31. http://dx.doi.org/10.1609/aaai.v38i19.30183.
Fan, Linkun, Fazhi He, Tongzhen Si, Wei Tang, and Bing Li. "Invisible Backdoor Attack against 3D Point Cloud Classifier in Graph Spectral Domain". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21072–80. http://dx.doi.org/10.1609/aaai.v38i19.30099.
Chen, Yang, Zhonglin Ye, Haixing Zhao, and Ying Wang. "Feature-Based Graph Backdoor Attack in the Node Classification Task". International Journal of Intelligent Systems 2023 (February 21, 2023): 1–13. http://dx.doi.org/10.1155/2023/5418398.
Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.
Wu, Yalun, Yanfeng Gu, Yuanwan Chen, Xiaoshu Cui, Qiong Li, Yingxiao Xiang, Endong Tong, Jianhua Li, Zhen Han, and Jiqiang Liu. "Camouflage Backdoor Attack against Pedestrian Detection". Applied Sciences 13, no. 23 (November 28, 2023): 12752. http://dx.doi.org/10.3390/app132312752.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models". Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.
Jang, Jinhyeok, Yoonsoo An, Dowan Kim, and Daeseon Choi. "Feature Importance-Based Backdoor Attack in NSL-KDD". Electronics 12, no. 24 (December 9, 2023): 4953. http://dx.doi.org/10.3390/electronics12244953.
Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.
Gao, Yudong, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang, and Weifeng Liu. "A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1851–59. http://dx.doi.org/10.1609/aaai.v38i3.27954.
Islam, Kazi Aminul, Hongyi Wu, Chunsheng Xin, Rui Ning, Liuwan Zhu, and Jiang Li. "Sub-Band Backdoor Attack in Remote Sensing Imagery". Algorithms 17, no. 5 (April 28, 2024): 182. http://dx.doi.org/10.3390/a17050182.
Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops". Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.
Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System". Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.
Jia, Jinyuan, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9575–83. http://dx.doi.org/10.1609/aaai.v36i9.21191.
Liu, Jiawang, Changgen Peng, Weijie Tan, and Chenghui Shi. "Federated Learning Backdoor Attack Based on Frequency Domain Injection". Entropy 26, no. 2 (February 14, 2024): 164. http://dx.doi.org/10.3390/e26020164.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images". Applied Sciences 12, no. 24 (December 8, 2022): 12564. http://dx.doi.org/10.3390/app122412564.
Shao, Kun, Yu Zhang, Junan Yang, and Hui Liu. "Textual Backdoor Defense via Poisoned Sample Recognition". Applied Sciences 11, no. 21 (October 25, 2021): 9938. http://dx.doi.org/10.3390/app11219938.
Mercier, Arthur, Nikita Smolin, Oliver Sihlovec, Stefanos Koffas, and Stjepan Picek. "Backdoor Pony: Evaluating backdoor attacks and defenses in different domains". SoftwareX 22 (May 2023): 101387. http://dx.doi.org/10.1016/j.softx.2023.101387.
Chen, Yiming, Haiwei Wu, and Jiantao Zhou. "Progressive Poisoned Data Isolation for Training-Time Backdoor Defense". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11425–33. http://dx.doi.org/10.1609/aaai.v38i10.29023.
Wang, Zhen, Buhong Wang, Chuanlei Zhang, Yaohui Liu, and Jianxin Guo. "Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks". Remote Sensing 15, no. 10 (May 15, 2023): 2580. http://dx.doi.org/10.3390/rs15102580.
Cheng, Siyuan, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1148–56. http://dx.doi.org/10.1609/aaai.v35i2.16201.
Na, Hyunsik, and Daeseon Choi. "Image-Synthesis-Based Backdoor Attack Approach for Face Classification Task". Electronics 12, no. 21 (November 3, 2023): 4535. http://dx.doi.org/10.3390/electronics12214535.
Shamshiri, Samaneh, Ki Jin Han, and Insoo Sohn. "DB-COVIDNet: A Defense Method against Backdoor Attacks". Mathematics 11, no. 20 (October 10, 2023): 4236. http://dx.doi.org/10.3390/math11204236.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images". Applied Sciences 11, no. 20 (October 14, 2021): 9556. http://dx.doi.org/10.3390/app11209556.
Oyama, Tatsuya, Shunsuke Okura, Kota Yoshida, and Takeshi Fujino. "Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface". Sensors 23, no. 10 (May 14, 2023): 4742. http://dx.doi.org/10.3390/s23104742.
Wang, Derui, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, and Yang Xiang. "On the Neural Backdoor of Federated Generative Models in Edge Computing". ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3425662.
Chen, Chien-Lun, Sara Babakniya, Marco Paolieri, and Leana Golubchik. "Defending against Poisoning Backdoor Attacks on Federated Meta-learning". ACM Transactions on Intelligent Systems and Technology 13, no. 5 (October 31, 2022): 1–25. http://dx.doi.org/10.1145/3523062.