Journal articles on the topic "Backdoor attacks"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Backdoor attacks".
Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training". Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism". Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security". Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Huang, Yihao, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, and Yang Liu. "Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21169–78. http://dx.doi.org/10.1609/aaai.v38i19.30110.
Li, Xi, Songhe Wang, Ruiquan Huang, Mahanth Gowda, and George Kesidis. "Temporal-Distributed Backdoor Attack against Video Based Action Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3199–207. http://dx.doi.org/10.1609/aaai.v38i4.28104.
Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.
Yu, Fangchao, Bo Zeng, Kai Zhao, Zhi Pang, and Lina Wang. "Chronic Poisoning: Backdoor Attack against Split Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16531–38. http://dx.doi.org/10.1609/aaai.v38i15.29591.
Li, Yiming. "Poisoning-Based Backdoor Attacks in Computer Vision". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16121–22. http://dx.doi.org/10.1609/aaai.v37i13.26921.
Doan, Khoa D., Yingjie Lao, Peng Yang, and Ping Li. "Defending Backdoor Attacks on Vision Transformer via Patch Processing". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 506–15. http://dx.doi.org/10.1609/aaai.v37i1.25125.
An, Shengwei, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng et al. "Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10847–55. http://dx.doi.org/10.1609/aaai.v38i10.28958.
Liu, Xinwei, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, and Xiaochun Cao. "Does Few-Shot Learning Suffer from Backdoor Attacks?" Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19893–901. http://dx.doi.org/10.1609/aaai.v38i18.29965.
Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set". Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.
Zhang, Shengchuan, and Suhang Ye. "Backdoor Attack against Face Sketch Synthesis". Entropy 25, no. 7 (June 25, 2023): 974. http://dx.doi.org/10.3390/e25070974.
Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions". Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.
Sun, Xiaofei, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, and Tianwei Zhang. "Defending against Backdoor Attacks in Natural Language Generation". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5257–65. http://dx.doi.org/10.1609/aaai.v37i4.25656.
Zhao, Yue, Congyi Li, and Kai Chen. "UMA: Facilitating Backdoor Scanning via Unlearning-Based Model Ablation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21823–31. http://dx.doi.org/10.1609/aaai.v38i19.30183.
Fan, Linkun, Fazhi He, Tongzhen Si, Wei Tang, and Bing Li. "Invisible Backdoor Attack against 3D Point Cloud Classifier in Graph Spectral Domain". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21072–80. http://dx.doi.org/10.1609/aaai.v38i19.30099.
Chen, Yang, Zhonglin Ye, Haixing Zhao, and Ying Wang. "Feature-Based Graph Backdoor Attack in the Node Classification Task". International Journal of Intelligent Systems 2023 (February 21, 2023): 1–13. http://dx.doi.org/10.1155/2023/5418398.
Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.
Wu, Yalun, Yanfeng Gu, Yuanwan Chen, Xiaoshu Cui, Qiong Li, Yingxiao Xiang, Endong Tong, Jianhua Li, Zhen Han, and Jiqiang Liu. "Camouflage Backdoor Attack against Pedestrian Detection". Applied Sciences 13, no. 23 (November 28, 2023): 12752. http://dx.doi.org/10.3390/app132312752.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models". Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.
Jang, Jinhyeok, Yoonsoo An, Dowan Kim, and Daeseon Choi. "Feature Importance-Based Backdoor Attack in NSL-KDD". Electronics 12, no. 24 (December 9, 2023): 4953. http://dx.doi.org/10.3390/electronics12244953.
Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.
Gao, Yudong, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang, and Weifeng Liu. "A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1851–59. http://dx.doi.org/10.1609/aaai.v38i3.27954.
Islam, Kazi Aminul, Hongyi Wu, Chunsheng Xin, Rui Ning, Liuwan Zhu, and Jiang Li. "Sub-Band Backdoor Attack in Remote Sensing Imagery". Algorithms 17, no. 5 (April 28, 2024): 182. http://dx.doi.org/10.3390/a17050182.
Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops". Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.
Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System". Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.
Jia, Jinyuan, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9575–83. http://dx.doi.org/10.1609/aaai.v36i9.21191.
Liu, Jiawang, Changgen Peng, Weijie Tan, and Chenghui Shi. "Federated Learning Backdoor Attack Based on Frequency Domain Injection". Entropy 26, no. 2 (February 14, 2024): 164. http://dx.doi.org/10.3390/e26020164.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images". Applied Sciences 12, no. 24 (December 8, 2022): 12564. http://dx.doi.org/10.3390/app122412564.
Shao, Kun, Yu Zhang, Junan Yang, and Hui Liu. "Textual Backdoor Defense via Poisoned Sample Recognition". Applied Sciences 11, no. 21 (October 25, 2021): 9938. http://dx.doi.org/10.3390/app11219938.
Mercier, Arthur, Nikita Smolin, Oliver Sihlovec, Stefanos Koffas, and Stjepan Picek. "Backdoor Pony: Evaluating backdoor attacks and defenses in different domains". SoftwareX 22 (May 2023): 101387. http://dx.doi.org/10.1016/j.softx.2023.101387.
Chen, Yiming, Haiwei Wu, and Jiantao Zhou. "Progressive Poisoned Data Isolation for Training-Time Backdoor Defense". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11425–33. http://dx.doi.org/10.1609/aaai.v38i10.29023.
Wang, Zhen, Buhong Wang, Chuanlei Zhang, Yaohui Liu, and Jianxin Guo. "Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks". Remote Sensing 15, no. 10 (May 15, 2023): 2580. http://dx.doi.org/10.3390/rs15102580.
Cheng, Siyuan, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1148–56. http://dx.doi.org/10.1609/aaai.v35i2.16201.
Na, Hyunsik, and Daeseon Choi. "Image-Synthesis-Based Backdoor Attack Approach for Face Classification Task". Electronics 12, no. 21 (November 3, 2023): 4535. http://dx.doi.org/10.3390/electronics12214535.
Shamshiri, Samaneh, Ki Jin Han, and Insoo Sohn. "DB-COVIDNet: A Defense Method against Backdoor Attacks". Mathematics 11, no. 20 (October 10, 2023): 4236. http://dx.doi.org/10.3390/math11204236.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images". Applied Sciences 11, no. 20 (October 14, 2021): 9556. http://dx.doi.org/10.3390/app11209556.
Oyama, Tatsuya, Shunsuke Okura, Kota Yoshida, and Takeshi Fujino. "Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface". Sensors 23, no. 10 (May 14, 2023): 4742. http://dx.doi.org/10.3390/s23104742.
Wang, Derui, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, and Yang Xiang. "On the Neural Backdoor of Federated Generative Models in Edge Computing". ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3425662.
Chen, Chien-Lun, Sara Babakniya, Marco Paolieri, and Leana Golubchik. "Defending against Poisoning Backdoor Attacks on Federated Meta-learning". ACM Transactions on Intelligent Systems and Technology 13, no. 5 (October 31, 2022): 1–25. http://dx.doi.org/10.1145/3523062.