Journal articles on the topic "Backdoor attacks"
Below are 50 of the most relevant journal articles for research on the subject "Backdoor attacks".
Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.
Zhu, Biru, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, and Ming Gu. "Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training." Transactions of the Association for Computational Linguistics 11 (2023): 1608–23. http://dx.doi.org/10.1162/tacl_a_00622.
Yuan, Guotao, Hong Huang, and Xin Li. "Self-supervised learning backdoor defense mixed with self-attention mechanism." Journal of Computing and Electronic Information Management 12, no. 2 (March 30, 2024): 81–88. http://dx.doi.org/10.54097/7hx9afkw.
Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.
Duan, Qiuyu, Zhongyun Hua, Qing Liao, Yushu Zhang, and Leo Yu Zhang. "Conditional Backdoor Attack via JPEG Compression." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19823–31. http://dx.doi.org/10.1609/aaai.v38i18.29957.
Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.
Wang, Tong, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, and Ting Wang. "Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 274–82. http://dx.doi.org/10.1609/aaai.v38i1.27780.
Huynh, Tran, Dang Nguyen, Tung Pham, and Anh Tran. "COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2436–44. http://dx.doi.org/10.1609/aaai.v38i3.28019.
Zhang, Xianda, Baolin Zheng, Jianbao Hu, Chengyang Li, and Xiaoying Bai. "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16873–80. http://dx.doi.org/10.1609/aaai.v38i15.29629.
Liu, Tao, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, and Wu Yang. "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21359–67. http://dx.doi.org/10.1609/aaai.v38i19.30131.
Zhang, Lei, Ya Peng, Lifei Wei, Congcong Chen, and Xiaoyu Zhang. "DeepDefense: A Steganalysis-Based Backdoor Detecting and Mitigating Protocol in Deep Neural Networks for AI Security." Security and Communication Networks 2023 (May 9, 2023): 1–12. http://dx.doi.org/10.1155/2023/9308909.
Huang, Yihao, Felix Juefei-Xu, Qing Guo, Jie Zhang, Yutong Wu, Ming Hu, Tianlin Li, Geguang Pu, and Yang Liu. "Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21169–78. http://dx.doi.org/10.1609/aaai.v38i19.30110.
Li, Xi, Songhe Wang, Ruiquan Huang, Mahanth Gowda, and George Kesidis. "Temporal-Distributed Backdoor Attack against Video Based Action Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (March 24, 2024): 3199–207. http://dx.doi.org/10.1609/aaai.v38i4.28104.
Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.
Yu, Fangchao, Bo Zeng, Kai Zhao, Zhi Pang, and Lina Wang. "Chronic Poisoning: Backdoor Attack against Split Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16531–38. http://dx.doi.org/10.1609/aaai.v38i15.29591.
Li, Yiming. "Poisoning-Based Backdoor Attacks in Computer Vision." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16121–22. http://dx.doi.org/10.1609/aaai.v37i13.26921.
Doan, Khoa D., Yingjie Lao, Peng Yang, and Ping Li. "Defending Backdoor Attacks on Vision Transformer via Patch Processing." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 506–15. http://dx.doi.org/10.1609/aaai.v37i1.25125.
An, Shengwei, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, et al. "Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10847–55. http://dx.doi.org/10.1609/aaai.v38i10.28958.
Liu, Xinwei, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, and Xiaochun Cao. "Does Few-Shot Learning Suffer from Backdoor Attacks?" Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (March 24, 2024): 19893–901. http://dx.doi.org/10.1609/aaai.v38i18.29965.
Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set." Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.
Zhang, Shengchuan, and Suhang Ye. "Backdoor Attack against Face Sketch Synthesis." Entropy 25, no. 7 (June 25, 2023): 974. http://dx.doi.org/10.3390/e25070974.
Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.
Sun, Xiaofei, Xiaoya Li, Yuxian Meng, Xiang Ao, Lingjuan Lyu, Jiwei Li, and Tianwei Zhang. "Defending against Backdoor Attacks in Natural Language Generation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5257–65. http://dx.doi.org/10.1609/aaai.v37i4.25656.
Zhao, Yue, Congyi Li, and Kai Chen. "UMA: Facilitating Backdoor Scanning via Unlearning-Based Model Ablation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21823–31. http://dx.doi.org/10.1609/aaai.v38i19.30183.
Fan, Linkun, Fazhi He, Tongzhen Si, Wei Tang, and Bing Li. "Invisible Backdoor Attack against 3D Point Cloud Classifier in Graph Spectral Domain." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21072–80. http://dx.doi.org/10.1609/aaai.v38i19.30099.
Chen, Yang, Zhonglin Ye, Haixing Zhao, and Ying Wang. "Feature-Based Graph Backdoor Attack in the Node Classification Task." International Journal of Intelligent Systems 2023 (February 21, 2023): 1–13. http://dx.doi.org/10.1155/2023/5418398.
Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.
Wu, Yalun, Yanfeng Gu, Yuanwan Chen, Xiaoshu Cui, Qiong Li, Yingxiao Xiang, Endong Tong, Jianhua Li, Zhen Han, and Jiqiang Liu. "Camouflage Backdoor Attack against Pedestrian Detection." Applied Sciences 13, no. 23 (November 28, 2023): 12752. http://dx.doi.org/10.3390/app132312752.
Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.
Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models." Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.
Jang, Jinhyeok, Yoonsoo An, Dowan Kim, and Daeseon Choi. "Feature Importance-Based Backdoor Attack in NSL-KDD." Electronics 12, no. 24 (December 9, 2023): 4953. http://dx.doi.org/10.3390/electronics12244953.
Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.
Gao, Yudong, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang, and Weifeng Liu. "A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 1851–59. http://dx.doi.org/10.1609/aaai.v38i3.27954.
Islam, Kazi Aminul, Hongyi Wu, Chunsheng Xin, Rui Ning, Liuwan Zhu, and Jiang Li. "Sub-Band Backdoor Attack in Remote Sensing Imagery." Algorithms 17, no. 5 (April 28, 2024): 182. http://dx.doi.org/10.3390/a17050182.
Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops." Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.
Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System." Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.
Jia, Jinyuan, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. "Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 9575–83. http://dx.doi.org/10.1609/aaai.v36i9.21191.
Liu, Jiawang, Changgen Peng, Weijie Tan, and Chenghui Shi. "Federated Learning Backdoor Attack Based on Frequency Domain Injection." Entropy 26, no. 2 (February 14, 2024): 164. http://dx.doi.org/10.3390/e26020164.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images." Applied Sciences 12, no. 24 (December 8, 2022): 12564. http://dx.doi.org/10.3390/app122412564.
Shao, Kun, Yu Zhang, Junan Yang, and Hui Liu. "Textual Backdoor Defense via Poisoned Sample Recognition." Applied Sciences 11, no. 21 (October 25, 2021): 9938. http://dx.doi.org/10.3390/app11219938.
Mercier, Arthur, Nikita Smolin, Oliver Sihlovec, Stefanos Koffas, and Stjepan Picek. "Backdoor Pony: Evaluating backdoor attacks and defenses in different domains." SoftwareX 22 (May 2023): 101387. http://dx.doi.org/10.1016/j.softx.2023.101387.
Chen, Yiming, Haiwei Wu, and Jiantao Zhou. "Progressive Poisoned Data Isolation for Training-Time Backdoor Defense." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11425–33. http://dx.doi.org/10.1609/aaai.v38i10.29023.
Wang, Zhen, Buhong Wang, Chuanlei Zhang, Yaohui Liu, and Jianxin Guo. "Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks." Remote Sensing 15, no. 10 (May 15, 2023): 2580. http://dx.doi.org/10.3390/rs15102580.
Cheng, Siyuan, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1148–56. http://dx.doi.org/10.1609/aaai.v35i2.16201.
Na, Hyunsik, and Daeseon Choi. "Image-Synthesis-Based Backdoor Attack Approach for Face Classification Task." Electronics 12, no. 21 (November 3, 2023): 4535. http://dx.doi.org/10.3390/electronics12214535.
Shamshiri, Samaneh, Ki Jin Han, and Insoo Sohn. "DB-COVIDNet: A Defense Method against Backdoor Attacks." Mathematics 11, no. 20 (October 10, 2023): 4236. http://dx.doi.org/10.3390/math11204236.
Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images." Applied Sciences 11, no. 20 (October 14, 2021): 9556. http://dx.doi.org/10.3390/app11209556.
Oyama, Tatsuya, Shunsuke Okura, Kota Yoshida, and Takeshi Fujino. "Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface." Sensors 23, no. 10 (May 14, 2023): 4742. http://dx.doi.org/10.3390/s23104742.
Wang, Derui, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, and Yang Xiang. "On the Neural Backdoor of Federated Generative Models in Edge Computing." ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3425662.
Chen, Chien-Lun, Sara Babakniya, Marco Paolieri, and Leana Golubchik. "Defending against Poisoning Backdoor Attacks on Federated Meta-learning." ACM Transactions on Intelligent Systems and Technology 13, no. 5 (October 31, 2022): 1–25. http://dx.doi.org/10.1145/3523062.