Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Fast Gradient Sign Method (FGSM)".
Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.
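For orientation before the list: FGSM, the method these articles build on, perturbs an input by a single step of size ε in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇_x J(θ, x, y)). The sketch below is a minimal illustration in PyTorch; the model, loss, ε value, and [0, 1] image range are assumptions made for the example, not taken from any entry listed here.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Illustrative one-step FGSM under assumed setup: x is a batch of images
        # in [0, 1], y the true labels, model a differentiable classifier.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # J(theta, x, y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # x + eps * sign(grad_x J)
        return x_adv.clamp(0.0, 1.0).detach()  # keep the perturbed image in range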
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method". Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method". International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW". Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital". J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models". International Journal of Scientific Research in Engineering and Management 7, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images". Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
Kim, Hoki, Woojin Lee, and Jaewook Lee. "Understanding Catastrophic Overfitting in Single-step Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8119–27. http://dx.doi.org/10.1609/aaai.v35i9.16989.
Lu, Fan. "Adversarial attack against deep learning algorithms for gun category detection". Applied and Computational Engineering 53, no. 1 (March 28, 2024): 190–96. http://dx.doi.org/10.54254/2755-2721/53/20241368.
Cui, Chenrui. "Adversarial attack study on VGG16 for cat and dog image classification task". Applied and Computational Engineering 50, no. 1 (March 25, 2024): 170–75. http://dx.doi.org/10.54254/2755-2721/50/20241438.
Mohamed, Mahmoud, and Mohamed Bilal. "Comparing the Performance of Deep Denoising Sparse Autoencoder with Other Defense Methods Against Adversarial Attacks for Arabic letters". Jordan Journal of Electrical Engineering 10, no. 1 (2024): 122. http://dx.doi.org/10.5455/jjee.204-1687363297.
Navjot Kaur. "Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1250–57. http://dx.doi.org/10.52783/jes.1436.
Zhang, Qikun, Yuzhi Zhang, Yanling Shao, Mengqi Liu, Jianyong Li, Junling Yuan, and Ruifang Wang. "Boosting Adversarial Attacks with Nadam Optimizer". Electronics 12, no. 6 (March 20, 2023): 1464. http://dx.doi.org/10.3390/electronics12061464.
Yang, Bo, Kaiyong Xu, Hengjun Wang, and Hengwei Zhang. "Random Transformation of image brightness for adversarial attack". Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 1693–704. http://dx.doi.org/10.3233/jifs-211157.
Vyas, Dhairya, and Viral V. Kapadia. "Designing defensive techniques to handle adversarial attack on deep learning based model". PeerJ Computer Science 10 (March 8, 2024): e1868. http://dx.doi.org/10.7717/peerj-cs.1868.
Zou, Junhua, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, and Zhisong Pan. "Making Adversarial Examples More Transferable and Indistinguishable". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3662–70. http://dx.doi.org/10.1609/aaai.v36i3.20279.
Utomo, Sapdo, Adarsh Rouniyar, Hsiu-Chun Hsu, and Pao-Ann Hsiung. "Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications". Future Internet 15, no. 11 (November 20, 2023): 371. http://dx.doi.org/10.3390/fi15110371.
Han, Dong, Reza Babaei, Shangqing Zhao, and Samuel Cheng. "Exploring the Efficacy of Learning Techniques in Model Extraction Attacks on Image Classifiers: A Comparative Study". Applied Sciences 14, no. 9 (April 29, 2024): 3785. http://dx.doi.org/10.3390/app14093785.
Trinh Quang Kien. "Improving the robustness of binarized neural network using the EFAT method". Journal of Military Science and Technology, CSCE5 (December 15, 2021): 14–23. http://dx.doi.org/10.54939/1859-1043.j.mst.csce5.2021.14-23.
Rudd-Orthner, Richard N. M., and Lyudmila Mihaylova. "Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM". Sensors 21, no. 14 (July 13, 2021): 4772. http://dx.doi.org/10.3390/s21144772.
Xu, Wei, and Veerawat Sirivesmas. "Study on Network Virtual Printing Sculpture Design using Artificial Intelligence". International Journal of Communication Networks and Information Security (IJCNIS) 15, no. 1 (May 30, 2023): 132–45. http://dx.doi.org/10.17762/ijcnis.v15i1.5694.
Guan, Dejian, and Wentao Zhao. "Adversarial Detection Based on Inner-Class Adjusted Cosine Similarity". Applied Sciences 12, no. 19 (September 20, 2022): 9406. http://dx.doi.org/10.3390/app12199406.
Zhao, Weimin, Sanaa Alwidian, and Qusay H. Mahmoud. "Adversarial Training Methods for Deep Learning: A Systematic Review". Algorithms 15, no. 8 (August 12, 2022): 283. http://dx.doi.org/10.3390/a15080283.
Li, Xinyu, Shaogang Dai, and Zhijin Zhao. "Unsupervised Learning-Based Spectrum Sensing Algorithm with Defending Adversarial Attacks". Applied Sciences 13, no. 16 (August 9, 2023): 9101. http://dx.doi.org/10.3390/app13169101.
Zhu, Min-Ling, Liang-Liang Zhao, and Li Xiao. "Image Denoising Based on GAN with Optimization Algorithm". Electronics 11, no. 15 (August 5, 2022): 2445. http://dx.doi.org/10.3390/electronics11152445.
Lee, Jungeun, and Hoeseok Yang. "Performance Improvement of Image-Reconstruction-Based Defense against Adversarial Attack". Electronics 11, no. 15 (July 28, 2022): 2372. http://dx.doi.org/10.3390/electronics11152372.
Wu, Fei, Wenxue Yang, Limin Xiao, and Jinbin Zhu. "Adaptive Wiener Filter and Natural Noise to Eliminate Adversarial Perturbation". Electronics 9, no. 10 (October 3, 2020): 1634. http://dx.doi.org/10.3390/electronics9101634.
Bhandari, Mohan, Tej Bahadur Shahi, and Arjun Neupane. "Evaluating Retinal Disease Diagnosis with an Interpretable Lightweight CNN Model Resistant to Adversarial Attacks". Journal of Imaging 9, no. 10 (October 11, 2023): 219. http://dx.doi.org/10.3390/jimaging9100219.
Su, Guanpeng. "Analysis of the attack effect of adversarial attacks on machine learning models". Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1212–18. http://dx.doi.org/10.54254/2755-2721/6/20230607.
Huang, Bowen, Ruoheng Feng, and Jiahao Yuan. "Exploiting ensembled neural network model for social platform rumor detection". Applied and Computational Engineering 20, no. 1 (October 23, 2023): 231–39. http://dx.doi.org/10.54254/2755-2721/20/20231103.
Kwon, Hyun. "MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images". Security and Communication Networks 2021 (August 9, 2021): 1–8. http://dx.doi.org/10.1155/2021/5595026.
Haroon, Muhammad Shahzad, and Husnain Mansoor Ali. "Ensemble adversarial training based defense against adversarial attacks for machine learning-based intrusion detection system". Neural Network World 33, no. 5 (2023): 317–36. http://dx.doi.org/10.14311/nnw.2023.33.018.
Shi, Lin, Teyi Liao, and Jianfeng He. "Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method". Electronics 11, no. 12 (June 8, 2022): 1814. http://dx.doi.org/10.3390/electronics11121814.
Sun, Guangling, Yuying Su, Chuan Qin, Wenbo Xu, Xiaofeng Lu, and Andrzej Ceglowski. "Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples". Mathematical Problems in Engineering 2020 (May 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/8319249.
Saxena, Rishabh, Amit Sanjay Adate, and Don Sasikumar. "A Comparative Study on Adversarial Noise Generation for Single Image Classification". International Journal of Intelligent Information Technologies 16, no. 1 (January 2020): 75–87. http://dx.doi.org/10.4018/ijiit.2020010105.
An, Tong, Tao Zhang, Yanzhang Geng, and Haiquan Jiao. "Normalized Combinations of Proportionate Affine Projection Sign Subband Adaptive Filter". Scientific Programming 2021 (August 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/8826868.
Hirano, Hokuto, and Kazuhiro Takemoto. "Simple Iterative Method for Generating Targeted Universal Adversarial Perturbations". Algorithms 13, no. 11 (October 22, 2020): 268. http://dx.doi.org/10.3390/a13110268.
Zhang, Xingyu, Xiongwei Zhang, Xia Zou, Haibo Liu, and Meng Sun. "Towards Generating Adversarial Examples on Combined Systems of Automatic Speaker Verification and Spoofing Countermeasure". Security and Communication Networks 2022 (July 31, 2022): 1–12. http://dx.doi.org/10.1155/2022/2666534.
Papadopoulos, Pavlos, Oliver Thornewill von Essen, Nikolaos Pitropakis, Christos Chrysoulas, Alexios Mylonas, and William J. Buchanan. "Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT". Journal of Cybersecurity and Privacy 1, no. 2 (April 23, 2021): 252–73. http://dx.doi.org/10.3390/jcp1020014.
Ding, Ning, and Knut Möller. "Using adaptive learning rate to generate adversarial images". Current Directions in Biomedical Engineering 9, no. 1 (September 1, 2023): 359–62. http://dx.doi.org/10.1515/cdbme-2023-1090.
Yang, Zhongguo, Irshad Ahmed Abbasi, Fahad Algarni, Sikandar Ali, and Mingzhu Zhang. "An IoT Time Series Data Security Model for Adversarial Attack Based on Thermometer Encoding". Security and Communication Networks 2021 (March 9, 2021): 1–11. http://dx.doi.org/10.1155/2021/5537041.
Santana, Everton Jose, Ricardo Petri Silva, Bruno Bogaz Zarpelão, and Sylvio Barbon Junior. "Detecting and Mitigating Adversarial Examples in Regression Tasks: A Photovoltaic Power Generation Forecasting Case Study". Information 12, no. 10 (September 26, 2021): 394. http://dx.doi.org/10.3390/info12100394.
Pantiukhin, D. V. "Educational and methodological materials of the master class “Adversarial attacks on image recognition neural networks” for students and schoolchildren". Informatics and education 38, no. 1 (April 16, 2023): 55–63. http://dx.doi.org/10.32517/0234-0453-2023-38-1-55-63.
Kumar, P. Sathish, and K. V. D. Kiran. "Momentum Iterative Fast Gradient Sign Algorithm for Adversarial Attacks and Defenses". Research Journal of Engineering and Technology, June 30, 2023, 7–24. http://dx.doi.org/10.52711/2321-581x.2023.00002.
Naseem, Muhammad Luqman. "Trans-IFFT-FGSM: a novel fast gradient sign method for adversarial attacks". Multimedia Tools and Applications, February 9, 2024. http://dx.doi.org/10.1007/s11042-024-18475-7.
Xie, Pengfei, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, Linyuan Wang, Jian Chen, Guoen Hu, and Bin Yan. "Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing". Frontiers in Neurorobotics 15 (December 9, 2021). http://dx.doi.org/10.3389/fnbot.2021.784053.
Zhang, Junjian, Hao Tan, Le Wang, Yaguan Qian, and Zhaoquan Gu. "Rethinking multi‐spatial information for transferable adversarial attacks on speaker recognition systems". CAAI Transactions on Intelligence Technology, March 29, 2024. http://dx.doi.org/10.1049/cit2.12295.