Academic literature on the topic 'Fast Gradient Sign Method (FGSM)'
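For context on the topic the sources below address: FGSM (Goodfellow et al., 2015) perturbs an input x by ε times the sign of the gradient of the loss with respect to x, i.e. x' = x + ε·sign(∇ₓL). A minimal illustrative sketch on a toy logistic-regression model (the model, parameter values, and function names are illustrative assumptions, not drawn from any work listed below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad, eps):
    # Core FGSM step: move each input coordinate by eps in the
    # direction that increases the loss.
    return x + eps * np.sign(grad)

def loss_grad_wrt_x(x, w, b, y):
    # For binary cross-entropy with p = sigmoid(w @ x + b),
    # the gradient of the loss with respect to x is (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1   # toy "model" parameters
x, y = rng.normal(size=4), 1.0   # clean input, true label

g = loss_grad_wrt_x(x, w, b, y)
x_adv = fgsm_perturb(x, g, eps=0.1)
# x_adv differs from x by at most eps per coordinate, yet the
# model's confidence in the true class drops.
```

Because the perturbation follows the gradient sign, the loss increases even though each coordinate moves by at most ε, which is why FGSM is both fast and effective as a baseline attack.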
Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method." Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method." International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW." Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital" [Using a Deep Convolutional Auto-encoder to Mitigate Adversarial Attacks on Digital Images]. J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models." International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images." Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
Full textDissertations / Theses on the topic "Fast Gradient Sign Method (FGSM)"
Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems." Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.
Smartphones have proliferated at an exhilarating pace and become indispensable to human life. This exponential growth has also triggered widespread malware, destabilizing the otherwise prosperous mobile ecosystem. Among all handheld platforms, Android is the most targeted by malware authors because of its popularity, open-source availability, and permissive access to internal resources. Machine learning-based approaches have been deployed successfully to combat evolving and polymorphic malware campaigns, but as these classifiers become widely adopted, the incentive to evade them also increases; researchers and adversaries are locked in a never-ending race to strengthen and evade Android malware detection systems. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based Android malware detection system. The proposed platform first constructs the detection system by transforming the Android Application Package (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates distinct patterns for benign and malicious APKs in the color images, making classification easier. The detection system achieved an accuracy of 99.37%, with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39%, for both legacy and new malware variants. In the second phase, we evaluate the robustness of our image-based detection system: to validate its effectiveness against evasion, we crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with evasion rates above 50%. Our proposed system, however, builds a secure mechanism against adversarial perturbations using the intrinsic continuous space obtained from the transformation of the Dex and Manifest files, which makes the detection system difficult to bypass.
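The thesis abstract above describes mapping an APK's bytes into an RGB image before CNN classification. The thesis's actual transformation is not specified here; as a purely illustrative sketch of the general byte-to-image idea (the function name, fixed width, and zero-padding scheme are my assumptions, not the author's method):

```python
import numpy as np

def bytes_to_rgb(data: bytes, width: int = 64) -> np.ndarray:
    """Pack a raw byte stream into an H x width x 3 RGB image,
    zero-padding the tail so the length divides evenly."""
    arr = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(arr)) % (width * 3)          # bytes needed to fill the last row
    arr = np.concatenate([arr, np.zeros(pad, dtype=np.uint8)])
    return arr.reshape(-1, width, 3)

img = bytes_to_rgb(b"\x00\x01\x02" * 100, width=10)
# img.shape -> (10, 10, 3): 300 bytes fill a 10x10 RGB grid exactly
```

Such byte-to-image encodings are what make gradient-based attacks like FGSM applicable, since the resulting pixel space is continuous from the CNN's point of view.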
Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.
Book chapters on the topic "Fast Gradient Sign Method (FGSM)"
Muncsan, Tamás, and Attila Kiss. "Transferability of Fast Gradient Sign Method." In Advances in Intelligent Systems and Computing, 23–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55187-2_3.
Xia, Xiaoyan, Wei Xue, Pengcheng Wan, Hui Zhang, Xinyu Wang, and Zhiting Zhang. "FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification." In Lecture Notes in Electrical Engineering, 709–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2287-1_98.
Hong, Dian, and Deng Chen. "Gradient-Based Adversarial Example Generation with Root Mean Square Propagation." In Artificial Intelligence and Human-Computer Interaction. IOS Press, 2024. http://dx.doi.org/10.3233/faia240141.
Knaup, Julian, Christoph-Alexander Holst, and Volker Lohweg. "Robust Training with Adversarial Examples on Industrial Data." In Proceedings of the 33rd Workshop Computational Intelligence, Berlin, November 23–24, 2023, 123–42. KIT Scientific Publishing, 2023. http://dx.doi.org/10.58895/ksp/1000162754-9.
Sen, Jaydip, and Subhasis Dasgupta. "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and Their Impact." In Information Security and Privacy in the Digital World - Some Selected Topics [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.112442.
Full textConference papers on the topic "Fast Gradient Sign Method (FGSM)"
Hassan, Muhammad, Shahzad Younis, Ahmed Rasheed, and Muhammad Bilal. "Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers." In Fourteenth International Conference on Machine Vision (ICMV 2021), edited by Wolfgang Osten, Dmitry Nikolaev, and Jianhong Zhou. SPIE, 2022. http://dx.doi.org/10.1117/12.2623585.
Reyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. "Transfer Robustness to Downstream Tasks Through Sampling Adversarial Perturbations." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai2023061811.
Silva, Gabriel H. N. Espindola da, Rodrigo Sanches Miani, and Bruno Bogaz Zarpelão. "Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico" [Investigating the Impact of Adversarial Samples on Intrusion Detection in a Cyber-Physical System]. In Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/sbrc.2023.488.
Liu, Yujie, Shuai Mao, Xiang Mei, Tao Yang, and Xuran Zhao. "Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method." In 2019 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2019. http://dx.doi.org/10.1109/ssci44817.2019.9002856.
Xu, Jin. "Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method." In 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237700.
Zhang, Taozheng, and Jiajian Meng. "STFGSM: Intelligent Image Classification Model Based on Swin Transformer and Fast Gradient Sign Method." In ICDLT 2023: 2023 7th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3613330.3613339.
Hong, In-pyo, Gyu-ho Choi, Pan-koo Kim, and Chang Choi. "Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method." In SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577731.