Academic literature on the topic "Fast Gradient Sign Method (FGSM)"
Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method". Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method". International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW". Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital". J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models". International Journal of Scientific Research in Engineering and Management 7, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images". Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
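The method these articles study or extend, FGSM (Goodfellow et al., 2015), builds an adversarial example in a single signed gradient step: x' = x + ε·sign(∇ₓL(x, y)). A minimal illustrative sketch, using a logistic-regression "model" so the input gradient has a closed form (all variable names here are made up for the example, not taken from any of the cited works):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_logistic(x, y, w, b, epsilon):
    """FGSM against a logistic-regression model sigma(w.x + b) with
    binary cross-entropy loss. For this model the input gradient is
    (sigma(z) - y) * w, so one signed step of size epsilon gives the
    loss-maximizing perturbation under an L-infinity budget."""
    z = float(np.dot(w, x) + b)
    grad_x = (sigmoid(z) - y) * w          # dL/dx for BCE loss
    return x + epsilon * np.sign(grad_x)   # x' = x + eps * sign(grad)

# Toy usage: the perturbed input lowers the true-class probability.
x = np.array([1.0, -1.0])
w = np.array([2.0, -1.0])
b, y, eps = 0.0, 1.0, 0.5
x_adv = fgsm_logistic(x, y, w, b, eps)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
```

For deep networks the same formula applies, with the gradient obtained by backpropagation; the iterative, momentum, and conjugate-gradient variants in the entries above refine this one-step scheme.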
Theses on the topic "Fast Gradient Sign Method (FGSM)"
Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.
The exhilarating proliferation of smartphones and their indispensability to human life is inevitable. The exponential growth is also triggering widespread malware and stumbling the prosperous mobile ecosystem. Among all handheld devices, Android is the most targeted hive for malware authors due to its popularity, open-source availability, and intrinsic infirmity to access internal resources. Machine learning-based approaches have been successfully deployed to combat evolving and polymorphic malware campaigns. As the classifier becomes popular and widely adopted, the incentive to evade the classifier also increases. Researchers and adversaries are in a never-ending race to strengthen and evade the Android malware detection system. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based Android malware detection system that has proven its robustness against various adversarial attacks. The proposed platform first constructs the Android malware detection system by intelligently transforming the Android Application Packaging (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates evident patterns for benign and malware APKs in color images, making the classification easier. The detection system yielded an excellent accuracy of 99.37% with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39% for legacy and new malware variants. In the second phase, we evaluate the robustness of our secured image-based Android malware detection system. To validate its hardness and effectiveness against evasion, we have crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with more than a 50% evasion rate. However, our proposed system builds a secure mechanism against adversarial perturbations using its intrinsic continuous space, obtained after the intelligent transformation of Dex and Manifest files, which makes the detection system strenuous to bypass.
Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.
Book chapters on the topic "Fast Gradient Sign Method (FGSM)"
Muncsan, Tamás, and Attila Kiss. "Transferability of Fast Gradient Sign Method". In Advances in Intelligent Systems and Computing, 23–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55187-2_3.
Xia, Xiaoyan, Wei Xue, Pengcheng Wan, Hui Zhang, Xinyu Wang, and Zhiting Zhang. "FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification". In Lecture Notes in Electrical Engineering, 709–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2287-1_98.
Hong, Dian, and Deng Chen. "Gradient-Based Adversarial Example Generation with Root Mean Square Propagation". In Artificial Intelligence and Human-Computer Interaction. IOS Press, 2024. http://dx.doi.org/10.3233/faia240141.
Knaup, Julian, Christoph-Alexander Holst, and Volker Lohweg. "Robust Training with Adversarial Examples on Industrial Data". In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, 123–42. KIT Scientific Publishing, 2023. http://dx.doi.org/10.58895/ksp/1000162754-9.
Sen, Jaydip, and Subhasis Dasgupta. "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and Their Impact". In Information Security and Privacy in the Digital World - Some Selected Topics [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.112442.
Conference papers on the topic "Fast Gradient Sign Method (FGSM)"
Hassan, Muhammad, Shahzad Younis, Ahmed Rasheed, and Muhammad Bilal. "Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers". In Fourteenth International Conference on Machine Vision (ICMV 2021), edited by Wolfgang Osten, Dmitry Nikolaev, and Jianhong Zhou. SPIE, 2022. http://dx.doi.org/10.1117/12.2623585.
Reyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. "Transfer Robustness to Downstream Tasks Through Sampling Adversarial Perturbations". In LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai2023061811.
Silva, Gabriel H. N. Espindola da, Rodrigo Sanches Miani, and Bruno Bogaz Zarpelão. "Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico". In Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/sbrc.2023.488.
Liu, Yujie, Shuai Mao, Xiang Mei, Tao Yang, and Xuran Zhao. "Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method". In 2019 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2019. http://dx.doi.org/10.1109/ssci44817.2019.9002856.
Xu, Jin. "Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method". In 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237700.
Zhang, Taozheng, and Jiajian Meng. "STFGSM: Intelligent Image Classification Model Based on Swin Transformer and Fast Gradient Sign Method". In ICDLT 2023: 2023 7th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3613330.3613339.
Hong, In-pyo, Gyu-ho Choi, Pan-koo Kim, and Chang Choi. "Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method". In SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577731.