Selected scientific literature on the topic "Fast Gradient Sign Method (FGSM)"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Fast Gradient Sign Method (FGSM)".
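As context for the works collected on this page: FGSM perturbs an input by a single step of size ε in the direction of the sign of the gradient of the loss with respect to that input. A minimal sketch on a toy logistic-regression loss follows; the weights, input, and label are illustrative assumptions, not drawn from any of the cited works.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: one step of size eps in the
    direction of the sign of the input-gradient of the loss."""
    return x + eps * np.sign(grad)

def loss_grad_wrt_input(x, w, y):
    """Gradient of the logistic loss -log(sigmoid(y * w.x))
    with respect to the input x (y is a label in {-1, +1})."""
    z = y * np.dot(w, x)
    sigma = 1.0 / (1.0 + np.exp(-z))
    return -(1.0 - sigma) * y * w

w = np.array([1.0, -2.0, 0.5])   # toy model weights (assumed)
x = np.array([0.2, 0.1, -0.3])   # clean input (assumed)
y = 1.0                          # true label

g = loss_grad_wrt_input(x, w, y)
x_adv = fgsm_perturb(x, g, eps=0.1)

# The adversarial point stays inside an L-infinity ball of radius eps
print(np.max(np.abs(x_adv - x)))  # 0.1
```

Because each coordinate moves by exactly ±ε, the perturbation saturates the L∞ budget, which is why FGSM is the standard one-step baseline that the iterative and momentum variants listed below build on.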
Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method". Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method". International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW". Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital". J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models". International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images". Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
Theses and dissertations on the topic "Fast Gradient Sign Method (FGSM)"
Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems". Electronic thesis or dissertation, Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.
The exhilarating proliferation of smartphones and their indispensability to human life is inevitable. This exponential growth is also triggering widespread malware and destabilizing the prosperous mobile ecosystem. Among all handheld devices, Android is the most targeted hive for malware authors due to its popularity, open-source availability, and intrinsic weakness in access to internal resources. Machine learning-based approaches have been successfully deployed to combat evolving and polymorphic malware campaigns. As a classifier becomes popular and widely adopted, the incentive to evade it also increases. Researchers and adversaries are in a never-ending race to strengthen and evade the Android malware detection system. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based Android malware detection system that has proven its robustness against various adversarial attacks. The proposed platform first constructs the Android malware detection system by intelligently transforming the Android Application Packaging (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates evident patterns for benign and malware APKs in color images, making the classification easier. The detection system yielded an excellent accuracy of 99.37% with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39% for legacy and new malware variants. In the second phase, we evaluate the robustness of our secured image-based Android malware detection system. To validate its hardness and effectiveness against evasion, we crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with more than a 50% evasion rate. However, our proposed system builds a secure mechanism against adversarial perturbations using the intrinsic continuous space obtained after the intelligent transformation of the Dex and Manifest files, which makes the detection system difficult to bypass.
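The APK-to-image idea in the abstract above can be pictured with a deliberately simplified sketch; the function name, image size, and zero-padding scheme here are assumptions for illustration only, and the thesis's actual transform of the Dex and Manifest files is more elaborate.

```python
import numpy as np

def bytes_to_rgb(data: bytes, side: int = 64) -> np.ndarray:
    """Illustrative byte-to-image transform: truncate or zero-pad
    the raw bytes of a file and reshape them into a side x side
    RGB array that a CNN classifier could consume."""
    n = side * side * 3
    buf = np.frombuffer(data, dtype=np.uint8)[:n]
    buf = np.pad(buf, (0, n - buf.size))  # zero-pad short files
    return buf.reshape(side, side, 3)

# Hypothetical payload standing in for an APK's byte content
img = bytes_to_rgb(b"fake apk payload bytes" * 100)
print(img.shape)  # (64, 64, 3)
```

The appeal of such a representation is that byte-level patterns shared by a malware family become local visual texture, which is what makes the resulting images learnable by a standard CNN.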
Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.
Book chapters on the topic "Fast Gradient Sign Method (FGSM)"
Muncsan, Tamás, and Attila Kiss. "Transferability of Fast Gradient Sign Method". In Advances in Intelligent Systems and Computing, 23–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55187-2_3.
Xia, Xiaoyan, Wei Xue, Pengcheng Wan, Hui Zhang, Xinyu Wang, and Zhiting Zhang. "FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification". In Lecture Notes in Electrical Engineering, 709–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2287-1_98.
Hong, Dian, and Deng Chen. "Gradient-Based Adversarial Example Generation with Root Mean Square Propagation". In Artificial Intelligence and Human-Computer Interaction. IOS Press, 2024. http://dx.doi.org/10.3233/faia240141.
Knaup, Julian, Christoph-Alexander Holst, and Volker Lohweg. "Robust Training with Adversarial Examples on Industrial Data". In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, 123–42. KIT Scientific Publishing, 2023. http://dx.doi.org/10.58895/ksp/1000162754-9.
Sen, Jaydip, and Subhasis Dasgupta. "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and Their Impact". In Information Security and Privacy in the Digital World - Some Selected Topics [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.112442.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Fast Gradient Sign Method (FGSM)"
Hassan, Muhammad, Shahzad Younis, Ahmed Rasheed, and Muhammad Bilal. "Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers". In Fourteenth International Conference on Machine Vision (ICMV 2021), edited by Wolfgang Osten, Dmitry Nikolaev, and Jianhong Zhou. SPIE, 2022. http://dx.doi.org/10.1117/12.2623585.
Reyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. "Transfer Robustness to Downstream Tasks Through Sampling Adversarial Perturbations". In LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai2023061811.
Silva, Gabriel H. N. Espindola da, Rodrigo Sanches Miani, and Bruno Bogaz Zarpelão. "Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico". In Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/sbrc.2023.488.
Liu, Yujie, Shuai Mao, Xiang Mei, Tao Yang, and Xuran Zhao. "Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method". In 2019 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2019. http://dx.doi.org/10.1109/ssci44817.2019.9002856.
Xu, Jin. "Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method". In 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237700.
Zhang, Taozheng, and Jiajian Meng. "STFGSM: Intelligent Image Classification Model Based on Swin Transformer and Fast Gradient Sign Method". In ICDLT 2023: 2023 7th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3613330.3613339.
Hong, In-pyo, Gyu-ho Choi, Pan-koo Kim, and Chang Choi. "Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method". In SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577731.