Ready-made bibliography on the topic "Fast Gradient Sign Method (FGSM)"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
See lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Fast Gradient Sign Method (FGSM)".
An "Add to bibliography" button is available next to every work in the bibliography. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, when such details are available in the work's metadata.
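For context, the method all of the listed works build on (Goodfellow et al.'s Fast Gradient Sign Method) perturbs an input x by ε·sign(∇ₓJ(θ, x, y)), where J is the model's loss. Below is a minimal, self-contained sketch that applies this one-step perturbation to a logistic-regression model standing in for a deep network; the model, weights, and ε values are illustrative assumptions, not taken from any of the works listed here.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM: return x + eps * sign(grad_x loss).

    Uses binary cross-entropy loss of a logistic model p = sigmoid(w.x + b),
    whose gradient with respect to x is (p - y) * w.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction in (0, 1)
    grad_x = (p - y) * w              # d(BCE)/dx for the logistic model
    return x + eps * np.sign(grad_x)  # move each feature by +/- eps

# Illustrative input and weights (hypothetical values)
x = np.array([0.2, -0.5, 1.0])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm(x, y=1.0, w=w, b=0.0, eps=0.1)
# Every component of x_adv differs from x by exactly eps = 0.1
```

The same idea carries over to deep networks unchanged: only the gradient computation (backpropagation through the network's loss) differs, which is why the bound on the perturbation, ‖x_adv − x‖∞ ≤ ε, holds regardless of the model.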
Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method". Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method". International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW". Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital". J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models". International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images". Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
Doctoral dissertations on the topic "Fast Gradient Sign Method (FGSM)"
Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.
The exhilarating proliferation of smartphones and their indispensability to human life is inevitable. This exponential growth has also triggered widespread malware, destabilizing the prosperous mobile ecosystem. Among all handheld devices, Android is the hive most targeted by malware authors due to its popularity, open-source availability, and intrinsic ease of access to internal resources. Machine learning-based approaches have been successfully deployed to combat evolving and polymorphic malware campaigns. As a classifier becomes popular and widely adopted, the incentive to evade it also increases. Researchers and adversaries are in a never-ending race to strengthen and evade Android malware detection systems. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based Android malware detection system that has proven its robustness against various adversarial attacks. The proposed platform first constructs the detection system by intelligently transforming the Android Application Packaging (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates distinct patterns in the color images for benign and malicious APKs, making classification easier. The detection system yielded an excellent accuracy of 99.37%, with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39% for legacy and new malware variants. In the second phase, we evaluate the robustness of our image-based Android malware detection system. To validate its hardness and effectiveness against evasion, we crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with more than a 50% evasion rate. However, our proposed system builds a secure mechanism against adversarial perturbations using the intrinsic continuous space obtained from the intelligent transformation of the Dex and Manifest files, which makes the detection system hard to bypass.
Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.
Book chapters on the topic "Fast Gradient Sign Method (FGSM)"
Muncsan, Tamás, and Attila Kiss. "Transferability of Fast Gradient Sign Method". In Advances in Intelligent Systems and Computing, 23–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55187-2_3.
Xia, Xiaoyan, Wei Xue, Pengcheng Wan, Hui Zhang, Xinyu Wang, and Zhiting Zhang. "FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification". In Lecture Notes in Electrical Engineering, 709–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2287-1_98.
Hong, Dian, and Deng Chen. "Gradient-Based Adversarial Example Generation with Root Mean Square Propagation". In Artificial Intelligence and Human-Computer Interaction. IOS Press, 2024. http://dx.doi.org/10.3233/faia240141.
Knaup, Julian, Christoph-Alexander Holst, and Volker Lohweg. "Robust Training with Adversarial Examples on Industrial Data". In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, 123–42. KIT Scientific Publishing, 2023. http://dx.doi.org/10.58895/ksp/1000162754-9.
Sen, Jaydip, and Subhasis Dasgupta. "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and Their Impact". In Information Security and Privacy in the Digital World - Some Selected Topics [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.112442.
Pełny tekst źródłaStreszczenia konferencji na temat "Fast Gradient Sign Method (FGSM)"
Hassan, Muhammad, Shahzad Younis, Ahmed Rasheed i Muhammad Bilal. "Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers". W Fourteenth International Conference on Machine Vision (ICMV 2021), redaktorzy Wolfgang Osten, Dmitry Nikolaev i Jianhong Zhou. SPIE, 2022. http://dx.doi.org/10.1117/12.2623585.
Pełny tekst źródłaReyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz i Andres Mendez-Vazquez. "Transfer Robustness to Downstream Tasks Through Sampling Adversarial Perturbations". W LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai2023061811.
Pełny tekst źródłaSilva, Gabriel H. N. Espindola da, Rodrigo Sanches Miani i Bruno Bogaz Zarpelão. "Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico". W Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/sbrc.2023.488.
Pełny tekst źródłaLiu, Yujie, Shuai Mao, Xiang Mei, Tao Yang i Xuran Zhao. "Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method". W 2019 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2019. http://dx.doi.org/10.1109/ssci44817.2019.9002856.
Pełny tekst źródłaXu, Jin. "Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method". W 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237700.
Pełny tekst źródłaZhang, Taozheng, i Jiajian Meng. "STFGSM: Intelligent Image Classification Model Based on Swin Transformer and Fast Gradient Sign Method". W ICDLT 2023: 2023 7th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3613330.3613339.
Pełny tekst źródłaHong, In-pyo, Gyu-ho Choi, Pan-koo Kim i Chang Choi. "Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method". W SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577731.