Journal articles on the topic "Fast Gradient Sign Method (FGSM)"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles for your research on the topic "Fast Gradient Sign Method (FGSM)."
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.
Browse journal articles from a wide variety of disciplines and compile an accurate bibliography.
Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method". Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.
Long, Sheng, Wei Tao, Shuohao Li, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.
Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.
Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method". International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.
Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.
Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.
Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW". Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.
Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital". J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.
Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models". International Journal of Scientific Research in Engineering and Management 7, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.
Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images". Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.
Kim, Hoki, Woojin Lee, and Jaewook Lee. "Understanding Catastrophic Overfitting in Single-step Adversarial Training". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8119–27. http://dx.doi.org/10.1609/aaai.v35i9.16989.
Lu, Fan. "Adversarial attack against deep learning algorithms for gun category detection". Applied and Computational Engineering 53, no. 1 (March 28, 2024): 190–96. http://dx.doi.org/10.54254/2755-2721/53/20241368.
Cui, Chenrui. "Adversarial attack study on VGG16 for cat and dog image classification task". Applied and Computational Engineering 50, no. 1 (March 25, 2024): 170–75. http://dx.doi.org/10.54254/2755-2721/50/20241438.
Mohamed, Mahmoud, and Mohamed Bilal. "Comparing the Performance of Deep Denoising Sparse Autoencoder with Other Defense Methods Against Adversarial Attacks for Arabic letters". Jordan Journal of Electrical Engineering 10, no. 1 (2024): 122. http://dx.doi.org/10.5455/jjee.204-1687363297.
Kaur, Navjot. "Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures". Journal of Electrical Systems 20, no. 3s (April 4, 2024): 1250–57. http://dx.doi.org/10.52783/jes.1436.
Zhang, Qikun, Yuzhi Zhang, Yanling Shao, Mengqi Liu, Jianyong Li, Junling Yuan, and Ruifang Wang. "Boosting Adversarial Attacks with Nadam Optimizer". Electronics 12, no. 6 (March 20, 2023): 1464. http://dx.doi.org/10.3390/electronics12061464.
Yang, Bo, Kaiyong Xu, Hengjun Wang, and Hengwei Zhang. "Random Transformation of image brightness for adversarial attack". Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 1693–704. http://dx.doi.org/10.3233/jifs-211157.
Vyas, Dhairya, and Viral V. Kapadia. "Designing defensive techniques to handle adversarial attack on deep learning based model". PeerJ Computer Science 10 (March 8, 2024): e1868. http://dx.doi.org/10.7717/peerj-cs.1868.
Zou, Junhua, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, and Zhisong Pan. "Making Adversarial Examples More Transferable and Indistinguishable". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3662–70. http://dx.doi.org/10.1609/aaai.v36i3.20279.
Utomo, Sapdo, Adarsh Rouniyar, Hsiu-Chun Hsu, and Pao-Ann Hsiung. "Federated Adversarial Training Strategies for Achieving Privacy and Security in Sustainable Smart City Applications". Future Internet 15, no. 11 (November 20, 2023): 371. http://dx.doi.org/10.3390/fi15110371.
Han, Dong, Reza Babaei, Shangqing Zhao, and Samuel Cheng. "Exploring the Efficacy of Learning Techniques in Model Extraction Attacks on Image Classifiers: A Comparative Study". Applied Sciences 14, no. 9 (April 29, 2024): 3785. http://dx.doi.org/10.3390/app14093785.
Trinh Quang Kien. "Improving the robustness of binarized neural network using the EFAT method". Journal of Military Science and Technology, CSCE5 (December 15, 2021): 14–23. http://dx.doi.org/10.54939/1859-1043.j.mst.csce5.2021.14-23.
Rudd-Orthner, Richard N. M., and Lyudmila Mihaylova. "Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM". Sensors 21, no. 14 (July 13, 2021): 4772. http://dx.doi.org/10.3390/s21144772.
Xu, Wei, and Veerawat Sirivesmas. "Study on Network Virtual Printing Sculpture Design using Artificial Intelligence". International Journal of Communication Networks and Information Security (IJCNIS) 15, no. 1 (May 30, 2023): 132–45. http://dx.doi.org/10.17762/ijcnis.v15i1.5694.
Guan, Dejian, and Wentao Zhao. "Adversarial Detection Based on Inner-Class Adjusted Cosine Similarity". Applied Sciences 12, no. 19 (September 20, 2022): 9406. http://dx.doi.org/10.3390/app12199406.
Zhao, Weimin, Sanaa Alwidian, and Qusay H. Mahmoud. "Adversarial Training Methods for Deep Learning: A Systematic Review". Algorithms 15, no. 8 (August 12, 2022): 283. http://dx.doi.org/10.3390/a15080283.
Li, Xinyu, Shaogang Dai, and Zhijin Zhao. "Unsupervised Learning-Based Spectrum Sensing Algorithm with Defending Adversarial Attacks". Applied Sciences 13, no. 16 (August 9, 2023): 9101. http://dx.doi.org/10.3390/app13169101.
Zhu, Min-Ling, Liang-Liang Zhao, and Li Xiao. "Image Denoising Based on GAN with Optimization Algorithm". Electronics 11, no. 15 (August 5, 2022): 2445. http://dx.doi.org/10.3390/electronics11152445.
Lee, Jungeun, and Hoeseok Yang. "Performance Improvement of Image-Reconstruction-Based Defense against Adversarial Attack". Electronics 11, no. 15 (July 28, 2022): 2372. http://dx.doi.org/10.3390/electronics11152372.
Wu, Fei, Wenxue Yang, Limin Xiao, and Jinbin Zhu. "Adaptive Wiener Filter and Natural Noise to Eliminate Adversarial Perturbation". Electronics 9, no. 10 (October 3, 2020): 1634. http://dx.doi.org/10.3390/electronics9101634.
Bhandari, Mohan, Tej Bahadur Shahi, and Arjun Neupane. "Evaluating Retinal Disease Diagnosis with an Interpretable Lightweight CNN Model Resistant to Adversarial Attacks". Journal of Imaging 9, no. 10 (October 11, 2023): 219. http://dx.doi.org/10.3390/jimaging9100219.
Su, Guanpeng. "Analysis of the attack effect of adversarial attacks on machine learning models". Applied and Computational Engineering 6, no. 1 (June 14, 2023): 1212–18. http://dx.doi.org/10.54254/2755-2721/6/20230607.
Huang, Bowen, Ruoheng Feng, and Jiahao Yuan. "Exploiting ensembled neural network model for social platform rumor detection". Applied and Computational Engineering 20, no. 1 (October 23, 2023): 231–39. http://dx.doi.org/10.54254/2755-2721/20/20231103.
Kwon, Hyun. "MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images". Security and Communication Networks 2021 (August 9, 2021): 1–8. http://dx.doi.org/10.1155/2021/5595026.
Haroon, Muhammad Shahzad, and Husnain Mansoor Ali. "Ensemble adversarial training based defense against adversarial attacks for machine learning-based intrusion detection system". Neural Network World 33, no. 5 (2023): 317–36. http://dx.doi.org/10.14311/nnw.2023.33.018.
Shi, Lin, Teyi Liao, and Jianfeng He. "Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method". Electronics 11, no. 12 (June 8, 2022): 1814. http://dx.doi.org/10.3390/electronics11121814.
Sun, Guangling, Yuying Su, Chuan Qin, Wenbo Xu, Xiaofeng Lu, and Andrzej Ceglowski. "Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples". Mathematical Problems in Engineering 2020 (May 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/8319249.
Saxena, Rishabh, Amit Sanjay Adate, and Don Sasikumar. "A Comparative Study on Adversarial Noise Generation for Single Image Classification". International Journal of Intelligent Information Technologies 16, no. 1 (January 2020): 75–87. http://dx.doi.org/10.4018/ijiit.2020010105.
An, Tong, Tao Zhang, Yanzhang Geng, and Haiquan Jiao. "Normalized Combinations of Proportionate Affine Projection Sign Subband Adaptive Filter". Scientific Programming 2021 (August 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/8826868.
Hirano, Hokuto, and Kazuhiro Takemoto. "Simple Iterative Method for Generating Targeted Universal Adversarial Perturbations". Algorithms 13, no. 11 (October 22, 2020): 268. http://dx.doi.org/10.3390/a13110268.
Zhang, Xingyu, Xiongwei Zhang, Xia Zou, Haibo Liu, and Meng Sun. "Towards Generating Adversarial Examples on Combined Systems of Automatic Speaker Verification and Spoofing Countermeasure". Security and Communication Networks 2022 (July 31, 2022): 1–12. http://dx.doi.org/10.1155/2022/2666534.
Papadopoulos, Pavlos, Oliver Thornewill von Essen, Nikolaos Pitropakis, Christos Chrysoulas, Alexios Mylonas, and William J. Buchanan. "Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT". Journal of Cybersecurity and Privacy 1, no. 2 (April 23, 2021): 252–73. http://dx.doi.org/10.3390/jcp1020014.
Ding, Ning, and Knut Möller. "Using adaptive learning rate to generate adversarial images". Current Directions in Biomedical Engineering 9, no. 1 (September 1, 2023): 359–62. http://dx.doi.org/10.1515/cdbme-2023-1090.
Yang, Zhongguo, Irshad Ahmed Abbasi, Fahad Algarni, Sikandar Ali, and Mingzhu Zhang. "An IoT Time Series Data Security Model for Adversarial Attack Based on Thermometer Encoding". Security and Communication Networks 2021 (March 9, 2021): 1–11. http://dx.doi.org/10.1155/2021/5537041.
Santana, Everton Jose, Ricardo Petri Silva, Bruno Bogaz Zarpelão, and Sylvio Barbon Junior. "Detecting and Mitigating Adversarial Examples in Regression Tasks: A Photovoltaic Power Generation Forecasting Case Study". Information 12, no. 10 (September 26, 2021): 394. http://dx.doi.org/10.3390/info12100394.
Pantiukhin, D. V. "Educational and methodological materials of the master class "Adversarial attacks on image recognition neural networks" for students and schoolchildren". Informatics and education 38, no. 1 (April 16, 2023): 55–63. http://dx.doi.org/10.32517/0234-0453-2023-38-1-55-63.
Kumar, P. Sathish, and K. V. D. Kiran. "Momentum Iterative Fast Gradient Sign Algorithm for Adversarial Attacks and Defenses". Research Journal of Engineering and Technology, June 30, 2023, 7–24. http://dx.doi.org/10.52711/2321-581x.2023.00002.
Naseem, Muhammad Luqman. "Trans-IFFT-FGSM: a novel fast gradient sign method for adversarial attacks". Multimedia Tools and Applications, February 9, 2024. http://dx.doi.org/10.1007/s11042-024-18475-7.
Xie, Pengfei, Shuhao Shi, Shuai Yang, Kai Qiao, Ningning Liang, Linyuan Wang, Jian Chen, Guoen Hu, and Bin Yan. "Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing". Frontiers in Neurorobotics 15 (December 9, 2021). http://dx.doi.org/10.3389/fnbot.2021.784053.
Zhang, Junjian, Hao Tan, Le Wang, Yaguan Qian, and Zhaoquan Gu. "Rethinking multi‐spatial information for transferable adversarial attacks on speaker recognition systems". CAAI Transactions on Intelligence Technology, March 29, 2024. http://dx.doi.org/10.1049/cit2.12295.