Academic literature on the topic 'Fast Gradient Sign Method (FGSM)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fast Gradient Sign Method (FGSM).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fast Gradient Sign Method (FGSM)"

1

Hong, Dian, Deng Chen, Yanduo Zhang, Huabing Zhou, and Liang Xie. "Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method." Applied Sciences 14, no. 3 (February 2, 2024): 1257. http://dx.doi.org/10.3390/app14031257.

Abstract:
The robot vision model is the basis for the robot to perceive and understand the environment and make correct decisions. However, the security and stability of robot vision models are seriously threatened by adversarial examples. In this study, we propose an adversarial attack algorithm, RMS-FGSM, for robot vision models based on root-mean-square propagation (RMSProp). RMS-FGSM uses an exponentially weighted moving average (EWMA) to reduce the weight of the historical cumulative squared gradient. Additionally, it can suppress the gradient growth based on an adaptive learning rate. By integrating with the RMSProp, RMS-FGSM is more likely to generate optimal adversarial examples, and a high attack success rate can be achieved. Experiments on two datasets (MNIST and CIFAR-100) and several models (LeNet, Alexnet, and Resnet-101) show that the attack success rate of RMS-FGSM is higher than the state-of-the-art methods. Above all, our generated adversarial examples have a smaller perturbation than those generated by existing methods under the same attack success rate.
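
As a point of reference for how such an RMSProp-scaled attack can look in practice, here is a minimal PyTorch sketch of an FGSM variant whose step size is divided by an exponentially weighted moving average of squared gradients. The function name, hyperparameters, and projection step are illustrative assumptions, not the authors' released RMS-FGSM implementation.

```python
import torch
import torch.nn.functional as F

def rmsprop_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, beta=0.99, delta=1e-8):
    """Illustrative FGSM variant with an RMSProp-style adaptive step (not the paper's exact RMS-FGSM)."""
    x_adv = x.clone().detach()
    ewma = torch.zeros_like(x)                              # running EWMA of squared gradients
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        ewma = beta * ewma + (1 - beta) * grad ** 2         # discount the historical squared gradient
        step = alpha * grad / (ewma.sqrt() + delta)         # adaptive per-pixel step size
        x_adv = x_adv.detach() + step
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # project back into the L-infinity eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```
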
2

Long, Sheng, Wei Tao, Shuohao LI, Jun Lei, and Jun Zhang. "On the Convergence of an Adaptive Momentum Method for Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14132–40. http://dx.doi.org/10.1609/aaai.v38i13.29323.

Abstract:
Adversarial examples are commonly created by solving a constrained optimization problem, typically using sign-based methods like Fast Gradient Sign Method (FGSM). These attacks can benefit from momentum with a constant parameter, such as Momentum Iterative FGSM (MI-FGSM), to enhance black-box transferability. However, the monotonic time-varying momentum parameter is required to guarantee convergence in theory, creating a theory-practice gap. Additionally, recent work shows that sign-based methods fail to converge to the optimum in several convex settings, exacerbating the issue. To address these concerns, we propose a novel method which incorporates both an innovative adaptive momentum parameter without monotonicity assumptions and an adaptive step-size scheme that replaces the sign operation. Furthermore, we derive a regret upper bound for general convex functions. Experiments on multiple models demonstrate the efficacy of our method in generating adversarial examples with human-imperceptible noise while achieving high attack success rates, indicating its superiority over previous adversarial example generation methods.
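
For context, the MI-FGSM baseline named in the abstract accumulates a momentum term over normalized gradients and then takes sign steps; the paper's contribution replaces the constant momentum parameter and the sign operation with adaptive schemes. A minimal sketch of that baseline (not the paper's adaptive method), with assumed hyperparameters and NCHW image batches:

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """Momentum Iterative FGSM baseline; the cited paper replaces mu and sign() with adaptive rules."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # accumulate momentum over gradients normalized by their mean absolute value per sample
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # stay inside the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```
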
3

Pan, Chao, Qing Li, and Xin Yao. "Adversarial Initialization with Universal Adversarial Perturbation: A New Approach to Fast Adversarial Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21501–9. http://dx.doi.org/10.1609/aaai.v38i19.30147.

Abstract:
Traditional adversarial training, while effective at improving machine learning model robustness, is computationally intensive. Fast Adversarial Training (FAT) addresses this by using a single-step attack to generate adversarial examples more efficiently. Nonetheless, FAT is susceptible to a phenomenon known as catastrophic overfitting, wherein the model's adversarial robustness abruptly collapses to zero during the training phase. To address this challenge, recent studies have suggested adopting adversarial initialization with Fast Gradient Sign Method Adversarial Training (FGSM-AT), which recycles adversarial perturbations from prior epochs by computing gradient momentum. However, our research has uncovered a flaw in this approach. Given that data augmentation is employed during the training phase, the samples in each epoch are not identical. Consequently, the method essentially yields not the adversarial perturbation of a singular sample, but rather the Universal Adversarial Perturbation (UAP) of a sample and its data augmentation. This insight has led us to explore the potential of using UAPs for adversarial initialization within the context of FGSM-AT. We have devised various strategies for adversarial initialization utilizing UAPs, including single, class-based, and feature-based UAPs. Experiments conducted on three distinct datasets demonstrate that our method achieves an improved trade-off among robustness, computational cost, and memory footprint. Code is available at https://github.com/fzjcdt/fgsm-uap.
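
The mechanism at the heart of this line of work, a single FGSM step whose perturbation is initialized from a reused (here, universal) perturbation rather than from zero or random noise, can be outlined as below. This is a hedged sketch under assumed shapes and hyperparameters; in particular, the running-average update of the shared perturbation is an illustrative stand-in, not the UAP construction from the paper or the code in the linked repository.

```python
import torch
import torch.nn.functional as F

def fgsm_at_epoch(model, loader, optimizer, uap, eps=8/255, alpha=10/255, device="cuda"):
    """One epoch of FGSM adversarial training whose single step starts from a shared perturbation `uap`."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        delta = uap.expand_as(x).clone().detach().requires_grad_(True)   # adversarial initialization
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()  # single FGSM step
        optimizer.zero_grad()
        F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()    # train on the adversarial batch
        optimizer.step()
        uap = (0.9 * uap + 0.1 * delta.mean(dim=0, keepdim=True)).clamp(-eps, eps).detach()
    return uap    # carried over as the initialization for the next epoch
```
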
4

Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method." International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.

Abstract:
Artificial intelligence (AI) has become a key driving force in sectors from transportation to healthcare and is opening up tremendous opportunities for technological advancement. However, behind this promising potential, AI also presents serious security challenges. This article investigates attacks on AI and the security challenges that must be faced in the era of artificial intelligence; the research simulates and tests the security of AI systems under adversarial attacks. The Python programming language is used for this, together with several libraries and tools; one that is very popular for testing the security of AI models is CleverHans. This research provides a thorough understanding of attacks on AI technology, especially on neural networks and machine learning. The security challenge is that adding a small perturbation to the input data causes the AI model to produce wrong predictions: with the FGSM attack and an epsilon value of 0.1, the model suffered a drastic reduction in accuracy of around 66%, meaning that the attack successfully misled the model and led to incorrect predictions. Understanding this threat is key to protecting the positive development of AI in the future. With a thorough understanding of AI attacks and the security challenges they pose, we can build a solid foundation to address these threats effectively.
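
The attack being simulated is the canonical single-step FGSM: perturb the input by epsilon times the sign of the loss gradient with respect to that input. A plain PyTorch sketch of the step (the article itself uses the CleverHans library; this is not its API):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Canonical single-step FGSM: x_adv = x + eps * sign(grad_x L(f(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```

Comparing clean accuracy with accuracy on `fgsm(model, x, y, eps=0.1)` over a test set reproduces the kind of accuracy drop the article reports.
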
5

Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.

Abstract:
Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly: noisy images mixed with the original ones lead to confusion in the results. Various attacks that exploit this weakness affect face recognition systems, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect the face recognition system against these attacks by distorting images through different attacks and then training the recognition deep network model, specifically a Convolutional Neural Network (CNN), on the original and distorted images. Diverse experiments have been conducted using combinations of original and distorted images to test the effectiveness of the system. The system showed an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
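
The defense described is adversarial data augmentation: generate distorted copies of the training faces with the attacks of interest and train the CNN on the mixture of clean and distorted images. A hedged sketch of one such training step; the attack callables (e.g. an `fgsm` function like the one sketched above) and the epsilon value are assumptions:

```python
import torch
import torch.nn.functional as F

def train_step_with_adversarial_augmentation(model, optimizer, x, y, attacks, eps=0.05):
    """One optimization step on a batch augmented with adversarially distorted copies of itself."""
    model.eval()
    distorted = [attack(model, x, y, eps) for attack in attacks]   # e.g. FGSM, DeepFool, PGD variants
    model.train()
    x_aug = torch.cat([x] + distorted, dim=0)
    y_aug = torch.cat([y] * (len(distorted) + 1), dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_aug), y_aug)
    loss.backward()
    optimizer.step()
    return loss.item()
```
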
6

Pervin, Mst Tasnim, Linmi Tao, and Aminul Huq. "Adversarial attack driven data augmentation for medical images." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6285. http://dx.doi.org/10.11591/ijece.v13i6.pp6285-6292.

Abstract:
An important stage in medical image analysis is segmentation, which aids in focusing on the required area of an image and speeds up findings. Fortunately, deep learning models have taken over with their high-performing capabilities, making this process simpler. The deep learning model’s reliance on vast data, however, makes it difficult to utilize for medical image analysis due to the scarcity of data samples. So far, a number of data augmentation techniques have been employed to address the issue of data unavailability. Here, we present a novel method of augmentation that enabled the UNet model to segment the input dataset with about 90% accuracy in just 30 epochs. We describe the usage of the fast gradient sign method (FGSM) as an augmentation tool drawn from adversarial machine learning attack methods. Besides, we have developed the method of Inverse FGSM, which improves performance by operating in the opposite way from FGSM adversarial attacks. In comparison to the conventional FGSM methodology, our strategy boosted performance by 6% to 7% on average. The model also became more resilient to adversarial attacks thanks to these two strategies. The overall analysis of this study reveals an innovative application of adversarial machine learning and resilience augmentation.
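
The abstract describes Inverse FGSM only as operating 'in the opposite way' from the FGSM attack; one plausible reading, sketched below, is to step against the gradient sign (down the loss surface) when generating augmented training images. Treat the sign flip, the epsilon value, and the loss choice as assumptions about the paper's method rather than its definition.

```python
import torch
import torch.nn.functional as F

def inverse_fgsm(model, x, y, eps=0.03):
    """FGSM-style augmentation stepping opposite to the gradient sign (assumed reading of 'Inverse FGSM')."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)          # for UNet segmentation, y would be a per-pixel label map
    grad, = torch.autograd.grad(loss, x)
    return (x - eps * grad.sign()).clamp(0.0, 1.0).detach()   # minus sign: move down the loss surface
```
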
7

Villegas-Ch, William, Angel Jaramillo-Alcázar, and Sergio Luján-Mora. "Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW." Big Data and Cognitive Computing 8, no. 1 (January 16, 2024): 8. http://dx.doi.org/10.3390/bdcc8010008.

Abstract:
This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign Method, the Projected Gradient Descent method, and the Carlini and Wagner attack to perturb the original images and analyze their impact on the model’s classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model’s vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the importance of addressing the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. This article emphasizes the urgency of addressing the threat posed by adversarial examples in machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment marked by constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the Fast Gradient Sign Method and Projected Gradient Descent attacks, and an even more significant 35% decrease with the Carlini and Wagner method.
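
Of the three attacks compared, FGSM is the single-step special case of Projected Gradient Descent (PGD), which iterates small signed steps and projects back into the epsilon-ball around the original image. A minimal sketch with assumed hyperparameters:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """PGD: iterated FGSM steps with projection onto the L-infinity eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()   # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```
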
8

Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital." J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.

Abstract:
Adversarial attacks on digital images are a serious threat to the use of machine learning technology in many everyday applications. The Fast Gradient Sign Method (FGSM) has proven effective at attacking machine learning models, including on digital images from the ImageNet dataset. This study aims to address this problem by using a Deep Convolutional Auto-encoder (AE) as a mitigation method against adversarial attacks on digital images. The research was conducted by performing FGSM attacks on the ImageNet dataset and mitigating them by applying the AE technique to the attacked images. The results show that FGSM attacks succeed on most digital images, although some images are more resistant to the attack. In addition, the AE mitigation technique is effective in reducing the impact of adversarial attacks on most digital images. The accuracies of the attack and mitigation models were 85.42% and 87.50%, respectively, although some images remain vulnerable to attack even after the mitigation technique is applied.
9

Kumari, Rekha, Tushar Bhatia, Peeyush Kumar Singh, and Kanishk Vikram Singh. "Dissecting Adversarial Attacks: A Comparative Analysis of Adversarial Perturbation Effects on Pre-Trained Deep Learning Models." International Journal of Scientific Research in Engineering and Management 07, no. 11 (November 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem27337.

Abstract:
It is well known that the majority of neural networks widely employed today are extremely susceptible to adversarial perturbations, which cause misclassification of the output. This, in turn, can cause severe security concerns. In this paper, we meticulously evaluate the robustness of prominent pre-trained deep learning models against images that are modified with the Fast Gradient Sign Method (FGSM) attack. For this purpose, we have selected the following models: InceptionV3, InceptionResNetV2, ResNet152V2, Xception, DenseNet121, and MobileNetV2. All these models are pre-trained on ImageNet, and hence we use our custom 10-animals test dataset to produce clean as well as misclassified output. Rather than focusing solely on prediction accuracy, our study uniquely quantifies the perturbation required to alter output labels, shedding light on the models' susceptibility to misclassification. The outcomes underscore varying vulnerabilities among the models to FGSM attacks, providing nuanced insights crucial for fortifying neural networks against adversarial threats. Key Words: Adversarial Perturbations, Deep Learning, ImageNet, FGSM Attack, Neural Networks, Pre-trained Models
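
Quantifying 'the perturbation required to alter output labels' amounts to sweeping epsilon upward until the model's prediction flips. A hedged sketch of that measurement; the epsilon grid is arbitrary and the `fgsm` helper is assumed to be a standard single-step implementation like the one sketched earlier:

```python
import torch

@torch.no_grad()
def prediction_flipped(model, x_adv, y):
    """True if every prediction in the (possibly single-image) batch differs from its true label."""
    return (model(x_adv).argmax(dim=1) != y).all()

def minimal_flipping_epsilon(model, x, y, epsilons=(0.001, 0.005, 0.01, 0.02, 0.05, 0.1)):
    """Return the smallest epsilon in the grid whose FGSM perturbation changes the predicted label(s)."""
    for eps in epsilons:
        x_adv = fgsm(model, x, y, eps)     # assumed single-step FGSM helper
        if prediction_flipped(model, x_adv, y):
            return eps
    return None                            # no epsilon in the grid flipped every label
```
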
10

Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images." Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.

Abstract:
The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique to diagnose patients besides the polymerase chain reaction approach. Deep learning algorithms are proposed in several studies to detect COVID-19 symptoms due to the success in chest radiography image classification, cost efficiency, lack of expert radiologists, and the need for faster processing in the pandemic area. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed the performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, as well as 36% and 40% in the case of CT images for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we analyzed that correct class probability of any test image which is supposed to be 1, can drop for both considered models and with increased perturbation; it can drop to 0.24 and 0.17 for the VGG16 model in cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such program requires more robustness.

Dissertations / Theses on the topic "Fast Gradient Sign Method (FGSM)"

1

Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems." Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.

Abstract:
The exhilarating proliferation of smartphones and their indispensability to human life is inevitable. The exponential growth is also triggering widespread malware and stumbling the prosperous mobile ecosystem. Among all handheld devices, Android is the most targeted hive for malware authors due to its popularity, open-source availability, and intrinsic infirmity to access internal resources. Machine learning-based approaches have been successfully deployed to combat evolving and polymorphic malware campaigns. As the classifier becomes popular and widely adopted, the incentive to evade the classifier also increases. Researchers and adversaries are in a never-ending race to strengthen and evade the android malware detection system. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based android malware detection system that has proven its robustness against various adversarial attacks. The proposed platform first constructs the android malware detection system by intelligently transforming the Android Application Packaging (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates evident patterns for benign and malware APKs in color images, making the classification easier. The detection system yielded an excellent accuracy of 99.37% with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39% for legacy and new malware variants. In the second phase, we evaluate the robustness of our secured image-based android malware detection system. To validate its hardness and effectiveness against evasion, we have crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with more than a 50% evasion rate. However, our proposed system builds a secure mechanism against adversarial perturbations using its intrinsic continuous space obtained after the intelligent transformation of Dex and Manifest files, which makes the detection system strenuous to bypass.
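
The first stage of the pipeline, mapping an APK's Dex and Manifest bytes onto a lightweight RGB image that a CNN can classify, can be sketched roughly as follows. The fixed image size, zero-padding, and byte-to-pixel layout are illustrative assumptions; the thesis describes a more elaborate transformation designed to preserve discriminative patterns.

```python
import numpy as np
from PIL import Image

def bytes_to_rgb_image(payload: bytes, side: int = 128) -> Image.Image:
    """Pack raw bytes into a side x side RGB image, padding with zeros or truncating as needed (illustrative)."""
    needed = side * side * 3
    buf = np.frombuffer(payload, dtype=np.uint8)
    buf = np.pad(buf, (0, max(0, needed - buf.size)))[:needed]
    return Image.fromarray(buf.reshape(side, side, 3), mode="RGB")

# Usage sketch: apply the mapping to the Dex and Manifest sections extracted from an APK,
# then train a small convolutional network on the resulting images for malware/family classification.
```
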
2

Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.

Abstract:
Deep learning models have shown impressive performance across a wide spectrum of computer vision applications, including medical diagnosis and autonomous driving. One of the major concerns that these models face is their susceptibility to adversarial samples: samples with small, crafted noise designed to manipulate the model’s prediction. A defense mechanism named Adversarial Training (AT) shows promising results against these attacks. This training regime augments mini-batches with adversaries. However, to scale this training to large networks and datasets, fast and simple methods (e.g., single-step methods such as Fast Gradient Sign Method (FGSM)), are essential for generating these adversaries. But, single-step adversarial training (e.g., FGSM adversarial training) converges to a degenerate minimum, where the model merely appears to be robust. As a result, models are vulnerable to simple black-box attacks. In this thesis, we explore the following aspects of adversarial training: Failure of Single-step Adversarial Training: In the first part of the thesis, we will demonstrate that the pseudo robustness of an adversarially trained model is due to the limitations in the existing evaluation procedure. Further, we introduce novel variants of white-box and black-box attacks, dubbed “gray-box adversarial attacks”, based on which we propose a novel evaluation method to assess the robustness of the learned models. A novel variant of adversarial training named “Gray-box Adversarial Training” that uses intermediate versions of the model to seed the adversaries is proposed to improve the model’s robustness. Regularizers for Single-step Adversarial Training: In this part of the thesis, we will discuss various regularizers that could help to learn robust models using single-step adversarial training methods. (i) Regularizer that enforces logits for FGSM and I-FGSM (iterative-FGSM) of a clean sample, to be similar (imposed on only one pair of an adversarial sample in a mini-batch), (ii) Regularizer that enforces logits for FGSM and R-FGSM (Random+FGSM) of a clean sample, to be similar, (iii) Monotonic loss constraint: Enforces the loss to increase monotonically with an increase in the perturbation size of the FGSM attack, and (iv) Dropout with decaying dropout probability: Introduces dropout layer with decaying dropout probability, after each nonlinear layer of a network. Incorporating Domain Knowledge to Improve Model’s Adversarial Robustness: In this final part of the thesis, we show that the existing normal training method fails to incorporate domain knowledge into the learned feature representation of the network. Further, we show that incorporating domain knowledge into the learned feature representation of the network results in a significant improvement in the robustness of the network against adversarial attacks, within normal training regime.
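
As an example of the regularizers listed, the first one, which encourages the logits of the FGSM and I-FGSM adversaries of the same clean sample to agree, can be written as an extra loss term. The squared-error distance, the weighting, and the use of a PGD-style routine for the iterative adversary are assumptions; the thesis applies the pairing term to only one adversarial pair per mini-batch.

```python
import torch
import torch.nn.functional as F

def single_step_at_loss(model, x, y, eps=8/255, lam=1.0):
    """FGSM adversarial-training loss plus a logit-pairing term between FGSM and iterative-FGSM adversaries."""
    x_fgsm = fgsm(model, x, y, eps)                            # single-step adversary (sketched earlier)
    x_ifgsm = pgd(model, x, y, eps, alpha=eps / 4, steps=4)    # iterative adversary (I-FGSM proper omits the random start)
    ce = F.cross_entropy(model(x_fgsm), y)
    pairing = F.mse_loss(model(x_fgsm), model(x_ifgsm))        # push the two adversaries' logits together
    return ce + lam * pairing
```
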

Book chapters on the topic "Fast Gradient Sign Method (FGSM)"

1

Muncsan, Tamás, and Attila Kiss. "Transferability of Fast Gradient Sign Method." In Advances in Intelligent Systems and Computing, 23–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55187-2_3.

2

Xia, Xiaoyan, Wei Xue, Pengcheng Wan, Hui Zhang, Xinyu Wang, and Zhiting Zhang. "FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification." In Lecture Notes in Electrical Engineering, 709–16. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2287-1_98.

3

Hong, Dian, and Deng Chen. "Gradient-Based Adversarial Example Generation with Root Mean Square Propagation." In Artificial Intelligence and Human-Computer Interaction. IOS Press, 2024. http://dx.doi.org/10.3233/faia240141.

Abstract:
Nowadays, the security of neural networks has attracted more and more attention. Adversarial examples are one of the problems that affect the security of neural networks. The gradient-based attack method is a typical attack method, and the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) is a typical attack algorithm among the gradient-based attack algorithms. However, this method may suffer from the problems of excessive gradient growth and low efficiency. In this paper, we propose a gradient-based attack algorithm RMS-FGSM based on Root Mean Square Propagation (RMSProp). RMS-FGSM algorithm avoids excessive gradient growth by Exponential Weighted Moving Average method and adaptive learning rate when gradient updates. Experiments on MNIST and CIFAR-100 and several models show that the attack success rate of our approach is higher than the baseline methods. Above all, our generated adversarial examples have a smaller perturbation under the same attack success rate.
4

Knaup, Julian, Christoph-Alexander Holst, and Volker Lohweg. "Robust Training with Adversarial Examples on Industrial Data." In Proceedings - 33. Workshop Computational Intelligence: Berlin, 23.-24. November 2023, 123–42. KIT Scientific Publishing, 2023. http://dx.doi.org/10.58895/ksp/1000162754-9.

Abstract:
In an era where deep learning models are increasingly deployed in safetycritical domains, ensuring their reliability is paramount. The emergence of adversarial examples, which can lead to severe model misbehavior, underscores this need for robustness. Adversarial training, a technique aimed at fortifying models against such threats, is of particular interest. This paper presents an approach tailored to adversarial training on tabular data within industrial environments. The approach encompasses various components, including data preprocessing, techniques for stabilizing the training process, and an exploration of diverse adversarial training variants, such as Fast Gradient Sign Method (FGSM), Jacobian-based Saliency Map Attack (JSMA), DeepFool, Carlini & Wagner (C&W), and Projected Gradient Descent (PGD). Additionally, the paper delves into an extensive review and comparison of methods for generating adversarial examples, highlighting their impact on tabular data in adversarial settings. Furthermore, the paper identifies open research questions and hints at future developments, particularly in the realm of semantic adversarials. This work contributes to the ongoing effort to enhance the robustness of deep learning models, with a focus on their deployment in safety-critical industrial contexts.
5

Sen, Jaydip, and Subhasis Dasgupta. "Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and Their Impact." In Information Security and Privacy in the Digital World - Some Selected Topics [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.112442.

Abstract:
This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNN). CNNs are very popular deep-learning models which are used in image classification tasks. However, very powerful and pre-trained CNN models working very accurately on image datasets for image classification tasks may perform disastrously when the networks are under adversarial attacks. In this work, two very well-known adversarial attacks are discussed and their impact on the performance of image classifiers is analyzed. These two adversarial attacks are the fast gradient sign method (FGSM) and adversarial patch attack. These attacks are launched on three powerful pre-trained image classifier architectures, ResNet-34, GoogleNet, and DenseNet-161. The classification accuracy of the models in the absence and presence of the two attacks are computed on images from the publicly accessible ImageNet dataset. The results are analyzed to evaluate the impact of the attacks on the image classification task.
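
Unlike FGSM's imperceptible, image-wide perturbation, a patch attack optimizes a small, visible patch that is pasted onto the image. A rough sketch of an untargeted patch-optimization loop follows; the patch size, fixed top-left placement, learning rate, and the absence of rotation/scale augmentation are simplifying assumptions rather than the procedure used in the chapter.

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, loader, patch_size=50, epochs=5, lr=0.05, device="cuda"):
    """Optimize a square patch that, pasted onto each image, maximizes the classification loss."""
    patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_patched = x.clone()
            x_patched[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)   # paste patch in the top-left corner
            loss = -F.cross_entropy(model(x_patched), y)                    # untargeted: push predictions away from y
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```
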

Conference papers on the topic "Fast Gradient Sign Method (FGSM)"

1

Hassan, Muhammad, Shahzad Younis, Ahmed Rasheed, and Muhammad Bilal. "Integrating single-shot Fast Gradient Sign Method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers." In Fourteenth International Conference on Machine Vision (ICMV 2021), edited by Wolfgang Osten, Dmitry Nikolaev, and Jianhong Zhou. SPIE, 2022. http://dx.doi.org/10.1117/12.2623585.

2

Reyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. "Transfer Robustness to Downstream Tasks Through Sampling Adversarial Perturbations." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2023. Journal of LatinX in AI Research, 2023. http://dx.doi.org/10.52591/lxai2023061811.

Abstract:
Due to the vulnerability of deep neural networks to adversarial attacks, adversarial robustness has grown to be a crucial problem in deep learning. Recent research has demonstrated that even small perturbations to the input data can have a large impact on the model’s output, leaving models susceptible to malicious attacks. In this work, we propose Delta Data Augmentation (DDA), a data augmentation method for enhancing transfer robustness by sampling perturbations extracted from attacks on trained models. The main idea of our work is to generate adversarial perturbations and to apply them to downstream datasets in a data augmentation fashion. Here we demonstrate, through extensive experimentation, the advantages of our data augmentation method over the current state of the art under Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks on the CIFAR10 dataset.
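
The augmentation idea, harvesting perturbations delta = x_adv - x from attacks against a trained source model and then adding stored deltas to downstream training images, can be sketched as follows. The function names and the random pairing of deltas with images are assumptions, not the authors' exact DDA procedure.

```python
import torch

def extract_perturbations(source_model, loader, attack, eps=8/255, device="cuda"):
    """Collect perturbations (x_adv - x) produced by `attack` (e.g. FGSM or PGD) against a trained source model."""
    deltas = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        deltas.append((attack(source_model, x, y, eps) - x).detach().cpu())
    return torch.cat(deltas, dim=0)

def delta_augment(x, deltas):
    """Augment a downstream training batch by adding randomly sampled stored perturbations."""
    idx = torch.randint(0, deltas.size(0), (x.size(0),))
    return (x + deltas[idx].to(x.device)).clamp(0.0, 1.0)
```
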
3

Silva, Gabriel H. N. Espindola da, Rodrigo Sanches Miani, and Bruno Bogaz Zarpelão. "Investigando o Impacto de Amostras Adversárias na Detecção de Intrusões em um Sistema Ciberfísico." In Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/sbrc.2023.488.

Abstract:
In this paper, we investigate the impact that adversarial samples have on supervised machine learning algorithms used to detect attacks on a cyber-physical system. The study considers a scenario in which an attacker gains access to data from the target system that can be used to train an adversarial model. The attacker's goal is to generate malicious samples using adversarial machine learning to deceive the models deployed for intrusion detection. Using the FGSM (Fast Gradient Sign Method) and JSMA (Jacobian Saliency Map Attack) attacks, we observed that prior knowledge of the target algorithm's architecture can lead to more severe attacks, and that the tested target algorithms suffer different impacts as the volume of data stolen by the attacker varies. Finally, the FGSM method produced attacks with a higher average severity than JSMA, but JSMA has the advantage of being less invasive and possibly harder to detect.
4

Liu, Yujie, Shuai Mao, Xiang Mei, Tao Yang, and Xuran Zhao. "Sensitivity of Adversarial Perturbation in Fast Gradient Sign Method." In 2019 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2019. http://dx.doi.org/10.1109/ssci44817.2019.9002856.

5

Xu, Jin. "Generate Adversarial Examples by Nesterov-momentum Iterative Fast Gradient Sign Method." In 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS). IEEE, 2020. http://dx.doi.org/10.1109/icsess49938.2020.9237700.

6

Zhang, Taozheng, and Jiajian Meng. "STFGSM: Intelligent Image Classification Model Based on Swin Transformer and Fast Gradient Sign Method." In ICDLT 2023: 2023 7th International Conference on Deep Learning Technologies. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3613330.3613339.

7

Hong, In-pyo, Gyu-ho Choi, Pan-koo Kim, and Chang Choi. "Security Verification Software Platform of Data-efficient Image Transformer Based on Fast Gradient Sign Method." In SAC '23: 38th ACM/SIGAPP Symposium on Applied Computing. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3555776.3577731.
