Journal articles on the topic 'Fast Gradient Sign Method'

Consult the top 50 journal articles for your research on the topic 'Fast Gradient Sign Method.'

1

Zou, Junhua, Yexin Duan, Boyu Li, Wu Zhang, Yu Pan, and Zhisong Pan. "Making Adversarial Examples More Transferable and Indistinguishable." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3662–70. http://dx.doi.org/10.1609/aaai.v36i3.20279.

Abstract:
Attacks in the fast gradient sign family are popular methods for generating adversarial examples. However, most approaches based on this family cannot balance indistinguishability and transferability due to the limitations of the basic sign structure. To address this problem, we propose a method, called the Adam Iterative Fast Gradient Tanh Method (AI-FGTM), to generate indistinguishable adversarial examples with high transferability. In addition, smaller kernels and a dynamic step size are applied to further increase the attack success rates. Extensive experiments on an ImageNet-compatible dataset show that our method generates more indistinguishable adversarial examples and achieves higher attack success rates without extra running time or resources. Our best transfer-based attack, NI-TI-DI-AITM, can fool six classic defense models with an average success rate of 89.3% and three advanced defense models with an average success rate of 82.7%, both higher than the state-of-the-art gradient-based attacks. Additionally, our method reduces the mean perturbation by nearly 20%. We expect that our method will serve as a new baseline for generating adversarial examples with better transferability and indistinguishability.
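
A minimal sketch of the update rule this abstract describes, assuming a PyTorch classifier: the hard sign() of iterative FGSM is replaced with tanh(), steered by Adam-style first and second moments and a shrinking step size. The function name, hyperparameter names, and values are illustrative assumptions, not the authors' exact settings.

```python
import torch

def ai_fgtm(model, loss_fn, x, y, eps=16/255, steps=10,
            beta1=0.9, beta2=0.999, delta=1e-8, lam=1.0):
    # Illustrative AI-FGTM-style attack: Adam-like moments + tanh update.
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)  # first moment (momentum)
    v = torch.zeros_like(x)  # second moment (adaptive scaling)
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        # Dynamic step size: an assumed Adam-style bias-correction schedule.
        alpha = eps / steps * (1 - beta1 ** t) / (1 - beta2 ** t) ** 0.5
        x_adv = x_adv.detach() + alpha * torch.tanh(lam * m / (v.sqrt() + delta))
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```
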
2

Wibawa, Sigit. "Analysis of Adversarial Attacks on AI-based With Fast Gradient Sign Method." International Journal of Engineering Continuity 2, no. 2 (August 1, 2023): 72–79. http://dx.doi.org/10.58291/ijec.v2i2.120.

Abstract:
Artificial intelligence (AI) has become a key driving force in sectors from transportation to healthcare and is opening up tremendous opportunities for technological advancement. Behind this promising potential, however, AI also presents serious security challenges. This article investigates attacks on AI and the security challenges that must be faced in the era of artificial intelligence by simulating and testing the security of AI systems under adversarial attacks. The Python programming language is used together with several libraries and tools; one of the most popular for testing the security of AI models is CleverHans. This research provides a thorough understanding of attacks on AI technology, especially on neural networks and machine learning. The central security challenge is that adding a small perturbation to the input data causes an AI model to produce wrong predictions: in our FGSM experiment, an epsilon value of 0.1 caused a drastic reduction in model accuracy of around 66%, meaning the attack successfully misled the model into incorrect predictions. Understanding this threat is the key to protecting the positive development of AI; with a thorough understanding of AI attacks and the security challenges they pose, we can build a solid foundation to counter these threats effectively.
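
For reference, the single-step FGSM attack evaluated above fits in a few lines. A minimal PyTorch sketch, assuming inputs in [0, 1] and a standard classification loss (the study itself uses the CleverHans library rather than a hand-rolled version):

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.1):
    # x_adv = x + eps * sign(grad_x loss); eps=0.1 mirrors the abstract.
    x = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x), y), x)
    return torch.clamp(x + eps * grad.sign(), 0.0, 1.0).detach()
```
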
3

Sun, Guangling, Yuying Su, Chuan Qin, Wenbo Xu, Xiaofeng Lu, and Andrzej Ceglowski. "Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples." Mathematical Problems in Engineering 2020 (May 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/8319249.

Abstract:
Although Deep Neural Networks (DNNs) have achieved great success in various applications, investigations have increasingly shown DNNs to be highly vulnerable when adversarial examples are used as input. Here, we present a comprehensive defense framework to protect DNNs against adversarial examples. First, we present statistical and minor-alteration detectors to filter out adversarial examples contaminated by noticeable and unnoticeable perturbations, respectively. Then, we ensemble the detectors, a deep Residual Generative Network (ResGN), and an adversarially trained targeted network to construct a complete defense framework. In this framework, the ResGN is our previously proposed network used to remove adversarial perturbations, and the adversarially trained targeted network is a network learned through adversarial training. Specifically, once the detectors determine an input example to be adversarial, it is cleaned by the ResGN and then classified by the adversarially trained targeted network; otherwise, it is directly classified by this network. We empirically evaluate the proposed complete defense on the ImageNet dataset. The results confirm its robustness against current representative attacking methods including the fast gradient sign method, the randomized fast gradient sign method, the basic iterative method, universal adversarial perturbations, the DeepFool method, and the Carlini & Wagner method.
4

Kim, Hoki, Woojin Lee, and Jaewook Lee. "Understanding Catastrophic Overfitting in Single-step Adversarial Training." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8119–27. http://dx.doi.org/10.1609/aaai.v35i9.16989.

Abstract:
Although fast adversarial training has demonstrated both robustness and efficiency, the problem of "catastrophic overfitting" has been observed. This is a phenomenon in which, during single-step adversarial training, the robust accuracy against projected gradient descent (PGD) suddenly decreases to 0% after a few epochs, whereas the robust accuracy against fast gradient sign method (FGSM) increases to 100%. In this paper, we demonstrate that catastrophic overfitting is very closely related to the characteristic of single-step adversarial training which uses only adversarial examples with the maximum perturbation, and not all adversarial examples in the adversarial direction, which leads to decision boundary distortion and a highly curved loss surface. Based on this observation, we propose a simple method that not only prevents catastrophic overfitting, but also overrides the belief that it is difficult to prevent multi-step adversarial attacks with single-step adversarial training.
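
The training regime in which the abstract observes catastrophic overfitting can be sketched as follows in PyTorch; monitoring PGD robust accuracy after each epoch (not shown) is what reveals the sudden collapse. All names and values here are illustrative, not the paper's exact setup.

```python
import torch

def fgsm_adv_train_epoch(model, loader, opt, loss_fn, eps=8/255):
    # Single-step adversarial training: train only on maximum-perturbation
    # FGSM examples, the property the paper links to boundary distortion.
    model.train()
    for x, y in loader:
        x_adv = x.clone().detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
        x_adv = torch.clamp(x + eps * grad.sign(), 0, 1).detach()
        opt.zero_grad()
        loss_fn(model(x_adv), y).backward()
        opt.step()
```
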
5

Saxena, Rishabh, Amit Sanjay Adate, and Don Sasikumar. "A Comparative Study on Adversarial Noise Generation for Single Image Classification." International Journal of Intelligent Information Technologies 16, no. 1 (January 2020): 75–87. http://dx.doi.org/10.4018/ijiit.2020010105.

Abstract:
With the rise of neural network-based classifiers, it is evident that these algorithms are here to stay. Even though various algorithms have been developed, these classifiers still remain vulnerable to misclassification attacks. This article outlines a new noise layer attack based on adversarial learning and compares the proposed method to other such attacking methodologies like Fast Gradient Sign Method, Jacobian-Based Saliency Map Algorithm and DeepFool. This work deals with comparing these algorithms for the use case of single image classification and provides a detailed analysis of how each algorithm compares to each other.
6

Yang, Bo, Kaiyong Xu, Hengjun Wang, and Hengwei Zhang. "Random Transformation of image brightness for adversarial attack." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 1693–704. http://dx.doi.org/10.3233/jifs-211157.

Abstract:
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images, but make the model output inaccurate predictions. Before DNNs are deployed, adversarial attacks can thus be an important method to evaluate and select robust models in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Based on image augmentation methods, this paper found that random transformation of image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method, which can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset have demonstrated the effectiveness of the aforementioned method. Whether on normally or adversarially trained networks, our method has a higher success rate for black-box attacks than other attack methods based on data augmentation. It is hoped that this method can help evaluate and improve the robustness of models.
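
A sketch of one iteration of the idea as the abstract states it, under assumed details: the gradient is computed on a randomly brightness-scaled copy of the current image so the perturbation does not overfit the white-box model. The scaling range and function names are assumptions.

```python
import torch

def brightness_fgsm_step(model, loss_fn, x, x_adv, y, eps, alpha, lo=0.8, hi=1.2):
    # One iteration: gradient on a brightness-transformed copy, sign step,
    # then projection into the eps-ball around the clean image x.
    scale = torch.empty(x.size(0), 1, 1, 1, device=x.device).uniform_(lo, hi)
    x_t = torch.clamp(x_adv * scale, 0, 1).detach().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x_t), y), x_t)
    x_adv = x_adv.detach() + alpha * grad.sign()
    return torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
```
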
7

Trinh Quang Kien. "Improving the robustness of binarized neural network using the EFAT method." Journal of Military Science and Technology, CSCE5 (December 15, 2021): 14–23. http://dx.doi.org/10.54939/1859-1043.j.mst.csce5.2021.14-23.

Abstract:
In recent years, with the explosion of research in artificial intelligence, deep learning models based on convolutional neural networks (CNNs) have become one of the most promising architectures for practical applications thanks to their reasonably good achievable accuracy. However, CNNs, characterized by convolutional layers, often have a large number of parameters and a heavy computational workload, leading to large energy consumption for training and inference. The binarized neural network (BNN) model has recently been proposed to overcome this drawback. BNNs use binary representations for inputs and weights, which inherently reduces memory requirements and simplifies computation while maintaining acceptable accuracy. BNNs are therefore well suited to the practical realization of Edge-AI applications on resource- and energy-constrained devices such as embedded or mobile devices. As both CNNs and BNNs are composed of linear transformation layers, they can be fooled by adversarial attack patterns. This topic has been actively studied recently, but mostly for CNNs. In this work, we examine the impact of adversarial attacks on BNNs and propose a solution to improve BNN accuracy against this type of attack. Specifically, we use an Enhanced Fast Adversarial Training (EFAT) method that makes the BNN more robust against major adversarial attack models with very short training time. In experiments with Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks on a BNN trained on the MNIST dataset, our approach increased accuracy from 31.34% and 0.18% to 96.96% and 85.08%, respectively.
8

Hirano, Hokuto, and Kazuhiro Takemoto. "Simple Iterative Method for Generating Targeted Universal Adversarial Perturbations." Algorithms 13, no. 11 (October 22, 2020): 268. http://dx.doi.org/10.3390/a13110268.

Abstract:
Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, a single perturbation known as the universal adversarial perturbation (UAP) can foil most classification tasks conducted by DNNs. Thus, different methods for generating UAPs are required to fully evaluate the vulnerability of DNNs. A realistic evaluation should consider targeted attacks, wherein the generated UAP causes the DNN to classify an input into a specific class. However, the development of UAPs for targeted attacks has largely fallen behind that of UAPs for non-targeted attacks. Therefore, we propose a simple iterative method to generate UAPs for targeted attacks. Our method combines the simple iterative method for generating non-targeted UAPs with the fast gradient sign method for generating a targeted adversarial perturbation for an input. We applied the proposed method to state-of-the-art DNN models for image classification and proved the existence of almost imperceptible UAPs for targeted attacks; further, we demonstrated that such UAPs can be easily generated.
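
A rough sketch of the combination the abstract describes, sweeping the dataset and nudging one shared perturbation with a targeted FGSM step. It assumes a PyTorch classifier and ImageNet-sized inputs; the step sizes, tensor shapes, and loop structure are illustrative, not the authors' exact algorithm.

```python
import torch

def targeted_uap(model, loss_fn, loader, target, eps=10/255, step=2/255, epochs=5):
    delta = torch.zeros(1, 3, 224, 224)  # single shared (universal) perturbation
    for _ in range(epochs):
        for x, _ in loader:
            d = delta.clone().detach().requires_grad_(True)
            y_tgt = torch.full((x.size(0),), target, dtype=torch.long)
            loss = loss_fn(model(torch.clamp(x + d, 0, 1)), y_tgt)
            grad, = torch.autograd.grad(loss, d)
            # Targeted FGSM step: descend the loss toward the target class,
            # then clip the shared perturbation to the eps-ball.
            delta = torch.clamp(delta - step * grad.sign(), -eps, eps)
    return delta
```
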
9

An, Tong, Tao Zhang, Yanzhang Geng, and Haiquan Jiao. "Normalized Combinations of Proportionate Affine Projection Sign Subband Adaptive Filter." Scientific Programming 2021 (August 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/8826868.

Abstract:
The proportionate affine projection sign subband adaptive filter (PAP-SSAF) performs better than the affine projection sign subband adaptive filter (AP-SSAF) in echo cancellation. Still, the robustness of the PAP-SSAF algorithm is insufficient under unknown environmental conditions, and the best balance between low steady-state misalignment and a fast convergence rate remains to be found. To solve this problem, we propose a normalized combination of PAP-SSAF (NCPAP-SSAF) based on a normalized adaptation scheme. In this paper, a power-normalization adaptive rule for the mixing parameter is proposed to further improve the performance of the NCPAP-SSAF algorithm. By using Nesterov's accelerated gradient (NAG) method, the mixing parameter of the combination can be obtained in less time when the l1-norm of the subband error is taken as the cost function. We also examine the algorithmic complexity and memory requirements to illustrate the rationality of our method. In brief, our study contributes a novel adaptive filter algorithm that accelerates convergence, reduces steady-state error, and improves robustness. The proposed method can thus be used to improve the performance of echo cancellation. In future research, we will optimize the combination structure and remove unnecessary calculations to reduce the algorithm's computational complexity.
10

Kadhim, Ansam, and Salah Al-Darraji. "Face Recognition System Against Adversarial Attack Using Convolutional Neural Network." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 6, 2021): 1–8. http://dx.doi.org/10.37917/ijeee.18.1.1.

Abstract:
Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly: noisy images mixed with the original ones can confuse the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect face recognition systems against these attacks by distorting images through the different attacks and then training the recognition deep network model, specifically a Convolutional Neural Network (CNN), on the original and distorted images. Diverse experiments have been conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
11

Zhang, Qikun, Yuzhi Zhang, Yanling Shao, Mengqi Liu, Jianyong Li, Junling Yuan, and Ruifang Wang. "Boosting Adversarial Attacks with Nadam Optimizer." Electronics 12, no. 6 (March 20, 2023): 1464. http://dx.doi.org/10.3390/electronics12061464.

Abstract:
Deep neural networks are extremely vulnerable to attacks and threats from adversarial examples. These adversarial examples, deliberately crafted by attackers, can easily fool classification models by adding imperceptibly tiny perturbations to clean images, which poses a great challenge to image security in deep learning. Studying and designing attack algorithms for generating adversarial examples is therefore essential for building robust models. Moreover, adversarial examples are transferable: they can mislead multiple different classifiers across models, which makes black-box attacks feasible in practical applications. However, most attack methods have low success rates and weak transferability against black-box models, because they often overfit the source model during the production of adversarial examples. To address this issue, we propose a Nadam iterative fast gradient method (NAI-FGM), which combines an improved Nadam optimizer with gradient-based iterative attacks. Specifically, we introduce a look-ahead momentum vector and an adaptive learning rate component on top of the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). The look-ahead momentum vector makes the loss function converge faster and escape poor local maxima, while the adaptive learning rate component helps the adversarial example converge to a better extreme point by obtaining adaptive update directions from the current parameters. Furthermore, we carry out different input transformations to further enhance attack performance before using NAI-FGM, and we also consider attacking ensemble models. Extensive experiments show that NAI-FGM has stronger transferability and black-box attack capability than advanced momentum-based iterative attacks. In particular, when adversarial examples produced by the ensemble attack are used to test adversarially trained models, NAI-FGM improves the success rate by 8% to 11% over the other attack methods. Last but not least, NAI-DI-TI-SI-FGM combined with input transformations achieves a success rate of 91.3% on average.
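
A rough sketch of a Nadam-style iterative attack in the spirit of this abstract, assuming a PyTorch classifier: a look-ahead momentum term plus an RMS-normalized adaptive step replace the plain momentum update of MI-FGSM. The normalization choices, schedule, and names are assumptions rather than the paper's exact method.

```python
import torch

def nai_fgm(model, loss_fn, x, y, eps=16/255, steps=10, mu=0.9,
            beta2=0.999, delta=1e-8):
    alpha = eps / steps
    g = torch.zeros_like(x)  # accumulated momentum
    v = torch.zeros_like(x)  # second moment for the adaptive learning rate
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        g = mu * g + grad
        v = beta2 * v + (1 - beta2) * grad * grad
        update = (mu * g + grad) / (v.sqrt() + delta)  # Nadam-style look-ahead
        update = update / update.abs().amax(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * update
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```
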
12

Pal, Biprodip, Debashis Gupta, Md Rashed-Al-Mahfuz, Salem A. Alyami, and Mohammad Ali Moni. "Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images." Applied Sciences 11, no. 9 (May 7, 2021): 4233. http://dx.doi.org/10.3390/app11094233.

Abstract:
The COVID-19 pandemic requires the rapid isolation of infected patients, so high-sensitivity radiology images could be a key diagnostic technique alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms because of their success in chest radiography image classification, their cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we showed that the correct-class probability of a test image, which should ideally be 1, drops for both models as the perturbation increases; it can fall to 0.24 and 0.17 for the VGG16 model in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, the practical deployment of such programs requires more robustness.
13

Zhao, Weimin, Sanaa Alwidian, and Qusay H. Mahmoud. "Adversarial Training Methods for Deep Learning: A Systematic Review." Algorithms 15, no. 8 (August 12, 2022): 283. http://dx.doi.org/10.3390/a15080283.

Abstract:
Deep neural networks are exposed to the risk of adversarial attacks via the fast gradient sign method (FGSM), projected gradient descent (PGD) attacks, and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of adversarial attacks. It is a training schema that utilizes an alternative objective function to provide model generalization for both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of improving the defensive capacities and robustness of machine learning models. Specifically, we focus on adversarial sample accessibility through adversarial sample generation methods. The purpose of this systematic review is to survey state-of-the-art adversarial training and robust optimization methods to identify the research gaps within this field of applications. The literature search was conducted using Engineering Village (Engineering Village is an engineering literature search tool, which provides access to 14 engineering literature and patent databases), where we collected 238 related papers. The papers were filtered according to defined inclusion and exclusion criteria, and information was extracted from these papers according to a defined strategy. A total of 78 papers published between 2016 and 2021 were selected. Data were extracted and categorized using a defined strategy, and bar plots and comparison tables were used to show the data distribution. The findings of this review indicate that there are limitations to adversarial training methods and robust optimization. The most common problems are related to data generalization and overfitting.
14

Zhang, Xingyu, Xiongwei Zhang, Xia Zou, Haibo Liu, and Meng Sun. "Towards Generating Adversarial Examples on Combined Systems of Automatic Speaker Verification and Spoofing Countermeasure." Security and Communication Networks 2022 (July 31, 2022): 1–12. http://dx.doi.org/10.1155/2022/2666534.

Abstract:
The security of an unprotected automatic speaker verification (ASV) system is vulnerable to a variety of spoofing attacks in which an attacker (adversary) disguises him/herself as a specific targeted user. It is common practice to use a spoofing countermeasure (CM) to improve the security of ASV systems and prevent illegal access. However, recent studies have shown that both ASV and CM systems are vulnerable to adversarial attacks. Previous research has mainly focused on adversarial attacks against a single ASV or CM system, but in practical scenarios ASVs are typically deployed in conjunction with a CM. In this paper, we investigate attacking the tandem system of ASV and CM with adversarial examples. A joint objective function is designed to constrain the generation of adversarial examples, and the joint gradient of the ASV and CM systems is derived to generate them. The Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) are utilized to study the vulnerability of tandem verification systems against white-box adversarial attacks. Through our attack, audio samples whose original labels are spoof or nontarget can be successfully accepted by the tandem system. Experimental results on the ASVspoof2019 dataset show that the tandem system is vulnerable to our proposed attack.
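
The joint objective mentioned above can be sketched as one FGSM-style step on a weighted sum of the two systems' scores, assuming both models expose differentiable acceptance scores in PyTorch; the score conventions, weight w, and step size are assumptions.

```python
import torch

def tandem_fgsm(asv, cm, x, eps=0.002, w=0.5):
    # Ascend a joint score so the audio is both speaker-accepted (ASV) and
    # judged bona fide (CM); sign conventions here are illustrative.
    x = x.clone().detach().requires_grad_(True)
    joint = w * asv(x).mean() + (1 - w) * cm(x).mean()
    grad, = torch.autograd.grad(joint, x)
    return (x + eps * grad.sign()).detach()
```
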
15

Rudd-Orthner, Richard N. M., and Lyudmila Mihaylova. "Deep ConvNet: Non-Random Weight Initialization for Repeatable Determinism, Examined with FSGM." Sensors 21, no. 14 (July 13, 2021): 4772. http://dx.doi.org/10.3390/s21144772.

Abstract:
This paper presents a repeatable and deterministic non-random weight initialization method for the convolutional layers of neural networks, examined with the Fast Gradient Sign Method (FGSM). The FGSM approach is used as a technique to measure the initialization effect with controlled distortions in transferred learning, varying the numerical similarity of the dataset. The focus is on convolutional layers with earlier learning induced through the use of striped forms for image classification. This provided higher accuracy in the first epoch, with improvements of between 3–5% in a well-known benchmark model, and of ~10% in a color image dataset (MTARSI2) using a dissimilar model architecture. The proposed method is robust compared to limit-optimization approaches such as Glorot/Xavier and He initialization. Arguably, the approach forms a new category of weight initialization methods, substituting a number sequence for random numbers without a tether to the dataset. When examined under the FGSM approach with transferred learning, the proposed method, when used with higher distortions (numerically dissimilar datasets), is less compromised against the original cross-validation dataset, at ~31% accuracy instead of ~9%, an indication of higher retention of the original fitting in transferred learning.
16

Guan, Dejian, and Wentao Zhao. "Adversarial Detection Based on Inner-Class Adjusted Cosine Similarity." Applied Sciences 12, no. 19 (September 20, 2022): 9406. http://dx.doi.org/10.3390/app12199406.

Abstract:
Deep neural networks (DNNs) have attracted extensive attention because of their excellent performance in many areas; however, DNNs are vulnerable to adversarial examples. In this paper, we propose a similarity metric called inner-class adjusted cosine similarity (IACS) and apply it to detect adversarial examples. Motivated by the fast gradient sign method (FGSM), we propose an adjusted cosine similarity that takes both the feature angle and scale information into consideration and is therefore able to effectively discriminate subtle differences. Given the predicted label, the proposed IACS is measured between the features of the test sample and those of normal samples with the same label. Unlike other detection methods, ours can be extended to extract disentangled features with different deep network models and is not limited to the target model (the adversarial attack model). Furthermore, the proposed method is able to detect adversarial examples across attacks; that is, a detector learned with one type of attack can effectively detect other types. Extensive experimental results show that the proposed IACS features can distinguish adversarial examples from normal examples well and achieve state-of-the-art performance.
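
A minimal sketch of the detection score as the abstract outlines it, in PyTorch; the choice of the in-class mean feature as the centering reference is our assumption, not necessarily the authors' exact definition.

```python
import torch
import torch.nn.functional as F

def iacs_score(feat_test, feats_class):
    # feats_class: features of normal samples sharing the predicted label.
    mu = feats_class.mean(dim=0, keepdim=True)  # in-class mean feature
    # Mean-adjusted cosine keeps both angular and scale cues.
    sims = F.cosine_similarity(feat_test - mu, feats_class - mu, dim=1)
    return sims.mean()  # low similarity suggests an adversarial input
```
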
17

A, Jayaprakash, and C. Kezi Selva Vijila. "Detection and Recognition of Traffic Sign using FCM with SVM." Journal of Advances in Chemistry 13, no. 6 (February 25, 2017): 6285–89. http://dx.doi.org/10.24297/jac.v13i6.5773.

Abstract:
This paper focuses on traffic sign and board detection systems placed on roads and highways. The system aims at real-time traffic sign and traffic board recognition, i.e., localizing which type of traffic sign or board appears in which area of an input image at a fast processing time. Our detection module is based on the proposed extraction and classification of traffic signs built upon a color probability model using Haar feature extraction and a color Histogram of Oriented Gradients (HOG). The HOG technique converts the original image to grayscale and then applies RGB for the foreground. The Support Vector Machine (SVM) then fetches the object from this result and compares it with the database. In parallel, the Fuzzy C-means clustering (FCM) technique takes the same output and compares it with the database images. By using this method, the accuracy of identifying the signs can be improved, and new signals can be updated dynamically. The goal of this work is to provide optimized prediction on the given sign.
18

Zhu, Min-Ling, Liang-Liang Zhao, and Li Xiao. "Image Denoising Based on GAN with Optimization Algorithm." Electronics 11, no. 15 (August 5, 2022): 2445. http://dx.doi.org/10.3390/electronics11152445.

Abstract:
Image denoising has been a knotty issue in the computer vision field, although developing deep learning technology has brought remarkable improvements. Denoising networks based on deep learning still face problems with accuracy and robustness. This paper constructs a robust denoising network based on a generative adversarial network (GAN). Since neural networks suffer from gradient dispersion and feature disappearance, a global residual is added to the autoencoder in the generator network to extract and learn the features of the input image and ensure the stability of the network. On this basis, we propose an optimization algorithm (OA) to train and optimize the mean and variance of the noise on each node of the generator; the robustness of the denoising network is then improved through back propagation. Experimental results showed that the model's denoising effect is remarkable. The accuracy of the proposed model was over 99% on the MNIST data set and over 90% on the CIFAR10 data set, and its peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values were better than those of state-of-the-art models on the BDS500 data set. Moreover, an anti-interference test showed that the model's defense capacity against both fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks was significantly improved, with PSNR and SSIM values decreasing by less than 2%.
19

Xu, Wei, and Veerawat Sirivesmas. "Study on Network Virtual Printing Sculpture Design using Artificial Intelligence." International Journal of Communication Networks and Information Security (IJCNIS) 15, no. 1 (May 30, 2023): 132–45. http://dx.doi.org/10.17762/ijcnis.v15i1.5694.

Abstract:
Sculptures have reflected a country's culture from time immemorial. Chinese sculptures hold an aesthetic value in the global market, catalysed by the opening of the country's gates. On the other hand, this paved the way for many duplicates and replicas of the original sculptures, defaming the entire artwork. This work proposes a counterfeit-detection model that deploys a Siamese-based Convolutional Neural Network (S-CNN) to effectively detect mimicked sculpture images. Nevertheless, adversarial attacks are gaining momentum, compromising deep learning models into making predictions for faked or forged images. To combat this attack, the work uses a Simplified Graph Convolutional Network (SGCN) to handle the adversarial images generated by the Fast Gradient Sign Method (FGSM). The model is trained with adversarial images from the ImageNet dataset. By transfer learning, the model is tested for its efficacy in identifying adversarial examples from the Chinese God images dataset. The results showed that the proposed model could detect the generated adversarial examples with a reasonable misclassification rate.
20

Kurniawan S, Putu Widiarsa, Yosi Kristian, and Joan Santoso. "Pemanfaatan Deep Convulutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital." J-INTECH 11, no. 1 (July 4, 2023): 50–59. http://dx.doi.org/10.32664/j-intech.v11i1.845.

Abstract:
Adversarial attacks on digital images are a serious threat to the use of machine learning in everyday applications. The Fast Gradient Sign Method (FGSM) has proven effective at attacking machine learning models, including on digital images from the ImageNet dataset. This study aims to address this problem by using a Deep Convolutional Auto-encoder (AE) to mitigate adversarial attacks on digital images. The research applies FGSM attacks to the ImageNet dataset and mitigates them by applying the AE to the attacked images. The results show that the FGSM attack succeeds on most digital images, although some images are more resistant to it, and that the AE mitigation is effective in reducing the impact of the adversarial attack on most digital images. The accuracy of the attack and mitigation models was 85.42% and 87.50%, respectively, although some images remain vulnerable to the attack even after mitigation is applied.
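
A minimal convolutional auto-encoder of the kind used for mitigation here, in PyTorch; the depth, channel counts, and activations are assumptions rather than the study's architecture.

```python
import torch.nn as nn

class DenoiseAE(nn.Module):
    # Encoder compresses the (possibly attacked) image; decoder reconstructs
    # it, ideally stripping the FGSM perturbation in the process.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))
```
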
21

Yang, Zhongguo, Irshad Ahmed Abbasi, Fahad Algarni, Sikandar Ali, and Mingzhu Zhang. "An IoT Time Series Data Security Model for Adversarial Attack Based on Thermometer Encoding." Security and Communication Networks 2021 (March 9, 2021): 1–11. http://dx.doi.org/10.1155/2021/5537041.

Abstract:
Nowadays, an Internet of Things (IoT) device consists of algorithms, datasets, and models. Owing to the good performance of deep learning methods, many devices integrate well-trained models. IoT empowers users to communicate with and control physical devices to obtain vital information. However, these models are vulnerable to adversarial attacks, which bring substantial risks to the normal application of deep learning methods. For instance, a very small change to even one point in IoT time-series data can lead to unreliable or wrong decisions, and such changes can be deliberately generated by following an adversarial attack strategy. We propose a robust IoT data classification model based on an encode-decode joint training model. Thermometer encoding is used as a nonlinear transformation of the original training examples, which are used to reconstruct the original time-series examples through the encode-decode model. A ResNet model trained on the reconstructed examples is more robust to adversarial attack. Experiments show that the trained model can successfully resist the fast gradient sign method attack to some extent and improve the security of the time-series data classification model.
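
Thermometer encoding, the nonlinear input transformation named above, is easy to state concretely. A small sketch with an assumed number of levels:

```python
import numpy as np

def thermometer_encode(x, levels=10):
    # A scalar in [0, 1] becomes a cumulative binary vector, e.g. 0.35 with
    # 10 levels -> [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]. Being non-differentiable,
    # the encoding blunts gradient-based attacks such as FGSM.
    x = np.asarray(x, dtype=np.float32)[..., None]  # append a level axis
    thresholds = np.arange(levels, dtype=np.float32) / levels
    return (x > thresholds).astype(np.float32)
```
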
22

Papadopoulos, Pavlos, Oliver Thornewill von Essen, Nikolaos Pitropakis, Christos Chrysoulas, Alexios Mylonas, and William J. Buchanan. "Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT." Journal of Cybersecurity and Privacy 1, no. 2 (April 23, 2021): 252–73. http://dx.doi.org/10.3390/jcp1020014.

Abstract:
As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought. Traditional defending approaches are no longer sufficient to detect both known and unknown attacks to high accuracy. Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision. Nevertheless, machine learning models are also vulnerable to attacks. Adversarial examples can be used to evaluate the robustness of a designed model before it is deployed. Further, using adversarial examples is critical to creating a robust model designed for an adversarial environment. Our work evaluates both traditional machine learning and deep learning models’ robustness using the Bot-IoT dataset. Our methodology included two main approaches. First, label poisoning, used to cause incorrect classification by the model. Second, the fast gradient sign method, used to evade detection measures. The experiments demonstrated that an attacker could manipulate or circumvent detection with significant probability.
23

Lee, Jungeun, and Hoeseok Yang. "Performance Improvement of Image-Reconstruction-Based Defense against Adversarial Attack." Electronics 11, no. 15 (July 28, 2022): 2372. http://dx.doi.org/10.3390/electronics11152372.

Abstract:
Deep Neural Networks (DNNs) used for image classification are vulnerable to adversarial examples, which are images that are intentionally generated to predict an incorrect output for a deep learning model. Various defense methods have been proposed to defend against such adversarial attacks, among which, image-reconstruction-based defense methods, such as DIPDefend, are known to be effective in getting rid of the adversarial perturbations injected in the image. However, this image-reconstruction-based defense approach suffers from a long execution time due to its iterative and time-consuming image reconstruction. The trade-off between the execution time and the robustness/accuracy of the defense method should be carefully explored, which is the main focus of this paper. In this work, we aim to improve the execution time of the existing state-of-the-art image-reconstruction-based defense method, DIPDefend, against the Fast Gradient Sign Method (FGSM). In doing so, we propose to take the input-specific properties into consideration when deciding the stopping point of the image reconstruction of DIPDefend. For that, we first applied a low-pass filter to the input image with various kernel sizes to make a prediction of the true label. Then, based on that, the parameters of the image reconstruction procedure were adaptively chosen. Experiments with 500 randomly chosen ImageNet validation set images show that we can obtain an approximately 40% improvement in execution time while keeping the accuracy drop as small as 0.4–3.9%.
24

Wu, Fei, Wenxue Yang, Limin Xiao, and Jinbin Zhu. "Adaptive Wiener Filter and Natural Noise to Eliminate Adversarial Perturbation." Electronics 9, no. 10 (October 3, 2020): 1634. http://dx.doi.org/10.3390/electronics9101634.

Abstract:
Deep neural networks have been widely used in pattern recognition and speech processing, but their vulnerability to adversarial attacks has also been widely demonstrated. These attacks apply unstructured pixel-wise perturbations that fool the classifier without affecting the human visual system. The role of adversarial examples in the information security field has received increased attention across a number of disciplines in recent years. An alternative approach is "like cures like". In this paper, we propose to utilize common noise and adaptive Wiener filtering to mitigate the perturbation. Our method includes two operations: noise addition, which adds natural noise to the input adversarial examples, and adaptive Wiener filtering, which denoises the images from the previous step. Based on a study of the distribution of attacks, adding natural noise affects adversarial examples to a certain extent, after which the perturbations can be removed by an adaptive Wiener filter, an optimal estimator for the local variance of the image. The proposed improved adaptive Wiener filter automatically selects the optimal window size from the given alternative windows based on the features of different images. Extensive experiments demonstrate that the proposed method is capable of defending against adversarial attacks such as FGSM (Fast Gradient Sign Method), C&W, DeepFool, and JSMA (Jacobian-based Saliency Map Attack). In comparative experiments, our method outperforms or is comparable to state-of-the-art methods.
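
The two-operation pipeline described above, natural noise followed by Wiener filtering, can be sketched with SciPy's wiener filter; the noise level and the fixed window here stand in for the paper's adaptive window selection.

```python
import numpy as np
from scipy.signal import wiener

def noise_then_wiener(img, sigma=0.02, window=5):
    # img: float array in [0, 1], shape (channels, height, width).
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)
    # Wiener filtering estimates the local mean/variance and suppresses both
    # the added natural noise and the adversarial perturbation.
    return np.stack([wiener(ch, mysize=window) for ch in noisy])
```
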
25

Santana, Everton Jose, Ricardo Petri Silva, Bruno Bogaz Zarpelão, and Sylvio Barbon Junior. "Detecting and Mitigating Adversarial Examples in Regression Tasks: A Photovoltaic Power Generation Forecasting Case Study." Information 12, no. 10 (September 26, 2021): 394. http://dx.doi.org/10.3390/info12100394.

Abstract:
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users as PV plants exhibit irregular behavior related to environmental conditions. However, DL models are vulnerable to adversarial examples, which may lead to increased predictive error and wrong operational decisions. This work proposes a new scheme to detect adversarial examples and mitigate their impact on DL forecasting models. This approach is based on one-class classifiers and features extracted from the data inputted to the forecasting models. Tests were performed using data collected from a real-world PV power plant along with adversarial samples generated by the Fast Gradient Sign Method under multiple attack patterns and magnitudes. One-class Support Vector Machine and Local Outlier Factor were evaluated as detectors of attacks to Long-Short Term Memory and Temporal Convolutional Network forecasting models. According to the results, the proposed scheme showed a high capability of detecting adversarial samples with an average F1-score close to 90%. Moreover, the detection and mitigation approach strongly reduced the prediction error increase caused by adversarial samples.
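
The detection stage can be sketched with scikit-learn's One-Class SVM, fitting on features of clean forecasting inputs and flagging out-of-distribution windows; the summary-statistic features and parameter values are illustrative assumptions, not the study's feature set.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(w):
    # Simple per-window summary statistics (assumed feature set).
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).mean()])

clean_windows = np.random.rand(200, 48)  # placeholder PV input windows
detector = OneClassSVM(nu=0.05, kernel="rbf")
detector.fit(np.stack([window_features(w) for w in clean_windows]))

new_window = np.random.rand(48)
is_adversarial = detector.predict(window_features(new_window).reshape(1, -1)) == -1
```
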
26

Kwon, Hyun. "MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images." Security and Communication Networks 2021 (August 9, 2021): 1–8. http://dx.doi.org/10.1155/2021/5595026.

Abstract:
Deep neural networks perform well for image recognition, speech recognition, and pattern analysis. This type of neural network has also been used in the medical field, where it has displayed good performance in predicting or classifying patient diagnoses. An example is the U-Net model, which has demonstrated good performance in data segmentation, an important technology in the field of medical imaging. However, deep neural networks are vulnerable to adversarial examples. Adversarial examples are samples created by adding a small amount of noise to an original data sample in such a way that to human perception they appear to be normal data but they will be incorrectly classified by the classification model. Adversarial examples pose a significant threat in the medical field, as they can cause models to misidentify or misclassify patient diagnoses. In this paper, I propose an advanced adversarial training method to defend against such adversarial examples. An advantage of the proposed method is that it creates a wide variety of adversarial examples for use in training, which are generated by the fast gradient sign method (FGSM) for a range of epsilon values. A U-Net model trained on these diverse adversarial examples will be more robust to unknown adversarial examples. Experiments were conducted using the ISBI 2012 dataset, with TensorFlow as the machine learning library. According to the experimental results, the proposed method builds a model that demonstrates segmentation robustness against adversarial examples by reducing the pixel error between the original labels and the adversarial examples to an average of 1.45.
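
The multi-epsilon training-set construction described above can be sketched as follows in PyTorch; the epsilon grid is an assumption, and the same idea applies to segmentation targets as well as classification labels.

```python
import torch

def multi_eps_fgsm_batch(model, loss_fn, x, y, eps_values=(0.01, 0.03, 0.1, 0.3)):
    # Mix the clean batch with FGSM examples crafted at several strengths so
    # training sees a wide variety of adversarial perturbations.
    batches = [x]
    for eps in eps_values:
        xa = x.clone().detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(xa), y), xa)
        batches.append(torch.clamp(x + eps * grad.sign(), 0, 1).detach())
    ys = torch.cat([y] * (len(eps_values) + 1), dim=0)
    return torch.cat(batches, dim=0), ys
```
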
27

Li, Xinyu, Shaogang Dai, and Zhijin Zhao. "Unsupervised Learning-Based Spectrum Sensing Algorithm with Defending Adversarial Attacks." Applied Sciences 13, no. 16 (August 9, 2023): 9101. http://dx.doi.org/10.3390/app13169101.

Abstract:
Although the spectrum sensing algorithms based on deep learning have achieved remarkable detection performance, the sensing performance is easily affected by adversarial attacks due to the fragility of neural networks. Even slight adversarial perturbations lead to a sharp deterioration of the model detection performance. To enhance the defense capability of the spectrum sensing model against such attacks, an unsupervised learning-based spectrum sensing algorithm with defending adversarial attacks (USDAA) is proposed, which is divided into two stages: adversarial pre-training and fine-tuning. In the adversarial pre-training stage, encoders are used to extract the features of adversarial samples and clean samples, respectively, and then decoders are used to reconstruct the samples, and comparison loss and reconstruction loss are designed to optimize the network parameters. It can reduce the dependence of model training on labeled samples and improve the robustness of the model to attack perturbations. In the fine-tuning stage, a small number of adversarial samples are used to fine-tune the pre-trained encoder and classification layer to obtain the spectrum sensing defense model. The experimental results show that the USDAA algorithm is better than the denoising autoencoder and distillation defense algorithm (DAED) against FGSM and PGD adversarial attacks. The number of labeled samples used in USDAA is only 11% of the DAED. When the false alarm probability is 0.1 and the SNR is −10 dB, the detection probability of the USDAA algorithm for the fast gradient sign method (FGSM) and the projected gradient descent (PGD) attack samples with random perturbations is above 88%, while the detection probability of the DAED algorithm for both attack samples is lower than 69%. Additionally, the USDAA algorithm has better robustness to attack with unknown perturbations.
28

Pantiukhin, D. V. "Educational and methodological materials of the master class “Adversarial attacks on image recognition neural networks” for students and schoolchildren." Informatics and education 38, no. 1 (April 16, 2023): 55–63. http://dx.doi.org/10.32517/0234-0453-2023-38-1-55-63.

Abstract:
The problem of neural network vulnerability has been the subject of scientific research and experiments for several years. Adversarial attacks are one of the ways to “trick” a neural network, forcing it to make incorrect classification decisions. The very possibility of an adversarial attack lies in the peculiarities of machine learning of neural networks. The article shows how the properties of neural networks become a source of problems and limitations in their use. The author's research on this topic was used as a basis for the master class “Adversarial attacks on image recognition neural networks”. The article presents the educational materials of the master class: the theoretical background of the class, practical materials (in particular, the attack on a single neuron is described, and the fast gradient sign method for attacking a neural network is considered), examples of experiments and calculations (the author uses the convolutional network VGG and the Torch and CleverHans libraries), as well as a set of typical student errors and the teacher's explanations of how to eliminate them. In addition, the result of the experiment is given in the article, and its full code and examples of approbation of the master class materials are available at the above links. The master class is intended for both high school and university students who have learned the basics of neural networks and the Python language, and can also be of practical interest to computer science teachers, to developers of courses on machine learning and artificial intelligence, and to university teachers.
29

Lung, Rodica Ioana. "A game theoretic decision-making approach for fast gradient sign attacks." Procedia Computer Science 220 (2023): 1015–20. http://dx.doi.org/10.1016/j.procs.2023.03.141.

30

Perne, Matija, Samo Gerkšič, and Boštjan Pregelj. "Soft inequality constraints in gradient method and fast gradient method for quadratic programming." Optimization and Engineering 20, no. 3 (December 18, 2018): 749–67. http://dx.doi.org/10.1007/s11081-018-9416-3.

31

Florea, Mihai I., and Sergiy A. Vorobyov. "A Generalized Accelerated Composite Gradient Method: Uniting Nesterov's Fast Gradient Method and FISTA." IEEE Transactions on Signal Processing 68 (2020): 3033–48. http://dx.doi.org/10.1109/tsp.2020.2988614.

32

Yao, Q., B. Tan, and Y. Huang. "Fast Drawing of Traffic Sign Using Mobile Mapping System." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 10, 2016): 937–44. http://dx.doi.org/10.5194/isprs-archives-xli-b3-937-2016.

Abstract:
Traffic signs provide road users with specified instructions and information to enhance traffic safety. Automatic detection of traffic signs is important for navigation, autonomous driving, transportation asset management, etc. With the advance of laser and imaging sensors, the Mobile Mapping System (MMS) has become widely used in transportation agencies to map the transportation infrastructure. Although many traffic sign detection algorithms have been developed in the literature, they still trade off detection speed against accuracy, especially for large-scale mobile mapping of both rural and urban roads. This paper is motivated by the need to efficiently survey traffic signs while mapping the road network and the roadside landscape. Inspired by the manual delineation of traffic signs, a drawing strategy is proposed to quickly approximate the boundary of a traffic sign. Both the shape and color priors of the traffic sign are involved simultaneously during the drawing process. The most common speed-limit sign circle and a statistical color model of traffic signs are studied in this paper. Anchor points of the traffic sign edge are located at the local maxima of color and gradient difference. Starting from the anchor points, the contour of the traffic sign is drawn along the most significant direction of color and intensity consistency. The drawing process is also constrained by the curvature feature of the traffic sign circle; linear growth is discarded immediately if it fails to form an arc over some steps. The Kalman filter principle is adopted to predict the temporal context of the traffic sign: based on the estimated point, we can predict and double-check the traffic sign in consecutive frames. The event probability of having a traffic sign over the consecutive observations is compared with the null hypothesis of no perceptible traffic sign, and the temporally salient traffic sign is then detected statistically and automatically as the rare event of having a traffic sign. The proposed algorithm is tested with a diverse set of images taken in Wuhan, China with the MMS of Wuhan University. Experimental results demonstrate that the proposed algorithm can detect traffic signs at a rate of over 80% in around 10 milliseconds. It is promising for large-scale traffic sign surveys and change detection using the mobile mapping system.
33

Yao, Q., B. Tan, and Y. Huang. "Fast Drawing of Traffic Sign Using Mobile Mapping System." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 10, 2016): 937–44. http://dx.doi.org/10.5194/isprsarchives-xli-b3-937-2016.

34

Kenshimov, Chingiz, Zholdas Buribayev, Yedilkhan Amirgaliyev, Aisulyu Ataniyazova, and Askhat Aitimov. "Sign language dactyl recognition based on machine learning algorithms." Eastern-European Journal of Enterprise Technologies 4, no. 2(112) (August 31, 2021): 58–72. http://dx.doi.org/10.15587/1729-4061.2021.239253.

Abstract:
In the course of our research work, the American, Russian and Turkish sign languages were analyzed, and a program for recognizing the Kazakh dactyl sign language using machine learning methods was implemented. A dataset of 5,000 images was formed for each gesture, and gesture recognition algorithms such as Random Forest, Support Vector Machine and Extreme Gradient Boosting were applied, while two data types were combined into one database, which caused a change in the architecture of the system as a whole. The quality of the algorithms was also evaluated. The research was carried out because scientific work on recognizing the Kazakh dactyl sign language is currently insufficient for a complete representation of the language. The Kazakh language has specific letters, and because of the peculiarities of the language's spelling, problems arise when developing recognition systems for the Kazakh sign language. The results showed that the Support Vector Machine and Extreme Gradient Boosting algorithms are superior in real-time performance, but the Random Forest algorithm has higher recognition accuracy. The classification accuracy was 98.86% for Random Forest, 98.68% for Support Vector Machine and 98.54% for Extreme Gradient Boosting; the quality evaluation of the classical algorithms also shows high indicators. The practical significance of this work lies in the fact that scientific research in the field of gesture recognition with the updated alphabet of the Kazakh language has not yet been conducted, and the results of this work can be used by other researchers for further research on recognition of the Kazakh dactyl sign language, as well as by researchers engaged in the development of the international sign language.
APA, Harvard, Vancouver, ISO, and other styles
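A minimal sketch of the three-classifier comparison described above, using scikit-learn; the synthetic data and hyperparameters are placeholders, and scikit-learn's GradientBoostingClassifier stands in for Extreme Gradient Boosting.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: rows stand in for flattened gesture images,
# labels for dactyl letters; replace with the real dataset.
rng = np.random.default_rng(0)
X = rng.random((500, 64))
y = rng.integers(0, 10, 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Support Vector Machine": SVC(kernel="rbf", C=10.0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))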
35

Guo, Shaocui, and Xu Yang. "Fast recognition algorithm for static traffic sign information." Open Physics 16, no. 1 (December 31, 2018): 1149–56. http://dx.doi.org/10.1515/phys-2018-0135.

Full text
Abstract:
Aiming at the low recognition rate, low recognition efficiency, poor anti-interference, and high missed-detection rate of current traffic sign recognition methods, a fast SURF-based recognition algorithm for static highway traffic sign information is proposed. Morphological dilation is used to connect cracks in the traffic sign, erosion is applied to shrink or refine the connected areas, and regions of interest are detected by region filling. Based on the processed traffic sign image, its scale is normalized by bilinear interpolation and its SURF feature points are extracted. The FLANN algorithm is used for feature point matching, with a threshold set to determine the best matching points; the matching result is output and the traffic sign information is recognized. Experimental results show that the algorithm has a high recognition rate and recognition efficiency, strong anti-interference, and can keep the missed-detection rate within a certain range.
APA, Harvard, Vancouver, ISO, and other styles
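The recognition pipeline above (morphological clean-up, bilinear scale normalization, local feature extraction, FLANN matching with a threshold) can be sketched with OpenCV as follows. SURF requires a non-free opencv-contrib build, so ORB stands in for it here, and the paths and parameters are placeholders.

import cv2
import numpy as np

def recognize_sign(template_path, scene_path, ratio=0.7):
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    # Morphological closing (dilation then erosion) roughly connects
    # cracks in the sign region, as described in the abstract.
    kernel = np.ones((3, 3), np.uint8)
    scene = cv2.morphologyEx(scene, cv2.MORPH_CLOSE, kernel)

    # Scale normalization by bilinear interpolation.
    scene = cv2.resize(scene, (640, 480), interpolation=cv2.INTER_LINEAR)

    # ORB stands in for SURF here.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(template, None)
    kp2, des2 = orb.detectAndCompute(scene, None)

    # FLANN matching with an LSH index (suited to binary ORB descriptors),
    # followed by a ratio test as the best-match threshold.
    index_params = dict(algorithm=6, table_number=6, key_size=12,
                        multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, {})
    matches = flann.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)  # number of accepted feature matches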
36

Tyurin, Alexander Igorevich. "Primal-dual fast gradient method with a model." Computer Research and Modeling 12, no. 2 (April 2020): 263–74. http://dx.doi.org/10.20537/2076-7633-2020-12-2-263-274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Bloom, Veronica, Igor Griva, and Fabio Quijada. "Fast projected gradient method for support vector machines." Optimization and Engineering 17, no. 4 (August 11, 2016): 651–62. http://dx.doi.org/10.1007/s11081-016-9328-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Polyak, Roman A., James Costa, and Saba Neyshabouri. "Dual fast projected gradient method for quadratic programming." Optimization Letters 7, no. 4 (April 21, 2012): 631–45. http://dx.doi.org/10.1007/s11590-012-0476-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Iyengar, Garud, and Alfred Ka Chun Ma. "Fast gradient descent method for Mean-CVaR optimization." Annals of Operations Research 205, no. 1 (February 7, 2013): 203–12. http://dx.doi.org/10.1007/s10479-012-1245-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Han, Fang Fang, and Guo Qiang Xu. "Robust Memory Gradient Blind Equalization Algorithm Based on Error Sign Decision." Applied Mechanics and Materials 347-350 (August 2013): 1997–2000. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1997.

Full text
Abstract:
Since the CMA blind equalization algorithm based on stochastic gradient descent has a slow convergence rate and a large steady-state residual error, a robust memory gradient blind equalization algorithm based on an error sign decision is proposed. Compared with the stochastic gradient algorithm, the memory gradient algorithm makes full use of the gradient information of the current and previous iterations to accelerate convergence and, to a certain extent, avoid false convergence. However, transient noise interference may make the directions of the current and previous iteration gradients inconsistent, which can destabilize the algorithm. The proposed method therefore applies the memory gradient update only when the error signs of the current and previous iterations agree, and otherwise falls back to the plain stochastic gradient descent update; this ensures the robustness of the memory gradient algorithm. Computer simulation results prove the validity of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
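A rough numpy sketch of the sign-gated memory gradient idea applied to a CMA equalizer follows; the step size, memory weight, and initialization are illustrative assumptions rather than the paper's parameters.

import numpy as np

def cma_memory_gradient(x, n_taps=11, mu=1e-3, beta=0.5, R2=1.0):
    # CMA blind equalizer whose update adds the previous gradient (memory
    # term) only when the current and previous error signs agree; otherwise
    # it falls back to the plain stochastic gradient step.
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                 # center-spike initialization
    g_prev = np.zeros(n_taps, dtype=complex)
    e_prev = 0.0
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]        # regressor, most recent sample first
        y = np.dot(w, u)                 # equalizer output
        e = abs(y) ** 2 - R2             # CMA error
        g = e * y * np.conj(u)           # instantaneous CMA gradient
        if np.sign(e) == np.sign(e_prev):
            d = g + beta * g_prev        # memory gradient direction
        else:
            d = g                        # plain stochastic gradient step
        w -= mu * d
        g_prev, e_prev = g, e
    return w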
41

Nishimura, Jun, and Shinji Shimasaki. "Unification of the complex Langevin method and the Lefschetz thimble method." EPJ Web of Conferences 175 (2018): 07018. http://dx.doi.org/10.1051/epjconf/201817507018.

Full text
Abstract:
Recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has not been clear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates the two methods by changing the flow time.
APA, Harvard, Vancouver, ISO, and other styles
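For intuition about the complexified Langevin dynamics discussed above, here is a toy complex Langevin simulation in Python for a single variable with the Gaussian action S(z) = sigma*z^2/2, a standard pedagogical example rather than the model studied in the paper: the variable is complexified and evolved with the holomorphic drift -S'(z) plus real noise.

import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0 + 1.0j       # complex coefficient makes the weight e^{-S} complex
dt, n_steps = 1e-3, 200_000

z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    drift = -sigma * z                                # -S'(z)
    z += drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if step > n_steps // 10:                          # discard thermalization
        samples.append(z * z)

# <z^2> should approach the exact value 1/sigma for this Gaussian model.
print(np.mean(samples), 1.0 / sigma)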
42

Anescu, George. "A Heuristic Fast Gradient Descent Method for Unimodal Optimization." Journal of Advances in Mathematics and Computer Science 26, no. 5 (February 28, 2018): 1–20. http://dx.doi.org/10.9734/jamcs/2018/39798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Raid, A. M., W. M. Khedr, M. A. El-dosuky, and Mona Aoud. "Fast NAS-RIF Algorithm Using Iterative Conjugate Gradient Method." Signal & Image Processing : An International Journal 5, no. 2 (April 30, 2014): 63–72. http://dx.doi.org/10.5121/sipij.2014.5206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kögel, Markus, and Rolf Findeisen. "A Fast Gradient method for embedded linear predictive control." IFAC Proceedings Volumes 44, no. 1 (January 2011): 1362–67. http://dx.doi.org/10.3182/20110828-6-it-1002.03322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Yue, Seong-Yoon Shin, Xujie Tan, and Bin Xiong. "A Self-Adaptive Approximated-Gradient-Simulation Method for Black-Box Adversarial Sample Generation." Applied Sciences 13, no. 3 (January 18, 2023): 1298. http://dx.doi.org/10.3390/app13031298.

Full text
Abstract:
Deep neural networks (DNNs) have been applied widely to everyday tasks. However, DNNs are sensitive to adversarial attacks, which can easily alter the output by adding imperceptible perturbations to an original image. In state-of-the-art white-box attack methods, perturbation samples can successfully fool DNNs through the network gradient. In addition, these methods generate perturbation samples by considering only the sign information of the gradient and dropping the magnitude. Accordingly, gradients of different magnitudes may adopt the same sign to construct perturbation samples, resulting in inefficiency. Unfortunately, it is often impractical to acquire the gradient in real-world scenarios. Consequently, we propose a self-adaptive approximated-gradient-simulation method for black-box adversarial attacks (SAGM) to generate efficient perturbation samples. Our proposed method uses knowledge-based differential evolution to simulate gradients and the self-adaptive momentum gradient to generate adversarial samples. To estimate the efficiency of the proposed SAGM, a series of experiments was carried out on two datasets, MNIST and CIFAR-10. Compared to state-of-the-art attack techniques, our proposed method can quickly and efficiently search for perturbation samples that cause the original samples to be misclassified. The results reveal that the SAGM is an effective and efficient technique for generating perturbation samples.
APA, Harvard, Vancouver, ISO, and other styles
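The gradient-simulation idea can be illustrated with a generic sketch that estimates the gradient from random-direction finite differences and applies a momentum update. The function names and parameters are illustrative assumptions; the actual SAGM uses knowledge-based differential evolution rather than this finite-difference estimator.

import numpy as np

def estimate_gradient(f, x, eps=1e-2, n_dirs=64, rng=None):
    # Approximate grad f(x) from random-direction finite differences,
    # querying only the black-box score function f.
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        g += (f(x + eps * u) - f(x - eps * u)) / (2.0 * eps) * u
    return g / n_dirs

def momentum_attack(f, x0, alpha=0.01, decay=0.9, steps=20):
    # Perturb x0 along the simulated gradient with a momentum accumulator,
    # increasing the attack score f (e.g., the loss of the true class).
    x, m = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = estimate_gradient(f, x)
        m = decay * m + g / (np.abs(g).sum() + 1e-12)
        x = np.clip(x + alpha * np.sign(m), 0.0, 1.0)
    return x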
46

Ogal’tsov, A. V., and A. I. Tyurin. "A Heuristic Adaptive Fast Gradient Method in Stochastic Optimization Problems." Computational Mathematics and Mathematical Physics 60, no. 7 (July 2020): 1108–15. http://dx.doi.org/10.1134/s0965542520070088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ren, Dongwei, Wangmeng Zuo, Xiaofei Zhao, Zhouchen Lin, and David Zhang. "Fast gradient vector flow computation based on augmented Lagrangian method." Pattern Recognition Letters 34, no. 2 (January 2013): 219–25. http://dx.doi.org/10.1016/j.patrec.2012.09.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jirigalatu, and Jörg Ebbing. "A fast equivalent source method for airborne gravity gradient data." GEOPHYSICS 84, no. 5 (September 1, 2019): G75–G82. http://dx.doi.org/10.1190/geo2018-0366.1.

Full text
Abstract:
Airborne gravity gradiometry measures several gravity gradient components, unlike conventional gravimetry, which takes only the vertical gravity component into account. However, processing multicomponent airborne gravity gradient (AGG) data without corrupting their internal consistency is often challenging. Therefore, we have developed an equivalent source technique to tackle this challenge. With a combination of the Gauss-fast Fourier transform and the Landweber iteration, we have developed an efficient way to compute equivalent sources for AGG data; the method can handle two components simultaneously. We first examined its viability by applying the approach to a synthetic example. Afterward, we applied our method to real AGG data collected in the area of Karasjok, Norway. Our result is almost the same as results that meet the industry standard, but it is obtained with great efficiency.
APA, Harvard, Vancouver, ISO, and other styles
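The Landweber iteration mentioned in the abstract above is a simple gradient scheme for the linear inverse problem A q ≈ d (equivalent-source amplitudes q from observed gradient data d). A generic dense-matrix sketch in Python follows; the paper's Gauss-FFT acceleration replaces these explicit matrix products, and the operator and data here are random placeholders.

import numpy as np

def landweber(A, d, n_iter=2000):
    # Landweber iteration for min ||A q - d||^2: q <- q + tau * A^T (d - A q).
    # tau is set from the spectral norm so that 0 < tau < 2 / ||A||^2,
    # which guarantees convergence.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    q = np.zeros(A.shape[1])
    for _ in range(n_iter):
        q += tau * A.T @ (d - A @ q)   # gradient step on the data misfit
    return q

# Tiny usage example with a random forward operator.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
q_true = rng.standard_normal(40)
d = A @ q_true
print(np.linalg.norm(landweber(A, d) - q_true))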
49

Zhang, Jin-bo, Jing-wen Xu, and Yuan-xiang Li. "Gradient Gene Algorithm: a fast optimization method to MST problem." Wuhan University Journal of Natural Sciences 6, no. 1-2 (March 2001): 535–40. http://dx.doi.org/10.1007/bf03160298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Choi, Young-Jae, and In-Sik Choi. "A novel fast clean algorithm using the gradient descent method." Microwave and Optical Technology Letters 59, no. 5 (March 27, 2017): 1018–22. http://dx.doi.org/10.1002/mop.30448.

Full text
APA, Harvard, Vancouver, ISO, and other styles