Journal articles on the topic "Black-box attack"

To see the other types of publications on this topic, follow the link: Black-box attack.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Black-box attack".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its online abstract, provided the corresponding parameters are present in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks or black-box attacks. For white-box attacks, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performance in both white-box and black-box attacks among all baselines, and are more time- and query-efficient than the state of the art.
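
For readers who want a concrete picture of the Frank-Wolfe update this abstract refers to, the following NumPy sketch shows the core white-box step: the linear maximization oracle over an L∞ ball has a closed-form sign solution, and each iterate is a convex combination of the current point and that vertex, so no projection is needed. The toy loss, step schedule, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frank_wolfe_linf_attack(x_orig, grad_fn, epsilon=0.3, num_steps=20):
    """Sketch of a Frank-Wolfe ascent step inside the L-inf ball
    {x : ||x - x_orig||_inf <= epsilon}.  `grad_fn(x)` returns the gradient
    of the attack loss to be *maximized* (e.g. the target model's loss)."""
    x = x_orig.copy()
    for t in range(num_steps):
        g = grad_fn(x)
        # Linear maximization oracle over the L-inf ball: the vertex whose
        # sign pattern matches the gradient.
        v = x_orig + epsilon * np.sign(g)
        gamma = 2.0 / (t + 2.0)            # classic Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * v  # convex combination stays inside the ball
    return x

# Toy usage: the "attack loss" is a quadratic pulling x toward a target point.
target = np.array([1.0, -1.0, 0.5])
x0 = np.zeros(3)
adv = frank_wolfe_linf_attack(x0, grad_fn=lambda x: -(x - target), epsilon=0.3)
print(adv, np.max(np.abs(adv - x0)))  # final iterate and its L-inf distortion
```
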
2

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of things, but it also brings the risk of being abused by criminals. Therefore, a series of researches on audio forensics models have arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that only relies on output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the huge threat of adversarial examples to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can get a 99 % attack success rate on score-only black-box models, which is competitive to the best of white-box attacks, and 60 % attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16 % and guarantees 98 % detection accuracy of forensics models.
3

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
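
As an illustration of the substitute-model workflow described above, here is a minimal, hypothetical query-and-train loop: synthetic inputs are labeled by a black-box target (represented here by a hidden linear rule) and a local scikit-learn classifier is fitted on those labels, after which white-box attacks could be crafted on the substitute and transferred. The query distribution, budget, and classifier choice are assumptions for illustration; the paper's partial-retraining scheme refines this basic loop to save queries.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def black_box_target(x):
    """Stand-in for the remote model: returns labels only (no gradients).
    Here: a hidden linear rule the attacker does not know."""
    secret_w = np.array([0.7, -1.2, 0.4, 2.0])
    return (x @ secret_w > 0).astype(int)

def train_substitute(query_budget=200, dim=4, seed=0):
    rng = np.random.default_rng(seed)
    queries = rng.normal(size=(query_budget, dim))   # synthetic query inputs
    labels = black_box_target(queries)               # one label per query
    substitute = LogisticRegression().fit(queries, labels)
    return substitute

sub = train_substitute()
# The substitute's gradients (here, its coefficients) can now drive white-box
# attacks whose adversarial examples are transferred back to the black box.
print("substitute weights:", sub.coef_)
```
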
4

Zhao, Pu, Pin-yu Chen, Siyue Wang, and Xue Lin. "Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6909–16. http://dx.doi.org/10.1609/aaai.v34i04.6173.

Abstract:
Despite the great achievements of modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among these, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt first-order gradient descent, which may come with certain deficiencies such as relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method to design adversarial attacks, which incorporates the zeroth-order gradient estimation technique catering to the black-box attack scenario and second-order natural gradient descent to achieve higher query efficiency. The empirical evaluations on image classification datasets demonstrate that ZO-NGD can obtain significantly lower model query complexities compared with state-of-the-art attack methods.
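
The zeroth-order gradient estimation mentioned in this abstract is commonly implemented as a two-point random-direction estimator, sketched below on a known test function; only loss-value queries are used, with no backpropagation. The smoothing parameter, number of directions, and test function are assumptions, and the paper's natural-gradient machinery is not reproduced here.

```python
import numpy as np

def zo_gradient(loss_fn, x, mu=1e-3, num_dirs=50, seed=0):
    """Two-point zeroth-order gradient estimate:
    g ~ mean_i  d * (f(x + mu*u_i) - f(x)) / mu * u_i,
    built from loss-value queries alone."""
    rng = np.random.default_rng(seed)
    d = x.size
    f0 = loss_fn(x)
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                       # random unit direction
        g += d * (loss_fn(x + mu * u) - f0) / mu * u
    return g / num_dirs

# Sanity check on a known function: f(x) = ||x||^2 has gradient 2x.
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda z: np.sum(z**2), x))   # approximately [2, -4, 1]
```
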
5

Duan, Mingxing, Kenli Li, Jiayan Deng, Bin Xiao, and Qi Tian. "A Novel Multi-Sample Generation Method for Adversarial Attacks." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–21. http://dx.doi.org/10.1145/3506852.

Abstract:
Deep learning models are widely used in daily life, which bring great convenience to our lives, but they are vulnerable to attacks. How to build an attack system with strong generalization ability to test the robustness of deep learning systems is a hot issue in current research, among which the research on black-box attacks is extremely challenging. Most current research on black-box attacks assumes that the input dataset is known. However, in fact, it is difficult for us to obtain detailed information for those datasets. In order to solve the above challenges, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM is mainly composed of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. Firstly, we design a multi-task generation model to learn the distribution of the original dataset. The model first converts an arbitrary signal of a certain distribution into the shared features of the original dataset through deconvolution operations, and then according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Secondly, the generated sample features achieve different outputs through querying the black-box model and training the substitute model, which are used to construct different loss functions to optimize and update the generator and substitute model. Finally, some common white-box attack methods are used to attack the substitute model to generate corresponding adversarial samples, which are utilized to attack the black-box model. We conducted a large number of experiments on the MNIST and CIFAR-10 datasets. The experimental results show that under the same settings and attack algorithms, MsGM achieves better performance than the based models.
6

Chen, Zhiyu, Jianyu Ding, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu, and Yimu Ji. "An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning." Entropy 24, no. 10 (September 27, 2022): 1377. http://dx.doi.org/10.3390/e24101377.

Abstract:
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic based on the natural hidden nature of deep neural networks. Such attacks have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings, resulting in incomplete utilization of query information. Our research, based on the newly proposed Simulator Attack, proves the correctness and usability of feature layer information in a simulator model obtained by meta-learning for the first time. Then, we propose an optimized Simulator Attack+ based on this discovery. Our optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module to provide a warm-start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed to improve query efficiency while maintaining attack performance.
7

Xiang, Fengtao, Jiahui Xu, Wanpeng Zhang, and Weidong Wang. "A Distributed Biased Boundary Attack Method in Black-Box Attack." Applied Sciences 11, no. 21 (November 8, 2021): 10479. http://dx.doi.org/10.3390/app112110479.

Abstract:
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods are quite close to actual scenarios. Research on black-box attack methods and the generation of adversarial samples helps to discover the defects of machine learning models and can strengthen the robustness of machine learning algorithms and models. Such methods require frequent queries, which makes them less efficient. This paper makes improvements in the initial generation and the search for the most effective adversarial examples. Besides, it is found that some indicators can be used to detect attacks, which is a new foundation compared with our previous studies. Firstly, the paper proposes an algorithm to generate initial adversarial samples with a smaller L2 norm; secondly, a combination of particle swarm optimization (PSO) and the biased boundary adversarial attack (BBA), called PSO-BBA, is proposed. Experiments are conducted on ImageNet, and PSO-BBA is compared with the baseline method. The experimental comparison confirms that: (1) a distributed framework for adversarial attack methods is proposed; (2) the proposed initial point selection method reduces query numbers effectively; (3) compared to the original BBA, the proposed PSO-BBA algorithm accelerates convergence and improves attack accuracy; (4) the improved PSO-BBA algorithm has preferable performance on targeted and non-targeted attacks; (5) the mean structural similarity (MSSIM) can be used as an indicator of adversarial attacks.
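
To give a sense of how a PSO-style, query-only search over perturbations works, here is a generic particle swarm sketch that minimizes a black-box score within an L∞ budget. It is not the PSO-BBA algorithm itself: the inertia and acceleration coefficients, the toy score function, and the clipping rule are assumptions for illustration.

```python
import numpy as np

def pso_black_box_attack(score_fn, x_orig, n_particles=20, n_iters=50,
                         bound=0.1, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm search over perturbations delta with
    ||delta||_inf <= bound, using only the scalar score returned by the
    black-box `score_fn` (lower = better for the attacker)."""
    rng = np.random.default_rng(seed)
    d = x_orig.size
    pos = rng.uniform(-bound, bound, size=(n_particles, d))  # particle perturbations
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([score_fn(x_orig + p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -bound, bound)              # stay inside the budget
        vals = np.array([score_fn(x_orig + p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = pos[np.argmin(vals)].copy(), vals.min()
    return x_orig + gbest, gbest_val

# Toy black box: the score stands in for the model's confidence in the true class.
x0 = np.zeros(8)
adv, score = pso_black_box_attack(lambda x: float(np.cos(x).sum()), x0)
print(score)
```
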
8

Wang, Qiuhua, Hui Yang, Guohua Wu, Kim-Kwang Raymond Choo, Zheng Zhang, Gongxun Miao, and Yizhi Ren. "Black-box adversarial attacks on XSS attack detection model." Computers & Security 113 (February 2022): 102554. http://dx.doi.org/10.1016/j.cose.2021.102554.

9

Wang, Lu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Yuan Jiang. "Spanning attack: reinforce black-box attacks with unlabeled data." Machine Learning 109, no. 12 (October 29, 2020): 2349–68. http://dx.doi.org/10.1007/s10994-020-05916-1.

10

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (June 3, 2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The ways to generate these perturbations are called adversarial attacks, which are divided into two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome black-box attackers' lack of access to the internals of the target DNN, many researchers put forward a series of strategies. Previous works include a method of training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation to make the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common datasets for image classification, demonstrate the effectiveness and efficiency of our approach in boosting a targeted black-box attack, and we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%.
11

Dionysiou, Antreas, Vassilis Vassiliades, and Elias Athanasopoulos. "Exploring Model Inversion Attacks in the Black-box Setting." Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 190–206. http://dx.doi.org/10.56553/popets-2023-0012.

Abstract:
Model Inversion (MI) attacks, that aim to recover semantically meaningful reconstructions for each target class, have been extensively studied and demonstrated to be successful in the white-box setting. On the other hand, black-box MI attacks demonstrate low performance in terms of both effectiveness, i.e., reconstructing samples which are identifiable as their ground-truth, and efficiency, i.e., time or queries required for completing the attack process. Whether or not effective and efficient black-box MI attacks can be conducted on complex targets, such as Convolutional Neural Networks (CNNs), currently remains unclear. In this paper, we present a feasibility study in regards to the effectiveness and efficiency of MI attacks in the black-box setting. In this context, we introduce Deep-BMI (Deep Black-box Model Inversion), a framework that supports various black-box optimizers for conducting MI attacks on deep CNNs used for image recognition. Deep-BMI’s most efficient optimizer is based on an adaptive hill climbing algorithm, whereas its most effective optimizer is based on an evolutionary algorithm capable of performing an all-class attack and returning a diversity of images in a single run. For assessing the severity of this threat, we utilize all three evaluation approaches found in the literature. In particular, we (a) conduct a user study with human participants, (b) demonstrate our actual reconstructions along with their ground-truth, and (c) use relevant quantitative metrics. Surprisingly, our results suggest that black-box MI attacks, and for complex models, are comparable, in some cases, to those reported so far in the white-box setting.
12

Khalel Ibrahem Al-Ubaidy, Mahmood. "Black-Box Attack Using Neuro-Identifier." Cryptologia 28, no. 4 (October 2004): 358–72. http://dx.doi.org/10.1080/0161-110491892980.

13

Tu, Chun-Chen, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. "AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 742–49. http://dx.doi.org/10.1609/aaai.v33i01.3301742.

Abstract:
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may give a false sense of model robustness due to inefficient query designs. To bridge this gap, we propose a generic framework for query-efficient blackbox attacks. Our framework, AutoZOOM, which is short for Autoencoder-based Zeroth Order Optimization Method, has two novel building blocks towards efficient black-box attacks: (i) an adaptive random gradient estimation strategy to balance query counts and distortion, and (ii) an autoencoder that is either trained offline with unlabeled data or a bilinear resizing operation for attack acceleration. Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples. In particular, when compared to the standard ZOO method, AutoZOOM can consistently reduce the mean query counts in finding successful adversarial examples (or reaching the same distortion level) by at least 93% on MNIST, CIFAR-10 and ImageNet datasets, leading to novel insights on adversarial robustness.
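
One of AutoZOOM's building blocks, attacking in a reduced dimension and resizing the perturbation back to the input resolution, can be sketched in a few lines; scipy's bilinear zoom is used here as an assumed stand-in for the paper's bilinear resizing operation, and the perturbation sizes are arbitrary.

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_perturbation(delta_small, full_shape):
    """Bilinear upscaling of a low-dimensional perturbation to image size,
    so the zeroth-order search runs over far fewer variables."""
    factors = (full_shape[0] / delta_small.shape[0],
               full_shape[1] / delta_small.shape[1])
    return zoom(delta_small, factors, order=1)   # order=1 -> bilinear interpolation

rng = np.random.default_rng(0)
delta_small = rng.normal(scale=0.05, size=(8, 8))          # search space: 64 variables
delta_full = upscale_perturbation(delta_small, (32, 32))   # applied at 32x32 (1024 pixels)
print(delta_full.shape)
```
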
14

Xu, Jiarong, Yizhou Sun, Xin Jiang, Yanhao Wang, Chunping Wang, Jiangang Lu, and Yang Yang. "Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4299–307. http://dx.doi.org/10.1609/aaai.v36i4.20350.

Abstract:
Adversarial attacks on graphs have attracted considerable research interests. Existing works assume the attacker is either (partly) aware of the victim model, or able to send queries to it. These assumptions are, however, unrealistic. To bridge the gap between theoretical graph attacks and real-world scenarios, in this work, we propose a novel and more realistic setting: strict black-box graph attack, in which the attacker has no knowledge about the victim model at all and is not allowed to send any queries. To design such an attack strategy, we first propose a generic graph filter to unify different families of graph-based models. The strength of attacks can then be quantified by the change in the graph filter before and after attack. By maximizing this change, we are able to find an effective attack strategy, regardless of the underlying model. To solve this optimization problem, we also propose a relaxation technique and approximation theories to reduce the difficulty as well as the computational expense. Experiments demonstrate that, even with no exposure to the model, the Macro-F1 drops 6.4% in node classification and 29.5% in graph classification, which is a significant result compared with existent works.
15

Yu, Mengran, and Shiliang Sun. "Natural Black-Box Adversarial Examples against Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8936–44. http://dx.doi.org/10.1609/aaai.v36i8.20876.

Abstract:
Black-box attacks in deep reinforcement learning usually retrain substitute policies to mimic behaviors of target policies as well as craft adversarial examples, and attack the target policies with these transferable adversarial examples. However, the transferability of adversarial examples is not always guaranteed. Moreover, current methods of crafting adversarial examples only utilize simple pixel space metrics which neglect semantics in the whole images, and thus generate unnatural adversarial examples. To address these problems, we propose an advRL-GAN framework to directly generate semantically natural adversarial examples in the black-box setting, bypassing the transferability requirement of adversarial examples. It formalizes the black-box attack as a reinforcement learning (RL) agent, which explores natural and aggressive adversarial examples with generative adversarial networks and the feedback of target agents. To the best of our knowledge, it is the first RL-based adversarial attack on a deep RL agent. Experimental results on multiple environments demonstrate the effectiveness of advRL-GAN in terms of reward reductions and magnitudes of perturbations, and validate the sparse and targeted property of adversarial perturbations through visualization.
16

Kong, Zixiao, Jingfeng Xue, Zhenyan Liu, Yong Wang, and Weijie Han. "MalDBA: Detection for Query-Based Malware Black-Box Adversarial Attacks." Electronics 12, no. 7 (April 6, 2023): 1751. http://dx.doi.org/10.3390/electronics12071751.

Abstract:
The increasing popularity of Industry 4.0 has led to more and more security risks, and malware adversarial attacks emerge in an endless stream, posing great challenges to user data security and privacy protection. In this paper, we investigate the stateful detection method for artificial intelligence deep learning-based malware black-box attacks, i.e., determining the presence of adversarial attacks rather than detecting whether the input samples are malicious or not. To this end, we propose the MalDBA method for experiments on the VirusShare dataset. We find that query-based black-box attacks produce a series of highly similar historical query results (also known as intermediate samples). By comparing the similarity among these intermediate samples and the trend of prediction scores returned by the detector, we can detect the presence of adversarial samples in indexed samples and thus determine whether an adversarial attack has occurred, and then protect user data security and privacy. The experimental results show that the attack detection rate can reach 100%. Compared to similar studies, our method does not require heavy feature extraction tasks or image conversion and can be operated on complete PE files without requiring a strong hardware platform.
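
The core observation, that query-based attacks emit long runs of near-duplicate inputs, can be illustrated with a simple sliding-window similarity check; the cosine-similarity measure, window size, and thresholds below are assumptions, not MalDBA's actual scoring.

```python
import numpy as np

def looks_like_query_attack(history, new_query, window=20, sim_threshold=0.95):
    """Flag a query stream as a suspected black-box attack when the new
    query is nearly identical (cosine similarity) to many recent queries."""
    recent = history[-window:]
    if not recent:
        return False
    q = new_query / (np.linalg.norm(new_query) + 1e-12)
    sims = [float(q @ (h / (np.linalg.norm(h) + 1e-12))) for h in recent]
    close = sum(s > sim_threshold for s in sims)
    return close >= window // 2   # many near-duplicates -> suspicious

history = []
rng = np.random.default_rng(0)
base = rng.normal(size=128)
for step in range(30):
    query = base + 0.01 * rng.normal(size=128)   # attacker-style tiny variations
    print(step, looks_like_query_attack(history, query))
    history.append(query)
```
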
17

Che, Zhaohui, Ali Borji, Guangtao Zhai, Suiyi Ling, Jing Li, and Patrick Le Callet. "A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3405–13. http://dx.doi.org/10.1609/aaai.v34i04.5743.

Abstract:
Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of pre-trained source models can transfer to other new target models, thus pose a security threat to black-box applications (when the attackers have no access to the target models). Despite adopting diverse architectures and parameters, source and target models often share similar decision boundaries. Therefore, if an adversary is capable of fooling several source models concurrently, it can potentially capture intrinsic transferable adversarial information that may allow it to fool a broad class of other black-box target models. Current ensemble attacks, however, only consider a limited number of source models to craft an adversary, and obtain poor transferability. In this paper, we propose a novel black-box attack, dubbed Serial-Mini-Batch-Ensemble-Attack (SMBEA). SMBEA divides a large number of pre-trained source models into several mini-batches. For each single batch, we design 3 new ensemble strategies to improve the intra-batch transferability. Besides, we propose a new algorithm that recursively accumulates the “long-term” gradient memories of the previous batch to the following batch. This way, the learned adversarial information can be preserved and the inter-batch transferability can be improved. Experiments indicate that our method outperforms state-of-the-art ensemble attacks over multiple pixel-to-pixel vision tasks including image translation and salient region prediction. Our method successfully fools two online black-box saliency prediction systems including DeepGaze-II (Kummerer 2017) and SALICON (Huang et al. 2017). Finally, we also contribute a new repository to promote the research on adversarial attack and defense over pixel-to-pixel tasks: https://github.com/CZHQuality/AAA-Pix2pix.
18

Wei, Zhipeng, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. "Boosting the Transferability of Video Adversarial Examples via Temporal Translation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2659–67. http://dx.doi.org/10.1609/aaai.v36i3.20168.

Abstract:
Although deep-learning based video recognition models have achieved remarkable success, they are vulnerable to adversarial examples that are generated by adding human-imperceptible perturbations on clean video samples. As indicated in recent studies, adversarial examples are transferable, which makes it feasible for black-box attacks in real-world applications. Nevertheless, most existing adversarial attack methods have poor transferability when attacking other video models and transfer-based attacks on video models are still unexplored. To this end, we propose to boost the transferability of video adversarial examples for black-box attacks on video recognition models. Through extensive analysis, we discover that different video recognition models rely on different discriminative temporal patterns, leading to the poor transferability of video adversarial examples. This motivates us to introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporal translated video clips. By generating adversarial examples over translated videos, the resulting adversarial examples are less sensitive to temporal patterns existed in the white-box model being attacked and thus can be better transferred. Extensive experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples. For transfer-based attack against video recognition models, it achieves a 61.56% average attack success rate on the Kinetics-400 and 48.60% on the UCF-101.
19

Zhang, Qikun, Yuzhi Zhang, Yanling Shao, Mengqi Liu, Jianyong Li, Junling Yuan, and Ruifang Wang. "Boosting Adversarial Attacks with Nadam Optimizer." Electronics 12, no. 6 (March 20, 2023): 1464. http://dx.doi.org/10.3390/electronics12061464.

Abstract:
Deep neural networks are extremely vulnerable to attacks and threats from adversarial examples. These adversarial examples deliberately crafted by attackers can easily fool classification models by adding imperceptibly tiny perturbations on clean images. This brings a great challenge to image security for deep learning. Therefore, studying and designing attack algorithms for generating adversarial examples is essential for building robust models. Moreover, adversarial examples are transferable in that they can mislead multiple different classifiers across models. This makes black-box attacks feasible for practical applications. However, most attack methods have low success rates and weak transferability against black-box models. This is because they often overfit the model during the production of adversarial examples. To address this issue, we propose a Nadam iterative fast gradient method (NAI-FGM), which combines an improved Nadam optimizer with gradient-based iterative attacks. Specifically, we introduce the look-ahead momentum vector and the adaptive learning rate component based on the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). The look-ahead momentum vector is dedicated to making the loss function converge faster and get rid of the poor local maximum. Additionally, the adaptive learning rate component is used to help the adversarial example to converge to a better extreme point by obtaining adaptive update directions according to the current parameters. Furthermore, we also carry out different input transformations to further enhance the attack performance before using NAI-FGM for attack. Finally, we consider attacking the ensemble model. Extensive experiments show that the NAI-FGM has stronger transferability and black-box attack capability than advanced momentum-based iterative attacks. In particular, when using the adversarial examples produced by way of ensemble attack to test the adversarially trained models, the NAI-FGM improves the success rate by 8% to 11% over the other attack methods. Last but not least, the NAI-DI-TI-SI-FGM combined with the input transformation achieves a success rate of 91.3% on average.
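
For context, the momentum-based iterative baseline that NAI-FGM extends (MI-FGSM) can be sketched as follows; the Nadam-style look-ahead and adaptive learning rate are only noted in comments, and the gradient oracle, step sizes, and toy loss are assumptions.

```python
import numpy as np

def mi_fgsm(x_orig, grad_fn, epsilon=0.1, num_iter=10, mu=1.0):
    """Momentum Iterative FGSM, the base update that NAI-FGM extends with a
    Nadam-style look-ahead momentum vector and an adaptive learning rate."""
    alpha = epsilon / num_iter         # per-step size
    g = np.zeros_like(x_orig)          # accumulated (normalized) gradient momentum
    x = x_orig.copy()
    for _ in range(num_iter):
        grad = grad_fn(x)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)  # L1-normalized accumulation
        x = x + alpha * np.sign(g)                           # sign step
        x = np.clip(x, x_orig - epsilon, x_orig + epsilon)   # project back to the L-inf ball
    return x

# Toy loss to maximize: alignment with a fixed "decision boundary" normal.
w = np.array([0.5, -1.0, 2.0])
adv = mi_fgsm(np.zeros(3), grad_fn=lambda x: w, epsilon=0.1)
print(adv)   # moves by +/- 0.1 along sign(w)
```
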
20

Koga, Kazuki, and Kazuhiro Takemoto. "Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification." Algorithms 15, no. 5 (April 22, 2022): 144. http://dx.doi.org/10.3390/a15050144.

Abstract:
Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called universal adversarial perturbation (UAP), are a realistic security threat to the practical application of a DNN for medical imaging. Given that computer-based systems are generally operated under a black-box condition in which only input queries are allowed and outputs are accessible, the impact of UAPs seems to be limited because well-used algorithms for generating UAPs are limited to white-box conditions in which adversaries can access model parameters. Nevertheless, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs to demonstrate that UAPs are easily generatable using a relatively small dataset under black-box conditions with representative DNN-based medical image classifications. Black-box UAPs can be used to conduct both nontargeted and targeted attacks. Overall, the black-box UAPs showed high attack success rates (40–90%). The vulnerability of the black-box UAPs was observed in several model architectures. The results indicate that adversaries can also generate UAPs through a simple procedure under the black-box condition to foil or control diagnostic medical imaging systems based on DNNs, and that UAPs are a more serious security threat.
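
A hill-climbing search of the kind described here needs nothing beyond model outputs; the sketch below searches for one perturbation that lowers accuracy on a small image set under an L∞ budget. The toy thresholding "model", proposal distribution, and acceptance rule are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def hill_climb_uap(predict, images, labels, xi=0.1, iters=200, step=0.02, seed=0):
    """Search for a single universal perturbation (L-inf budget `xi`) by
    proposing random tweaks and keeping those that lower the accuracy
    of the black-box `predict` on a small image set."""
    rng = np.random.default_rng(seed)
    uap = np.zeros_like(images[0])

    def acc(p):
        return float(np.mean(predict(np.clip(images + p, 0.0, 1.0)) == labels))

    best = acc(uap)
    for _ in range(iters):
        candidate = np.clip(uap + step * rng.choice([-1.0, 1.0], size=uap.shape), -xi, xi)
        a = acc(candidate)
        if a <= best:               # accept if the model does no better
            uap, best = candidate, a
    return uap, best

# Toy black box: thresholds the mean pixel value.
predict = lambda batch: (batch.mean(axis=(1, 2)) > 0.5).astype(int)
imgs = np.random.default_rng(1).uniform(0.4, 0.6, size=(16, 8, 8))
labels = predict(imgs)
uap, fooled_acc = hill_climb_uap(predict, imgs, labels)
print("accuracy under UAP:", fooled_acc)
```
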
21

Du, Xiaohu, Jie Yu, Zibo Yi, Shasha Li, Jun Ma, Yusong Tan, and Qinbo Wu. "A Hybrid Adversarial Attack for Different Application Scenarios." Applied Sciences 10, no. 10 (May 21, 2020): 3559. http://dx.doi.org/10.3390/app10103559.

Abstract:
Adversarial attack against natural language has been a hot topic in the field of artificial intelligence security in recent years. It is mainly to study the methods and implementation of generating adversarial examples. The purpose is to better deal with the vulnerability and security of deep learning systems. According to whether the attacker understands the deep learning model structure, the adversarial attack is divided into black-box attack and white-box attack. In this paper, we propose a hybrid adversarial attack for different application scenarios. Firstly, we propose a novel black-box attack method of generating adversarial examples to trick the word-level sentiment classifier, which is based on differential evolution (DE) algorithm to generate semantically and syntactically similar adversarial examples. Compared with existing genetic algorithm based adversarial attacks, our algorithm can achieve a higher attack success rate while maintaining a lower word replacement rate. At the 10% word substitution threshold, we have increased the attack success rate from 58.5% to 63%. Secondly, when we understand the model architecture and parameters, etc., we propose a white-box attack with gradient-based perturbation against the same sentiment classifier. In this attack, we use a Euclidean distance and cosine distance combined metric to find the most semantically and syntactically similar substitution, and we introduce the coefficient of variation (CV) factor to control the dispersion of the modified words in the adversarial examples. More dispersed modifications can increase human imperceptibility and text readability. Compared with the existing global attack, our attack can increase the attack success rate and make modification positions in generated examples more dispersed. We’ve increased the global search success rate from 75.8% to 85.8%. Finally, we can deal with different application scenarios by using these two attack methods, that is, whether we understand the internal structure and parameters of the model, we can all generate good adversarial examples.
22

Chitic, Raluca, Ali Osman Topal, and Franck Leprévost. "Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box." Applied Sciences 12, no. 14 (July 21, 2022): 7339. http://dx.doi.org/10.3390/app12147339.

Abstract:
Through the addition of humanly imperceptible noise to an image classified as belonging to a category ca, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify a modified image as belonging to any predefined target class ct≠ca. To achieve a better understanding of the inner workings of adversarial attacks, this study analyzes the adversarial images created by two completely opposite attacks against 10 ImageNet-trained CNNs. A total of 2×437 adversarial images are created by EAtarget,C, a black-box evolutionary algorithm (EA), and by the basic iterative method (BIM), a white-box, gradient-based attack. We inspect and compare these two sets of adversarial images from different perspectives: the behavior of CNNs at smaller image regions, the image noise frequency, the adversarial image transferability, the image texture change, and penultimate CNN layer activations. We find that texture change is a side effect rather than a means for the attacks and that ct-relevant features only build up significantly from image regions of size 56×56 onwards. In the penultimate CNN layers, both attacks increase the activation of units that are positively related to ct and units that are negatively related to ca. In contrast to EAtarget,C’s white noise nature, BIM predominantly introduces low-frequency noise. BIM affects the original ca features more than EAtarget,C, thus producing slightly more transferable adversarial images. However, the transferability with both attacks is low, since the attacks’ ct-related information is specific to the output layers of the targeted CNN. We find that the adversarial images are actually more transferable at regions with sizes of 56×56 than at full scale.
23

Ding, Daizong, Mi Zhang, Fuli Feng, Yuanmin Huang, Erling Jiang, and Min Yang. "Black-Box Adversarial Attack on Time Series Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7358–68. http://dx.doi.org/10.1609/aaai.v37i6.25896.

Abstract:
With the increasing use of deep neural network (DNN) in time series classification (TSC), recent work reveals the threat of adversarial attack, where the adversary can construct adversarial examples to cause model mistakes. However, existing researches on the adversarial attack of TSC typically adopt an unrealistic white-box setting with model details transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and requires the adversarial example to be also stealthy. Theoretical analyses reveal that the key lies in: estimating black-box gradient with diversity and non-convexity of TSC models resolved, and restricting the l0 norm of the perturbation to construct adversarial samples. Towards this end, we propose a new framework named BlackTreeS, which solves the hard optimization issue for adversarial example construction with two simple yet effective modules. In particular, we propose a tree search strategy to find influential positions in a sequence, and independently estimate the black-box gradients for these positions. Extensive experiments on three real-world TSC datasets and five DNN based models validate the effectiveness of BlackTreeS, e.g., it improves the attack success rate from 19.3% to 27.3%, and decreases the detection success rate from 90.9% to 6.8% for LSTM on the UWave dataset.
24

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (November 14, 2020): 8079. http://dx.doi.org/10.3390/app10228079.

Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks. Despite many efforts to make models robust against adversarial example attacks, this has proven to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented by “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know what algorithm the attacker may use. A natural question is: Can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defend against that particular attack method, but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of the methods, the network could lose accuracy to the attack if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even for black-box settings where the attacker has restricted information on the target network.
25

Wei, Zhipeng, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, and Yu-Gang Jiang. "Heuristic Black-Box Adversarial Attacks on Video Recognition Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12338–45. http://dx.doi.org/10.1609/aaai.v34i07.6918.

Abstract:
We study the problem of attacking video recognition models in the black-box setting, where the model information is unknown and the adversary can only make queries to detect the predicted top-1 class and its probability. Compared with the black-box attack on images, attacking videos is more challenging as the computation cost for searching the adversarial perturbations on a video is much higher due to its high dimensionality. To overcome this challenge, we propose a heuristic black-box attack model that generates adversarial perturbations only on the selected frames and regions. More specifically, a heuristic-based algorithm is proposed to measure the importance of each frame in the video towards generating the adversarial examples. Based on the frames' importance, the proposed algorithm heuristically searches a subset of frames where the generated adversarial example has strong adversarial attack ability while keeping the perturbations below the given bound. Besides, to further boost the attack efficiency, we propose to generate the perturbations only on the salient regions of the selected frames. In this way, the generated perturbations are sparse in both temporal and spatial domains. Experimental results of attacking two mainstream video recognition methods on the UCF-101 dataset and the HMDB-51 dataset demonstrate that the proposed heuristic black-box adversarial attack method can significantly reduce the computation cost and lead to more than 28% reduction in query numbers for the untargeted attack on both datasets.
26

Fu, Zhongwang, and Xiaohui Cui. "ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model." Entropy 25, no. 2 (January 22, 2023): 215. http://dx.doi.org/10.3390/e25020215.

Abstract:
Research on image-classification adversarial attacks is crucial in the realm of artificial intelligence (AI) security. Most image-classification adversarial attack methods are designed for white-box settings, demanding target model gradients and network architectures, which is less practical when facing real-world cases. However, black-box adversarial attacks are immune to the above limitations, and reinforcement learning (RL) seems to be a feasible solution for exploring an optimized evasion policy. Unfortunately, existing RL-based works perform worse than expected in terms of attack success rate. In light of these challenges, we propose an ensemble-learning-based adversarial attack (ELAA) targeting image-classification models, which aggregates and optimizes multiple reinforcement learning (RL) base learners and further reveals the vulnerabilities of learning-based image-classification models. Experimental results show that the attack success rate for the ensemble model is about 35% higher than for a single model, and the attack success rate of ELAA is 15% higher than those of the baseline methods.
27

Lapid, Raz, Zvika Haramaty, and Moshe Sipper. "An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Convolutional Neural Networks." Algorithms 15, no. 11 (October 31, 2022): 407. http://dx.doi.org/10.3390/a15110407.

Abstract:
Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output. Existing black-box methods for creating adversarial instances are costly, often using gradient estimation or training a replacement network. This paper introduces the Query-Efficient Evolutionary Attack (QuEry Attack), an untargeted, score-based, black-box attack. QuEry Attack is based on a novel objective function that can be used in gradient-free optimization problems. The attack only requires access to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method more suitable for real-life situations. We test its performance with three different, commonly used, pretrained image-classification models—Inception-v3, ResNet-50, and VGG-16-BN—against three benchmark datasets: MNIST, CIFAR-10 and ImageNet. Furthermore, we evaluate QuEry Attack’s performance on non-differential transformation defenses and robust models. Our results demonstrate the superior performance of QuEry Attack, both in terms of accuracy score and query efficiency.
28

Guan, Yuting, Junjiang He, Tao Li, Hui Zhao, and Baoqiang Ma. "SSQLi: A Black-Box Adversarial Attack Method for SQL Injection Based on Reinforcement Learning." Future Internet 15, no. 4 (March 30, 2023): 133. http://dx.doi.org/10.3390/fi15040133.

Abstract:
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injection detection techniques, which have progressed from traditional signature-based detection methods to machine- and deep-learning-based detection models. These detection techniques have demonstrated promising results on existing datasets; however, most studies have overlooked the impact of adversarial attacks, particularly black-box adversarial attacks, on detection methods. This study addressed the shortcomings of current SQL injection detection techniques and proposed a reinforcement-learning-based black-box adversarial attack method. The proposal included an innovative vector transformation approach for the original SQL injection payload, a comprehensive attack-rule matrix, and a reinforcement-learning-based method for the adaptive generation of adversarial examples. Our approach was evaluated on existing web application firewalls (WAF) and detection models based on machine- and deep-learning methods, and the generated adversarial examples successfully bypassed the detection method at a rate of up to 97.39%. Furthermore, there was a substantial decrease in the detection accuracy of the model after multiple attacks had been carried out on the detection model via the adversarial examples.
29

Chang, Heng, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. "A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3389–96. http://dx.doi.org/10.1609/aaai.v34i04.5741.

Abstract:
With the great success of graph embedding models in both academia and industry, the robustness of graph embedding against adversarial attacks inevitably becomes a central problem in the graph learning domain. Regardless of the fruitful progress, most current works perform the attack in a white-box fashion: they need access to the model predictions and labels to construct their adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical for real graph learning systems. This paper promotes current frameworks in a more general and flexible sense: we aim to attack various kinds of graph embedding models in a black-box fashion. To this end, we begin by investigating the theoretical connections between graph signal processing and graph embedding models in a principled way and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. As such, a generalized adversarial attacker, GF-Attack, is constructed from the graph filter and feature matrix. Instead of accessing any knowledge of the target classifiers used in graph embedding, GF-Attack performs the attack only on the graph filter in a black-box fashion. To validate the generalization of GF-Attack, we construct the attacker on four popular graph embedding models. Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets. In particular, with our attack, even small graph perturbations such as a one-edge flip can consistently deliver strong attacks against different graph embedding models.
30

Yang, Bo, Kaiyong Xu, Hengjun Wang, and Hengwei Zhang. "Random Transformation of image brightness for adversarial attack." Journal of Intelligent & Fuzzy Systems 42, no. 3 (February 2, 2022): 1693–704. http://dx.doi.org/10.3233/jifs-211157.

Abstract:
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images, but make the model output inaccurate predictions. Before DNNs are deployed, adversarial attacks can thus be an important method to evaluate and select robust models in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Based on image augmentation methods, this paper found that random transformation of image brightness can eliminate overfitting in the generation of adversarial examples and improve their transferability. In light of this phenomenon, this paper proposes an adversarial example generation method, which can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a more robust gradient-based attack and to generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset have demonstrated the effectiveness of the aforementioned method. Whether on normally or adversarially trained networks, our method has a higher success rate for black-box attacks than other attack methods based on data augmentation. It is hoped that this method can help evaluate and improve the robustness of models.
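
The idea of averaging gradients over randomly brightness-scaled copies before taking a sign step can be sketched as a single FGSM-style update; the brightness range, the toy gradient oracle, and the clipping to [0, 1] are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def brightness_augmented_fgsm(x, grad_fn, epsilon=0.03, n_copies=5, seed=0):
    """One FGSM-style step whose gradient is averaged over randomly
    brightness-scaled copies of the input, intended to reduce overfitting
    to the white-box model and improve transferability."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_copies):
        scale = rng.uniform(0.8, 1.2)              # random brightness factor
        g += grad_fn(np.clip(scale * x, 0.0, 1.0)) # gradient on the transformed copy
    g /= n_copies
    return np.clip(x + epsilon * np.sign(g), 0.0, 1.0)

# Toy gradient oracle: pushes pixels away from mid-gray.
adv = brightness_augmented_fgsm(np.full((4, 4), 0.5), lambda z: z - 0.5, epsilon=0.03)
print(adv)
```
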
31

Suryanto, Naufal, Hyoeun Kang, Yongsu Kim, Youngyeo Yun, Harashta Tatimma Larasati, and Howon Kim. "A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization." Sensors 20, no. 24 (December 14, 2020): 7158. http://dx.doi.org/10.3390/s20247158.

Abstract:
Adversarial attack techniques in deep learning have been studied extensively due to their stealthiness to human eyes and potentially dangerous consequences when applied to real-life applications. However, current attack methods in black-box settings mainly employ a large number of queries for crafting their adversarial examples, hence making them very likely to be detected and responded to by the target system (e.g., an artificial intelligence (AI) service provider) due to the high traffic volume. A recent proposal able to address the large query problem utilizes a gradient-free approach based on the Particle Swarm Optimization (PSO) algorithm. Unfortunately, this original approach tends to have a low attack success rate, possibly due to the model’s difficulty in escaping local optima. This obstacle can be overcome by employing a multi-group approach for the PSO algorithm, by which the PSO particles can be redistributed, preventing them from being trapped in local optima. In this paper, we present a black-box adversarial attack which can significantly increase the success rate of the PSO-based attack while maintaining a low number of queries by launching the attack in a distributed manner. Attacks are executed from multiple nodes, disseminating queries among the nodes, hence reducing the possibility of being recognized by the target system while also increasing scalability. Furthermore, we utilize Multi-Group PSO with Random Redistribution (MGRR-PSO) for perturbation generation, performing better than the original approach against local optima, thus achieving a higher success rate. Additionally, we propose to efficiently remove excessive perturbation (i.e., perturbation pruning) by utilizing again the MGRR-PSO rather than a standard iterative method as used in the original approach. We perform five different experiments: comparing our attack’s performance with existing algorithms, testing in the high-dimensional space of the ImageNet dataset, examining our hyperparameters (i.e., particle size, number of clients, search boundary), and testing a real digital attack on Google Cloud Vision. Our attack obtains a 100% success rate on the MNIST and CIFAR-10 datasets and successfully fools Google Cloud Vision as a proof of a real digital attack, while maintaining a lower query count and wide applicability.
32

Maheshwary, Rishabh, Saket Maheshwary, and Vikram Pudi. "Generating Natural Language Attacks in a Hard Label Black Box Setting." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13525–33. http://dx.doi.org/10.1609/aaai.v35i15.17595.

Abstract:
We study an important and challenging task of attacking natural language processing models in a hard label black box setting. We propose a decision-based attack strategy that crafts high quality adversarial examples on text classification and entailment tasks. Our proposed attack strategy leverages population-based optimization algorithm to craft plausible and semantically similar adversarial examples by observing only the top label predicted by the target model. At each iteration, the optimization procedure allow word replacements that maximizes the overall semantic similarity between the original and the adversarial text. Further, our approach does not rely on using substitute models or any kind of training data. We demonstrate the efficacy of our proposed approach through extensive experimentation and ablation studies on five state-of-the-art target models across seven benchmark datasets. In comparison to attacks proposed in prior literature, we are able to achieve a higher success rate with lower word perturbation percentage that too in a highly restricted setting.
33

Chen, Yiding, and Xiaojin Zhu. "Optimal Attack against Autoregressive Models by Manipulating the Environment." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3545–52. http://dx.doi.org/10.1609/aaai.v34i04.5760.

Abstract:
We describe an optimal adversarial attack formulation against autoregressive time series forecast using Linear Quadratic Regulator (LQR). In this threat model, the environment evolves according to a dynamical system; an autoregressive model observes the current environment state and predicts its future values; an attacker has the ability to modify the environment state in order to manipulate future autoregressive forecasts. The attacker's goal is to force autoregressive forecasts into tracking a target trajectory while minimizing its attack expenditure. In the white-box setting where the attacker knows the environment and forecast models, we present the optimal attack using LQR for linear models, and Model Predictive Control (MPC) for nonlinear models. In the black-box setting, we combine system identification and MPC. Experiments demonstrate the effectiveness of our attacks.
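
The white-box case described here reduces to a standard discrete-time LQR problem; the scalar system below illustrates the kind of computation involved (Riccati solution, feedback gain, simulated attack inputs). The dynamics, cost weights, and zero target trajectory are assumptions; the paper treats general autoregressive forecasters and uses MPC for the nonlinear and black-box cases.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Minimal scalar illustration: drive the environment state (and hence the
# forecast that tracks it) toward a target while penalizing attack effort.
A = np.array([[0.9]])   # assumed environment dynamics x_{t+1} = A x_t + B u_t
B = np.array([[1.0]])   # attacker's control channel
Q = np.array([[1.0]])   # cost on deviation from the target trajectory (here: 0)
R = np.array([[0.5]])   # cost on attack expenditure u_t

P = solve_discrete_are(A, B, Q, R)                  # Riccati solution
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)  # optimal feedback gain

x = np.array([2.0])     # initial deviation of the environment state
for t in range(5):
    u = -K @ x          # optimal attack input at step t
    x = A @ x + B @ u
    print(f"t={t}  attack u={u[0]:+.3f}  state x={x[0]:+.3f}")
```
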
Styles: APA, Harvard, Vancouver, ISO, etc.
34

Cao, Han, Chengxiang Si, Qindong Sun, Yanxiao Liu, Shancang Li, and Prosanta Gope. "ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers." Entropy 24, no. 3 (March 15, 2022): 412. http://dx.doi.org/10.3390/e24030412.

Full text of the source
Abstract:
The vulnerability of deep neural network (DNN)-based systems makes them susceptible to adversarial perturbations, which may cause classification tasks to fail. In this work, we propose an adversarial attack model based on the Artificial Bee Colony (ABC) algorithm that generates adversarial samples without any gradient evaluation or substitute-model training, further increasing the chance of task failure caused by adversarial perturbation. In untargeted attacks, the proposed method obtained success rates of 100%, 98.6%, and 90.0% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only achieves a high attack success rate with fewer queries in the black-box setting, but also breaks some existing defenses to a large extent, and it is not limited by model structure or size, providing further research directions for deep learning evasion attacks and defenses.
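A bare-bones artificial-bee-colony loop for black-box perturbation search might look like the following sketch; score_fn, the L-infinity bound, and every hyperparameter are assumptions made for illustration, not the ABCAttack implementation.

import numpy as np

def abc_attack(score_fn, x_orig, n_sources=20, eps=0.1, iters=200,
               limit=15, seed=0):
    rng = np.random.default_rng(seed)
    dim = x_orig.size
    food = rng.uniform(-eps, eps, (n_sources, dim))      # candidate perturbations
    fit = np.array([score_fn(x_orig + f) for f in food])
    trials = np.zeros(n_sources, dtype=int)

    def explore(i):
        k = rng.integers(n_sources - 1)
        k += k >= i                                      # partner source != i
        j = rng.integers(dim)
        cand = food[i].copy()
        cand[j] = np.clip(cand[j] + rng.uniform(-1, 1) * (cand[j] - food[k, j]),
                          -eps, eps)
        val = score_fn(x_orig + cand)
        if val > fit[i]:
            food[i], fit[i], trials[i] = cand, val, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                       # employed bees
            explore(i)
        probs = fit - fit.min() + 1e-9
        probs /= probs.sum()
        for i in rng.choice(n_sources, n_sources, p=probs):   # onlooker bees
            explore(i)
        for i in np.where(trials >= limit)[0]:           # scout bees restart stale sources
            food[i] = rng.uniform(-eps, eps, dim)
            fit[i] = score_fn(x_orig + food[i])
            trials[i] = 0
    return x_orig + food[fit.argmax()]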
Styles: APA, Harvard, Vancouver, ISO, etc.
35

Lin, Bin, Jixin Chen, Zhihong Zhang, Yanlin Lai, Xinlong Wu, Lulu Tian, and Wangchi Cheng. "An Adversarial Network-based Multi-model Black-box Attack." Intelligent Automation & Soft Computing 29, no. 3 (2021): 641–49. http://dx.doi.org/10.32604/iasc.2021.016818.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Mun, Hyunjun, Sunggwan Seo, Baehoon Son, and Joobeom Yun. "Black-Box Audio Adversarial Attack Using Particle Swarm Optimization." IEEE Access 10 (2022): 23532–44. http://dx.doi.org/10.1109/access.2022.3152526.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Haq, Ijaz Ul, Zahid Younas Khan, Arshad Ahmad, Bashir Hayat, Asif Khan, Ye-Eun Lee, and Ki-Il Kim. "Evaluating and Enhancing the Robustness of Sustainable Neural Relationship Classifiers Using Query-Efficient Black-Box Adversarial Attacks." Sustainability 13, no. 11 (May 24, 2021): 5892. http://dx.doi.org/10.3390/su13115892.

Full text of the source
Abstract:
Neural relation extraction (NRE) models are the backbone of various machine learning tasks, including knowledge base enrichment, information extraction, and document summarization. Despite the vast popularity of these models, their vulnerabilities remain largely unknown; this is of high concern given their growing use in security-sensitive, sustainability-related applications such as question answering and machine translation. In this study, we demonstrate that NRE models are inherently vulnerable to adversarially crafted text that contains imperceptible modifications of the original but can mislead the target NRE model. Specifically, we propose a novel, sustainable term frequency-inverse document frequency (TFIDF)-based black-box adversarial attack to evaluate the robustness of state-of-the-art CNN-, CGN-, LSTM-, and BERT-based models on two benchmark RE datasets. Compared with white-box adversarial attacks, black-box attacks impose further constraints on the query budget; thus, efficient black-box attacks remain an open problem. By applying TFIDF to the correctly classified sentences of each class label in the test set, the proposed query-efficient method achieves a reduction of up to 70% in the number of queries to the target model for identifying important text items. Based on these items, we design both character- and word-level perturbations to generate adversarial examples. The proposed attack successfully reduces the accuracy of six representative models from an average F1 score of 80% to below 20%. The generated adversarial examples were evaluated by humans and judged semantically similar. Moreover, we discuss defense strategies that mitigate such attacks and the countermeasures that could be deployed to improve the sustainability of the proposed scheme.
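As a toy illustration of the query-saving idea (not the paper's exact pipeline), word importance can be pre-ranked offline with TF-IDF so that only the top-ranked words of each class are ever perturbed and queried; the snippet below uses scikit-learn.

from sklearn.feature_extraction.text import TfidfVectorizer

def rank_words_by_tfidf(sentences_of_class):
    """Return the vocabulary of one class ordered by mean TF-IDF weight (high to low)."""
    vec = TfidfVectorizer(lowercase=True)
    tfidf = vec.fit_transform(sentences_of_class)   # shape: (n_sentences, vocab_size)
    mean_weight = tfidf.mean(axis=0).A1             # average weight per word
    vocab = vec.get_feature_names_out()
    return sorted(zip(vocab, mean_weight), key=lambda t: -t[1])

# example: perturb only the top-ranked words, which keeps the query count low
ranked = rank_words_by_tfidf([
    "the attack fools the classifier",
    "the classifier mislabels the crafted sentence",
])
print(ranked[:3])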
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Croce, Francesco, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, and Matthias Hein. "Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6437–45. http://dx.doi.org/10.1609/aaai.v36i6.20595.

Full text of the source
Abstract:
We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting. Sparse-RS does not rely on substitute models and achieves state-of-the-art success rate and query efficiency for multiple sparse attack models: L0-bounded perturbations, adversarial patches, and adversarial frames. The L0-version of untargeted Sparse-RS outperforms all black-box and even all white-box attacks for different models on MNIST, CIFAR-10, and ImageNet. Moreover, our untargeted Sparse-RS achieves very high success rates even for the challenging settings of 20x20 adversarial patches and 2-pixel wide adversarial frames for 224x224 images. Finally, we show that Sparse-RS can be applied to generate targeted universal adversarial patches where it significantly outperforms the existing approaches. Our code is available at https://github.com/fra31/sparse-rs.
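A bare-bones random-search loop for an L0-bounded (k-pixel) perturbation in this spirit is sketched below; loss_fn, the pixel-value range, and the single-coordinate proposal are simplifying assumptions rather than the actual Sparse-RS schedule.

import numpy as np

def sparse_random_search(loss_fn, x, k=20, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    dim = x.size
    idx = rng.choice(dim, k, replace=False)        # current support of k perturbed pixels
    vals = rng.uniform(0, 1, k)                    # values written onto the support

    def apply(idx, vals):
        adv = x.copy().ravel()
        adv[idx] = vals
        return adv.reshape(x.shape)

    best = loss_fn(apply(idx, vals))
    for _ in range(iters):
        new_idx, new_vals = idx.copy(), vals.copy()
        j = rng.integers(k)                        # propose swapping one support pixel
        new_idx[j] = rng.integers(dim)             # (duplicates tolerated in this toy version)
        new_vals[j] = rng.uniform(0, 1)
        cand = loss_fn(apply(new_idx, new_vals))
        if cand < best:                            # accept only improvements
            idx, vals, best = new_idx, new_vals, cand
    return apply(idx, vals)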
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Wang, Fangwei, Yuanyuan Lu, Changguang Wang, and Qingru Li. "Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–9. http://dx.doi.org/10.1155/2021/8736946.

Full text of the source
Abstract:
5G is about to open a Pandora's box of security threats to the Internet of Things (IoT). Key technologies introduced by the 5G network, such as network function virtualization and edge computing, bring new security threats and risks to the Internet infrastructure, so stronger malware detection and defense are required. Nowadays, deep learning (DL) is widely used in malware detection, and recent research has demonstrated that adversarial attacks pose a hazard to DL-based models. The key to enhancing the attack resistance of malware detection systems is to generate effective adversarial samples. However, many existing methods for generating adversarial samples rely on manual feature extraction or white-box models, which makes them inapplicable in real scenarios. This paper presents an effective binary manipulation-based attack framework that generates adversarial samples with an evolutionary learning algorithm. The framework chooses appropriate action sequences to modify malicious samples so that the modified malware can circumvent the detection system. The evolutionary algorithm adaptively simplifies the modification actions and makes the adversarial samples more targeted. Our approach can efficiently generate adversarial samples without human intervention, and the generated samples can effectively defeat DL-based malware detection models while preserving the executability and malicious behavior of the original malware. We apply the generated adversarial samples to attack the detection engines of VirusTotal. Experimental results show that the adversarial samples generated by our method reach an evasion success rate of 47.8%, outperforming other attack methods. By adding adversarial samples to the training process, we retrain the MalConv network and show that its detection accuracy improves by 10.3%.
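The following sketch shows what an evolutionary search over functionality-preserving binary manipulations can look like; ACTIONS, apply_actions, and evasion_score are hypothetical placeholders, not the framework's actual action set or fitness function.

import random

ACTIONS = ["append_random_bytes", "add_unused_section", "rename_section",
           "append_benign_strings", "pad_overlay"]

def evolve_action_sequence(malware_bytes, apply_actions, evasion_score,
                           pop_size=20, seq_len=6, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in range(seq_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # rank action sequences by how well the modified binary evades the detector
        scored = sorted(pop, key=lambda s: evasion_score(apply_actions(malware_bytes, s)),
                        reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if rng.random() < 0.3:                       # occasional mutation
                child[rng.randrange(seq_len)] = rng.choice(ACTIONS)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda s: evasion_score(apply_actions(malware_bytes, s)))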
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Fang, Yong, Cheng Huang, Yijia Xu, and Yang Li. "RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning." Future Internet 11, no. 8 (August 14, 2019): 177. http://dx.doi.org/10.3390/fi11080177.

Full text of the source
Abstract:
With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against such models become an inevitable problem, yet research on defending cross-site scripting (XSS) attack detection models against them is lacking, so it is extremely important to design a method that can effectively harden the detection model. In this paper, we present a reinforcement learning-based method (called RLXSS) that aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Second, the detection model and the adversarial model are trained alternately: after each round, the newly discovered adversarial samples are labeled as malicious and used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining their malicious features. Moreover, by alternately training the detection model and the adversarial attack model, the escape rate of the detection model is continuously reduced, which indicates that the method improves the detection model's ability to defend against attacks.
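A compressed sketch of the alternating "mine adversarial samples, then retrain" loop described above is shown below; the rl_agent and detector interfaces are hypothetical stand-ins, not the RLXSS code.

def alternate_training(detector, rl_agent, xss_payloads, rounds=5):
    for _ in range(rounds):
        # the RL agent mutates payloads until they escape the current detector
        evasive = [rl_agent.mutate_until_evasive(p, detector) for p in xss_payloads]
        evasive = [e for e in evasive if e is not None]
        # escaped samples are labeled malicious and folded back into training
        detector.retrain(extra_samples=evasive, extra_labels=[1] * len(evasive))
        # the agent then targets the hardened detector in the next round
        rl_agent.update_reward_model(detector)
    return detector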
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Du, Meng, Yuxin Sun, Bing Sun, Zilong Wu, Lan Luo, Daping Bi, and Mingyang Du. "TAN: A Transferable Adversarial Network for DNN-Based UAV SAR Automatic Target Recognition Models." Drones 7, no. 3 (March 16, 2023): 205. http://dx.doi.org/10.3390/drones7030205.

Full text of the source
Abstract:
Recently, unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) has become a highly sought-after topic because of its wide applications in target recognition, detection, and tracking. However, SAR automatic target recognition (ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial examples. Generally, non-cooperators rarely disclose any SAR-ATR model information, making adversarial attacks challenging. To tackle this issue, we propose a novel attack method called the Transferable Adversarial Network (TAN). It can craft highly transferable adversarial examples in real time and attack SAR-ATR models without any prior knowledge, which is of great significance for real-world black-box attacks. The proposed method improves transferability via a two-player game in which we simultaneously train two encoder-decoder models: a generator that crafts malicious samples through a one-step forward mapping from the original data, and an attenuator that weakens the effectiveness of malicious samples by capturing the most harmful deformations. In particular, compared to traditional iterative methods, the encoder-decoder model maps original samples to adversarial examples in a single step, thus enabling real-time attacks. Experimental results indicate that our approach achieves state-of-the-art transferability with acceptable adversarial perturbations and minimal time cost compared to existing attack methods, making real-time black-box attacks without any prior knowledge a reality.
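Conceptually, one training step of such a generator/attenuator game could be sketched as follows; generator, attenuator, and the surrogate classifier are assumed PyTorch modules standing in for the paper's networks, and the losses are illustrative rather than the exact TAN objectives.

import torch
import torch.nn.functional as F

def two_player_step(generator, attenuator, surrogate, x, y, g_opt, a_opt, eps=0.05):
    # surrogate is assumed pretrained and is never updated here
    # 1) the generator crafts a bounded perturbation in a single forward pass
    delta = eps * torch.tanh(generator(x))
    x_adv = torch.clamp(x + delta, 0, 1)
    # 2) attenuator update: try to restore correct classification of the attacked input
    a_loss = F.cross_entropy(surrogate(attenuator(x_adv.detach())), y)
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    # 3) generator update: fool the surrogate both directly and after attenuation
    #    (stale gradients left in other modules are cleared by their own zero_grad calls)
    g_loss = -F.cross_entropy(surrogate(x_adv), y) \
             - F.cross_entropy(surrogate(attenuator(x_adv)), y)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return x_adv.detach()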
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Wang, Fangwei, Zerou Ma, Xiaohan Zhang, Qingru Li, and Changguang Wang. "DDSG-GAN: Generative Adversarial Network with Dual Discriminators and Single Generator for Black-Box Attacks." Mathematics 11, no. 4 (February 16, 2023): 1016. http://dx.doi.org/10.3390/math11041016.

Full text of the source
Abstract:
As one of the top ten security threats faced by artificial intelligence, adversarial attacks have prompted scholars to think deeply about them from theory to practice. However, in the black-box attack scenario, how to raise the visual quality of an adversarial example (AE) and query the target more efficiently still needs further exploration. This study uses a GAN architecture combined with a model-stealing attack to train surrogate models and generate high-quality AEs. We propose an image AE generation method based on generative adversarial networks with dual discriminators and a single generator (DDSG-GAN) and design a corresponding loss function for each model. The generator produces the adversarial perturbation, and the two discriminators constrain it to ensure, respectively, the visual quality and the attack effectiveness of the generated AE. We experiment extensively on the MNIST, CIFAR10, and Tiny-ImageNet datasets. The experimental results illustrate that our method can effectively use query feedback to generate AEs, significantly reducing the number of queries to the target model while implementing effective attacks.
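A rough sketch of a generator update constrained by two discriminators is given below; d_visual (judging image quality) and d_attack (assumed to be trained separately from query feedback to predict whether a sample fools the target) are hypothetical modules, and the losses are illustrative, not the DDSG-GAN formulation.

import torch
import torch.nn.functional as F

def dual_discriminator_generator_step(gen, d_visual, d_attack, x, g_opt,
                                      eps=0.05, lam=1.0):
    delta = eps * torch.tanh(gen(x))                 # bounded perturbation
    x_adv = torch.clamp(x + delta, 0, 1)
    # D1 constrains visual quality: the adversarial image should look like a clean one
    q_logits = d_visual(x_adv)
    quality_loss = F.binary_cross_entropy_with_logits(q_logits, torch.ones_like(q_logits))
    # D2 constrains attack effect: it predicts "this sample fools the target"
    a_logits = d_attack(x_adv)
    attack_loss = F.binary_cross_entropy_with_logits(a_logits, torch.ones_like(a_logits))
    loss = quality_loss + lam * attack_loss
    g_opt.zero_grad(); loss.backward(); g_opt.step()
    return x_adv.detach()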
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Sauka, Kudzai, Gun-Yoo Shin, Dong-Wook Kim, and Myung-Mook Han. "Adversarial Robust and Explainable Network Intrusion Detection Systems Based on Deep Learning." Applied Sciences 12, no. 13 (June 25, 2022): 6451. http://dx.doi.org/10.3390/app12136451.

Full text of the source
Abstract:
The ever-evolving cybersecurity environment has given rise to sophisticated adversaries who constantly explore new ways to attack cyberinfrastructure, and the use of deep learning-based intrusion detection systems has been on the rise. This rise is due to the complexity of deep neural networks (DNNs) and their efficiency in making anomaly detection more accurate. However, this complexity makes them black-box models that lack explainability and interpretability, and recent research has also shown that they are vulnerable to adversarial attacks. This paper develops an adversarially robust and explainable network intrusion detection system based on deep learning by applying adversarial training and implementing explainable AI techniques. In our experiments with the NSL-KDD dataset, the PGD adversarially trained model was more robust than the DeepFool and FGSM adversarially trained models, with a ROC-AUC of 0.87. The FGSM attack did not affect the PGD-trained model's ROC-AUC, while the DeepFool attack caused only a minimal 9.20% reduction. The PGD attack caused a 15.12% reduction in the DeepFool-trained model's ROC-AUC and a 12.79% reduction in the FGSM-trained model's ROC-AUC.
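For reference, a standard PGD adversarial-training step of the kind evaluated in such studies is sketched below; model, opt, and all hyperparameters are assumptions, and this is not the paper's code.

import torch
import torch.nn.functional as F

def pgd_adv_train_step(model, opt, x, y, eps=0.1, alpha=0.02, steps=10):
    # inner maximization: find a worst-case perturbation within the eps ball
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # outer minimization: train the model on the perturbed batch
    opt.zero_grad()
    F.cross_entropy(model(x + delta.detach()), y).backward()
    opt.step()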
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Ilham Firdaus, Januar Al Amien, and Soni Soni. "String Matching untuk Mendeteksi Serangan Sniffing (ARP Spoofing) pada IDS Snort." Jurnal CoSciTech (Computer Science and Information Technology) 1, no. 2 (October 31, 2020): 44–49. http://dx.doi.org/10.37859/coscitech.v1i2.2180.

Full text of the source
Abstract:
A sniffing technique (ARP spoofing) is an attack that sends fake ARP packets, or ARP packets modified to carry the attacker's network address, in order to poison the victim's ARP cache table. ARP spoofing is a dangerous attack because it can monitor the victim's browsing activity and steal social media, office, and other account credentials. It also facilitates other network attacks such as denial of service, man-in-the-middle, and host impersonation. Sniffing attacks are commonly found in places that provide public Wi-Fi, such as campuses, libraries, and cafes. The Snort IDS can detect sniffing attacks (ARP spoofing). The KMP string-matching algorithm is applied to Snort log files to detect attacks and issue alerts (messages) to users. The tests carried out were black-box testing of the application's functionality and accuracy testing. All application functions worked correctly, and the matches produced by the application agreed with manual string-matching calculations, confirming its accuracy.
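For reference, the classic Knuth-Morris-Pratt matcher used conceptually here can be sketched as follows (illustrative code, not the paper's implementation); it scans a log line for a signature in linear time.

def kmp_search(text, pattern):
    # failure table: length of the longest proper prefix of pattern[:i+1] that is also a suffix
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        k += pattern[i] == pattern[k]
        fail[i] = k
    # scan the text, reusing the failure table to avoid re-comparisons
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        k += ch == pattern[k]
        if k == len(pattern):
            return i - k + 1          # index of the first match
    return -1

# e.g. flag a Snort log line that contains an ARP-spoofing signature
print(kmp_search("10/31-13:37 ARP spoofing detected 192.168.1.1", "ARP spoofing") != -1)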
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Zhang, Yue, Seong-Yoon Shin, Xujie Tan, and Bin Xiong. "A Self-Adaptive Approximated-Gradient-Simulation Method for Black-Box Adversarial Sample Generation." Applied Sciences 13, no. 3 (January 18, 2023): 1298. http://dx.doi.org/10.3390/app13031298.

Full text of the source
Abstract:
Deep neural networks (DNNs) have been applied widely to a broad range of everyday tasks. However, DNNs are sensitive to adversarial attacks, which can easily alter the output by adding imperceptible perturbations to an original image. State-of-the-art white-box attack methods craft perturbations that successfully fool DNNs using the network gradient, but they consider only the sign of the gradient and drop its magnitude. Accordingly, gradients of different magnitudes may adopt the same sign when constructing perturbations, resulting in inefficiency. Moreover, it is often impractical to acquire the gradient in real-world scenarios. Consequently, we propose a self-adaptive approximated-gradient-simulation method for black-box adversarial attacks (SAGM) to generate efficient perturbations. Our proposed method uses knowledge-based differential evolution to simulate gradients and a self-adaptive momentum gradient to generate adversarial samples. To assess the efficiency of the proposed SAGM, a series of experiments were carried out on two datasets, MNIST and CIFAR-10. Compared with state-of-the-art attack techniques, our method can quickly and efficiently search for perturbations that cause the original samples to be misclassified. The results reveal that SAGM is an effective and efficient technique for generating adversarial perturbations.
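A loose sketch of the underlying idea, estimating a pseudo-gradient from black-box score feedback and then taking a momentum sign step, is given below; it uses plain random sampling instead of the paper's knowledge-based differential evolution, and score_fn, step sizes, and bounds are assumptions.

import numpy as np

def simulated_gradient_attack(score_fn, x, eps=0.1, pop=20, iters=50,
                              lr=0.02, mu=0.9, seed=0):
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    velocity = np.zeros_like(x)
    base = score_fn(x)
    for _ in range(iters):
        dirs = rng.normal(size=(pop,) + x.shape)
        # how much each probe direction raises the black-box adversarial score
        gains = np.array([score_fn(np.clip(x + delta + 0.01 * d, 0, 1)) - base
                          for d in dirs])
        # pseudo-gradient: probe directions weighted by their score gains
        grad = (gains[:, None] * dirs.reshape(pop, -1)).mean(0).reshape(x.shape)
        velocity = mu * velocity + grad / (np.abs(grad).sum() + 1e-12)
        delta = np.clip(delta + lr * np.sign(velocity), -eps, eps)
        base = score_fn(np.clip(x + delta, 0, 1))
    return np.clip(x + delta, 0, 1)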
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Bai, Yang, Yisen Wang, Yuyuan Zeng, Yong Jiang, and Shu-Tao Xia. "Query efficient black-box adversarial attack on deep neural networks." Pattern Recognition 133 (January 2023): 109037. http://dx.doi.org/10.1016/j.patcog.2022.109037.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Li, Siyuan, Guangji Huang, Xing Xu, and Huimin Lu. "Query-based black-box attack against medical image segmentation model." Future Generation Computer Systems 133 (August 2022): 331–37. http://dx.doi.org/10.1016/j.future.2022.03.008.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Ding, Kangyi, Xiaolei Liu, Weina Niu, Teng Hu, Yanping Wang, and Xiaosong Zhang. "A low-query black-box adversarial attack based on transferability." Knowledge-Based Systems 226 (August 2021): 107102. http://dx.doi.org/10.1016/j.knosys.2021.107102.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Zhu, Jiaqi, Feng Dai, Lingyun Yu, Hongtao Xie, Lidong Wang, Bo Wu, and Yongdong Zhang. "Attention‐guided transformation‐invariant attack for black‐box adversarial examples." International Journal of Intelligent Systems 37, no. 5 (January 11, 2022): 3142–65. http://dx.doi.org/10.1002/int.22808.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Wang, Yajie, Yu-an Tan, Wenjiao Zhang, Yuhang Zhao, and Xiaohui Kuang. "An adversarial attack on DNN-based black-box object detectors." Journal of Network and Computer Applications 161 (July 2020): 102634. http://dx.doi.org/10.1016/j.jnca.2020.102634.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.