
Journal articles on the topic "White-box attack"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 scholarly journal articles on the topic "White-box attack".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, provided the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access to, adversarial attacks can be classified as white-box attack and black-box attack. For white-box attack, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within moderate iterates. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performances in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
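The abstract above uses projected gradient descent (PGD) as its white-box baseline. The following is a minimal PGD sketch for an L-infinity budget, assuming a differentiable PyTorch classifier `model` and inputs scaled to [0, 1]; it illustrates the projection step that the paper's Frank-Wolfe variant replaces with a linear-maximization step, and is not the authors' algorithm.

```python
# Minimal L-infinity PGD attack sketch (the baseline discussed above), assuming a
# differentiable PyTorch classifier `model` and inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterated gradient-sign steps, projected back onto the eps-ball around x."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # gradient ascent step
            x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)   # project to the L-inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                     # keep a valid image
    return x_adv.detach()
```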
2

Josse, Sébastien. "White-box attack context cryptovirology". Journal in Computer Virology 5, no. 4 (2.08.2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

3

Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions". Asian Journal of Computer Science and Technology 7, no. 3 (5.11.2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Abstract:
This paper surveys white-box attacks and the lessons drawn from them. Because of their physical nature, wearable IoT devices can be captured and accessed in an unauthorized way. In that case they operate in a white-box attack setting: the adversary has full visibility into the implementation of the built-in cryptosystem and complete control over its execution platform. Protecting wearable devices against white-box attacks is therefore a real challenge. As a countermeasure in such contexts, the paper analyses encryption schemes designed to protect the confidentiality of data against white-box attacks; lightweight encryption aimed at this goal is among the more recent, well-balanced approaches. The paper also reviews the different algorithms.
4

ALSHEKH, MOKHTAR, and KÖKSAL ERENTÜRK. "DEFENSE AGAINST WHITE BOX ADVERSARIAL ATTACKS IN ARABIC NATURAL LANGUAGE PROCESSING (ANLP)". International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (25.07.2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noisy data whose noise only a human can detect. The risk is high in natural language processing applications because most of the data collected in this setting comes from social networking sites that impose no restrictions on users when writing comments, which makes it easy to create an attack (intentionally or unintentionally) that degrades the accuracy of the model. In this paper, an MLP model was used for sentiment analysis of texts taken from tweets, the effect of applying a white-box adversarial attack on this classifier was studied, and a technique was proposed to protect it from the attack. We found that the adversarial attack decreases the accuracy of the classifier from 55.17% to 11.11%, and that the proposed defense technique raises the accuracy to 77.77%, so the proposed plan can be adopted in the face of the adversarial attack. Attackers choose their targets strategically and deliberately, depending on the vulnerabilities they have identified. Organizations and individuals mostly try to protect themselves from one occurrence or type of attack, but they have to acknowledge that the attacker may easily shift focus to newly uncovered vulnerabilities. Even if someone successfully tackles several attacks, risks remain, and the need to face threats will persist for the foreseeable future.
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks". Applied Sciences 10, no. 20 (14.10.2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
6

Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode". Security and Communication Networks 2020 (19.11.2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Abstract:
White-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist the white-box attack context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms which can provide integrity, authenticity, and nonrepudiation. Since the private key in the classical ECDSA is plaintext, it is easy for attackers to obtain the private key. To increase the security of the private key under the white-box attack context, this article presents an algorithm for the white-box implementation of ECDSA. It uses the lookup table technology and the “cloud plus side” mode to protect the private key. The residue number system (RNS) theory is used to reduce the size of storage. Moreover, the article analyzes the security of the proposed algorithm against an exhaustive search attack, a random number attack, a code lifting attack, and so on. The efficiency of the proposed scheme is compared with that of the classical ECDSA through experiments.
7

LIN, Ting-Ting, and Xue-Jia LAI. "Efficient Attack to White-Box SMS4 Implementation". Journal of Software 24, no. 8 (17.01.2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

8

Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks". Security and Communication Networks 2021 (12.10.2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Abstract:
Improving the attack resistance of the modulation classification model is an important means to improve the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) was proposed, which has the advantages of small model scale and strong immunity to white box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the form of the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain the advantages of low memory usage and fast calculation speed. The gradient of the quantized parameter is directly transferred to the original parameter to realize the gradient concealment and achieve the effect of effectively defending against the white box gradient attack. Experimental results show that BMCDN obtains significant immune performance against white box gradient attacks while achieving a scale reduction of 6 times.
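The abstract describes transferring the gradient of the 1-bit quantized parameters directly to the original full-precision parameters. A common way to express this in PyTorch is a straight-through estimator; the sketch below is a generic illustration under that assumption, not the BMCDN implementation, and the layer shapes are placeholders.

```python
# Sign quantization with a straight-through estimator (STE): 1-bit weights in the
# forward pass, while the gradient is passed unchanged to the full-precision weights.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)            # quantize weights to {-1, +1}

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output              # gradient flows to the original weights unchanged

class BinaryLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        return x @ BinarizeSTE.apply(self.weight).t()
```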
9

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation". Applied Sciences 9, no. 11 (3.06.2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The ways of generating these perturbations are called adversarial attacks, which fall into two types, black-box and white-box attacks, according to the adversary's access to the target model. In order to overcome the black-box attacker's lack of access to the internals of the target DNN, many researchers have put forward a series of strategies. Previous work includes a method of training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation to make the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common datasets for image classification, demonstrate the effectiveness and efficiency of our method in boosting a targeted black-box attack, and we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%.
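The substitute-training pipeline above builds on Jacobian-based dataset augmentation: synthetic points are pushed along the sign of the substitute model's input gradient so the next round of target-model queries explores the decision boundary. A minimal PyTorch sketch of that augmentation step follows; the function name and step size `lam` are illustrative, and the paper's ensemble and linear-augmentation refinements are not reproduced.

```python
# Jacobian-based dataset augmentation for substitute-model training (generic sketch).
import torch

def jacobian_augment(substitute, x, labels, lam=0.1):
    """x: batch of inputs already labeled by the target model (labels), values in [0, 1]."""
    x = x.clone().requires_grad_(True)
    logits = substitute(x)
    # derivative of the logit assigned to the target model's label, per sample
    selected = logits.gather(1, labels.view(-1, 1)).sum()
    grad = torch.autograd.grad(selected, x)[0]
    x_new = (x + lam * grad.sign()).detach().clamp(0.0, 1.0)   # push along the Jacobian sign
    return torch.cat([x.detach(), x_new], dim=0)               # augmented set to re-query and label
```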
10

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models". Security and Communication Networks 2022 (17.01.2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of things, but it also brings the risk of being abused by criminals. Therefore, a series of researches on audio forensics models have arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that only relies on output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the huge threat of adversarial examples to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can get a 99 % attack success rate on score-only black-box models, which is competitive to the best of white-box attacks, and 60 % attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16 % and guarantees 98 % detection accuracy of forensics models.
11

Lee, Xian Yeow, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, and Soumik Sarkar. "Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 4577–84. http://dx.doi.org/10.1609/aaai.v34i04.5887.

Abstract:
Robustness of Deep Reinforcement Learning (DRL) algorithms towards adversarial attacks in real world applications such as those deployed in cyber-physical systems (CPS) are of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally perverse, but such attacks are relatively less studied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent with decoupled constraints as the budget of attack. We propose the white-box Myopic Action Space (MAS) attack algorithm that distributes the attacks across the action space dimensions. Next, we reformulate the optimization problem above with the same objective function, but with a temporally coupled constraint on the attack budget to take into account the approximated dynamics of the agent. This leads to the white-box Look-ahead Action Space (LAS) attack algorithm that distributes the attacks across the action and temporal dimensions. Our results showed that using the same amount of resources, the LAS attack deteriorates the agent's performance significantly more than the MAS attack. This reveals the possibility that with limited resource, an adversary can utilize the agent's dynamics to malevolently craft attacks that causes the agent to fail. Additionally, we leverage these attack strategies as a possible tool to gain insights on the potential vulnerabilities of DRL agents.
12

Chitic, Raluca, Ali Osman Topal, and Franck Leprévost. "Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box". Applied Sciences 12, no. 14 (21.07.2022): 7339. http://dx.doi.org/10.3390/app12147339.

Abstract:
Through the addition of humanly imperceptible noise to an image classified as belonging to a category ca, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify a modified image as belonging to any predefined target class ct≠ca. To achieve a better understanding of the inner workings of adversarial attacks, this study analyzes the adversarial images created by two completely opposite attacks against 10 ImageNet-trained CNNs. A total of 2×437 adversarial images are created by EAtarget,C, a black-box evolutionary algorithm (EA), and by the basic iterative method (BIM), a white-box, gradient-based attack. We inspect and compare these two sets of adversarial images from different perspectives: the behavior of CNNs at smaller image regions, the image noise frequency, the adversarial image transferability, the image texture change, and penultimate CNN layer activations. We find that texture change is a side effect rather than a means for the attacks and that ct-relevant features only build up significantly from image regions of size 56×56 onwards. In the penultimate CNN layers, both attacks increase the activation of units that are positively related to ct and units that are negatively related to ca. In contrast to EAtarget,C’s white noise nature, BIM predominantly introduces low-frequency noise. BIM affects the original ca features more than EAtarget,C, thus producing slightly more transferable adversarial images. However, the transferability with both attacks is low, since the attacks’ ct-related information is specific to the output layers of the targeted CNN. We find that the adversarial images are actually more transferable at regions with sizes of 56×56 than at full scale.
13

Dionysiou, Antreas, Vassilis Vassiliades, and Elias Athanasopoulos. "Exploring Model Inversion Attacks in the Black-box Setting". Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 190–206. http://dx.doi.org/10.56553/popets-2023-0012.

Abstract:
Model Inversion (MI) attacks, that aim to recover semantically meaningful reconstructions for each target class, have been extensively studied and demonstrated to be successful in the white-box setting. On the other hand, black-box MI attacks demonstrate low performance in terms of both effectiveness, i.e., reconstructing samples which are identifiable as their ground-truth, and efficiency, i.e., time or queries required for completing the attack process. Whether or not effective and efficient black-box MI attacks can be conducted on complex targets, such as Convolutional Neural Networks (CNNs), currently remains unclear. In this paper, we present a feasibility study in regards to the effectiveness and efficiency of MI attacks in the black-box setting. In this context, we introduce Deep-BMI (Deep Black-box Model Inversion), a framework that supports various black-box optimizers for conducting MI attacks on deep CNNs used for image recognition. Deep-BMI’s most efficient optimizer is based on an adaptive hill climbing algorithm, whereas its most effective optimizer is based on an evolutionary algorithm capable of performing an all-class attack and returning a diversity of images in a single run. For assessing the severity of this threat, we utilize all three evaluation approaches found in the literature. In particular, we (a) conduct a user study with human participants, (b) demonstrate our actual reconstructions along with their ground-truth, and (c) use relevant quantitative metrics. Surprisingly, our results suggest that black-box MI attacks, and for complex models, are comparable, in some cases, to those reported so far in the white-box setting.
14

Du, Xiaohu, Jie Yu, Zibo Yi, Shasha Li, Jun Ma, Yusong Tan, and Qinbo Wu. "A Hybrid Adversarial Attack for Different Application Scenarios". Applied Sciences 10, no. 10 (21.05.2020): 3559. http://dx.doi.org/10.3390/app10103559.

Abstract:
Adversarial attack against natural language has been a hot topic in the field of artificial intelligence security in recent years. It is mainly to study the methods and implementation of generating adversarial examples. The purpose is to better deal with the vulnerability and security of deep learning systems. According to whether the attacker understands the deep learning model structure, the adversarial attack is divided into black-box attack and white-box attack. In this paper, we propose a hybrid adversarial attack for different application scenarios. Firstly, we propose a novel black-box attack method of generating adversarial examples to trick the word-level sentiment classifier, which is based on differential evolution (DE) algorithm to generate semantically and syntactically similar adversarial examples. Compared with existing genetic algorithm based adversarial attacks, our algorithm can achieve a higher attack success rate while maintaining a lower word replacement rate. At the 10% word substitution threshold, we have increased the attack success rate from 58.5% to 63%. Secondly, when we understand the model architecture and parameters, etc., we propose a white-box attack with gradient-based perturbation against the same sentiment classifier. In this attack, we use a Euclidean distance and cosine distance combined metric to find the most semantically and syntactically similar substitution, and we introduce the coefficient of variation (CV) factor to control the dispersion of the modified words in the adversarial examples. More dispersed modifications can increase human imperceptibility and text readability. Compared with the existing global attack, our attack can increase the attack success rate and make modification positions in generated examples more dispersed. We’ve increased the global search success rate from 75.8% to 85.8%. Finally, we can deal with different application scenarios by using these two attack methods, that is, whether we understand the internal structure and parameters of the model, we can all generate good adversarial examples.
15

Duan, Mingxing, Kenli Li, Jiayan Deng, Bin Xiao, and Qi Tian. "A Novel Multi-Sample Generation Method for Adversarial Attacks". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (30.11.2022): 1–21. http://dx.doi.org/10.1145/3506852.

Abstract:
Deep learning models are widely used in daily life, which bring great convenience to our lives, but they are vulnerable to attacks. How to build an attack system with strong generalization ability to test the robustness of deep learning systems is a hot issue in current research, among which the research on black-box attacks is extremely challenging. Most current research on black-box attacks assumes that the input dataset is known. However, in fact, it is difficult for us to obtain detailed information for those datasets. In order to solve the above challenges, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM is mainly composed of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. Firstly, we design a multi-task generation model to learn the distribution of the original dataset. The model first converts an arbitrary signal of a certain distribution into the shared features of the original dataset through deconvolution operations, and then according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Secondly, the generated sample features achieve different outputs through querying the black-box model and training the substitute model, which are used to construct different loss functions to optimize and update the generator and substitute model. Finally, some common white-box attack methods are used to attack the substitute model to generate corresponding adversarial samples, which are utilized to attack the black-box model. We conducted a large number of experiments on the MNIST and CIFAR-10 datasets. The experimental results show that under the same settings and attack algorithms, MsGM achieves better performance than the based models.
16

Fu, Zhongwang, and Xiaohui Cui. "ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model". Entropy 25, no. 2 (22.01.2023): 215. http://dx.doi.org/10.3390/e25020215.

Abstract:
The research on image-classification-adversarial attacks is crucial in the realm of artificial intelligence (AI) security. Most of the image-classification-adversarial attack methods are for white-box settings, demanding target model gradients and network architectures, which is less practical when facing real-world cases. However, black-box adversarial attacks immune to the above limitations and reinforcement learning (RL) seem to be a feasible solution to explore an optimized evasion policy. Unfortunately, existing RL-based works perform worse than expected in the attack success rate. In light of these challenges, we propose an ensemble-learning-based adversarial attack (ELAA) targeting image-classification models which aggregate and optimize multiple reinforcement learning (RL) base learners, which further reveals the vulnerabilities of learning-based image-classification models. Experimental results show that the attack success rate for the ensemble model is about 35% higher than for a single model. The attack success rate of ELAA is 15% higher than those of the baseline methods.
17

Chen, Yiding, and Xiaojin Zhu. "Optimal Attack against Autoregressive Models by Manipulating the Environment". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 3545–52. http://dx.doi.org/10.1609/aaai.v34i04.5760.

Abstract:
We describe an optimal adversarial attack formulation against autoregressive time series forecast using Linear Quadratic Regulator (LQR). In this threat model, the environment evolves according to a dynamical system; an autoregressive model observes the current environment state and predicts its future values; an attacker has the ability to modify the environment state in order to manipulate future autoregressive forecasts. The attacker's goal is to force autoregressive forecasts into tracking a target trajectory while minimizing its attack expenditure. In the white-box setting where the attacker knows the environment and forecast models, we present the optimal attack using LQR for linear models, and Model Predictive Control (MPC) for nonlinear models. In the black-box setting, we combine system identification and MPC. Experiments demonstrate the effectiveness of our attacks.
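The white-box attack above is solved with a finite-horizon Linear Quadratic Regulator. The sketch below shows the standard backward Riccati recursion that yields the time-varying feedback gains for such a problem; the matrices and horizon are placeholders, and the paper's specific encoding of the attack objective into the cost is not reproduced.

```python
# Generic finite-horizon discrete-time LQR via the backward Riccati recursion.
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Return feedback gains K_t such that u_t = -K_t @ x_t minimizes the quadratic cost."""
    P = Q.copy()                                     # terminal cost-to-go
    gains = []
    for _ in range(horizon):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)    # K_t = (R + B'PB)^{-1} B'PA
        P = Q + A.T @ P @ (A - B @ K)                # Riccati update of the cost-to-go
        gains.append(K)
    return list(reversed(gains))                     # gains ordered t = 0 .. horizon-1
```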
18

Tu, Chun-Chen, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. "AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 742–49. http://dx.doi.org/10.1609/aaai.v33i01.3301742.

Abstract:
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may give a false sense of model robustness due to inefficient query designs. To bridge this gap, we propose a generic framework for query-efficient blackbox attacks. Our framework, AutoZOOM, which is short for Autoencoder-based Zeroth Order Optimization Method, has two novel building blocks towards efficient black-box attacks: (i) an adaptive random gradient estimation strategy to balance query counts and distortion, and (ii) an autoencoder that is either trained offline with unlabeled data or a bilinear resizing operation for attack acceleration. Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples. In particular, when compared to the standard ZOO method, AutoZOOM can consistently reduce the mean query counts in finding successful adversarial examples (or reaching the same distortion level) by at least 93% on MNIST, CIFAR-10 and ImageNet datasets, leading to novel insights on adversarial robustness.
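AutoZOOM's core ingredient is estimating gradients from model queries alone. The sketch below shows a plain random-direction finite-difference estimator of that kind, assuming `loss_fn` returns the attack loss computed from the queried model scores; it is a generic zeroth-order estimator, not AutoZOOM's adaptive, autoencoder-accelerated version.

```python
# Random-direction finite-difference (zeroth-order) gradient estimate from queries only.
import numpy as np

def zo_gradient_estimate(loss_fn, x, beta=1e-3, num_directions=10):
    base = loss_fn(x)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(num_directions):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u) + 1e-12                     # unit-norm random direction
        grad += (loss_fn(x + beta * u) - base) / beta * u  # finite-difference slope along u
    # dimension scaling commonly used in random gradient-free methods
    return x.size * grad / num_directions
```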
19

Usoltsev, Yakov, Balzhit Lodonova, Alexander Shelupanov, Anton Konev, and Evgeny Kostyuchenko. "Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack". Information 13, no. 2 (5.02.2022): 77. http://dx.doi.org/10.3390/info13020077.

Abstract:
Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. The use of attacks against authentication systems greatly reduces the accuracy of such a system, despite the complexity of generating a competitive example. As part of this study, a white-box adversarial attack on an authentication system was carried out. The basis of the authentication system is a neural network perceptron, trained on a dataset of frequency signatures of sign. For an attack on an atypical dataset, the following results were obtained: with an attack intensity of 25%, the authentication system availability decreases to 50% for a particular user, and with a further increase in the attack intensity, the accuracy decreases to 5%.
20

Fang, Yong, Cheng Huang, Yijia Xu, and Yang Li. "RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning". Future Internet 11, no. 8 (14.08.2019): 177. http://dx.doi.org/10.3390/fi11080177.

Abstract:
With the development of artificial intelligence, machine learning algorithms and deep learning algorithms are widely applied to attack detection models. Adversarial attacks against artificial intelligence models become inevitable problems when there is a lack of research on the cross-site scripting (XSS) attack detection model for defense against attacks. It is extremely important to design a method that can effectively improve the detection model against attack. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, the adversarial samples of the detection model are mined by the adversarial attack model based on reinforcement learning. Secondly, the detection model and the adversarial model are alternately trained. After each round, the newly-excavated adversarial samples are marked as a malicious sample and are used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection and retain aggressive features. What is more, by alternately training the detection model and the confrontation attack model, the escape rate of the detection model is continuously reduced, which indicates that the model can improve the ability of the detection model to defend against attacks.
21

Wei, Zhipeng, Jingjing Chen, Zuxuan Wu, and Yu-Gang Jiang. "Boosting the Transferability of Video Adversarial Examples via Temporal Translation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (28.06.2022): 2659–67. http://dx.doi.org/10.1609/aaai.v36i3.20168.

Abstract:
Although deep-learning based video recognition models have achieved remarkable success, they are vulnerable to adversarial examples that are generated by adding human-imperceptible perturbations on clean video samples. As indicated in recent studies, adversarial examples are transferable, which makes it feasible for black-box attacks in real-world applications. Nevertheless, most existing adversarial attack methods have poor transferability when attacking other video models and transfer-based attacks on video models are still unexplored. To this end, we propose to boost the transferability of video adversarial examples for black-box attacks on video recognition models. Through extensive analysis, we discover that different video recognition models rely on different discriminative temporal patterns, leading to the poor transferability of video adversarial examples. This motivates us to introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporal translated video clips. By generating adversarial examples over translated videos, the resulting adversarial examples are less sensitive to temporal patterns existed in the white-box model being attacked and thus can be better transferred. Extensive experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples. For transfer-based attack against video recognition models, it achieves a 61.56% average attack success rate on the Kinetics-400 and 48.60% on the UCF-101.
22

Chang, Heng, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. "A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 3389–96. http://dx.doi.org/10.1609/aaai.v34i04.5741.

Abstract:
With the great success of graph embedding model on both academic and industry area, the robustness of graph embedding against adversarial attack inevitably becomes a central problem in graph learning domain. Regardless of the fruitful progress, most of the current works perform the attack in a white-box fashion: they need to access the model predictions and labels to construct their adversarial loss. However, the inaccessibility of model predictions in real systems makes the white-box attack impractical to real graph learning system. This paper promotes current frameworks in a more general and flexible sense – we demand to attack various kinds of graph embedding model with black-box driven. To this end, we begin by investigating the theoretical connections between graph signal processing and graph embedding models in a principled way and formulate the graph embedding model as a general graph signal process with corresponding graph filter. As such, a generalized adversarial attacker: GF-Attack is constructed by the graph filter and feature matrix. Instead of accessing any knowledge of the target classifiers used in graph embedding, GF-Attack performs the attack only on the graph filter in a black-box attack fashion. To validate the generalization of GF-Attack, we construct the attacker on four popular graph embedding models. Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets. Particularly by using our attack, even small graph perturbations like one-edge flip is able to consistently make a strong attack in performance to different graph embedding models.
23

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification". Applied Sciences 10, no. 22 (14.11.2020): 8079. http://dx.doi.org/10.3390/app10228079.

Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks. Throughout the efforts to make the models robust against adversarial example attacks, it has been found to be a very difficult task. While many defense approaches were shown to be not effective, adversarial training remains as one of the promising methods. In adversarial training, the training data are augmented by “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know what algorithm the attacker may use. A natural question is: Can we use adversarial training to train a model robust to multiple types of attack? Previous work have shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defending against that particular attack method, but has limited effect for other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of the methods, the network could lose accuracy to the attack if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even for black-box settings where the attacker has restricted information on the target network.
24

Won, Jongho, Seung-Hyun Seo, and Elisa Bertino. "A Secure Shuffling Mechanism for White-Box Attack-Resistant Unmanned Vehicles". IEEE Transactions on Mobile Computing 19, no. 5 (1.05.2020): 1023–39. http://dx.doi.org/10.1109/tmc.2019.2903048.

25

Pedersen, Joseph, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, and Isabelle Guyon. "LTU Attacker for Membership Inference". Algorithms 15, no. 7 (20.07.2022): 254. http://dx.doi.org/10.3390/a15070254.

Abstract:
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. We prove that, under certain conditions, even a “naïve” LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies, leading to concrete necessary conditions to protect privacy, including: preventing over-fitting and adding some amount of randomness. This attack is straightforward to implement against any model trainer, and we demonstrate its performance against MemGaurd. However, we also show that such a naïve LTU Attacker can fail to attack the privacy of models known to be vulnerable in the literature, demonstrating that knowledge must be complemented with strong attack strategies to turn the LTU Attacker into a powerful means of evaluating privacy. The LTU Attacker can incorporate any existing attack strategy to compute individual privacy scores for each training sample. Our experiments on the QMNIST, CIFAR-10, and Location-30 datasets validate our theoretical results and confirm the roles of over-fitting prevention and randomness in the algorithms to protect against privacy attacks.
26

Gomez-Alanis, Alejandro, Jose A. Gonzalez-Lopez, and Antonio M. Peinado. "GANBA: Generative Adversarial Network for Biometric Anti-Spoofing". Applied Sciences 12, no. 3 (29.01.2022): 1454. http://dx.doi.org/10.3390/app12031454.

Abstract:
Automatic speaker verification (ASV) is a voice biometric technology whose security might be compromised by spoofing attacks. To increase the robustness against spoofing attacks, presentation attack detection (PAD) or anti-spoofing systems for detecting replay, text-to-speech and voice conversion-based spoofing attacks are being developed. However, it was recently shown that adversarial spoofing attacks may seriously fool anti-spoofing systems. Moreover, the robustness of the whole biometric system (ASV + PAD) against this new type of attack is completely unexplored. In this work, a new generative adversarial network for biometric anti-spoofing (GANBA) is proposed. GANBA has a twofold basis: (1) it jointly employs the anti-spoofing and ASV losses to yield very damaging adversarial spoofing attacks, and (2) it trains the PAD as a discriminator in order to make them more robust against these types of adversarial attacks. The proposed system is able to generate adversarial spoofing attacks which can fool the complete voice biometric system. Then, the resulting PAD discriminators of the proposed GANBA can be used as a defense technique for detecting both original and adversarial spoofing attacks. The physical access (PA) and logical access (LA) scenarios of the ASVspoof 2019 database were employed to carry out the experiments. The experimental results show that the GANBA attacks are quite effective, outperforming other adversarial techniques when applied in white-box and black-box attack setups. In addition, the resulting PAD discriminators are more robust against both original and adversarial spoofing attacks.
27

Croce, Francesco, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, and Matthias Hein. "Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (28.06.2022): 6437–45. http://dx.doi.org/10.1609/aaai.v36i6.20595.

Abstract:
We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting. Sparse-RS does not rely on substitute models and achieves state-of-the-art success rate and query efficiency for multiple sparse attack models: L0-bounded perturbations, adversarial patches, and adversarial frames. The L0-version of untargeted Sparse-RS outperforms all black-box and even all white-box attacks for different models on MNIST, CIFAR-10, and ImageNet. Moreover, our untargeted Sparse-RS achieves very high success rates even for the challenging settings of 20x20 adversarial patches and 2-pixel wide adversarial frames for 224x224 images. Finally, we show that Sparse-RS can be applied to generate targeted universal adversarial patches where it significantly outperforms the existing approaches. Our code is available at https://github.com/fra31/sparse-rs.
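Sparse-RS is built on score-based random search. The following toy loop illustrates that idea for an L0 (k-pixel) perturbation: a proposal re-randomizes one of the perturbed pixels and is kept only if the queried loss improves. `margin_loss`, the pixel budget `k`, and the iteration count are placeholders rather than the paper's schedule.

```python
# Score-based random search over a k-pixel (L0) perturbation; only queried scores are used.
import numpy as np

def random_search_l0(margin_loss, x, k=20, iters=1000, rng=None):
    """x: H x W x C float image in [0, 1]; margin_loss queries the target model."""
    rng = rng or np.random.default_rng(0)
    h, w, c = x.shape
    idx = rng.choice(h * w, size=k, replace=False)   # which pixels are perturbed
    values = rng.random((k, c))                      # what they are set to

    def apply(idx, values):
        x_adv = x.copy()
        x_adv.reshape(-1, c)[idx] = values           # overwrite the chosen pixels
        return x_adv

    best = margin_loss(apply(idx, values))
    for _ in range(iters):
        cand_idx, cand_val = idx.copy(), values.copy()
        j = rng.integers(k)                          # resample one of the k perturbed pixels
        cand_idx[j] = rng.integers(h * w)
        cand_val[j] = rng.random(c)
        cand_loss = margin_loss(apply(cand_idx, cand_val))
        if cand_loss < best:                         # keep only improving proposals
            idx, values, best = cand_idx, cand_val, cand_loss
    return apply(idx, values)
```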
28

Huang, Yang, Yuling Chen, Xuewei Wang, Jing Yang, and Qi Wang. "Promoting Adversarial Transferability via Dual-Sampling Variance Aggregation and Feature Heterogeneity Attacks". Electronics 12, no. 3 (3.02.2023): 767. http://dx.doi.org/10.3390/electronics12030767.

Abstract:
At present, deep neural networks have been widely used in various fields, but their vulnerability requires attention. The adversarial attack aims to mislead the model by generating imperceptible perturbations on the source model, and although white-box attacks have achieved good success rates, existing adversarial samples exhibit weak migration in the black-box case, especially on some adversarially trained defense models. Previous work for gradient-based optimization either optimizes the image before iteration or optimizes the gradient during iteration, so it results in the generated adversarial samples overfitting the source model and exhibiting poor mobility to the adversarially trained model. To solve these problems, we propose the dual-sample variance aggregation with feature heterogeneity attack; our method is optimized before and during iterations to produce adversarial samples with better transferability. In addition, our method can be integrated with various input transformations. A large amount of experimental data demonstrate the effectiveness of the proposed method, which improves the attack success rate by 5.9% for the normally trained model and 11.5% for the adversarially trained model compared with the current state-of-the-art migration-enhancing attack methods.
29

Riadi, Imam, Rusydi Umar, Iqbal Busthomi, and Arif Wirawan Muhammad. "Block-hash of blockchain framework against man-in-the-middle attacks". Register: Jurnal Ilmiah Teknologi Sistem Informasi 8, no. 1 (15.05.2021): 1. http://dx.doi.org/10.26594/register.v8i1.2190.

Abstract:
Payload authentication is vulnerable to man-in-the-middle (MITM) attacks. Blockchain technology offers methods such as peer-to-peer networking, block hashes, and proof-of-work to secure the payload of the authentication process. The implementation uses the block hash and proof-of-work methods of blockchain technology, and testing combines white-box testing with security tests distributed to system security practitioners who are competent in MITM attacks. The analysis before implementing blockchain technology shows that the authentication payload is still sent in plain text, so data confidentiality is not protected against passive attacks such as eavesdropping. After implementing blockchain technology in the system, white-box testing with Wireshark shows that the transmitted authentication payload is well encrypted and reasonably safe. The security test results give a score of 95%, which indicates that the system's protection against MITM attacks is relatively high. Although the system has been secured against MITM attacks, it remains vulnerable to other cyber attacks, so the blockchain implementation still needs security improvements.
30

Combey, Théo, António Loison, Maxime Faucher, and Hatem Hajri. "Probabilistic Jacobian-Based Saliency Maps Attacks". Machine Learning and Knowledge Extraction 2, no. 4 (13.11.2020): 558–78. http://dx.doi.org/10.3390/make2040030.

Abstract:
Neural network classifiers (NNCs) are known to be vulnerable to malicious adversarial perturbations of inputs including those modifying a small fraction of the input features named sparse or L0 attacks. Effective and fast L0 attacks, such as the widely used Jacobian-based Saliency Map Attack (JSMA) are practical to fool NNCs but also to improve their robustness. In this paper, we show that penalising saliency maps of JSMA by the output probabilities and the input features of the NNC leads to more powerful attack algorithms that better take into account each input’s characteristics. This leads us to introduce improved versions of JSMA, named Weighted JSMA (WJSMA) and Taylor JSMA (TJSMA), and demonstrate through a variety of white-box and black-box experiments on three different datasets (MNIST, CIFAR-10 and GTSRB), that they are both significantly faster and more efficient than the original targeted and non-targeted versions of JSMA. Experiments also demonstrate, in some cases, very competitive results of our attacks in comparison with the Carlini-Wagner (CW) L0 attack, while remaining, like JSMA, significantly faster (WJSMA and TJSMA are more than 50 times faster than CW L0 on CIFAR-10). Therefore, our new attacks provide good trade-offs between JSMA and CW for L0 real-time adversarial testing on datasets such as the ones previously cited.
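WJSMA and TJSMA both start from the classic JSMA saliency map, which scores input features by how strongly they push the target-class logit up while pushing the other logits down. The sketch below computes that textbook saliency map with PyTorch autograd on a single example; the penalisation by output probabilities and input features proposed in the paper is not included.

```python
# Targeted JSMA-style saliency map: features that raise the target logit while lowering
# the combined non-target logits get a positive score; everything else is zeroed.
import torch

def jsma_saliency(model, x, target_class):
    """x: a single input with batch dimension, shape (1, ...)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)                                      # shape (1, num_classes)
    grads = []
    for j in range(logits.shape[1]):
        g = torch.autograd.grad(logits[0, j], x, retain_graph=True)[0]
        grads.append(g.flatten())
    jac = torch.stack(grads)                               # (num_classes, num_features)
    d_target = jac[target_class]
    d_others = jac.sum(dim=0) - d_target
    saliency = torch.where((d_target > 0) & (d_others < 0),
                           d_target * d_others.abs(),
                           torch.zeros_like(d_target))
    return saliency                                        # perturb the highest-scoring features first
```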
31

Ding, Daizong, Mi Zhang, Fuli Feng, Yuanmin Huang, Erling Jiang, and Min Yang. "Black-Box Adversarial Attack on Time Series Classification". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (26.06.2023): 7358–68. http://dx.doi.org/10.1609/aaai.v37i6.25896.

Abstract:
With the increasing use of deep neural network (DNN) in time series classification (TSC), recent work reveals the threat of adversarial attack, where the adversary can construct adversarial examples to cause model mistakes. However, existing researches on the adversarial attack of TSC typically adopt an unrealistic white-box setting with model details transparent to the adversary. In this work, we study a more rigorous black-box setting with attack detection applied, which restricts gradient access and requires the adversarial example to be also stealthy. Theoretical analyses reveal that the key lies in: estimating black-box gradient with diversity and non-convexity of TSC models resolved, and restricting the l0 norm of the perturbation to construct adversarial samples. Towards this end, we propose a new framework named BlackTreeS, which solves the hard optimization issue for adversarial example construction with two simple yet effective modules. In particular, we propose a tree search strategy to find influential positions in a sequence, and independently estimate the black-box gradients for these positions. Extensive experiments on three real-world TSC datasets and five DNN based models validate the effectiveness of BlackTreeS, e.g., it improves the attack success rate from 19.3% to 27.3%, and decreases the detection success rate from 90.9% to 6.8% for LSTM on the UWave dataset.
32

Jin, Di, Bingdao Feng, Siqi Guo, Xiaobao Wang, Jianguo Wei, and Zhen Wang. "Local-Global Defense against Unsupervised Adversarial Attacks on Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (26.06.2023): 8105–13. http://dx.doi.org/10.1609/aaai.v37i7.25979.

Abstract:
Unsupervised pre-training algorithms for graph representation learning are vulnerable to adversarial attacks, such as first-order perturbations on graphs, which will have an impact on particular downstream applications. Designing an effective representation learning strategy against white-box attacks remains a crucial open topic. Prior research attempts to improve representation robustness by maximizing mutual information between the representation and the perturbed graph, which is sub-optimal because it does not adapt its defense techniques to the severity of the attack. To address this issue, we propose an unsupervised defense method that combines local and global defense to improve the robustness of representation. Note that we put forward the Perturbed Edges Harmfulness (PEH) metric to determine the riskiness of the attack. Thus, when the edges are attacked, the model can automatically identify the risk of attack. We present a method of attention-based protection against high-risk attacks that penalizes attention coefficients of perturbed edges to encoders. Extensive experiments demonstrate that our strategies can enhance the robustness of representation against various adversarial attacks on three benchmark graphs.
33

Das, Debayan, Santosh Ghosh, Arijit Raychowdhury, and Shreyas Sen. "EM/Power Side-Channel Attack: White-Box Modeling and Signature Attenuation Countermeasures". IEEE Design & Test 38, no. 3 (June 2021): 67–75. http://dx.doi.org/10.1109/mdat.2021.3065189.

34

Wang, Yixiang, Jiqiang Liu, Xiaolin Chang, Ricardo J. Rodríguez, and Jianhua Wang. "DI-AA: An interpretable white-box attack for fooling deep neural networks". Information Sciences 610 (September 2022): 14–32. http://dx.doi.org/10.1016/j.ins.2022.07.157.

35

Koga, Kazuki, and Kazuhiro Takemoto. "Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification". Algorithms 15, no. 5 (22.04.2022): 144. http://dx.doi.org/10.3390/a15050144.

Abstract:
Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called universal adversarial perturbation (UAP), are a realistic security threat to the practical application of a DNN for medical imaging. Given that computer-based systems are generally operated under a black-box condition in which only input queries are allowed and outputs are accessible, the impact of UAPs seems to be limited because well-used algorithms for generating UAPs are limited to white-box conditions in which adversaries can access model parameters. Nevertheless, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs to demonstrate that UAPs are easily generatable using a relatively small dataset under black-box conditions with representative DNN-based medical image classifications. Black-box UAPs can be used to conduct both nontargeted and targeted attacks. Overall, the black-box UAPs showed high attack success rates (40–90%). The vulnerability of the black-box UAPs was observed in several model architectures. The results indicate that adversaries can also generate UAPs through a simple procedure under the black-box condition to foil or control diagnostic medical imaging systems based on DNNs, and that UAPs are a more serious security threat.
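The paper generates UAPs with a simple hill-climbing search driven only by model outputs. The sketch below captures that loop under the assumption that `fooling_rate` queries the deployed model on a small reference set and returns the fraction of predictions the candidate perturbation changes; the step size, L-infinity budget, and iteration count are illustrative.

```python
# Black-box hill climbing for a universal adversarial perturbation (UAP).
import numpy as np

def hill_climb_uap(fooling_rate, image_shape, eps=10/255, step=2/255, iters=500, rng=None):
    rng = rng or np.random.default_rng(0)
    uap = np.zeros(image_shape)
    best = fooling_rate(uap)
    for _ in range(iters):
        proposal = uap + step * rng.choice([-1.0, 1.0], size=image_shape)
        proposal = np.clip(proposal, -eps, eps)   # keep the perturbation imperceptible
        score = fooling_rate(proposal)            # one round of queries to the black-box model
        if score > best:                          # accept only improvements (hill climbing)
            uap, best = proposal, score
    return uap
```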
36

Das, Debayan, and Shreyas Sen. "Electromagnetic and Power Side-Channel Analysis: Advanced Attacks and Low-Overhead Generic Countermeasures through White-Box Approach". Cryptography 4, no. 4 (31.10.2020): 30. http://dx.doi.org/10.3390/cryptography4040030.

Abstract:
Electromagnetic and power side-channel analysis (SCA) provides attackers a prominent tool to extract the secret key from the cryptographic engine. In this article, we present our cross-device deep learning (DL)-based side-channel attack (X-DeepSCA) which reduces the time to attack on embedded devices, thereby increasing the threat surface significantly. Consequently, with the knowledge of such advanced attacks, we performed a ground-up white-box analysis of the crypto IC to root-cause the source of the electromagnetic (EM) side-channel leakage. Equipped with the understanding that the higher-level metals significantly contribute to the EM leakage, we present STELLAR, which proposes to route the crypto core within the lower metals and then embed it within a current-domain signature attenuation (CDSA) hardware to ensure that the critical correlated signature gets suppressed before it reaches the top-level metal layers. CDSA-AES256 with local lower metal routing was fabricated in a TSMC 65 nm process and evaluated against different profiled and non-profiled attacks, showing protection beyond 1B encryptions, compared to ∼10K for the unprotected AES. Overall, the presented countermeasure achieved a 100× improvement over the state-of-the-art countermeasures available, with comparable power/area overheads and without any performance degradation. Moreover, it is a generic countermeasure and can be used to protect any crypto cores while preserving the legacy of the existing implementations.
37

Yang, Zhifei, Wenmin Li, Fei Gao, and Qiaoyan Wen. "FAPA: Transferable Adversarial Attacks Based on Foreground Attention". Security and Communication Networks 2022 (29.10.2022): 1–8. http://dx.doi.org/10.1155/2022/4447307.

Abstract:
Deep learning models are vulnerable to attacks by adversarial examples. However, current studies mainly focus on generating adversarial examples for specific models, and the transferability of adversarial examples between different models is rarely studied. At the same time, existing studies do not consider that where in the image the perturbation is added can further improve the transferability of adversarial examples. Since the foreground is the main part of the picture, the model should give more weight to foreground information during recognition. Will adding more perturbation to the foreground information of the image result in a higher transfer attack rate? This paper focuses on the above problems and proposes the FAPA algorithm, which first selects the foreground information of the image through the DINO framework, then uses the foreground information to generate M, and then uses PNA to generate the perturbation required for the whole picture. To show that our method emphasizes foreground information, we give a greater weight to the perturbation corresponding to the foreground information and a smaller weight to the rest of the image. Finally, we optimize the generated perturbation through the gradient produced by the dual attack framework. To demonstrate the effectiveness of our method, we conducted comparative experiments in which three white-box ViT models were used to attack six black-box ViT models and three black-box CNN models. In the transfer attack on ViT models, the average attack success rate of our algorithm reaches 64.19%, which is much higher than the 21.12% of the FGSM algorithm. In the transfer attack on CNN models, the average attack success rate of our algorithm reaches 48.07%, which is also higher than the 18.65% of the FGSM algorithm. When ViT and CNN models are integrated, the transfer attack success rate of our algorithm reaches 56.13%, which is 1.18% higher than that of the dual attack framework we build on.
APA, Harvard, Vancouver, ISO, and other styles
38

Haq, Ijaz Ul, Zahid Younas Khan, Arshad Ahmad, Bashir Hayat, Asif Khan, Ye-Eun Lee and Ki-Il Kim. "Evaluating and Enhancing the Robustness of Sustainable Neural Relationship Classifiers Using Query-Efficient Black-Box Adversarial Attacks". Sustainability 13, no. 11 (24.05.2021): 5892. http://dx.doi.org/10.3390/su13115892.

Full text source
Abstract:
Neural relation extraction (NRE) models are the backbone of various machine learning tasks, including knowledge base enrichment, information extraction, and document summarization. Despite the vast popularity of these models, their vulnerabilities remain unknown; this is of high concern given their growing use in security-sensitive applications such as question answering and machine translation, also from a sustainability perspective. In this study, we demonstrate that NRE models are inherently vulnerable to adversarially crafted text that contains imperceptible modifications of the original but can mislead the target NRE model. Specifically, we propose a novel, sustainable term frequency-inverse document frequency (TFIDF)-based black-box adversarial attack to evaluate the robustness of state-of-the-art CNN-, GCN-, LSTM-, and BERT-based models on two benchmark RE datasets. Compared with white-box adversarial attacks, black-box attacks impose further constraints on the query budget; thus, efficient black-box attacks remain an open problem. By applying TFIDF to the correctly classified sentences of each class label in the test set, the proposed query-efficient method achieves a reduction of up to 70% in the number of queries to the target model for identifying important text items. Based on these items, we design both character- and word-level perturbations to generate adversarial examples. The proposed attack successfully reduces the accuracy of six representative models from an average F1 score of 80% to below 20%. The generated adversarial examples were evaluated by humans and are considered semantically similar. Moreover, we discuss defense strategies that mitigate such attacks, and the potential countermeasures that could be deployed in order to improve the sustainability of the proposed scheme.
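The query-saving step can be pictured with the short sketch below, which assumes scikit-learn is available: TF-IDF is computed over the correctly classified sentences of one relation label and the highest-weighted tokens become candidates for character- or word-level perturbation. The function names and the simple character swap are illustrative, not the authors' exact procedure.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_tfidf_tokens(sentences, k=10):
    # sentences: correctly classified sentences sharing one relation label
    vec = TfidfVectorizer()
    weights = vec.fit_transform(sentences)              # (n_sentences, vocab_size)
    mean_w = np.asarray(weights.mean(axis=0)).ravel()   # average TF-IDF weight per token
    vocab = np.array(vec.get_feature_names_out())
    return list(vocab[np.argsort(mean_w)[::-1][:k]])    # k most influential tokens

def char_swap(word):
    # toy character-level perturbation: swap two interior characters
    return word if len(word) < 4 else word[0] + word[2] + word[1] + word[3:]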
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Chenwei, Hengwei Zhang, Bo Yang and Jindong Wang. "Image classification adversarial attack with improved resizing transformation and ensemble models". PeerJ Computer Science 9 (25.07.2023): e1475. http://dx.doi.org/10.7717/peerj-cs.1475.

Full text source
Abstract:
Convolutional neural networks have achieved great success in computer vision, but they can output incorrect predictions when intentional perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property makes them useful for evaluating network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in a black-box attack the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from data augmentation for training generalizable neural networks, and proposes a resizing-invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method performs better than other baseline methods, including the original model augmentation method, and the black-box attack success rate is improved on both normal models and defense models.
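A minimal PyTorch-style sketch of the general idea — random resizing as an input transformation plus gradient averaging over an ensemble — is given below. It is not the paper's improved resizing transformation; the scale range, padding scheme, and loss are illustrative assumptions.

import random
import torch
import torch.nn.functional as F

def random_resize_pad(x, low=0.9, high=1.0):
    # resize to a random scale, then zero-pad back to the original spatial size
    _, _, h, w = x.shape
    nh, nw = int(h * random.uniform(low, high)), int(w * random.uniform(low, high))
    xr = F.interpolate(x, size=(nh, nw), mode="bilinear", align_corners=False)
    top, left = random.randint(0, h - nh), random.randint(0, w - nw)
    return F.pad(xr, (left, w - nw - left, top, h - nh - top))

def ensemble_gradient(models, x, y):
    # average the loss gradient over transformed inputs and several source models
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(random_resize_pad(x)), y) for m in models) / len(models)
    return torch.autograd.grad(loss, x)[0]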
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Gengyou, Zhisong Pan, Xingyu Zhou, Yexin Duan, Wei Bai, Dazhi Zhan, Leqian Zhu, Gaoqiang Zhao and Tao Li. "Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images". Remote Sensing 15, no. 10 (22.05.2023): 2699. http://dx.doi.org/10.3390/rs15102699.

Full text source
Abstract:
Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are designed for white-box situations and work by end-to-end means, which are often difficult to realize in real-world situations. This article proposes a novel black-box targeted attack method, called Shallow-Feature Attack (SFA). Specifically, SFA assumes that the shallow features of the model are more capable of reflecting spatial and semantic information such as target contours and textures in the image. The proposed SFA generates ghost data packages for input images and extracts critical features from the gradients and feature maps at shallow layers of the model. The feature-level loss is then constructed using the critical features from both clean images and target images, and is combined with the end-to-end loss to form a hybrid loss function. By fitting the critical features of the input image at specific shallow layers of the neural network to the target critical features, our attack method generates more powerful and transferable adversarial examples. Experimental results show that the adversarial examples generated by the SFA attack method improved the success rate of single-model attacks in a black-box scenario by an average of 3.73%, and by 4.61% when combined with ensemble-model attacks that exclude the victim models.
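The hybrid loss can be pictured with the sketch below, which registers a forward hook on an assumed shallow layer and adds a feature-matching term to the targeted end-to-end loss. The layer choice, the plain MSE feature term, and the weighting are simplifications of SFA's critical-feature construction.

import torch
import torch.nn.functional as F

def sfa_style_hybrid_loss(model, shallow_layer, x_adv, x_target, y_target, lam=1.0):
    feats = {}
    hook = shallow_layer.register_forward_hook(lambda m, i, o: feats.update(f=o))
    logits_adv = model(x_adv)
    f_adv = feats["f"]
    with torch.no_grad():
        model(x_target)                       # the target image defines the target features
        f_target = feats["f"]
    hook.remove()
    feature_loss = F.mse_loss(f_adv, f_target)        # pull shallow features toward the target's
    end_to_end_loss = F.cross_entropy(logits_adv, y_target)
    return end_to_end_loss + lam * feature_loss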
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Chao, and Yu Wang. "Research on the Structure of Authentication Protocol Analysis Based on MSCs/Promela". Advanced Materials Research 989-994 (July 2014): 4698–703. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4698.

Full text source
Abstract:
To discover existing or potential vulnerabilities in present authentication protocols, we propose a structure for authentication protocol analysis whose three main functional components are white-box analysis, black-box analysis, and an indicator system. White-box analysis makes use of the transformation from MSCs (Message Sequence Charts) to Promela (Process Meta Language), the input language of the well-known model checker SPIN; black-box analysis is based on an attack platform for authentication protocol analysis; the indicator system is determined by deducibility-constraint methods. Compared with Promela-based UML modeling methods, the MSC2Promela method developed here offers more advantages than disadvantages. Finally, we propose a structure for authentication protocol analysis based on the MSC2Promela transformation, which provides a basis for later work in the area of authentication protocol analysis.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Yue, Seong-Yoon Shin, Xujie Tan and Bin Xiong. "A Self-Adaptive Approximated-Gradient-Simulation Method for Black-Box Adversarial Sample Generation". Applied Sciences 13, no. 3 (18.01.2023): 1298. http://dx.doi.org/10.3390/app13031298.

Full text source
Abstract:
Deep neural networks (DNNs) have been applied successfully to a wide range of everyday tasks. However, DNNs are sensitive to adversarial attacks, which can easily alter the output by adding imperceptible perturbation samples to an original image. In state-of-the-art white-box attack methods, perturbation samples can successfully fool DNNs through the network gradient; however, they generate perturbation samples by considering only the sign information of the gradient and dropping the magnitude. Accordingly, gradients of different magnitudes may adopt the same sign to construct perturbation samples, resulting in inefficiency. Unfortunately, it is often impractical to acquire the gradient in real-world scenarios. Consequently, we propose a self-adaptive approximated-gradient-simulation method for black-box adversarial attacks (SAGM) to generate efficient perturbation samples. Our proposed method uses knowledge-based differential evolution to simulate gradients and the self-adaptive momentum gradient to generate adversarial samples. To estimate the efficiency of the proposed SAGM, a series of experiments were carried out on two datasets, namely MNIST and CIFAR-10. Compared to state-of-the-art attack techniques, our proposed method can quickly and efficiently search for perturbation samples that cause the original samples to be misclassified. The results reveal that SAGM is an effective and efficient technique for generating perturbation samples.
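A minimal sketch of a differential-evolution search for a black-box perturbation is shown below, in the spirit of (but much simpler than) SAGM's knowledge-based gradient simulation; the population size, mutation factor, crossover rate, and score function are illustrative assumptions.

import numpy as np

def de_black_box(score_fn, shape, eps=0.05, pop=20, gens=50, mut=0.5, cr=0.7, rng=None):
    # score_fn(delta) -> loss of the target model on x + delta (higher = closer to misclassification)
    rng = rng or np.random.default_rng(0)
    P = rng.uniform(-eps, eps, size=(pop,) + shape)
    scores = np.array([score_fn(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + mut * (b - c), -eps, eps)   # mutation
            mask = rng.random(shape) < cr                   # crossover
            trial = np.where(mask, trial, P[i])
            s = score_fn(trial)
            if s > scores[i]:                               # greedy selection
                P[i], scores[i] = trial, s
    return P[np.argmax(scores)]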
APA, Harvard, Vancouver, ISO, and other styles
43

Guo, Lu, and Hua Zhang. "A white-box impersonation attack on the FaceID system in the real world". Journal of Physics: Conference Series 1651 (November 2020): 012037. http://dx.doi.org/10.1088/1742-6596/1651/1/012037.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
44

Shi, Yang, Qin Liu and Qinpei Zhao. "A Secure Implementation of a Symmetric Encryption Algorithm in White-Box Attack Contexts". Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/431794.

Full text source
Abstract:
In a white-box context, an adversary has total visibility of the implementation of the cryptosystem and full control over its execution platform. As a countermeasure against the threat of key compromise in this context, a new secure implementation of the symmetric encryption algorithm SHARK is proposed. The general approach is to merge several steps of the round function of SHARK into table lookups, blended by randomly generated mixing bijections. We prove the soundness of the implementation of the algorithm and analyze its security and efficiency. The implementation can be used in web hosts, digital rights management devices, and mobile devices such as tablets and smartphones. We explain how the design approach can be adapted to other symmetric encryption algorithms with slight modifications.
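The core white-box technique the paper builds on — merging an operation into a lookup table that is wrapped in randomly generated bijections so that unencoded values never appear at run time — can be sketched as follows with a toy 8-bit S-box; SHARK's actual round function and the paper's soundness proof are of course far more involved.

import random

def random_bijection(n=256, seed=None):
    rnd = random.Random(seed)
    perm = list(range(n))
    rnd.shuffle(perm)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

def encoded_table(sbox, in_enc_inv, out_enc):
    # T[x'] = g(S(f^-1(x'))): the table only ever sees encoded values
    return [out_enc[sbox[in_enc_inv[x]]] for x in range(256)]

# toy S-box for illustration (a real design would use SHARK's components)
sbox = list(range(256))
random.Random(1).shuffle(sbox)
f, f_inv = random_bijection(seed=2)       # input encoding f and its inverse
g, g_inv = random_bijection(seed=3)       # output encoding g and its inverse
T = encoded_table(sbox, f_inv, g)
x = 0x3a
assert g_inv[T[f[x]]] == sbox[x]          # composing the encodings recovers S(x)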
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Zhenpeng, Ruilin Li, Dewei Miao, Lele Ren and Yonggang Zhao. "Membership Inference Defense in Distributed Federated Learning Based on Gradient Differential Privacy and Trust Domain Division Mechanisms". Security and Communication Networks 2022 (14.07.2022): 1–14. http://dx.doi.org/10.1155/2022/1615476.

Full text source
Abstract:
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they remember information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model based on generative adversarial networks (GAN) and membership inference attacks. Malicious participants (attackers) use the attack model to reconstruct training sets of other regular participants without any negative impact on the global model. To address this problem, we apply differential privacy to the training process of the model, which effectively reduces the accuracy of membership inference attacks by clipping the gradient and adding noise to it. In addition, we manage the participants hierarchically through trust-domain division to alleviate the performance degradation of the model caused by differential privacy processing. Experimental results show that in distributed federated learning, our scheme can effectively defend against membership inference attacks in white-box scenarios and maintain the usability of the global model, realizing an effective trade-off between privacy and usability.
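The defence's core step — clipping the gradient and adding noise before the local update is shared — can be sketched in PyTorch as below. The clipping bound and noise multiplier are illustrative assumptions, and per-example clipping as well as the trust-domain management described in the paper are omitted.

import torch

def dp_step(model, loss, optimizer, clip_norm=1.0, noise_multiplier=1.0):
    optimizer.zero_grad()
    loss.backward()
    # clip the total gradient norm to bound each participant's contribution
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            # Gaussian noise calibrated to the clipping bound
            p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
    optimizer.step()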
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Fangwei, Yuanyuan Lu, Changguang Wang and Qingru Li. "Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection". Wireless Communications and Mobile Computing 2021 (30.08.2021): 1–9. http://dx.doi.org/10.1155/2021/8736946.

Full text source
Abstract:
5G is about to open a Pandora's box of security threats to the Internet of Things (IoT). Key technologies introduced by the 5G network, such as network function virtualization and edge computing, bring new security threats and risks to the Internet infrastructure, so stronger malware detection and defense are required. Nowadays, deep learning (DL) is widely used in malware detection, and recent research has demonstrated that adversarial attacks pose a hazard to DL-based models. The key to enhancing the attack resistance of malware detection systems is generating effective adversarial samples. However, many existing methods for generating adversarial samples rely on manual feature extraction or white-box models, which makes them inapplicable in real scenarios. This paper presents an effective binary manipulation-based attack framework that generates adversarial samples with an evolutionary learning algorithm. The framework chooses appropriate action sequences to modify malicious samples so that the modified malware can circumvent the detection system. The evolutionary algorithm adaptively simplifies the modification actions and makes the adversarial samples more targeted. Our approach efficiently generates adversarial samples without human intervention, and the generated samples can effectively defeat DL-based malware detection models while preserving the executability and malicious behavior of the original malware samples. We apply the generated adversarial samples to attack the detection engines of VirusTotal. Experimental results illustrate that the adversarial samples generated by our method reach an evasion success rate of 47.8%, which outperforms other attack methods. By adding adversarial samples to the training process, the MalConv network is retrained, and we show that its detection accuracy improves by 10.3%.
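A much-simplified sketch of the action-sequence idea follows: candidate modifications that leave the malicious payload untouched (here only bytes appended after the end of the file) are kept whenever they lower the detector's malice score. The detector interface, the two toy actions, and the greedy selection stand in for the paper's evolutionary learning algorithm.

import random

ACTIONS = [
    lambda b: b + bytes(random.randint(64, 512)),            # append zero padding to the overlay
    lambda b: b + bytes(random.choices(range(256), k=256)),  # append random benign-looking bytes
]

def evade(score_fn, malware_bytes, budget=50, threshold=0.5):
    # score_fn(raw_bytes) -> malice probability reported by the black-box detector
    best, best_score = malware_bytes, score_fn(malware_bytes)
    for _ in range(budget):
        candidate = random.choice(ACTIONS)(best)
        s = score_fn(candidate)
        if s < best_score:                 # keep modifications that reduce the score
            best, best_score = candidate, s
        if best_score < threshold:         # detector now labels the sample benign
            break
    return best, best_score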
APA, Harvard, Vancouver, ISO, and other styles
47

Mao, Junjie, Bin Weng, Tianqiang Huang, Feng Ye and Liqing Huang. "Research on Multimodality Face Antispoofing Model Based on Adversarial Attacks". Security and Communication Networks 2021 (9.08.2021): 1–12. http://dx.doi.org/10.1155/2021/3670339.

Full text source
Abstract:
Face antispoofing detection aims to identify whether a user's face identity information is legitimate. Multimodality models generally have high accuracy; however, existing work on face antispoofing pays insufficient attention to the security of the models themselves. The purpose of this paper is therefore to explore the vulnerability of existing face antispoofing models, especially multimodality models, when resisting various types of attacks. We first study, from the perspective of adversarial examples, how well multimodality models resist white-box and black-box attacks. We then propose a new method that combines mixed adversarial training with differentiable high-frequency suppression modules to effectively improve model safety. Experimental results show that the accuracy of the multimodality face antispoofing model drops from over 90% to about 10% when it is attacked by adversarial examples. However, after applying the proposed defence method, the model still maintains more than 90% accuracy on original examples and reaches more than 80% accuracy on attack examples.
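One way to picture a differentiable high-frequency suppression module is the FFT low-pass sketch below; the cutoff ratio is an illustrative assumption, and the paper's actual module and mixed adversarial training loop are not reproduced here.

import torch

def suppress_high_freq(x, keep_ratio=0.25):
    # x: (N, C, H, W); zero out frequencies outside a centred low-pass box
    X = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    _, _, h, w = x.shape
    mask = torch.zeros(h, w, device=x.device)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    X_low = X * mask                                    # keep only low frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(X_low, dim=(-2, -1))).real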
APA, Harvard, Vancouver, ISO, and other styles
48

Suri, Anshuman, and David Evans. "Formalizing and Estimating Distribution Inference Risks". Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 528–51. http://dx.doi.org/10.56553/popets-2022-0121.

Full text source
Abstract:
Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning—namely, to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal definition of distribution inference attacks general enough to describe a broad class of attacks distinguishing between possible training distributions. We show how our definition captures previous ratio-based inference attacks as well as new kinds of attack including revealing the average node degree or clustering coefficient of training graphs. To understand distribution inference risks, we introduce a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary. We report on a series of experiments across a range of different distributions using both novel black-box attacks and improved versions of the state-of-the-art white-box attacks. Our results show that inexpensive attacks are often as effective as expensive meta-classifier attacks, and that there are surprising asymmetries in the effectiveness of attacks.
APA, Harvard, Vancouver, ISO, and other styles
49

Hwang, Ren-Hung, Jia-You Lin, Sun-Ying Hsieh, Hsuan-Yu Lin and Chia-Liang Lin. "Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks". Sensors 23, no. 2 (11.01.2023): 853. http://dx.doi.org/10.3390/s23020853.

Full text source
Abstract:
Deep learning technology has developed rapidly in recent years and has been successfully applied in many fields, including face recognition. Face recognition is used in many scenarios nowadays, including security control systems, access control management, health and safety management, employee attendance monitoring, automatic border control, and face scan payment. However, deep learning models are vulnerable to adversarial attacks conducted by perturbing probe images to generate adversarial examples, or using adversarial patches to generate well-designed perturbations in specific regions of the image. Most previous studies on adversarial attacks assume that the attacker hacks into the system and knows the architecture and parameters behind the deep learning model. In other words, the attacked model is a white box. However, this scenario is unrepresentative of most real-world adversarial attacks. Consequently, the present study assumes the face recognition system to be a black box, over which the attacker has no control. A Generative Adversarial Network method is proposed for generating adversarial patches to carry out dodging and impersonation attacks on the targeted face recognition system. The experimental results show that the proposed method yields a higher attack success rate than previous works.
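A minimal sketch of the patch-application step and a dodging objective (lowering similarity to the enrolled embedding) is shown below; the face-embedding function, the fixed patch location, and the cosine-similarity loss are illustrative assumptions, and the GAN that generates the patch in the paper is omitted.

import torch
import torch.nn.functional as F

def apply_patch(face, patch, top, left):
    # paste a (C, h, w) patch onto a batch of (N, C, H, W) face images
    out = face.clone()
    _, ph, pw = patch.shape
    out[:, :, top:top + ph, left:left + pw] = patch
    return out

def dodging_loss(embed_fn, face, patch, enrolled_emb, top=40, left=40):
    emb = embed_fn(apply_patch(face, patch, top, left))
    # push the patched face away from the victim's enrolled embedding
    return F.cosine_similarity(emb, enrolled_emb, dim=-1).mean()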
APA, Harvard, Vancouver, ISO, and other styles
50

Sun, Jiazheng, Li Chen, Chenxiao Xia, Da Zhang, Rong Huang, Zhi Qiu, Wenqi Xiong, Jun Zheng and Yu-An Tan. "CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification". Electronics 12, no. 17 (30.08.2023): 3665. http://dx.doi.org/10.3390/electronics12173665.

Full text source
Abstract:
The vulnerability of deep-learning-based image classification models to erroneous conclusions in the presence of small perturbations crafted by attackers has prompted attention to the question of the models’ robustness level. However, the question of how to comprehensively and fairly measure the adversarial robustness of models with different structures and defenses as well as the performance of different attack methods has never been accurately answered. In this work, we present the design, implementation, and evaluation of Canary, a platform that aims to answer this question. Canary uses a common scoring framework that includes 4 dimensions with 26 (sub)metrics for evaluation. First, Canary generates and selects valid adversarial examples and collects metrics data through a series of tests. Then it uses a two-way evaluation strategy to guide the data organization and finally integrates all the data to give the scores for model robustness and attack effectiveness. In this process, we use Item Response Theory (IRT) for the first time to ensure that all the metrics can be fairly calculated into a score that can visually measure the capability. In order to fully demonstrate the effectiveness of Canary, we conducted large-scale testing of 15 representative models trained on the ImageNet dataset using 12 white-box attacks and 12 black-box attacks and came up with a series of in-depth and interesting findings. This further illustrates the capabilities and strengths of Canary as a benchmarking platform. Our paper provides an open-source framework for model robustness evaluation, allowing researchers to perform comprehensive and rapid evaluations of models or attack/defense algorithms, thus inspiring further improvements and greatly benefiting future work.
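For readers unfamiliar with IRT, the two-parameter logistic model commonly used for this kind of scoring looks like the sketch below; the abstract does not state which IRT variant Canary uses, so this is only an illustrative assumption.

import math

def irt_2pl(theta, a, b):
    # probability that a model of latent ability `theta` "passes" an item
    # with discrimination `a` and difficulty `b`
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# example: a fairly robust model (theta = 1.5) on a hard, discriminative item
print(irt_2pl(1.5, a=2.0, b=1.0))   # ~= 0.73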
APA, Harvard, Vancouver, ISO, and other styles
