Academic literature on the topic 'White-box attack'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'White-box attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "White-box attack"

1

Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. For white-box attacks, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performance in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state of the art.
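For orientation, here is a minimal PyTorch sketch of a Frank-Wolfe style white-box attack over an L-infinity ball. It illustrates the general idea described in the abstract rather than the authors' exact algorithm; the names `model`, `x_orig`, `label`, `epsilon`, and `num_steps` are placeholders.

```python
# Minimal sketch (not the authors' code): Frank-Wolfe white-box attack that
# maximizes the loss subject to an L-infinity constraint around the original input.
# Assumes inputs lie in [0, 1] and that `model` returns class logits.
import torch
import torch.nn.functional as F

def frank_wolfe_linf_attack(model, x_orig, label, epsilon=0.03, num_steps=20):
    x = x_orig.clone().detach()
    for t in range(num_steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        grad, = torch.autograd.grad(loss, x)
        # Linear maximization oracle: the maximizer of <v, grad> over the
        # L-infinity ball centered at x_orig is one of its corners.
        v = x_orig + epsilon * grad.sign()
        gamma = 2.0 / (t + 2.0)  # standard diminishing Frank-Wolfe step size
        x = ((1.0 - gamma) * x + gamma * v).clamp(0.0, 1.0).detach()
    return x
```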
2

Josse, Sébastien. "White-box attack context cryptovirology." Journal in Computer Virology 5, no. 4 (August 2, 2008): 321–34. http://dx.doi.org/10.1007/s11416-008-0097-x.

3

Porkodi, V., M. Sivaram, Amin Salih Mohammed, and V. Manikandan. "Survey on White-Box Attacks and Solutions." Asian Journal of Computer Science and Technology 7, no. 3 (November 5, 2018): 28–32. http://dx.doi.org/10.51983/ajcst-2018.7.3.1904.

Abstract:
This research paper surveys white-box attacks and the results obtained from them. Wearable IoT devices can be captured and accessed in an unauthorized manner because of their physical nature. In that case, they operate in a white-box attack setting, in which the opponent may have full visibility of the implementation of the built-in cryptosystem and complete control over its execution platform. Defending wearable devices against white-box attacks is therefore a real challenge. As a countermeasure to these problems in such contexts, we analyze encryption schemes intended to protect the confidentiality of data against white-box attacks. Lightweight encryption aimed at protecting the confidentiality of data against white-box attacks is the most recent well-balanced approach. In this paper, we also review the other relevant algorithms.
4

Alshekh, Mokhtar, and Köksal Erentürk. "Defense against White Box Adversarial Attacks in Arabic Natural Language Processing (ANLP)." International Journal of Advanced Natural Sciences and Engineering Researches 7, no. 6 (July 25, 2023): 151–55. http://dx.doi.org/10.59287/ijanser.1149.

Abstract:
Adversarial attacks are among the biggest threats to the accuracy of classifiers in machine learning systems. This type of attack tricks the classification model into making false predictions by supplying noisy data, where only a human can detect the noise. The risk of attack is high in natural language processing applications because most of the data collected in this case is taken from social networking sites that impose no restrictions on users when writing comments, which allows an attack to be created (either intentionally or unintentionally) easily and simply, affecting the accuracy of the model. In this paper, an MLP model was used for sentiment analysis of texts taken from tweets, the effect of applying a white-box adversarial attack on this classifier was studied, and a technique was proposed to protect it from the attack. After applying the proposed methodology, we found that the adversarial attack decreases the accuracy of the classifier from 55.17% to 11.11%, and that the proposed defense technique raises the accuracy of the classifier to 77.77%; the proposed approach can therefore be adopted against this adversarial attack. Attackers determine their targets strategically and deliberately, depending on the vulnerabilities they have identified. Organizations and individuals mostly try to protect themselves from one occurrence or type of attack. Still, they have to acknowledge that an attacker may easily shift focus to newly uncovered vulnerabilities. Even if someone successfully withstands several attacks, risks remain, and the need to face threats will persist for the foreseeable future.
5

Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks." Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
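To make the substitute-model strategy concrete, here is a minimal sketch of the generic query-and-train loop it relies on; this is not the partial-retraining method proposed in the paper, and `substitute`, `target_query`, and `data` are assumed names.

```python
# Generic substitute-model training sketch: label local data by querying the
# black-box target, then fit the substitute on those labels. Adversarial
# examples crafted on the substitute are later transferred to the target.
import torch
import torch.nn.functional as F

def train_substitute(substitute, target_query, data, epochs=5, lr=1e-3):
    optimizer = torch.optim.Adam(substitute.parameters(), lr=lr)
    with torch.no_grad():
        labels = target_query(data)  # every call here consumes queries
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(substitute(data), labels)
        loss.backward()
        optimizer.step()
    return substitute
```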
6

Zhou, Jie, Jian Bai, and Meng Shan Jiang. "White-Box Implementation of ECDSA Based on the Cloud Plus Side Mode." Security and Communication Networks 2020 (November 19, 2020): 1–10. http://dx.doi.org/10.1155/2020/8881116.

Abstract:
White-box attack context assumes that the running environments of algorithms are visible and modifiable. Algorithms that can resist the white-box attack context are called white-box cryptography. The elliptic curve digital signature algorithm (ECDSA) is one of the most widely used digital signature algorithms which can provide integrity, authenticity, and nonrepudiation. Since the private key in the classical ECDSA is plaintext, it is easy for attackers to obtain the private key. To increase the security of the private key under the white-box attack context, this article presents an algorithm for the white-box implementation of ECDSA. It uses the lookup table technology and the “cloud plus side” mode to protect the private key. The residue number system (RNS) theory is used to reduce the size of storage. Moreover, the article analyzes the security of the proposed algorithm against an exhaustive search attack, a random number attack, a code lifting attack, and so on. The efficiency of the proposed scheme is compared with that of the classical ECDSA through experiments.
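As background for the storage-reduction step, the sketch below shows residue number system (RNS) encoding and Chinese Remainder Theorem decoding; the moduli are hypothetical examples, and the snippet is illustrative only, not the paper's white-box ECDSA construction.

```python
# Illustrative RNS encode/decode (requires Python 3.8+ for math.prod and pow(x, -1, m)).
from math import prod

MODULI = [233, 239, 241, 251]  # example pairwise co-prime moduli

def to_rns(x, moduli=MODULI):
    """Represent x by its residues modulo each modulus."""
    return [x % m for m in moduli]

def from_rns(residues, moduli=MODULI):
    """Recover x (mod the product of the moduli) via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse of Mi mod m
    return x % M
```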
7

Lin, Ting-Ting, and Xue-Jia Lai. "Efficient Attack to White-Box SMS4 Implementation." Journal of Software 24, no. 8 (January 17, 2014): 2238–49. http://dx.doi.org/10.3724/sp.j.1001.2013.04356.

8

Zhang, Sicheng, Yun Lin, Zhida Bao, and Jiangzhi Fu. "A Lightweight Modulation Classification Network Resisting White Box Gradient Attacks." Security and Communication Networks 2021 (October 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8921485.

Abstract:
Improving the attack resistance of the modulation classification model is an important means to improve the security of the physical layer of the Internet of Things (IoT). In this paper, a binary modulation classification defense network (BMCDN) was proposed, which has the advantages of small model scale and strong immunity to white box gradient attacks. Specifically, an end-to-end modulation signal recognition network that directly recognizes the form of the signal sequence is constructed, and its parameters are quantized to 1 bit to obtain the advantages of low memory usage and fast calculation speed. The gradient of the quantized parameter is directly transferred to the original parameter to realize the gradient concealment and achieve the effect of effectively defending against the white box gradient attack. Experimental results show that BMCDN obtains significant immune performance against white box gradient attacks while achieving a scale reduction of 6 times.
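The gradient-concealment mechanism described here is essentially a straight-through estimator for 1-bit weights. A minimal PyTorch sketch of that building block (not the BMCDN architecture itself) follows; all class names are illustrative.

```python
# Straight-through estimator sketch: binarize weights in the forward pass and
# pass the incoming gradient directly to the real-valued parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)   # 1-bit weights in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output     # gradient is transferred to the original parameters

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)
```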
9

Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation." Applied Sciences 9, no. 11 (June 3, 2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. We call the ways to generate these perturbations adversarial attacks, which comprise two types, black-box and white-box attacks, according to the adversaries' access to target models. To overcome black-box attackers' lack of access to the internals of the target DNN, many researchers have put forward a series of strategies. Previous works include a method of training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation to make the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common datasets for image classification, demonstrate the effectiveness and efficiency of our approach in boosting a targeted black-box attack, and we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%.
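For readers unfamiliar with the Jacobian-based augmentation this abstract builds on, here is a minimal sketch of one augmentation round. It shows the generic Papernot-style step, not the authors' improved linear augmentation; all names are placeholders.

```python
# One round of Jacobian-based dataset augmentation: step each sample in the
# direction that most increases the substitute's score for its assigned label,
# so new queries probe the target model near its decision boundary.
import torch

def jacobian_augment(substitute, x, labels, lam=0.1):
    x = x.clone().detach().requires_grad_(True)
    score = substitute(x).gather(1, labels.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x)
    x_new = (x + lam * grad.sign()).clamp(0.0, 1.0).detach()
    return torch.cat([x.detach(), x_new])  # enlarged substitute training set
```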
10

Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models." Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of being abused by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method, in view of the huge threat that adversarial examples pose to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can achieve a 99% attack success rate on score-only black-box models, which is competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% and guarantees 98% detection accuracy of the forensics models.
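As an illustration of how a score-only black-box attack can proceed without gradients, the sketch below estimates a gradient from output scores alone (an NES-style estimator). It is a generic building block, not the ensemble method proposed in the paper; `score_fn` is an assumed name for the black-box scoring function.

```python
# Antithetic NES gradient estimate computed from model output scores only.
import torch

def nes_gradient(score_fn, x, sigma=1e-3, n_samples=50):
    """`score_fn` maps an input tensor to the scalar score returned by the black box."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (score_fn(x + sigma * u) - score_fn(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)
```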

Dissertations / Theses on the topic "White-box attack"

1

Koski, Alexander. "Randomly perturbing the bytecode of white box cryptography implementations in an attempt to mitigate side-channel attacks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273928.

Abstract:
This study takes one step further towards constructing a tool able to automatically strengthen the security of cryptographic implementations. In white-box cryptography the encryption key is hidden inside the encryption algorithm, out of plain sight. An attacker can try to extract the secret key by conducting a side-channel attack, differential computation analysis, to which many white boxes are vulnerable. The technique to increase security explored in this study consists of randomly perturbing the white box, with different probabilities, by adding the value one to a variable inside the running white box. This breaks the correctness of the output of all three tested white-box implementations to various extents, but some perturbations can be made that maintain fairly high correctness of the program's output. Running a white box with perturbations does not cause any significant increase in execution time. Out of more than 100 000 possible perturbation points, 25 were chosen for further investigation. In one case the security of a perturbed white box increased, but in four similar cases the white box was made more insecure; otherwise no change in security was observed. A more sophisticated technique for identifying the best points at which to insert perturbations is therefore required in order to further investigate how to increase the security of cryptographic implementations while still maintaining fairly high correctness despite the program experiencing random perturbations.
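The perturbation primitive studied here is simple enough to sketch directly. The toy function below (names illustrative, not taken from the thesis) adds one to an intermediate byte value with a given probability, which is the kind of random fault the study injects into running white-box implementations.

```python
import random

def maybe_perturb(value, p):
    """With probability p, add one to an intermediate byte value (mod 256)."""
    if random.random() < p:
        return (value + 1) % 256
    return value
```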
2

Marion, Damien. "Multidimensionality of the models and the data in the side-channel domain." Thesis, Paris, ENST, 2018. http://www.theses.fr/2018ENST0056/document.

Abstract:
Since the publication in 1999 of the seminal paper of Paul C. Kocher, Joshua Jaffe and Benjamin Jun, entitled "Differential Power Analysis", side-channel attacks have proved to be efficient ways to attack cryptographic algorithms. Indeed, it has been shown that information extracted from side channels such as the execution time, the power consumption or the electromagnetic emanations can be used to recover secret keys. In this context, we propose, first, to treat the problem of dimensionality reduction. Over the past twenty years, the complexity and the size of the data extracted from side channels have not stopped growing, and reducing the dimension of these data decreases the time and increases the efficiency of the attacks. The proposed dimension-reduction methods handle complex leakage models and data of any dimension. Second, a software leakage assessment methodology is proposed; it is based on the analysis of all the data manipulated during the execution of the software under evaluation. The proposed methodology provides features that speed up and increase the efficiency of the analysis, especially in the context of evaluating white-box cryptography implementations.
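As a point of reference for the dimensionality-reduction theme, the sketch below applies a standard PCA projection to a matrix of side-channel traces; the thesis develops more general multidimensional methods, and the data and shapes here are hypothetical.

```python
# Baseline trace compression: project raw side-channel traces onto a few
# principal components before running the attack or leakage assessment.
import numpy as np
from sklearn.decomposition import PCA

traces = np.random.randn(1000, 5000)                    # 1000 acquisitions x 5000 time samples (synthetic)
reduced = PCA(n_components=20).fit_transform(traces)    # result has shape (1000, 20)
```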
3

Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.

Abstract:
Deep learning models have shown impressive performance across a wide spectrum of computer vision applications, including medical diagnosis and autonomous driving. One of the major concerns that these models face is their susceptibility to adversarial samples: samples with small, crafted noise designed to manipulate the model’s prediction. A defense mechanism named Adversarial Training (AT) shows promising results against these attacks. This training regime augments mini-batches with adversaries. However, to scale this training to large networks and datasets, fast and simple methods (e.g., single-step methods such as Fast Gradient Sign Method (FGSM)), are essential for generating these adversaries. But, single-step adversarial training (e.g., FGSM adversarial training) converges to a degenerate minimum, where the model merely appears to be robust. As a result, models are vulnerable to simple black-box attacks. In this thesis, we explore the following aspects of adversarial training: Failure of Single-step Adversarial Training: In the first part of the thesis, we will demonstrate that the pseudo robustness of an adversarially trained model is due to the limitations in the existing evaluation procedure. Further, we introduce novel variants of white-box and black-box attacks, dubbed “gray-box adversarial attacks”, based on which we propose a novel evaluation method to assess the robustness of the learned models. A novel variant of adversarial training named “Gray-box Adversarial Training” that uses intermediate versions of the model to seed the adversaries is proposed to improve the model’s robustness. Regularizers for Single-step Adversarial Training: In this part of the thesis, we will discuss various regularizers that could help to learn robust models using single-step adversarial training methods. (i) Regularizer that enforces logits for FGSM and I-FGSM (iterative-FGSM) of a clean sample, to be similar (imposed on only one pair of an adversarial sample in a mini-batch), (ii) Regularizer that enforces logits for FGSM and R-FGSM (Random+FGSM) of a clean sample, to be similar, (iii) Monotonic loss constraint: Enforces the loss to increase monotonically with an increase in the perturbation size of the FGSM attack, and (iv) Dropout with decaying dropout probability: Introduces dropout layer with decaying dropout probability, after each nonlinear layer of a network. Incorporating Domain Knowledge to Improve Model’s Adversarial Robustness: In this final part of the thesis, we show that the existing normal training method fails to incorporate domain knowledge into the learned feature representation of the network. Further, we show that incorporating domain knowledge into the learned feature representation of the network results in a significant improvement in the robustness of the network against adversarial attacks, within normal training regime.
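Since the thesis revolves around single-step adversarial training, a minimal sketch of FGSM-based adversarial training is included below for orientation. It shows only the generic training regime, not the gray-box variants or regularizers proposed in the thesis; `model`, `loader`, and `optimizer` are assumed to be supplied by the caller.

```python
# Single-step (FGSM) adversarial training sketch: every mini-batch is augmented
# with its FGSM adversaries before the parameter update. Inputs assumed in [0, 1].
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def fgsm_adversarial_training_epoch(model, loader, optimizer, eps=8 / 255):
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```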

Book chapters on the topic "White-box attack"

1

Cai, Jinghui, Boyang Wang, Xiangfeng Wang, and Bo Jin. "Accelerate Black-Box Attack with White-Box Prior Knowledge." In Intelligence Science and Big Data Engineering. Big Data and Machine Learning, 394–405. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36204-1_33.

2

Meng, Lubin, Chin-Teng Lin, Tzyy-Ping Jung, and Dongrui Wu. "White-Box Target Attack for EEG-Based BCI Regression Problems." In Neural Information Processing, 476–88. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_39.

3

Zeyad, Mohamed, Houssem Maghrebi, Davide Alessio, and Boris Batteux. "Another Look on Bucketing Attack to Defeat White-Box Implementations." In Constructive Side-Channel Analysis and Secure Design, 99–117. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16350-1_7.

4

Lucas, Keane, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre. "Deceiving ML-Based Friend-or-Foe Identification for Executables." In Advances in Information Security, 217–49. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16613-6_10.

Abstract:
Deceiving an adversary who may, e.g., attempt to reconnoiter a system before launching an attack, typically involves changing the system's behavior such that it deceives the attacker while still permitting the system to perform its intended function. We develop techniques to achieve such deception by studying a proxy problem: malware detection. Researchers and anti-virus vendors have proposed DNNs for malware detection from raw bytes that do not require manual feature engineering. In this work, we propose an attack that interweaves binary-diversification techniques and optimization frameworks to mislead such DNNs while preserving the functionality of binaries. Unlike prior attacks, ours manipulates instructions that are a functional part of the binary, which makes it particularly challenging to defend against. We evaluated our attack against three DNNs in white- and black-box settings and found that it often achieved success rates near 100%. Moreover, we found that our attack can fool some commercial anti-viruses, in certain cases with a success rate of 85%. We explored several defenses, both new and old, and identified some that can foil over 80% of our evasion attempts. However, these defenses may still be susceptible to evasion by attacks, and so we advocate for augmenting malware-detection systems with methods that do not rely on machine learning.
5

Amadori, Alessandro, Wil Michiels, and Peter Roelse. "A DFA Attack on White-Box Implementations of AES with External Encodings." In Lecture Notes in Computer Science, 591–617. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38471-5_24.

6

Liu, Haohan, Xingquan Zuo, Hai Huang, and Xing Wan. "Saliency Map-Based Local White-Box Adversarial Attack Against Deep Neural Networks." In Artificial Intelligence, 3–14. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20500-2_1.

7

Alpirez Bock, Estuardo, Chris Brzuska, Wil Michiels, and Alexander Treff. "On the Ineffectiveness of Internal Encodings - Revisiting the DCA Attack on White-Box Cryptography." In Applied Cryptography and Network Security, 103–20. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93387-0_6.

8

Biryukov, Alex, and Aleksei Udovenko. "Attacks and Countermeasures for White-box Designs." In Lecture Notes in Computer Science, 373–402. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03329-3_13.

9

Lepoint, Tancrède, Matthieu Rivain, Yoni De Mulder, Peter Roelse, and Bart Preneel. "Two Attacks on a White-Box AES Implementation." In Selected Areas in Cryptography -- SAC 2013, 265–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-43414-7_14.

10

Biryukov, Alex, and Aleksei Udovenko. "Dummy Shuffling Against Algebraic Attacks in White-Box Implementations." In Lecture Notes in Computer Science, 219–48. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77886-6_8.


Conference papers on the topic "White-box attack"

1

Mesbah, Abdelhak, Mohamed Mezghiche, and Jean-Louis Lanet. "Persistent fault injection attack from white-box to black-box." In 2017 5th International Conference on Electrical Engineering - Boumerdes (ICEE-B). IEEE, 2017. http://dx.doi.org/10.1109/icee-b.2017.8192164.

2

Zhang, Chaoning, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, and In So Kweon. "Investigating Top-k White-Box and Transferable Black-box Attack." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01466.

3

Li, Yufei, Zexin Li, Yingfan Gao, and Cong Liu. "White-Box Multi-Objective Adversarial Attack on Dialogue Generation." In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-long.100.

4

Xiao, Chaowei, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, and Dawn Song. "Generating Adversarial Examples with Adversarial Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/543.

Abstract:
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. Our attack has placed the first with 92.76% accuracy on a public MNIST black-box attack challenge.
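To make the generator objective concrete, here is a hedged sketch of an AdvGAN-style generator loss in the semi-whitebox setting. The untargeted adversarial term and the weights `alpha`, `beta`, and bound `c` are illustrative choices, not the paper's exact formulation; `G`, `D`, and `target_model` are assumed to be PyTorch modules, with `D` ending in a sigmoid.

```python
# AdvGAN-style generator loss sketch: fool the discriminator, push the target
# model off the true label, and keep the generated perturbation small.
import torch
import torch.nn.functional as F

def advgan_generator_loss(G, D, target_model, x, y, c=0.3, alpha=1.0, beta=10.0):
    delta = G(x)                                   # generator outputs a perturbation
    x_adv = torch.clamp(x + delta, 0.0, 1.0)
    d_out = D(x_adv)                               # D outputs probabilities in [0, 1]
    loss_gan = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    loss_adv = -F.cross_entropy(target_model(x_adv), y)       # untargeted variant
    loss_hinge = torch.clamp(delta.flatten(1).norm(dim=1) - c, min=0).mean()
    return loss_adv + alpha * loss_gan + beta * loss_hinge
```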
5

Jia, Yin, TingTing Lin, and Xuejia Lai. "A generic attack against white box implementation of block ciphers." In 2016 International Conference on Computer, Information and Telecommunication Systems (CITS). IEEE, 2016. http://dx.doi.org/10.1109/cits.2016.7546449.

6

Azadmanesh, Maryam, Behrouz Shahgholi Ghahfarokhi, and Maede Ashouri Talouki. "A White-Box Generator Membership Inference Attack Against Generative Models." In 2021 18th International ISC Conference on Information Security and Cryptology (ISCISC). IEEE, 2021. http://dx.doi.org/10.1109/iscisc53448.2021.9720436.

7

Wang, Xiying, Rongrong Ni, Wenjie Li, and Yao Zhao. "Adversarial Attack on Fake-Faces Detectors Under White and Black Box Scenarios." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506273.

8

Betz, Patrick, Christian Meilicke, and Heiner Stuckenschmidt. "Adversarial Explanations for Knowledge Graph Embeddings." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/391.

Abstract:
We propose a novel black-box approach for performing adversarial attacks against knowledge graph embedding models. An adversarial attack is a small perturbation of the data at training time to cause model failure at test time. We make use of an efficient rule learning approach and use abductive reasoning to identify triples which are logical explanations for a particular prediction. The proposed attack is then based on the simple idea to suppress or modify one of the triples in the most confident explanation. Although our attack scheme is model independent and only needs access to the training data, we report results on par with state-of-the-art white-box attack methods that additionally require full access to the model architecture, the learned embeddings, and the loss functions. This is a surprising result which indicates that knowledge graph embedding models can partly be explained post hoc with the help of symbolic methods.
9

Amadori, Alessandro, Wil Michiels, and Peter Roelse. "Automating the BGE Attack on White-Box Implementations of AES with External Encodings." In 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin). IEEE, 2020. http://dx.doi.org/10.1109/icce-berlin50680.2020.9352195.

10

Joe, Byunggill, Insik Shin, and Jihun Hamm. "Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/433.

Abstract:
Recurrent models are frequently being used in online tasks such as autonomous driving, and a comprehensive study of their vulnerability is called for. Existing research is limited in generality only addressing application-specific vulnerability or making implausible assumptions such as the knowledge of future input. In this paper, we present a general attack framework for online tasks incorporating the unique constraints of the online setting different from offline tasks. Our framework is versatile in that it covers time-varying adversarial objectives and various optimization constraints, allowing for a comprehensive study of robustness. Using the framework, we also present a novel white-box attack called Predictive Attack that `hallucinates' the future. The attack achieves 98 percent of the performance of the ideal but infeasible clairvoyant attack on average. We validate the effectiveness of the proposed framework and attacks through various experiments.