Ready-made bibliography on the topic "Black-box attack"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Black-box attack".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the relevant parameters are available in the work's metadata.

Journal articles on the topic "Black-box attack"

1. Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract:
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. For white-box attacks, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performance in both white-box and black-box attacks among all baselines, and are more time- and query-efficient than the state-of-the-art.
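
To make the Frank-Wolfe mechanism referenced above concrete: it replaces the projection step of PGD with a linear maximization oracle over the perturbation set, which keeps iterates away from the boundary of the L∞ ball. Below is a minimal illustrative sketch of one such white-box step, assuming a precomputed gradient `grad` of the attack loss; it is not the authors' implementation.

```python
import numpy as np

def fw_linf_attack_step(x_adv, x_orig, grad, epsilon, step_size):
    """One Frank-Wolfe step that maximizes an attack loss inside the
    L-infinity ball of radius epsilon around x_orig (illustrative sketch)."""
    # Linear maximization oracle: the vertex of the L-inf ball most aligned
    # with the current gradient has the closed form x_orig + epsilon * sign(grad).
    vertex = x_orig + epsilon * np.sign(grad)
    # A convex combination keeps the iterate inside the feasible set instead
    # of pushing it onto the boundary, unlike a projected gradient step.
    x_next = (1.0 - step_size) * x_adv + step_size * vertex
    return np.clip(x_next, 0.0, 1.0)  # keep pixels in a valid range
```

A decaying step size on the order of 1/√T is the usual ingredient behind the O(1/√T) rate mentioned in the abstract.
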
2. Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models". Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract:
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of being abused by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method, in view of the huge threat that adversarial examples pose to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can achieve a 99% attack success rate on score-only black-box models, which is competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% and guarantees 98% detection accuracy of the forensics models.
3. Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks". Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract:
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white-box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model's responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on the current attack. Furthermore, it does not use queries in the pre-training phase and issues queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success rate against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
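
The substitute-model strategy summarized above follows a generic query-label-train loop; a rough sketch of that loop is given below. The function and model names are placeholders, and the paper's partial-retraining and query-saving refinements are not reproduced here, so this should be read as the baseline the paper improves upon.

```python
import numpy as np

def train_substitute(substitute, query_target, seed_inputs, augment, rounds=3):
    """Generic substitute-model loop for black-box attacks (illustrative only).

    substitute  : local model exposing fit(X, y)
    query_target: callable returning the black-box model's labels (each call costs queries)
    seed_inputs : initial unlabeled inputs available to the attacker
    augment     : callable producing new inputs near the substitute's decision boundary
    """
    data = np.asarray(seed_inputs)
    for _ in range(rounds):
        labels = query_target(data)        # every call here spends query budget
        substitute.fit(data, labels)       # emulate the target's decision boundary
        data = np.concatenate([data, augment(substitute, data)], axis=0)
    return substitute
```
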
4. Zhao, Pu, Pin-yu Chen, Siyue Wang, and Xue Lin. "Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6909–16. http://dx.doi.org/10.1609/aaai.v34i04.6173.

Abstract:
Despite the great achievements of modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among those, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt the first-order gradient descent method, which may come with certain deficiencies such as relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method to design adversarial attacks, which incorporates the zeroth-order gradient estimation technique catering to the black-box attack scenario and the second-order natural gradient descent to achieve higher query efficiency. The empirical evaluations on image classification datasets demonstrate that ZO-NGD can obtain significantly lower model query complexities compared with state-of-the-art attack methods.
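
The zeroth-order side of ZO-NGD relies on estimating gradients from score queries alone; a common random-direction finite-difference estimator is sketched below. This is illustrative only, and the natural-gradient (Fisher) preconditioning that gives the method its name is not shown.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, x, num_directions=20, mu=1e-3, rng=None):
    """Estimate the gradient of a black-box scalar loss at x using only
    function evaluations (each call to loss_fn costs one model query)."""
    rng = np.random.default_rng() if rng is None else rng
    base = loss_fn(x)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(num_directions):
        u = rng.standard_normal(size=x.shape)          # random probe direction
        grad += (loss_fn(x + mu * u) - base) / mu * u  # forward finite difference
    return grad / num_directions
```
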
5. Duan, Mingxing, Kenli Li, Jiayan Deng, Bin Xiao, and Qi Tian. "A Novel Multi-Sample Generation Method for Adversarial Attacks". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–21. http://dx.doi.org/10.1145/3506852.

Abstract:
Deep learning models are widely used in daily life and bring great convenience, but they are vulnerable to attacks. How to build an attack system with strong generalization ability to test the robustness of deep learning systems is a hot issue in current research, and research on black-box attacks is especially challenging. Most current research on black-box attacks assumes that the input dataset is known. In practice, however, it is difficult to obtain detailed information about those datasets. To address these challenges, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM is mainly composed of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. Firstly, we design a multi-task generation model to learn the distribution of the original dataset. The model first converts an arbitrary signal of a certain distribution into the shared features of the original dataset through deconvolution operations, and then, according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Secondly, the generated samples yield different outputs when the black-box model is queried and the substitute model is trained, and these outputs are used to construct different loss functions for optimizing and updating the generator and the substitute model. Finally, some common white-box attack methods are used to attack the substitute model to generate corresponding adversarial samples, which are utilized to attack the black-box model. We conducted a large number of experiments on the MNIST and CIFAR-10 datasets. The experimental results show that, under the same settings and attack algorithms, MsGM achieves better performance than the base models.
6. Chen, Zhiyu, Jianyu Ding, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu, and Yimu Ji. "An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning". Entropy 24, no. 10 (September 27, 2022): 1377. http://dx.doi.org/10.3390/e24101377.

Abstract:
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the naturally hidden internals of deep neural networks. Such attacks have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings, resulting in incomplete utilization of query information. Our research, based on the newly proposed Simulator Attack, proves for the first time the correctness and usability of feature-layer information in a simulator model obtained by meta-learning. We then propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed, improving query efficiency while maintaining attack performance.
7. Xiang, Fengtao, Jiahui Xu, Wanpeng Zhang, and Weidong Wang. "A Distributed Biased Boundary Attack Method in Black-Box Attack". Applied Sciences 11, no. 21 (November 8, 2021): 10479. http://dx.doi.org/10.3390/app112110479.

Abstract:
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods are quite close to real-world scenarios. Research on black-box attack methods and the generation of adversarial samples helps discover the defects of machine learning models and can strengthen the robustness of machine learning algorithms and models. However, such methods require frequent queries and are therefore less efficient. This paper improves the initial generation of adversarial examples and the search for the most effective ones. In addition, it finds that some indicators can be used to detect attacks, which is a new finding compared with our previous studies. Firstly, the paper proposes an algorithm to generate initial adversarial samples with a smaller L2 norm; secondly, a combination of particle swarm optimization (PSO) and the biased boundary adversarial attack (BBA), called PSO-BBA, is proposed. Experiments are conducted on ImageNet, and PSO-BBA is compared with the baseline method. The experimental comparison confirms that: (1) a distributed framework for adversarial attack methods is proposed; (2) the proposed initial point selection method effectively reduces the number of queries; (3) compared to the original BBA, the proposed PSO-BBA algorithm accelerates convergence and improves attack accuracy; (4) the improved PSO-BBA algorithm performs well on both targeted and non-targeted attacks; (5) the mean structural similarity (MSSIM) can be used as an indicator of adversarial attacks.
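
As a point of reference for the PSO component mentioned above, a bare-bones particle swarm optimization loop looks roughly like the following. This is a generic sketch, not the PSO-BBA algorithm; in this setting the objective would typically score a candidate perturbation, for example its L2 norm subject to still flipping the target model's decision.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=20, iters=50,
                 inertia=0.7, c_personal=1.5, c_global=1.5, seed=0):
    """Minimal particle swarm optimization over a dim-dimensional box
    (generic sketch; not the paper's PSO-BBA method)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Pull each particle toward its personal best and the global best.
        vel = inertia * vel + c_personal * r1 * (pbest - pos) + c_global * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()
```
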
8. Wang, Qiuhua, Hui Yang, Guohua Wu, Kim-Kwang Raymond Choo, Zheng Zhang, Gongxun Miao, and Yizhi Ren. "Black-box adversarial attacks on XSS attack detection model". Computers & Security 113 (February 2022): 102554. http://dx.doi.org/10.1016/j.cose.2021.102554.

9. Wang, Lu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Yuan Jiang. "Spanning attack: reinforce black-box attacks with unlabeled data". Machine Learning 109, no. 12 (October 29, 2020): 2349–68. http://dx.doi.org/10.1007/s10994-020-05916-1.

10. Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation". Applied Sciences 9, no. 11 (June 3, 2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract:
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The ways of generating these perturbations are called adversarial attacks, which come in two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome the problem that black-box attackers cannot reach the internals of the target DNN, many researchers have put forward a series of strategies. Previous works include a method of training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation to make the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common datasets for image classification, demonstrate the effectiveness and efficiency of our boosted targeted black-box attack; we ultimately attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%, respectively.
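
The ensemble idea mentioned in the abstract, applied at attack time, amounts to aggregating gradients from several substitute models before taking a targeted step. A minimal sketch of that generic idea (not the paper's exact training procedure) follows; each entry of `grad_fns` is assumed to return the gradient of a targeted attack loss with respect to the input.

```python
import numpy as np

def ensemble_targeted_fgsm(x, grad_fns, epsilon, weights=None):
    """One targeted FGSM-style step against an ensemble of substitute models
    (illustrative sketch of the generic ensemble-attack idea)."""
    if weights is None:
        weights = np.full(len(grad_fns), 1.0 / len(grad_fns))
    # Average the targeted-loss gradients over all substitutes so the
    # perturbation transfers better to the unseen black-box target.
    g = sum(w * grad_fn(x) for w, grad_fn in zip(weights, grad_fns))
    # Descend the targeted loss (move toward the target class) within an
    # L-infinity budget and keep pixels valid.
    return np.clip(x - epsilon * np.sign(g), 0.0, 1.0)
```
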

Doctoral dissertations on the topic "Black-box attack"

1. Sun, Michael (Michael Z.). "Local approximations of deep learning models for black-box adversarial attacks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121687.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 45-47).
We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.
2. Auernhammer, Katja. "Mask-based Black-box Attacks on Safety-Critical Systems that Use Machine Learning". Doctoral dissertation, supervised by Felix Freiling and Ramin Tavakoli Kolagari; reviewed by Felix Freiling, Ramin Tavakoli Kolagari, and Dominique Schröder. Erlangen: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1238358292/34.


Book chapters on the topic "Black-box attack"

1. Cai, Jinghui, Boyang Wang, Xiangfeng Wang, and Bo Jin. "Accelerate Black-Box Attack with White-Box Prior Knowledge". In Intelligence Science and Big Data Engineering. Big Data and Machine Learning, 394–405. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36204-1_33.

2. Bai, Yang, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, and Weiwei Guo. "Improving Query Efficiency of Black-Box Adversarial Attack". In Computer Vision – ECCV 2020, 101–16. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58595-2_7.

3. Andriushchenko, Maksym, Francesco Croce, Nicolas Flammarion, and Matthias Hein. "Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search". In Computer Vision – ECCV 2020, 484–501. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58592-1_29.

4. Feng, Xinjie, Hongxun Yao, Wenbin Che, and Shengping Zhang. "An Effective Way to Boost Black-Box Adversarial Attack". In MultiMedia Modeling, 393–404. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_32.

5. Huan, Zhaoxin, Yulong Wang, Xiaolu Zhang, Lin Shang, Chilin Fu, and Jun Zhou. "Data-Free Adversarial Perturbations for Practical Black-Box Attack". In Advances in Knowledge Discovery and Data Mining, 127–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47436-2_10.

6. Wang, Tong, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. "An Invisible Black-Box Backdoor Attack Through Frequency Domain". In Lecture Notes in Computer Science, 396–413. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19778-9_23.

7. Pooja, S., and Gilad Gressel. "Towards a General Black-Box Attack on Tabular Datasets". In Lecture Notes in Networks and Systems, 557–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_47.

8. Bayram, Samet, and Kenneth Barner. "A Black-Box Attack on Optical Character Recognition Systems". In Computer Vision and Machine Intelligence, 221–31. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7867-8_18.

9. Yang, Chenglin, Adam Kortylewski, Cihang Xie, Yinzhi Cao, and Alan Yuille. "PatchAttack: A Black-Box Texture-Based Attack with Reinforcement Learning". In Computer Vision – ECCV 2020, 681–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_41.

10. Yuito, Makoto, Kenta Suzuki, and Kazuki Yoneyama. "Query-Efficient Black-Box Adversarial Attack with Random Pattern Noises". In Information and Communications Security, 303–23. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15777-6_17.


Conference abstracts on the topic "Black-box attack"

1. Chen, Pengpeng, Yongqiang Yang, Dingqi Yang, Hailong Sun, Zhijun Chen, and Peng Lin. "Black-Box Data Poisoning Attacks on Crowdsourcing". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/332.

Abstract:
Understanding the vulnerability of label aggregation against data poisoning attacks is key to ensuring data quality in crowdsourced label collection. State-of-the-art attack mechanisms generally assume full knowledge of the aggregation models while failing to consider the flexibility of malicious workers in selecting which instances to label. Such a setup limits the applicability of the attack mechanisms and impedes further improvement of their success rate. This paper introduces a black-box data poisoning attack framework that finds the optimal strategies for instance selection and labeling to attack unknown label aggregation models in crowdsourcing. We formulate the attack problem on top of a generic formalization of label aggregation models and then introduce a substitution approach that attacks a substitute aggregation model in replacement of the unknown model. Through extensive validation on multiple real-world datasets, we demonstrate the effectiveness of both instance selection and model substitution in improving the success rate of attacks.
2. Zhao, Mengchen, Bo An, Wei Gao, and Teng Zhang. "Efficient Label Contamination Attacks Against Black-Box Learning Models". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/551.

Abstract:
Label contamination attack (LCA) is an important type of data poisoning attack where an attacker manipulates the labels of training data to make the learned model beneficial to the attacker. Existing work on LCA assumes that the attacker has full knowledge of the victim learning model, whereas the victim model is usually a black box to the attacker. In this paper, we develop a Projected Gradient Ascent (PGA) algorithm to compute LCAs on a family of empirical risk minimizations and show that an attack on one victim model can also be effective on other victim models. This makes it possible for the attacker to design an attack against a substitute model and transfer it to a black-box victim model. Based on the observation of this transferability, we develop a defense algorithm to identify the data points that are most likely to be attacked. Empirical studies show that PGA significantly outperforms existing baselines and that linear learning models are better substitute models than nonlinear ones.
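
Projected gradient ascent of the kind mentioned above typically operates on a relaxed, continuous encoding of which training labels to flip. A highly simplified step is sketched below; the budget projection shown is a simple top-k heuristic rather than the paper's exact projection, and the gradient of the attacker's objective is assumed to be supplied.

```python
import numpy as np

def pga_label_flip_step(z, attacker_grad, step_size, flip_budget):
    """One projected gradient ascent step on relaxed label-flip indicators
    z in [0, 1]^n (illustrative sketch, not the paper's exact algorithm)."""
    z = np.clip(z + step_size * attacker_grad, 0.0, 1.0)  # ascent step + box projection
    # Respect the flip budget with a simple heuristic: keep only the
    # flip_budget largest entries and zero out the rest.
    if z.sum() > flip_budget:
        keep = np.argsort(z)[-flip_budget:]
        mask = np.zeros_like(z)
        mask[keep] = 1.0
        z = z * mask
    return z
```
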
3. Ji, Yimu, Jianyu Ding, Zhiyu Chen, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, and Shangdong Liu. "Simulator Attack+ for Black-Box Adversarial Attack". In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897950.

4. Zhang, Yihe, Xu Yuan, Jin Li, Jiadong Lou, Li Chen, and Nian-Feng Tzeng. "Reverse Attack: Black-box Attacks on Collaborative Recommendation". In CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3460120.3484805.

5. Wang, Run, Felix Juefei-Xu, Qing Guo, Yihao Huang, Xiaofei Xie, Lei Ma, and Yang Liu. "Amora: Black-box Adversarial Morphing Attack". In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413544.

6. Xiao, Chaowei, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, and Dawn Song. "Generating Adversarial Examples with Adversarial Networks". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/543.

Abstract:
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
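
The core of the approach described above is a feed-forward generator that maps an input to a bounded perturbation in a single pass. A tiny PyTorch stand-in for such a generator is sketched below; it is purely illustrative, and the discriminator, the GAN and attack losses, and the distilled surrogate used in the black-box setting are all omitted.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Minimal AdvGAN-style generator sketch: image in, bounded perturbation
    out, added to the input to form the adversarial example."""

    def __init__(self, channels=3, epsilon=0.03):
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
            nn.Tanh(),                                  # raw output in [-1, 1]
        )

    def forward(self, x):
        delta = self.epsilon * self.net(x)              # bound the perturbation
        return torch.clamp(x + delta, 0.0, 1.0)         # adversarial example
```

Once such a generator is trained, producing an adversarial example is a single forward pass, which is the efficiency advantage the abstract points to.
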
7. Li, Jie, Rongrong Ji, Hong Liu, Jianzhuang Liu, Bineng Zhong, Cheng Deng, and Qi Tian. "Projection & Probability-Driven Black-Box Attack". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00044.

8. Williams, Phoenix, Ke Li, and Geyong Min. "Black-box adversarial attack via overlapped shapes". In GECCO '22: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3520304.3528934.

9. Moraffah, Raha, and Huan Liu. "Query-Efficient Target-Agnostic Black-Box Attack". In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 2022. http://dx.doi.org/10.1109/icdm54844.2022.00047.

10. Mesbah, Abdelhak, Mohamed Mezghiche, and Jean-Louis Lanet. "Persistent fault injection attack from white-box to black-box". In 2017 5th International Conference on Electrical Engineering - Boumerdes (ICEE-B). IEEE, 2017. http://dx.doi.org/10.1109/icee-b.2017.8192164.


Organizational reports on the topic "Black-box attack"

1. Ghosh, Anup, Steve Noel, and Sushil Jajodia. Mapping Attack Paths in Black-Box Networks Through Passive Vulnerability Inference. Fort Belvoir, VA: Defense Technical Information Center, August 2011. http://dx.doi.org/10.21236/ada563714.
