Academic literature on the topic "Black-box attack"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Browse the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Black-box attack".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Black-box attack"

1. Chen, Jinghui, Dongruo Zhou, Jinfeng Yi, and Quanquan Gu. "A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3486–94. http://dx.doi.org/10.1609/aaai.v34i04.5753.

Abstract
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. For white-box attacks, optimization-based attack algorithms such as projected gradient descent (PGD) can achieve relatively high attack success rates within a moderate number of iterations. However, they tend to generate adversarial examples near or upon the boundary of the perturbation set, resulting in large distortion. Furthermore, their corresponding black-box attack algorithms also suffer from high query complexities, thereby limiting their practical usefulness. In this paper, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on a variant of the Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an O(1/√T) convergence rate. The empirical results of attacking the ImageNet and MNIST datasets also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the best attack performance in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.
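To make the method concrete, the following minimal sketch (not the authors' implementation) shows a single Frank-Wolfe update over an L-infinity perturbation ball; `loss_grad`, `eps`, and `gamma` are illustrative placeholders, and in the black-box setting the gradient would be replaced by a query-based estimate.

```python
# Illustrative sketch only; `loss_grad` is a hypothetical callable returning the
# gradient of the attack loss with respect to the current adversarial input.
import numpy as np

def frank_wolfe_attack_step(x_adv, x_orig, loss_grad, eps=0.05, gamma=0.1):
    g = loss_grad(x_adv)                  # exact gradient (white-box) or zeroth-order estimate (black-box)
    v = x_orig + eps * np.sign(g)         # linear maximization oracle over the L-infinity ball
    return x_adv + gamma * (v - x_adv)    # convex-combination update keeps the iterate feasible
```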
2. Jiang, Yi, and Dengpan Ye. "Black-Box Adversarial Attacks against Audio Forensics Models". Security and Communication Networks 2022 (January 17, 2022): 1–8. http://dx.doi.org/10.1155/2022/6410478.

Abstract
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of being abused by criminals. Therefore, a series of studies on audio forensics models has arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method in view of the huge threat that adversarial examples pose to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can achieve a 99% attack success rate on score-only black-box models, which is competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% and guarantees 98% detection accuracy of the forensics models.
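As a rough illustration of the ensemble idea (a sketch under assumed interfaces, not the paper's score-only query procedure), one can average the outputs of several differentiable surrogate forensics models and take a signed-gradient step, relying on transferability to the black-box target:

```python
# Sketch under assumptions: `surrogates` is a list of differentiable surrogate
# forensics models, each returning a scalar "genuine" score for a waveform tensor.
import torch

def ensemble_perturbation(waveform, surrogates, step=1e-3):
    x = waveform.detach().clone().requires_grad_(True)
    score = torch.stack([model(x) for model in surrogates]).mean()  # averaged ensemble objective
    score.backward()
    return (x + step * x.grad.sign()).detach()                      # one signed-gradient step
```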
3. Park, Hosung, Gwonsang Ryu, and Daeseon Choi. "Partial Retraining Substitute Model for Query-Limited Black-Box Attacks". Applied Sciences 10, no. 20 (October 14, 2020): 7168. http://dx.doi.org/10.3390/app10207168.

Abstract
Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, such that powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models repeat the query process and are trained by observing labels from the target model’s responses to queries. However, emulating a target model usually requires numerous queries because new DNNs are trained from the beginning. In this study, we propose a new training method for substitute models to minimize the number of queries. We consider the number of queries as an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial purposes. To decrease the number of queries, the proposed method does not emulate the entire target model and only adjusts the partial classification boundary based on a current attack. Furthermore, it does not use queries in the pre-training phase and creates queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
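The core idea of partial retraining can be pictured with a short sketch (assumed PyTorch interfaces, not the authors' code): keep a pre-trained substitute's feature extractor frozen and fine-tune only its final classification layer on labels obtained by querying the target.

```python
# Sketch under assumptions: `substitute` exposes `features` and `classifier` submodules,
# and `queried_loader` yields (input, label) pairs where labels come from querying the target model.
import torch
import torch.nn as nn

def partial_retrain(substitute, queried_loader, epochs=3, lr=1e-3):
    for p in substitute.features.parameters():
        p.requires_grad = False                                  # leave the feature extractor untouched
    opt = torch.optim.Adam(substitute.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in queried_loader:
            opt.zero_grad()
            loss = loss_fn(substitute(x), y)                     # adjust only the local decision boundary
            loss.backward()
            opt.step()
    return substitute
```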
4. Zhao, Pu, Pin-yu Chen, Siyue Wang, and Xue Lin. "Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6909–16. http://dx.doi.org/10.1609/aaai.v34i04.6173.

Abstract
Despite the great achievements of modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among those, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt the first-order gradient descent method, which may come with certain deficiencies such as relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method to design adversarial attacks, which incorporates the zeroth-order gradient estimation technique catering to the black-box attack scenario and second-order natural gradient descent to achieve higher query efficiency. The empirical evaluations on image classification datasets demonstrate that ZO-NGD can obtain significantly lower model query complexities compared with state-of-the-art attack methods.
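The zeroth-order ingredient can be illustrated with a standard two-point random-direction estimator (a generic sketch; the paper's ZO-NGD additionally approximates the Fisher information to take natural-gradient steps). Here `query_loss` is a hypothetical oracle that returns the scalar attack loss from one query to the black-box model.

```python
# Generic two-point zeroth-order gradient estimate for black-box attacks.
import numpy as np

def zo_gradient(x, query_loss, num_dirs=20, mu=0.01):
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = np.random.randn(*x.shape)                             # random search direction
        g += (query_loss(x + mu * u) - query_loss(x - mu * u)) / (2 * mu) * u
    return g / num_dirs                                           # averaged finite-difference estimate
```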
5. Duan, Mingxing, Kenli Li, Jiayan Deng, Bin Xiao, and Qi Tian. "A Novel Multi-Sample Generation Method for Adversarial Attacks". ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–21. http://dx.doi.org/10.1145/3506852.

Abstract
Deep learning models are widely used in daily life, which bring great convenience to our lives, but they are vulnerable to attacks. How to build an attack system with strong generalization ability to test the robustness of deep learning systems is a hot issue in current research, among which the research on black-box attacks is extremely challenging. Most current research on black-box attacks assumes that the input dataset is known. However, in fact, it is difficult for us to obtain detailed information for those datasets. In order to solve the above challenges, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM is mainly composed of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. Firstly, we design a multi-task generation model to learn the distribution of the original dataset. The model first converts an arbitrary signal of a certain distribution into the shared features of the original dataset through deconvolution operations, and then according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Secondly, the generated sample features achieve different outputs through querying the black-box model and training the substitute model, which are used to construct different loss functions to optimize and update the generator and substitute model. Finally, some common white-box attack methods are used to attack the substitute model to generate corresponding adversarial samples, which are utilized to attack the black-box model. We conducted a large number of experiments on the MNIST and CIFAR-10 datasets. The experimental results show that under the same settings and attack algorithms, MsGM achieves better performance than the based models.
6. Chen, Zhiyu, Jianyu Ding, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, Shangdong Liu, and Yimu Ji. "An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning". Entropy 24, no. 10 (September 27, 2022): 1377. http://dx.doi.org/10.3390/e24101377.

Abstract
Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic, given the naturally hidden nature of deep neural networks. Such attacks have become a critical academic emphasis in the current security field. However, current black-box attack methods still have shortcomings, resulting in incomplete utilization of query information. Our research, based on the newly proposed Simulator Attack, proves the correctness and usability of feature-layer information in a simulator model obtained by meta-learning for the first time. Then, we propose an optimized Simulator Attack+ based on this discovery. Our optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module to provide a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ can further reduce the number of queries consumed to improve query efficiency while maintaining attack performance.
7. Xiang, Fengtao, Jiahui Xu, Wanpeng Zhang, and Weidong Wang. "A Distributed Biased Boundary Attack Method in Black-Box Attack". Applied Sciences 11, no. 21 (November 8, 2021): 10479. http://dx.doi.org/10.3390/app112110479.

Abstract
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods are quite close to actual scenarios. Research on black-box attack methods and the generation of adversarial samples helps to discover the defects of machine learning models and can strengthen the robustness of machine learning models and algorithms. Such methods require frequent queries, which makes them less efficient. This paper makes improvements in the initial generation and the search for the most effective adversarial examples. Besides, it is found that some indicators can be used to detect attacks, which is a new finding compared with our previous studies. Firstly, the paper proposes an algorithm to generate initial adversarial samples with a smaller L2 norm; secondly, a combination of particle swarm optimization (PSO) and the biased boundary adversarial attack (BBA), called PSO-BBA, is proposed. Experiments are conducted on ImageNet, and PSO-BBA is compared with the baseline method. The experimental comparison results confirm that: (1) a distributed framework for adversarial attack methods is proposed; (2) the proposed initial point selection method reduces query numbers effectively; (3) compared with the original BBA, the proposed PSO-BBA algorithm accelerates convergence and improves attack accuracy; (4) the improved PSO-BBA algorithm performs well on both targeted and non-targeted attacks; (5) the mean structural similarity (MSSIM) can be used as an indicator of adversarial attack.
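The swarm component can be pictured with the textbook particle-swarm update (an illustrative sketch, not the PSO-BBA implementation), where each particle would be a candidate adversarial sample and the fitness would measure the distortion of candidates that remain adversarial:

```python
# Textbook PSO velocity/position update; all names here are illustrative.
import numpy as np

def pso_step(positions, velocities, personal_best, global_best, w=0.7, c1=1.5, c2=1.5):
    r1 = np.random.rand(*positions.shape)
    r2 = np.random.rand(*positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)   # pull toward each particle's own best
                  + c2 * r2 * (global_best - positions))    # pull toward the swarm's best
    return positions + velocities, velocities
```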
8. Wang, Qiuhua, Hui Yang, Guohua Wu, Kim-Kwang Raymond Choo, Zheng Zhang, Gongxun Miao, and Yizhi Ren. "Black-box adversarial attacks on XSS attack detection model". Computers & Security 113 (February 2022): 102554. http://dx.doi.org/10.1016/j.cose.2021.102554.

9. Wang, Lu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Yuan Jiang. "Spanning attack: reinforce black-box attacks with unlabeled data". Machine Learning 109, no. 12 (October 29, 2020): 2349–68. http://dx.doi.org/10.1007/s10994-020-05916-1.

10. Gao, Xianfeng, Yu-an Tan, Hongwei Jiang, Quanxin Zhang, and Xiaohui Kuang. "Boosting Targeted Black-Box Attacks via Ensemble Substitute Training and Linear Augmentation". Applied Sciences 9, no. 11 (June 3, 2019): 2286. http://dx.doi.org/10.3390/app9112286.

Abstract
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies revealed their vulnerability to small perturbations added to source inputs. The ways of generating these perturbations are called adversarial attacks, which fall into two types, black-box and white-box attacks, according to the adversaries' access to target models. To overcome the problem that black-box attackers cannot reach the internals of the target DNN, many researchers have put forward a series of strategies. Previous work includes a method of training a local substitute model for the target black-box model via Jacobian-based augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation to make the substitute models better fit the decision boundary of the target model. Unlike previous work that performed only non-targeted attacks, we are the first to generate targeted adversarial examples via training substitute models. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to the substitute training. Experiments on MNIST and GTSRB, two common datasets for image classification, demonstrate the effectiveness and efficiency of boosting a targeted black-box attack, and we finally attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%.
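The Jacobian-based augmentation this work builds on can be sketched as follows (assumed PyTorch interfaces, not the paper's code): each seed input is shifted along the sign of the substitute's gradient for the label the target oracle assigned to it, producing synthetic points near the decision boundary for the next round of substitute training.

```python
# Sketch of one Jacobian-based augmentation step for substitute training.
# `substitute` is a differentiable surrogate; `oracle_label` is the label returned by the target model.
import torch

def jacobian_augment(x, oracle_label, substitute, lam=0.1):
    x = x.detach().clone().requires_grad_(True)          # x is assumed to be a single batched input
    substitute(x)[0, oracle_label].backward()            # gradient of the substitute's logit for that label
    return (x + lam * x.grad.sign()).detach()            # new synthetic training point
```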

Theses on the topic "Black-box attack"

1. Sun, Michael (Michael Z.). "Local approximations of deep learning models for black-box adversarial attacks". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121687.

Abstract
We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.
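One way to picture "transferability as a gradient prior" (a hedged sketch, not the thesis's algorithm) is to blend a substitute model's gradient with a query-based zeroth-order estimate before taking the attack step:

```python
# Illustrative blending of a surrogate (prior) gradient with a query-based estimate;
# `alpha` expresses how much the attacker trusts the substitute model.
import numpy as np

def prior_guided_direction(g_surrogate, g_query, alpha=0.5):
    g = alpha * g_surrogate + (1.0 - alpha) * g_query
    return g / (np.linalg.norm(g) + 1e-12)               # normalized search direction
```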
2. Auernhammer, Katja [Verfasser], Felix [Akademischer Betreuer] Freiling, Kolagari Ramin [Akademischer Betreuer] Tavakoli, Felix [Gutachter] Freiling, Kolagari Ramin [Gutachter] Tavakoli, and Dominique [Gutachter] Schröder. "Mask-based Black-box Attacks on Safety-Critical Systems that Use Machine Learning / Katja Auernhammer ; Gutachter: Felix Freiling, Ramin Tavakoli Kolagari, Dominique Schröder ; Felix Freiling, Ramin Tavakoli Kolagari". Erlangen: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1238358292/34.


Book chapters on the topic "Black-box attack"

1. Cai, Jinghui, Boyang Wang, Xiangfeng Wang, and Bo Jin. "Accelerate Black-Box Attack with White-Box Prior Knowledge". In Intelligence Science and Big Data Engineering. Big Data and Machine Learning, 394–405. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36204-1_33.

2. Bai, Yang, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, and Weiwei Guo. "Improving Query Efficiency of Black-Box Adversarial Attack". In Computer Vision – ECCV 2020, 101–16. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58595-2_7.

3. Andriushchenko, Maksym, Francesco Croce, Nicolas Flammarion, and Matthias Hein. "Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search". In Computer Vision – ECCV 2020, 484–501. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58592-1_29.

4. Feng, Xinjie, Hongxun Yao, Wenbin Che, and Shengping Zhang. "An Effective Way to Boost Black-Box Adversarial Attack". In MultiMedia Modeling, 393–404. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_32.

5. Huan, Zhaoxin, Yulong Wang, Xiaolu Zhang, Lin Shang, Chilin Fu, and Jun Zhou. "Data-Free Adversarial Perturbations for Practical Black-Box Attack". In Advances in Knowledge Discovery and Data Mining, 127–38. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47436-2_10.

6. Wang, Tong, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. "An Invisible Black-Box Backdoor Attack Through Frequency Domain". In Lecture Notes in Computer Science, 396–413. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19778-9_23.

7. Pooja, S., and Gilad Gressel. "Towards a General Black-Box Attack on Tabular Datasets". In Lecture Notes in Networks and Systems, 557–67. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_47.

8. Bayram, Samet, and Kenneth Barner. "A Black-Box Attack on Optical Character Recognition Systems". In Computer Vision and Machine Intelligence, 221–31. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7867-8_18.

9. Yang, Chenglin, Adam Kortylewski, Cihang Xie, Yinzhi Cao, and Alan Yuille. "PatchAttack: A Black-Box Texture-Based Attack with Reinforcement Learning". In Computer Vision – ECCV 2020, 681–98. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_41.

10. Yuito, Makoto, Kenta Suzuki, and Kazuki Yoneyama. "Query-Efficient Black-Box Adversarial Attack with Random Pattern Noises". In Information and Communications Security, 303–23. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-15777-6_17.


Conference papers on the topic "Black-box attack"

1. Chen, Pengpeng, Yongqiang Yang, Dingqi Yang, Hailong Sun, Zhijun Chen, and Peng Lin. "Black-Box Data Poisoning Attacks on Crowdsourcing". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/332.

Abstract
Understanding the vulnerability of label aggregation against data poisoning attacks is key to ensuring data quality in crowdsourced label collection. State-of-the-art attack mechanisms generally assume full knowledge of the aggregation models while failing to consider the flexibility of malicious workers in selecting which instances to label. Such a setup limits the applicability of the attack mechanisms and impedes further improvement of their success rate. This paper introduces a black-box data poisoning attack framework that finds the optimal strategies for instance selection and labeling to attack unknown label aggregation models in crowdsourcing. We formulate the attack problem on top of a generic formalization of label aggregation models and then introduce a substitution approach that attacks a substitute aggregation model in replacement of the unknown model. Through extensive validation on multiple real-world datasets, we demonstrate the effectiveness of both instance selection and model substitution in improving the success rate of attacks.
2. Zhao, Mengchen, Bo An, Wei Gao, and Teng Zhang. "Efficient Label Contamination Attacks Against Black-Box Learning Models". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/551.

Abstract
Label contamination attack (LCA) is an important type of data poisoning attack where an attacker manipulates the labels of training data to make the learned model beneficial to him. Existing work on LCA assumes that the attacker has full knowledge of the victim learning model, whereas the victim model is usually a black-box to the attacker. In this paper, we develop a Projected Gradient Ascent (PGA) algorithm to compute LCAs on a family of empirical risk minimizations and show that an attack on one victim model can also be effective on other victim models. This makes it possible that the attacker designs an attack against a substitute model and transfers it to a black-box victim model. Based on the observation of the transferability, we develop a defense algorithm to identify the data points that are most likely to be attacked. Empirical studies show that PGA significantly outperforms existing baselines and linear learning models are better substitute models than nonlinear ones.
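A rough sketch of a projected-gradient-ascent step for label contamination (illustrative only, with hypothetical names): relax the flip decisions to continuous scores, ascend the attacker's objective, then project back onto the flip budget by keeping the top-scoring candidates.

```python
# Sketch: continuous relaxation of label flips, projected onto a budget of k flips.
# `attacker_grad` is a hypothetical gradient of the attacker's objective w.r.t. the flip scores.
import numpy as np

def pga_label_flip_step(flip_scores, attacker_grad, step=0.1, budget=50):
    flip_scores = flip_scores + step * attacker_grad          # gradient-ascent step on the relaxation
    flips = np.zeros_like(flip_scores)
    flips[np.argsort(flip_scores)[-budget:]] = 1.0            # projection: flip only the top-k labels
    return flip_scores, flips
```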
3. Ji, Yimu, Jianyu Ding, Zhiyu Chen, Fei Wu, Chi Zhang, Yiming Sun, Jing Sun, and Shangdong Liu. "Simulator Attack+ for Black-Box Adversarial Attack". In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897950.

4. Zhang, Yihe, Xu Yuan, Jin Li, Jiadong Lou, Li Chen, and Nian-Feng Tzeng. "Reverse Attack: Black-box Attacks on Collaborative Recommendation". In CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3460120.3484805.

5. Wang, Run, Felix Juefei-Xu, Qing Guo, Yihao Huang, Xiaofei Xie, Lei Ma, and Yang Liu. "Amora: Black-box Adversarial Morphing Attack". In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3413544.

6. Xiao, Chaowei, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, and Dawn Song. "Generating Adversarial Examples with Adversarial Networks". In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/543.

Abstract
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
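The AdvGAN objective can be sketched as a sum of three terms (a simplified sketch with assumed PyTorch interfaces, not the authors' released code): a GAN realism loss, a loss that drives the target (or distilled) model away from the true label, and a hinge penalty that bounds the perturbation size.

```python
# Simplified AdvGAN-style generator loss; `generator`, `discriminator`, and `target_model`
# are assumed PyTorch modules, `y` are true labels, and `c` is the perturbation bound.
import torch
import torch.nn.functional as F

def advgan_generator_loss(x, y, generator, discriminator, target_model, c=0.1, alpha=1.0, beta=1.0):
    perturbation = generator(x)
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)
    d_out = discriminator(x_adv)
    loss_gan = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))  # look realistic
    loss_adv = -F.cross_entropy(target_model(x_adv), y)                           # mislead the target model
    loss_hinge = torch.clamp(perturbation.norm(p=2) - c, min=0.0)                 # keep the perturbation small
    return loss_gan + alpha * loss_adv + beta * loss_hinge
```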
7. Li, Jie, Rongrong Ji, Hong Liu, Jianzhuang Liu, Bineng Zhong, Cheng Deng, and Qi Tian. "Projection & Probability-Driven Black-Box Attack". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00044.

8. Williams, Phoenix, Ke Li, and Geyong Min. "Black-box adversarial attack via overlapped shapes". In GECCO '22: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3520304.3528934.

9. Moraffah, Raha, and Huan Liu. "Query-Efficient Target-Agnostic Black-Box Attack". In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 2022. http://dx.doi.org/10.1109/icdm54844.2022.00047.

10. Mesbah, Abdelhak, Mohamed Mezghiche, and Jean-Louis Lanet. "Persistent fault injection attack from white-box to black-box". In 2017 5th International Conference on Electrical Engineering - Boumerdes (ICEE-B). IEEE, 2017. http://dx.doi.org/10.1109/icee-b.2017.8192164.


Reports on the topic "Black-box attack"

1. Ghosh, Anup, Steve Noel, and Sushil Jajodia. Mapping Attack Paths in Black-Box Networks Through Passive Vulnerability Inference. Fort Belvoir, VA: Defense Technical Information Center, August 2011. http://dx.doi.org/10.21236/ada563714.

