Academic literature on the topic "Gray-box adversarial attacks"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Gray-box adversarial attacks".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gray-box adversarial attacks"

1

Vitorino, João, Nuno Oliveira, and Isabel Praça. "Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection". Future Internet 14, no. 4 (March 29, 2022): 108. http://dx.doi.org/10.3390/fi14040108.

Abstract:
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
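To make the kind of constraint-respecting tabular perturbation described in the abstract more concrete, here is a minimal, hedged Python sketch: it perturbs numeric features while keeping each sample inside the value range observed for its own class, against a gray-box surrogate classifier. This is not the authors' A2PM implementation; the toy dataset, the random-forest surrogate, and the per-class bounds used as a "realism" constraint are illustrative assumptions.

```python
# Hedged sketch of class-constrained tabular perturbation in a gray-box
# setting. NOT the A2PM implementation; data, surrogate model and the bound
# constraint are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy tabular data: two classes with different feature ranges.
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(3.0, 1.0, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Per-class min/max bounds act as a simple "realism" constraint: perturbed
# samples must stay inside the value range observed for their own class.
bounds = {c: (X[y == c].min(axis=0), X[y == c].max(axis=0)) for c in (0, 1)}

def perturb(x, c, step=0.1, iters=20):
    """Untargeted attack: nudge features until the predicted class flips."""
    lo, hi = bounds[c]
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = np.clip(x_adv + rng.uniform(-step, step, x_adv.shape), lo, hi)
        if clf.predict(x_adv.reshape(1, -1))[0] != c:
            break
    return x_adv

x_adv = perturb(X[0], y[0])
print(clf.predict(X[0].reshape(1, -1)), "->", clf.predict(x_adv.reshape(1, -1)))
```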
2

Wang, Zongwei, Min Gao, Jundong Li, Junwei Zhang, and Jiang Zhong. "Gray-Box Shilling Attack: An Adversarial Learning Approach". ACM Transactions on Intelligent Systems and Technology, March 22, 2022. http://dx.doi.org/10.1145/3512352.

Abstract:
Recommender systems are essential components of many information services, which aim to find relevant items that match user preferences. Several studies have shown that shilling attacks can significantly weaken the robustness of recommender systems by injecting fake user profiles. Traditional shilling attacks focus on creating hand-engineered fake user profiles, but these profiles can be detected effortlessly by advanced detection methods. Adversarial learning, which has emerged in recent years, can be leveraged to generate powerful and intelligent attack models. To this end, in this paper we explore the potential risks of recommender systems and shed light on a gray-box shilling attack model based on generative adversarial networks, named GSA-GANs. Specifically, we aim to generate fake user profiles that achieve two goals: being unnoticeable and being offensive. Towards these goals, several challenges need to be addressed: (1) learning complex user behaviors from user-item rating data, and (2) adversely influencing the recommendation results without knowing the underlying recommendation algorithms. To tackle these challenges, two essential GAN modules are designed, respectively, to make the generated fake profiles more similar to real ones and more harmful to recommendation results. Experimental results on three public datasets demonstrate that the proposed GSA-GANs framework outperforms baseline models in attack effectiveness, transferability, and camouflage. Finally, we also provide several possible defensive strategies against GSA-GANs. The exploration and analysis in our work will contribute to defense research on recommender systems.
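As a rough illustration of GAN-based fake-profile generation, the following PyTorch sketch trains a small generator/discriminator pair on user-item rating vectors. It is not the GSA-GANs architecture from the paper; the dimensions, losses, training loop, and the synthetic "real" profiles are simplified assumptions.

```python
# Hedged sketch of a GAN that generates fake user rating profiles, as used in
# GAN-based shilling attacks generally. NOT the GSA-GANs model from the paper.
import torch
import torch.nn as nn

n_items, z_dim = 100, 16

generator = nn.Sequential(
    nn.Linear(z_dim, 64), nn.ReLU(),
    nn.Linear(64, n_items), nn.Sigmoid(),   # ratings scaled to [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(n_items, 64), nn.ReLU(),
    nn.Linear(64, 1),                       # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_profiles = torch.rand(256, n_items)    # stand-in for real user-item ratings

for step in range(200):
    # Discriminator: push real profiles toward 1, generated profiles toward 0.
    fake = generator(torch.randn(64, z_dim)).detach()
    real = real_profiles[torch.randint(0, 256, (64,))]
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator so fake profiles look like real users.
    g_loss = bce(discriminator(generator(torch.randn(64, z_dim))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# A shilling attack would additionally push target items toward high ratings
# in the generated profiles before injecting them into the recommender.
print(generator(torch.randn(10, z_dim)).shape)
```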
3

Apruzzese, Giovanni, and V. S. Subrahmanian. "Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors". IEEE Transactions on Dependable and Secure Computing, 2022, 1–19. http://dx.doi.org/10.1109/tdsc.2022.3210029.

4

Li, Xingjian, Dou Goodman, Ji Liu, Tao Wei, and Dejing Dou. "Improving Adversarial Robustness via Attention and Adversarial Logit Pairing". Frontiers in Artificial Intelligence 4 (January 27, 2022). http://dx.doi.org/10.3389/frai.2021.752831.

Abstract:
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial examples. In this paper, we develop improved techniques for defending against adversarial examples. First, we propose an enhanced defense technique denoted Attention and Adversarial Logit Pairing (AT + ALP), which encourages both the attention maps and the logits of paired examples to be similar. When applied to clean examples and their adversarial counterparts, AT + ALP improves accuracy on adversarial examples over adversarial training. We show that AT + ALP can effectively increase the average activations of adversarial examples in the key areas and demonstrate that it focuses on discriminative features to improve the robustness of the model. Finally, we conduct extensive experiments on a wide range of datasets, and the results show that AT + ALP achieves state-of-the-art defense performance. For example, on the 17 Flower Category Database, under strong 200-iteration Projected Gradient Descent (PGD) gray-box and black-box attacks where prior art reaches 34% and 39% accuracy, our method achieves 50% and 51%. Compared with previous work, our method is evaluated under a highly challenging PGD attack: maximum perturbation ϵ ∈ {0.25, 0.5}, i.e., L∞ ∈ {0.25, 0.5}, with 10–200 attack iterations. To the best of our knowledge, such a strong attack has not been previously explored on a wide range of datasets.
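The two standard ingredients named in the abstract, an L∞ PGD attack and an adversarial logit pairing term in the training loss, can be sketched as follows in PyTorch. The attention component of AT + ALP is omitted, and the toy model, ϵ, step sizes, and pairing weight are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: PGD adversarial training with an adversarial logit pairing
# (ALP) penalty. Omits the attention term of AT + ALP; all hyperparameters
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def pgd(x, y, eps=0.25, alpha=0.05, steps=10):
    """Projected Gradient Descent attack inside an L_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project back into the ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_adv = pgd(x, y)

logits_clean, logits_adv = model(x), model(x_adv)
loss = F.cross_entropy(logits_adv, y) + 0.5 * F.mse_loss(logits_adv, logits_clean)  # ALP term
opt.zero_grad(); loss.backward(); opt.step()
```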
5

Aafaq, Nayyer, Naveed Akhtar, Wei Liu, Mubarak Shah, and Ajmal Mian. "Language Model Agnostic Gray-Box Adversarial Attack on Image Captioning". IEEE Transactions on Information Forensics and Security, 2022, 1. http://dx.doi.org/10.1109/tifs.2022.3226905.

6

Hu, Han, Yujin Huang, Qiuyuan Chen, Terry Yue Zhuo, and Chunyang Chen. "A First Look at On-device Models in iOS Apps". ACM Transactions on Software Engineering and Methodology, August 23, 2023. http://dx.doi.org/10.1145/3617177.

Abstract:
Driven by the rising popularity of deep learning on smartphones, on-device deep learning models are being used in vital fields such as finance, social media, and driving assistance. Because of the transparency of the Android platform and of the on-device models it hosts, on-device models on Android smartphones have been shown to be extremely vulnerable. However, due to the difficulty of accessing and analysing iOS app files, there is no comparable work on on-device models in iOS apps, even though iOS is a mobile platform as popular as Android. Since the functionality of the same app is similar on Android and iOS, the same vulnerabilities may exist on both platforms. In this paper, we present the first empirical study of on-device models in iOS apps, covering their adoption of deep learning frameworks, structure, functionality, and potential security issues. We study why developers use different on-device models for the same app on iOS and Android. Based on our findings, we propose a more general attack against white-box models that does not rely on pre-trained models, and a new adversarial attack approach that targets iOS's gray-box on-device models. Our results show the effectiveness of these approaches. Finally, we successfully exploit the vulnerabilities of on-device models to attack real-world iOS apps.

Theses on the topic "Gray-box adversarial attacks"

1

Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models". Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.

Abstract:
Deep learning models have shown impressive performance across a wide spectrum of computer vision applications, including medical diagnosis and autonomous driving. One of the major concerns with these models is their susceptibility to adversarial samples: samples with small, crafted noise designed to manipulate the model's prediction. A defense mechanism named Adversarial Training (AT), which augments mini-batches with adversaries, shows promising results against these attacks. However, to scale this training to large networks and datasets, fast and simple methods (e.g., single-step methods such as the Fast Gradient Sign Method (FGSM)) are essential for generating these adversaries. But single-step adversarial training (e.g., FGSM adversarial training) converges to a degenerate minimum, where the model merely appears to be robust. As a result, such models are vulnerable to simple black-box attacks. In this thesis, we explore the following aspects of adversarial training.

Failure of Single-step Adversarial Training: In the first part of the thesis, we demonstrate that the pseudo-robustness of an adversarially trained model is due to limitations in the existing evaluation procedure. Further, we introduce novel variants of white-box and black-box attacks, dubbed "gray-box adversarial attacks", and based on them propose a novel evaluation method to assess the robustness of the learned models. A novel variant of adversarial training named "Gray-box Adversarial Training", which uses intermediate versions of the model to seed the adversaries, is proposed to improve the model's robustness.

Regularizers for Single-step Adversarial Training: In this part of the thesis, we discuss regularizers that help to learn robust models using single-step adversarial training methods: (i) a regularizer that enforces the logits for the FGSM and I-FGSM (iterative FGSM) versions of a clean sample to be similar (imposed on only one adversarial pair per mini-batch); (ii) a regularizer that enforces the logits for the FGSM and R-FGSM (Random+FGSM) versions of a clean sample to be similar; (iii) a monotonic loss constraint that enforces the loss to increase monotonically with the perturbation size of the FGSM attack; and (iv) dropout with decaying dropout probability, which introduces a dropout layer with decaying dropout probability after each nonlinear layer of the network.

Incorporating Domain Knowledge to Improve the Model's Adversarial Robustness: In this final part of the thesis, we show that the existing normal training method fails to incorporate domain knowledge into the learned feature representation of the network. Further, we show that incorporating domain knowledge into the learned feature representation results in a significant improvement in the network's robustness against adversarial attacks, within the normal training regime.
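As a loose illustration of the "intermediate versions of the model seed the adversaries" idea mentioned in the abstract, the following PyTorch sketch generates FGSM adversaries from a periodically refreshed snapshot of the model while the live model trains on both clean and adversarial samples. The architecture, ϵ, and snapshot schedule are assumptions, not the thesis's actual setup.

```python
# Hedged sketch: single-step (FGSM) adversarial training where adversaries are
# seeded by an intermediate copy of the model. NOT the thesis's exact method;
# model, epsilon and refresh schedule are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
snapshot = copy.deepcopy(model)            # frozen intermediate version of the model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(net, x, y, eps=0.1):
    """Single-step Fast Gradient Sign Method attack against `net`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(net(x), y)
    grad, = torch.autograd.grad(loss, x)
    return torch.clamp(x + eps * grad.sign(), 0.0, 1.0).detach()

for step in range(100):
    x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))
    x_adv = fgsm(snapshot, x, y)           # adversaries come from the snapshot, not the live model
    loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 25 == 0:                     # periodically refresh the intermediate copy
        snapshot = copy.deepcopy(model)
```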

Book chapters on the topic "Gray-box adversarial attacks"

1

Gong, Yuxin, Shen Wang, Xunzhi Jiang, and Dechen Zhan. "An Adversarial Attack Method in Gray-Box Setting Oriented to Defenses Based on Image Preprocessing". In Advances in Intelligent Information Hiding and Multimedia Signal Processing, 87–96. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9714-1_10.


Conference papers on the topic "Gray-box adversarial attacks"

1

Liu, Zihan, Yun Luo, Zelin Zang, and Stan Z. Li. "Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks". In WSDM '22: The Fifteenth ACM International Conference on Web Search and Data Mining. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3488560.3498481.

2

Wang, Hanrui, Shuo Wang, Zhe Jin, Yandan Wang, Cunjian Chen, and Massimo Tistarelli. "Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition". In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021). IEEE, 2021. http://dx.doi.org/10.1109/fg52635.2021.9667076.
