Academic literature on the topic 'Gray-box adversarial attacks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Gray-box adversarial attacks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gray-box adversarial attacks"

1

Jo, Junhyung, Joongsu Kim, and Young-Joo Suh. "Exploring Public Data Vulnerabilities in Semi-Supervised Learning Models through Gray-box Adversarial Attack." Electronics 13, no. 5 (2024): 940. http://dx.doi.org/10.3390/electronics13050940.

Abstract:
Semi-supervised learning (SSL) models, integrating labeled and unlabeled data, have gained prominence in vision-based tasks, yet their susceptibility to adversarial attacks remains underexplored. This paper unveils the vulnerability of SSL models to gray-box adversarial attacks—a scenario where the attacker has partial knowledge of the model. We introduce an efficient attack method, Gray-box Adversarial Attack on Semi-supervised learning (GAAS), which exploits the dependency of SSL models on publicly available labeled data. Our analysis demonstrates that even with limited knowledge, GAAS can…
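
To make the threat model concrete: GAAS-style gray-box attacks exploit the fact that the attacker can train a surrogate model on the same publicly available labeled data the SSL victim depends on, then transfer adversarial examples to the unseen victim. The minimal sketch below uses a single-step FGSM perturbation; the function name, the epsilon value, and the use of FGSM are illustrative assumptions, not the GAAS algorithm itself.

```python
import torch
import torch.nn.functional as F

def gray_box_transfer_attack(surrogate, images, labels, epsilon=8/255):
    """Craft adversarial examples against a surrogate trained on the
    public labeled split; the victim model is never queried directly."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(images), labels)
    loss.backward()
    # Single-step L-infinity perturbation of radius epsilon (FGSM).
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Because the victim's weights are never touched, attack success hinges entirely on how well perturbations transfer from the surrogate to the victim.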
2

Penmetsa, Mitra, Jayakeshav Reddy Bhumireddy, Rajiv Chalasani, Srikanth Reddy Vangala, Ram Mohan Polam, and Bhavana Kamarthapu. "Adversarial Machine Learning in Cybersecurity: A Review on Defending Against AI-Driven Attacks." European Journal of Applied Science, Engineering and Technology 3, no. 4 (2025): 4–14. https://doi.org/10.59324/ejaset.2025.3(4).01.

Abstract:
The application of artificial intelligence (AI) and machine learning (ML) in cybersecurity has revolutionized the capacity to identify, prevent, and react to more complex cyber threats. However, as ML models become central to defense mechanisms, adversarial attacks designed to deceive these models have emerged as a major problem. Adversarial Machine Learning (AML) focuses on how attackers manipulate data inputs to exploit vulnerabilities in ML systems, leading to misclassification, data breaches, and system failures. This article gives a detailed study of adversarial attacks against ML models…
3

Wang, Tianxiao, Yingtao Niu, and Zhanyang Zhou. "Adversarial attacks against intelligent anti-jamming communication: An adaptive gray-box attack method." Physical Communication 72 (October 2025): 102716. https://doi.org/10.1016/j.phycom.2025.102716.

4

Vitorino, João, Nuno Oliveira, and Isabel Praça. "Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection." Future Internet 14, no. 4 (2022): 108. http://dx.doi.org/10.3390/fi14040108.

Abstract:
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics…
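
A2PM's core requirement, that perturbed tabular samples remain realistic for their domain, can be approximated far more crudely than the paper's adaptive pattern sequences: perturb only numeric columns and clip each one to its valid range. The helper below is a hypothetical simplification under that assumption, not the authors' method.

```python
import numpy as np

def constrained_perturbation(x, numeric_idx, bounds, scale=0.05, rng=None):
    """Perturb only numeric columns and clip each to its valid range so
    the adversarial sample stays plausible for the tabular domain.
    `bounds` maps a column index to the (min, max) seen in training data."""
    rng = rng or np.random.default_rng()
    x_adv = x.astype(float)
    for j in numeric_idx:
        lo, hi = bounds[j]
        # Small signed step proportional to the feature's valid range.
        step = scale * (hi - lo) * rng.choice([-1.0, 1.0])
        x_adv[j] = np.clip(x_adv[j] + step, lo, hi)
    return x_adv
```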
5

Islam, Md Tawfiqul. "A Quantitative Assessment of Secure Neural Network Architectures for Fault Detection in Industrial Control Systems." Review of Applied Science and Technology 2, no. 4 (2023): 1–24. https://doi.org/10.63125/3m7gbs97.

Abstract:
Industrial Control Systems (ICS) form the core infrastructure for critical sectors such as energy, water, manufacturing, and transportation, yet their increasing digital interconnectivity has exposed them to complex fault dynamics and sophisticated cyber-physical threats. Traditional fault detection mechanisms—whether rule-based or model-driven—often fail to cope with the nonlinearity, high dimensionality, and adversarial vulnerabilities prevalent in modern ICS environments. To address these limitations, this study conducts a comprehensive quantitative evaluation of secure neural network architectures…
6

Wang, Zongwei, Min Gao, Jundong Li, Junwei Zhang, and Jiang Zhong. "Gray-Box Shilling Attack: An Adversarial Learning Approach." ACM Transactions on Intelligent Systems and Technology, March 22, 2022. http://dx.doi.org/10.1145/3512352.

Abstract:
Recommender systems are essential components of many information services, which aim to find relevant items that match user preferences. Several studies have shown that shilling attacks can significantly weaken the robustness of recommender systems by injecting fake user profiles. Traditional shilling attacks focus on creating hand-engineered fake user profiles, but these profiles can be detected effortlessly by advanced detection methods. Adversarial learning, which has emerged in recent years, can be leveraged to generate powerful and intelligent attack models. To this end, in this paper, we explore potent…
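
For contrast with the learned profiles this paper proposes, the hand-engineered baseline it argues is easily detected looks roughly like the classic average attack sketched below. The 1-to-5 rating scale, the filler fraction, and the function name are assumptions for illustration.

```python
import numpy as np

def average_attack_profiles(ratings, target_item, n_fake=50, filler_frac=0.05, rng=None):
    """Classic hand-engineered shilling baseline: each fake user gives the
    target item the top rating plus a random filler set near item means.
    `ratings` is a dense user-item matrix with 0 marking unrated items."""
    rng = rng or np.random.default_rng()
    n_items = ratings.shape[1]
    rated_counts = np.maximum((ratings > 0).sum(axis=0), 1)
    item_means = ratings.sum(axis=0) / rated_counts
    fakes = np.zeros((n_fake, n_items))
    for u in range(n_fake):
        fillers = rng.choice(n_items, size=max(1, int(filler_frac * n_items)), replace=False)
        fakes[u, fillers] = np.clip(np.round(item_means[fillers]), 1, 5)
        fakes[u, target_item] = 5.0  # push the target item
    return np.vstack([ratings, fakes])
```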
7

Apruzzese, Giovanni, and V. S. Subrahmanian. "Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors." IEEE Transactions on Dependable and Secure Computing, 2022, 1–19. http://dx.doi.org/10.1109/tdsc.2022.3210029.

8

Wang, Hanrui, Shuo Wang, Cunjian Chen, Massimo Tistarelli, and Zhe Jin. "A Multi-task Adversarial Attack Against Face Authentication." ACM Transactions on Multimedia Computing, Communications, and Applications, May 21, 2024. http://dx.doi.org/10.1145/3665496.

Abstract:
Deep-learning-based identity management systems, such as face authentication systems, are vulnerable to adversarial attacks. However, existing attacks are typically designed for single-task purposes, which means they are tailored to exploit vulnerabilities unique to the individual target rather than being adaptable for multiple users or systems. This limitation makes them unsuitable for certain attack scenarios, such as morphing, universal, transferable, and counter attacks. In this paper, we propose a multi-task adversarial attack algorithm called MTADV that is adaptable for multiple users or systems…
9

Li, Xingjian, Dou Goodman, Ji Liu, Tao Wei, and Dejing Dou. "Improving Adversarial Robustness via Attention and Adversarial Logit Pairing." Frontiers in Artificial Intelligence 4 (January 27, 2022). http://dx.doi.org/10.3389/frai.2021.752831.

Abstract:
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are all vulnerable to adversarial examples. In this paper, we develop improved techniques for defending against adversarial examples. First, we propose an enhanced defense technique, denoted Attention and Adversarial Logit Pairing (AT + ALP), which encourages both the attention maps and the logits of paired examples to be similar. When applied to clean examples and their adversarial counterparts, AT + ALP improves accuracy on adversarial examples…
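
The logit-pairing half of AT + ALP is straightforward to sketch: add a penalty on the distance between clean and adversarial logits to the usual adversarial cross-entropy. The MSE distance and the pairing weight below are assumptions, and the attention-map pairing term is omitted for brevity.

```python
import torch.nn.functional as F

def at_alp_loss(model, x_clean, x_adv, y, pairing_weight=0.5):
    """Adversarial cross-entropy plus a logit-pairing penalty that pulls
    the logits of clean and adversarial pairs together."""
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    return (F.cross_entropy(logits_adv, y)
            + pairing_weight * F.mse_loss(logits_adv, logits_clean))
```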
10

Aafaq, Nayyer, Naveed Akhtar, Wei Liu, Mubarak Shah, and Ajmal Mian. "Language Model Agnostic Gray-Box Adversarial Attack on Image Captioning." IEEE Transactions on Information Forensics and Security, 2022, 1. http://dx.doi.org/10.1109/tifs.2022.3226905.


Dissertations / Theses on the topic "Gray-box adversarial attacks"

1

Vivek, B. S. "Towards Learning Adversarially Robust Deep Learning Models." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4488.

Abstract:
Deep learning models have shown impressive performance across a wide spectrum of computer vision applications, including medical diagnosis and autonomous driving. One of the major concerns that these models face is their susceptibility to adversarial samples: samples with small, crafted noise designed to manipulate the model’s prediction. A defense mechanism named Adversarial Training (AT) shows promising results against these attacks. This training regime augments mini-batches with adversaries. However, to scale this training to large networks and datasets, fast and simple methods (e.g.…
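
The training regime described here, which augments mini-batches with adversaries generated on the fly, can be sketched as a single step. FGSM, the epsilon value, and the equal clean/adversarial mix are illustrative choices rather than the thesis's exact setup.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8/255):
    """One mini-batch of FGSM adversarial training: generate adversaries
    on the fly and train on clean and adversarial inputs together."""
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```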

Book chapters on the topic "Gray-box adversarial attacks"

1

Feng, Hua, Shangyi Li, Haoyuan Shi, and Zhixun Ye. "A Comparative Analysis of White Box and Gray Box Adversarial Attacks to Natural Language Processing Systems." In Advances in Computer Science Research. Atlantis Press International BV, 2024. http://dx.doi.org/10.2991/978-94-6463-540-9_65.

2

Lapid, Raz, and Moshe Sipper. "I See Dead People: Gray-Box Adversarial Attack on Image-to-Text Models." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-74643-7_21.

3

Gong, Yuxin, Shen Wang, Xunzhi Jiang, and Dechen Zhan. "An Adversarial Attack Method in Gray-Box Setting Oriented to Defenses Based on Image Preprocessing." In Advances in Intelligent Information Hiding and Multimedia Signal Processing. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9714-1_10.


Conference papers on the topic "Gray-box adversarial attacks"

1

Liu, Zihan, Yun Luo, Zelin Zang, and Stan Z. Li. "Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks." In WSDM '22: The Fifteenth ACM International Conference on Web Search and Data Mining. ACM, 2022. http://dx.doi.org/10.1145/3488560.3498481.

2

Wang, Hanrui, Shuo Wang, Zhe Jin, Yandan Wang, Cunjian Chen, and Massimo Tistarelli. "Similarity-based Gray-box Adversarial Attack Against Deep Face Recognition." In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021). IEEE, 2021. http://dx.doi.org/10.1109/fg52635.2021.9667076.

3

Al-qudah, Rabiah, Moayad Aloqaily, Bassem Ouni, Mohsen Guizani, and Thierry Lestable. "An Incremental Gray-Box Physical Adversarial Attack on Neural Network Training." In ICC 2023 - IEEE International Conference on Communications. IEEE, 2023. http://dx.doi.org/10.1109/icc45041.2023.10278837.

4

Ataiefard, Foozhan, and Hadi Hemmati. "Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents." In 2023 International Conference on Machine Learning and Applications (ICMLA). IEEE, 2023. http://dx.doi.org/10.1109/icmla58977.2023.00099.
