Academic literature on the topic 'Adversarial Attack and Defense'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Adversarial Attack and Defense.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Adversarial Attack and Defense"

1

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (November 14, 2020): 8079. http://dx.doi.org/10.3390/app10228079.

Full text
Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks. Efforts to make models robust against adversarial example attacks have shown this to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented by “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know what algorithm the attacker may use. A natural question is: Can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defend against that particular attack method, but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of the methods, the network could lose accuracy to the attack if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even for black-box settings where the attacker has restricted information on the target network.
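A minimal sketch of the adversarial training loop this abstract refers to, assuming a PyTorch image classifier with inputs in [0, 1] and using FGSM as the single attack used for augmentation; the model, data loader, and ε = 0.3 budget are placeholders, not the authors' setup:

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.3):
    """One attack family among many: a single signed-gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient and clip to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, train_loader, optimizer, eps=0.3):
    """Augment every batch with adversarial samples, as in adversarial training."""
    model.train()
    for x, y in train_loader:
        x_adv = fgsm_examples(model, x, y, eps)
        optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
        loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
        loss.backward()
        optimizer.step()
```

A defender who expects a different attack family would swap `fgsm_examples` for that attack, which is exactly the mismatch the paper studies.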
APA, Harvard, Vancouver, ISO, and other styles
2

Tang, Renzhi, Guowei Shen, Chun Guo, and Yunhe Cui. "SAD: Website Fingerprinting Defense Based on Adversarial Examples." Security and Communication Networks 2022 (April 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/7330465.

Full text
Abstract:
Website fingerprinting (WF) attacks can infer website names from encrypted network traffic when the victim is browsing the website. Inherent defenses of anonymous communication systems such as The Onion Router (Tor) cannot compete with current WF attacks. The state-of-the-art attack based on deep learning can gain over 98% accuracy in Tor. Most of the defenses have excellent defensive capabilities, but they bring a relatively high bandwidth overhead, which seriously affects the user’s network experience. Some defense methods also have little impact on the latest website fingerprinting attacks. Defenses based on adversarial examples have excellent defense capabilities and low bandwidth overhead, but they need the complete website traffic to generate defense data, which is obviously impractical. In this article, based on adversarial examples, we propose segmented adversary defense (SAD) for deep learning-based WF attacks. In SAD, sequence data are divided into multiple segments to ensure that SAD is feasible in real scenarios. Then, the adversarial examples for each segment of data can be generated by SAD. Finally, dummy packets are inserted after each segment of original data. We also found that setting different head rates, that is, end points for the segments, yields better results. Experimentally, our results show that SAD can effectively reduce the accuracy of WF attacks. The technique drops the accuracy of the state-of-the-art attack from 96% to 3% while incurring only 40% bandwidth overhead. Compared with the existing defense named Deep Fingerprinting Defender (DFD), the defense effect of SAD is better under the same bandwidth overhead.
APA, Harvard, Vancouver, ISO, and other styles
3

Zheng, Tianhang, Changyou Chen, and Kui Ren. "Distributionally Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2253–60. http://dx.doi.org/10.1609/aaai.v33i01.33012253.

Full text
Abstract:
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min Ep(x) L(x) with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we achieve the goal by proposing distributionally adversarial attack (DAA), a framework to solve an optimal adversarial-data distribution, a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution to increase the generalization risk maximally. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially-trained models provided by MIT MadryLab. Notably, DAA ranks the first place on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0). Code for the experiments is released on https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
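For context, the per-sample PGD baseline that DAA generalizes to a distributional objective can be sketched as follows; this is an illustrative reimplementation under assumed settings (inputs in [0, 1], ε = 0.3), not the code released by the authors:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Iterative signed-gradient ascent, projected onto the L-infinity ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the eps-ball around the clean input x.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

PGD perturbs each sample independently; DAA instead optimizes over the perturbed data distribution as a whole, coupling the samples.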
APA, Harvard, Vancouver, ISO, and other styles
4

Liang, Hongshuo, Erlu He, Yangyang Zhao, Zhe Jia, and Hao Li. "Adversarial Attack and Defense: A Survey." Electronics 11, no. 8 (April 18, 2022): 1283. http://dx.doi.org/10.3390/electronics11081283.

Full text
Abstract:
In recent years, artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing and other fields. In particular, deep neural networks have been widely used in security-sensitive fields such as facial payment, smart healthcare and autonomous driving, which accelerate the construction of smart cities. Meanwhile, in order to fully unleash the potential of edge big data, there is an urgent need to push the AI frontier to the network edge. Edge AI, the combination of artificial intelligence and edge computing, supports the deployment of deep learning algorithms to edge devices that generate data, and has become a key driver of smart city development. However, the latest research shows that deep neural networks are vulnerable to attacks from adversarial examples and can output wrong results. This type of attack is called an adversarial attack, and it greatly limits the adoption of deep neural networks in tasks with extremely high security requirements. Due to the influence of adversarial attacks, researchers have also begun to pay attention to research in the field of adversarial defense. In the ongoing game between adversarial attacks and defense technologies, both attack and defense techniques have developed rapidly. This article first introduces the principles and characteristics of adversarial attacks, and summarizes and analyzes the adversarial example generation methods of recent years. Then, it introduces adversarial example defense technology in detail from the three directions of model, data, and additional network. Finally, combining the current status of adversarial example generation and defense technology development, we put forward challenges and prospects in this field.
APA, Harvard, Vancouver, ISO, and other styles
5

Zeng, Huimin, Chen Zhu, Tom Goldstein, and Furong Huang. "Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10815–23. http://dx.doi.org/10.1609/aaai.v35i12.17292.

Full text
Abstract:
Adversarial training has proved to be an effective method to defend against adversarial examples, being one of the few defenses that withstand strong attacks. However, traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution, which is apparently unrealistic as the attacker could choose to focus on more vulnerable examples. We present a weighted minimax risk optimization that defends against non-uniform attacks, achieving robustness against adversarial examples under perturbed test data distributions. Our modified risk considers importance weights of different adversarial examples and focuses adaptively on harder examples that are wrongly classified or at higher risk of being classified incorrectly. The designed risk allows the training process to learn a strong defense through optimizing the importance weights. The experiments show that our model significantly improves state-of-the-art adversarial accuracy under non-uniform attacks without a significant drop under uniform attacks.
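A very loose illustration of the weighted-risk idea, not the paper's exact formulation: per-example adversarial losses are combined with learnable importance weights, and the hypothetical update below shifts weight toward examples the model still gets wrong (weights are assumed to be maintained per example over a fixed set):

```python
import torch
import torch.nn.functional as F

def weighted_adversarial_risk(model, x_adv, y, weight_logits):
    """Importance-weighted adversarial loss; weights are normalized via softmax."""
    per_example = F.cross_entropy(model(x_adv), y, reduction="none")
    w = torch.softmax(weight_logits, dim=0)
    return (w * per_example).sum()

@torch.no_grad()
def reweight_toward_harder_examples(model, x_adv, y, weight_logits, step=0.1):
    """Raise the (unnormalized) weight of examples with higher adversarial loss."""
    per_example = F.cross_entropy(model(x_adv), y, reduction="none")
    return weight_logits + step * per_example
```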
APA, Harvard, Vancouver, ISO, and other styles
6

Rosenberg, Ishai, Asaf Shabtai, Yuval Elovici, and Lior Rokach. "Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain." ACM Computing Surveys 54, no. 5 (June 2021): 1–36. http://dx.doi.org/10.1145/3453158.

Full text
Abstract:
In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence, and the attacker’s goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
APA, Harvard, Vancouver, ISO, and other styles
7

Qiu, Shilin, Qihe Liu, Shijie Zhou, and Chunjiang Wu. "Review of Artificial Intelligence Adversarial Attack and Defense Technologies." Applied Sciences 9, no. 5 (March 4, 2019): 909. http://dx.doi.org/10.3390/app9050909.

Full text
Abstract:
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields. However, artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence (AI) technologies in key security fields. Therefore, improving the robustness of AI systems against adversarial attacks has played an increasingly important role in the further development of AI. This paper aims to comprehensively summarize the latest research progress on adversarial attack and defense technologies in deep learning. According to the target model’s different stages where the adversarial attack occurred, this paper expounds the adversarial attack methods in the training stage and testing stage respectively. Then, we sort out the applications of adversarial attack technologies in computer vision, natural language processing, cyberspace security, and the physical world. Finally, we describe the existing adversarial defense methods respectively in three main categories, i.e., modifying data, modifying models and using auxiliary tools.
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Kaichen, Tzungyu Tsai, Honggang Yu, Tsung-Yi Ho, and Yier Jin. "Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1088–95. http://dx.doi.org/10.1609/aaai.v34i01.5459.

Full text
Abstract:
Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. The current methods of launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs. The problems of attacking object detection models and of mounting adversarial attacks in the physical world are rarely touched. Some prior works propose physical adversarial attacks against object detection models, but they are limited in certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simply printing images, we manufacture real metal objects that could achieve the adversarial effect. In both indoor and outdoor experiments we show that our physical adversarial objects can fool widely applied object detection models, including SSD, YOLO and Faster R-CNN, in various environments. We also test our attack on a variety of commercial platforms for object detection and demonstrate that our attack is still valid on these platforms. Considering the potential defense mechanisms our adversarial objects may encounter, we conduct a series of experiments to evaluate the effect of existing defense methods on our physical attack.
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Lin, Teyi Liao, and Jianfeng He. "Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method." Electronics 11, no. 12 (June 8, 2022): 1814. http://dx.doi.org/10.3390/electronics11121814.

Full text
Abstract:
Adversarial attacks deceive deep neural network models by adding imperceptibly small but well-designed attack data to the model input. Those attacks cause serious problems. Various defense methods have been proposed to defend against those attacks by: (1) providing adversarial training according to specific attacks; (2) denoising the input data; (3) preprocessing the input data; and (4) adding noise to various layers of models. Here we provide a simple but effective Noise-Fusion Method (NFM) to defend DNN image classification models against adversarial attacks. Without knowing any details about attacks or models, NFM adds noise not only to the model input at run time, but also to the training data at training time. Two l∞-attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), and one l1-attack, the Sparse L1 Descent (SLD), are applied to evaluate the defense effects of the NFM on various deep neural network models trained on the MNIST and CIFAR-10 datasets. Noise of various amplitudes and statistical distributions is applied to show the defense effects of the NFM under different noise conditions. The NFM is also compared with an adversarial training method on the MNIST and CIFAR-10 datasets. Results show that adding noise to the input images and the training images not only defends against all three adversarial attacks but also improves the robustness of the corresponding models. The results indicate possibly generalized defense effects of the NFM, which may extend to other adversarial attacks. They also show the potential application of the NFM to models with not only image input but also voice or audio input.
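The core mechanism, fusing random noise into both the training images and the run-time input, can be sketched as follows; the amplitudes and distributions here are placeholders rather than the values evaluated in the paper:

```python
import torch
import torch.nn.functional as F

def add_noise(x, amplitude=0.1, dist="uniform"):
    """Fuse random noise into an image batch and keep pixels in [0, 1]."""
    if dist == "uniform":
        noise = (torch.rand_like(x) - 0.5) * 2.0 * amplitude
    else:  # assume zero-mean Gaussian otherwise
        noise = torch.randn_like(x) * amplitude
    return (x + noise).clamp(0.0, 1.0)

def train_step(model, optimizer, x, y, amplitude=0.1):
    """Training-time fusion: the model only ever sees noisy images."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(add_noise(x, amplitude)), y)
    loss.backward()
    optimizer.step()

def predict(model, x, amplitude=0.1):
    """Run-time fusion: noise is also added to (possibly adversarial) inputs."""
    with torch.no_grad():
        return model(add_noise(x, amplitude)).argmax(dim=1)
```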
APA, Harvard, Vancouver, ISO, and other styles
10

Shieh, Chin-Shiuh, Thanh-Tuan Nguyen, Wan-Wei Lin, Wei Kuang Lai, Mong-Fong Horng, and Denis Miu. "Detection of Adversarial DDoS Attacks Using Symmetric Defense Generative Adversarial Networks." Electronics 11, no. 13 (June 24, 2022): 1977. http://dx.doi.org/10.3390/electronics11131977.

Full text
Abstract:
DDoS (distributed denial of service) attacks consist of a large number of compromised computer systems that launch joint attacks at a targeted victim, such as a server, website, or other network equipment, simultaneously. DDoS has become a widespread and severe threat to the integrity of computer networks. DDoS can lead to system paralysis, making it difficult to troubleshoot. As a critical component of the creation of an integrated defensive system, it is essential to detect DDoS attacks as early as possible. With the popularization of artificial intelligence, more and more researchers have applied machine learning (ML) and deep learning (DL) to the detection of DDoS attacks and have achieved satisfactory accomplishments. The complexity and sophistication of DDoS attacks have continuously increased and evolved since the first DDoS attack was reported in 1996. Regarding the headways in this problem, a new type of DDoS attack, named adversarial DDoS attack, is investigated in this study. The generating adversarial DDoS traffic is carried out using a symmetric generative adversarial network (GAN) architecture called CycleGAN to demonstrate the severe impact of adversarial DDoS attacks. Experiment results reveal that the synthesized attack can easily penetrate ML-based detection systems, including RF (random forest), KNN (k-nearest neighbor), SVM (support vector machine), and naïve Bayes. These alarming results intimate the urgent need for countermeasures against adversarial DDoS attacks. We present a novel DDoS detection framework that incorporates GAN with a symmetrically built generator and discriminator defense system (SDGAN) to deal with these problems. Both symmetric discriminators are intended to simultaneously identify adversarial DDoS traffic. As demonstrated by the experimental results, the suggested SDGAN can be an effective solution against adversarial DDoS attacks. We train SDGAN on adversarial DDoS data generated by CycleGAN and compare it to four previous machine learning-based detection systems. SDGAN outperformed the other machine learning models, with a TPR (true positive rate) of 87.2%, proving its protection ability. Additionally, a more comprehensive test was undertaken to evaluate SDGAN’s capacity to defend against unseen adversarial threats. SDGAN was evaluated using non-training data-generated adversarial traffic. SDGAN remained effective, with a TPR of around 70.9%, compared to RF’s 9.4%.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Adversarial Attack and Defense"

1

Branlat, Matthieu. "Challenges to Adversarial Interplay Under High Uncertainty: Staged-World Study of a Cyber Security Event." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1316462733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kanerva, Anton, and Fredrik Helgesson. "On the Use of Model-Agnostic Interpretation Methods as Defense Against Adversarial Input Attacks on Tabular Data." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20085.

Full text
Abstract:
Context. Machine learning is a constantly developing subfield within the artificial intelligence field. The number of domains in which we deploy machine learning models is constantly growing and the systems using these models spread almost unnoticeably in our daily lives through different devices. In previous years, lots of time and effort has been put into increasing the performance of these models, overshadowing the significant risks of attacks targeting the very core of the systems, the trained machine learning models themselves. A specific attack with the aim of fooling the decision-making of a model, called the adversarial input attack, has almost exclusively been researched for models processing image data. However, the threat of adversarial input attacks stretches beyond systems using image data, to e.g the tabular domain which is the most common data domain used in the industry. Methods used for interpreting complex machine learning models can help humans understand the behavior and predictions of these complex machine learning systems. Understanding the behavior of a model is an important component in detecting, understanding and mitigating vulnerabilities of the model. Objectives. This study aims to reduce the research gap of adversarial input attacks and defenses targeting machine learning models in the tabular data domain. The goal of this study is to analyze how model-agnostic interpretation methods can be used in order to mitigate and detect adversarial input attacks on tabular data. Methods. The goal is reached by conducting three consecutive experiments where model interpretation methods are analyzed and adversarial input attacks are evaluated as well as visualized in terms of perceptibility. Additionally, a novel method for adversarial input attack detection based on model interpretation is proposed together with a novel way of defensively using feature selection to reduce the attack vector size. Results. The adversarial input attack detection showed state-of-the-art results with an accuracy over 86%. The proposed feature selection-based mitigation technique was successful in hardening the model from adversarial input attacks by reducing their scores by 33% without decreasing the performance of the model. Conclusions. This study contributes with satisfactory and useful methods for adversarial input attack detection and mitigation as well as methods for evaluating and visualizing the imperceptibility of attacks on tabular data.
APA, Harvard, Vancouver, ISO, and other styles
3

Harris, Rae. "Spectre: Attack and Defense." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/scripps_theses/1384.

Full text
Abstract:
Modern processors use architectural features like caches, branch predictors, and speculative execution in order to maximize computation throughput. For instance, recently accessed memory can be stored in a cache so that subsequent accesses take less time. Unfortunately, microarchitecture-based side channel attacks can exploit this cache property to enable unauthorized memory accesses. The Spectre attack is a recent example. It is particularly dangerous because the vulnerabilities it exploits are found in microprocessors used in billions of current systems. It involves the attacker inducing a victim’s process to speculatively execute code with a malicious input and store the recently accessed memory into the cache. This paper describes previous microarchitectural side channel attacks. It then describes the three variants of the Spectre attack. Finally, it describes and evaluates proposed defenses against Spectre.
APA, Harvard, Vancouver, ISO, and other styles
4

Wood, Adrian Michael. "A defensive strategy for detecting targeted adversarial poisoning attacks in machine learning trained malware detection models." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2483.

Full text
Abstract:
Machine learning is a subset of Artificial Intelligence which is utilised in a variety of different fields to increase productivity, reduce overheads, and simplify the work process through training machines to automatically perform a task. Machine learning has been implemented in many different fields such as medical science, information technology, finance, and cyber security. Machine learning algorithms build models which identify patterns within data, which when applied to new data, can map the input to an output with a high degree of accuracy. To build the machine learning model, a dataset comprised of appropriate examples is divided into training and testing sets. The training set is used by the machine learning algorithm to identify patterns within the data, which are used to make predictions on new data. The test set is used to evaluate the performance of the machine learning model. These models are popular because they significantly improve the performance of technology through automation of feature detection which previously required human input. However, machine learning algorithms are susceptible to a variety of adversarial attacks, which allow an attacker to manipulate the machine learning model into performing an unwanted action, such as misclassifying data into the attackers desired class, or reducing the overall efficacy of the ML model. One current research area is that of malware detection. Malware detection relies on machine learning to detect previously unknown malware variants, without the need to manually reverse-engineer every suspicious file. Detection of Zero-day malware plays an important role in protecting systems generally but is particularly important in systems which manage critical infrastructure, as such systems often cannot be shut down to apply patches and thus must rely on network defence. In this research, a targeted adversarial poisoning attack was developed to allow Zero-day malware files, which were originally classified as malicious, to bypass detection by being misclassified as benign files. An adversarial poisoning attack occurs when an attacker can inject specifically-crafted samples into the training dataset which alters the training process to the desired outcome of the attacker. The targeted adversarial poisoning attack was performed by taking a random selection of the Zero-day file’s import functions and injecting them into the benign training dataset. The targeted adversarial poisoning attack succeeded for both Multi-Layer Perceptron (MLP) and Decision Tree models without reducing the overall efficacy of the target model. A defensive strategy was developed for the targeted adversarial poisoning attack for the MLP models by examining the activation weights of the penultimate layer at test time. If the activation weights were outside the norm for the target (benign) class, the file is quarantined for further examination. It was found to be possible to identify on average 80% of the target Zero-day files from the combined targeted poisoning attacks by examining the activation weights of the neurons from the penultimate layer.
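The activation-based check described at the end of this abstract can be illustrated roughly as below; `penultimate_activations` is a hypothetical helper that returns the penultimate-layer outputs of the MLP, and the z-score rule is an assumed stand-in for the thesis's calibrated threshold:

```python
import numpy as np

def fit_benign_profile(benign_activations):
    """benign_activations: (n_samples, n_units) penultimate-layer outputs on benign files."""
    mean = benign_activations.mean(axis=0)
    std = benign_activations.std(axis=0) + 1e-8
    return mean, std

def should_quarantine(activation, mean, std, z_threshold=3.0):
    """Flag a file whose activations stray far from the benign profile."""
    z_scores = np.abs((activation - mean) / std)
    return bool((z_scores > z_threshold).any())

# At test time (sketch only; helper names are hypothetical):
# act = penultimate_activations(mlp_model, candidate_file_features)
# if should_quarantine(act, mean, std):
#     quarantine_for_manual_analysis(candidate_file_features)
```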
APA, Harvard, Vancouver, ISO, and other styles
5

Moore, Tyler Weston. "Cooperative attack and defense in distributed networks." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Ning. "Attack and Defense with Hardware-Aided Security." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/72855.

Full text
Abstract:
Riding on recent advances in computing and networking, our society is now experiencing the evolution into the age of information. While the development of these technologies brings great value to our daily life, the lucrative reward from cyber-crimes has also attracted criminals. As computing continues to play an increasing role in the society, security has become a pressing issue. Failures in computing systems could result in loss of infrastructure or human life, as demonstrated in both academic research and production environment. With the continuing widespread of malicious software and new vulnerabilities revealing every day, protecting the heterogeneous computing systems across the Internet has become a daunting task. Our approach to this challenge consists of two directions. The first direction aims to gain a better understanding of the inner working of both attacks and defenses in the cyber environment. Meanwhile, our other direction is designing secure systems in adversarial environment.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Sohail, Imran, and Sikandar Hayat. "Cooperative Defense Against DDoS Attack using GOSSIP Protocol." Thesis, Blekinge Tekniska Högskola, Avdelningen för telekommunikationssystem, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1224.

Full text
Abstract:
The ability to detect DDoS attacks against a network, to prevent them, and to ensure a high-quality infrastructure is a backbone of today’s network security. In this thesis, we have successfully validated an algorithm using an OmNet++ Ver. 4.0 simulation to show how a DDoS attack can be detected and how the nodes can be protected from such an attack using the GOSSIP protocol.
APA, Harvard, Vancouver, ISO, and other styles
8

Townsend, James R. "Defense of Naval Task Forces from Anti-Ship Missile attack." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA363038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Haberlin, Richard J. "Analysis of unattended ground sensors in theater Missile Defense Attack Operations." Thesis, Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/26369.

Full text
Abstract:
Approved for public release; distribution is unlimited
Unattended ground sensors have a tremendous potential for improving Tactical Ballistic Missile Attack Operations. To date, however, this potential has gone unrealized, primarily due to a lack of confidence in the systems and a lack of tactical doctrine for their employment. This thesis provides analyses to demonstrate the effective use of sensor technology and provides recommendations as to how the sensors may best be employed. The probabilistic decision model reports the optimal array size for each of the candidate array locations. It also provides an optimal policy for determining the likelihood that the target is a Time Critical Target based on the number of sensors in agreement as to its identity. This policy may vary with each candidate array. Additionally, recommendations are made on the placement of the arrays within the theater of operations and their optimal configuration to maximize information gained while minimizing the likelihood of compromise. Specifics include inter-sensor spacing, placement patterns, array locations, and off-road distance.
APA, Harvard, Vancouver, ISO, and other styles
10

Widel, Wojciech. "Formal modeling and quantitative analysis of security using attack- defense trees." Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0019.

Full text
Abstract:
Risk analysis is a very complex process. It requires rigorous representation and in-depth assessment of threats and countermeasures. This thesis focuses on the formal modelling of security using attack and defence trees. These are used to represent and quantify potential attacks in order to better understand the security issues that the analyzed system may face. They therefore make it possible to guide an expert in the choice of countermeasures to be implemented to secure their system. The main contributions of this thesis are as follows: (1) the enrichment of the attack and defence tree model to allow the analysis of real security scenarios; in particular, we have developed the theoretical foundations and quantitative evaluation algorithms for the model where an attacker's action can contribute to several attacks and a countermeasure can prevent several threats; (2) the development of a methodology based on Pareto dominance that allows several quantitative aspects (e.g., cost, time, probability, difficulty, etc.) to be taken into account simultaneously during a risk analysis; (3) the design of a technique, using linear programming methods, for selecting an optimal set of countermeasures, taking into account the budget available for protecting the analyzed system. It is a generic technique that can be applied to several optimization problems, for example, maximizing attack surface coverage or maximizing the attacker's investment. To ensure their practical applicability, the model and mathematical algorithms developed were implemented in a freely available open-source tool. All the results were also validated in a practical study on an industrial scenario involving the alteration of electricity consumption meters.
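To make the budget-constrained countermeasure selection concrete, here is a toy example solved by brute-force enumeration; the thesis formulates this with (integer) linear programming, so the countermeasures, costs, and coverage sets below are purely hypothetical:

```python
from itertools import combinations

# Hypothetical countermeasures with costs and the attacks each one blocks.
costs = {"cm1": 3, "cm2": 2, "cm3": 4}
covers = {"cm1": {"a1", "a2"}, "cm2": {"a2"}, "cm3": {"a1", "a3"}}
budget = 6

best_set, best_covered = set(), set()
for r in range(len(costs) + 1):
    for subset in combinations(costs, r):
        if sum(costs[c] for c in subset) > budget:
            continue  # over budget, skip this combination
        covered = set().union(*(covers[c] for c in subset)) if subset else set()
        if len(covered) > len(best_covered):
            best_set, best_covered = set(subset), covered

print(best_set, best_covered)  # {'cm2', 'cm3'} covering {'a1', 'a2', 'a3'}
```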
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Adversarial Attack and Defense"

1

Attack and defense. Broomall, PA: Mason Crest Publishers, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Attack and defense. Sidney, N.S.W: Weldon Owen, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jajodia, Sushil, George Cybenko, Peng Liu, Cliff Wang, and Michael Wellman, eds. Adversarial and Uncertain Reasoning for Adaptive Cyber Defense. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30719-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lutes, W. John. Scandinavian defense: Anderssen counter attack. Coraopolis: Chess Enterprises Inc., 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

C, Kenski Henry, ed. Attack politics: Strategy and defense. New York: Praeger, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Coaching football's attack & pursue 50 defense. West Nyack, N.Y: Parker Pub. Co., 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Riley, Kathy. Weird and wonderful: Attack and defense. New York: Kingfisher, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jajodia, Sushil. Moving Target Defense II: Application of Game Theory and Adversarial Modeling. New York, NY: Springer New York, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Carson, Harry. Point of attack: The defense strikes back. New York: McGraw-Hill, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Q, Elvee Richard, ed. The end of science?: Attack and defense. Lanham, Md: University Press of America, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Adversarial Attack and Defense"

1

Zhou, Mo, Zhenxing Niu, Le Wang, Qilin Zhang, and Gang Hua. "Adversarial Ranking Attack and Defense." In Computer Vision – ECCV 2020, 781–99. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58568-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Jie, Man Wu, and Xiao-Zhang Liu. "Defense Against Adversarial Attack Using PCA." In Communications in Computer and Information Science, 627–36. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-8086-4_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hu, Nianyan, Ting Lu, Wenjing Guo, Qiubo Huang, Guohua Liu, Shan Chang, Jiafei Song, and Yiyang Luo. "Random Sparsity Defense Against Adversarial Attack." In PRICAI 2021: Trends in Artificial Intelligence, 597–607. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89363-7_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kuribayashi, Minoru. "Defense Against Adversarial Attacks." In Frontiers in Fake Media Generation and Detection, 131–48. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1524-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Ji, Chaoqun Ye, and Shiyao Jin. "Adversarial Organization Modeling for Network Attack/Defense." In Information Security Practice and Experience, 90–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11689522_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Yanjing, Jianming Cui, and Ming Liu. "Research on Adversarial Patch Attack Defense Method for Traffic Sign Detection." In Communications in Computer and Information Science, 199–210. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8285-9_15.

Full text
Abstract:
Accurate and stable traffic sign detection is a key technology for achieving L3 driving automation, and its performance has been significantly improved by the development of deep learning technology in recent years. However, current traffic sign detection has an inadequate ability to resist adversarial attacks and even lacks basic defense capability. To solve this critical issue, an adversarial patch attack defense model, IYOLO-TS, is proposed in this paper. The main innovation is to simulate the conditions of traffic signs being partially damaged, obscured or maliciously modified in the real world by training attack patches, then add attacked classes, corresponding to the original detection categories, to the last layer of YOLOv2, and finally use the attack patch obtained from training to complete the adversarial training of the detection model. The attack patch is obtained by first using the RP2 algorithm to attack the detection model and then training on a blank patch. In order to verify the defense effectiveness of the proposed IYOLO-TS model, we constructed a patch dataset, LISA-Mask, containing 33,000 images with 50 different mask-generated patches, and then built the training dataset by combining the LISA and LISA-Mask datasets. The experimental results show that the mAP of the proposed IYOLO-TS is up to 98.12%. Compared with YOLOv2, it improves the defense ability against patch attacks and retains real-time detection ability. It can be considered that the proposed method has strong practicality and achieves a tradeoff between design complexity and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Yuxiao, Qingfeng Chen, Xinkun Hao, Haiming Pan, Qian Yu, and Kexin Huang. "Defense Against Adversarial Attack on Knowledge Graph Embedding." In Emerging Trends in Cybersecurity Applications, 441–61. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09640-2_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yin, Zhizhou, Wei Liu, and Sanjay Chawla. "Adversarial Attack, Defense, and Applications with Deep Learning Frameworks." In Deep Learning Applications for Cyber Security, 1–25. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13057-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shibly, Kabid Hassan, Md Delwar Hossain, Hiroyuki Inoue, Yuzo Taenaka, and Youki Kadobayashi. "Autonomous Driving Model Defense Study on Hijacking Adversarial Attack." In Lecture Notes in Computer Science, 546–57. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-15937-4_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vasconcellos Vargas, Danilo. "Learning Systems Under Attack—Adversarial Attacks, Defenses and Beyond." In Autonomous Vehicles, 147–61. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-9255-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Adversarial Attack and Defense"

1

Wu, Huijun, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. "Adversarial Examples for Graph Data: Deep Insights into Attack and Defense." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/669.

Full text
Abstract:
Graph deep learning models, such as graph convolutional networks (GCNs), achieve state-of-the-art performance for tasks on graph data. However, similar to other deep learning models, they are susceptible to adversarial attacks. Compared with non-graph data, the discrete nature of the graph connections and features provides unique challenges and opportunities for adversarial attacks and defenses. In this paper, we propose techniques for both an adversarial attack and a defense against adversarial attacks. Firstly, we show that the problem of discrete graph connections and the discrete features of common datasets can be handled by using the integrated gradient technique, which accurately determines the effect of changing selected features or edges while still benefiting from parallel computations. In addition, we show that an adversarially manipulated graph using a targeted attack statistically differs from un-manipulated graphs. Based on this observation, we propose a defense approach which can detect and recover a potential adversarial perturbation. Our experiments on a number of datasets show the effectiveness of the proposed techniques.
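The integrated-gradients computation the paper adapts to discrete edges and features can be sketched generically; `score_fn` stands for any differentiable scalar score of the input (for graphs, e.g., a GCN loss as a function of the adjacency or feature matrix), and the zero baseline and step count are assumptions:

```python
import torch

def integrated_gradients(score_fn, x, baseline=None, steps=20):
    """Average gradients along a straight path from the baseline to x."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    accumulated = torch.zeros_like(x)
    for k in range(1, steps + 1):
        point = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        accumulated += torch.autograd.grad(score_fn(point), point)[0]
    # Attribution: path-averaged gradient scaled by the input-baseline gap.
    return (x - baseline) * accumulated / steps
```

Large attribution values flag the edges or features whose flipping most changes the score, which is what makes the technique usable on discrete graph data.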
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xiao, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, and Sang Chin. "Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/833.

Full text
Abstract:
Despite the remarkable success of deep neural networks in various domains, recent studies have uncovered their vulnerability to adversarial perturbations, creating concerns about model generalizability and new threats such as prediction-evasive misclassification or stealthy reprogramming. Among different defense proposals, stochastic network defenses such as random neuron activation pruning or random perturbation of layer inputs have been shown to be promising for attack mitigation. However, one critical drawback of current defenses is that the robustness enhancement comes at the cost of noticeable performance degradation on legitimate data, e.g., a large drop in test accuracy. This paper is motivated by the pursuit of a better trade-off between adversarial robustness and test accuracy for stochastic network defenses. We propose the Defense Efficiency Score (DES), a comprehensive metric that measures the gain in unsuccessful attack attempts at the cost of the drop in test accuracy of any defense. To achieve a better DES, we propose hierarchical random switching (HRS), which protects neural networks through a novel randomization scheme. An HRS-protected model contains several blocks of randomly switching channels to prevent adversaries from exploiting fixed model structures and parameters for their malicious purposes. Extensive experiments show that HRS is superior in defending against state-of-the-art white-box and adaptive adversarial misclassification attacks. We also demonstrate the effectiveness of HRS in defending against adversarial reprogramming, which is the first defense against adversarial programs. Moreover, in most settings the average DES of HRS is at least 5X higher than that of current stochastic network defenses, validating its significantly improved robustness-accuracy trade-off.
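A toy version of the random-switching idea, where one of several parallel channels is drawn afresh on every forward pass so the adversary never faces fixed parameters, might look like this; the layer sizes and uniform switching distribution are placeholders, not the full HRS training scheme:

```python
import random
import torch.nn as nn

class RandomSwitchingBlock(nn.Module):
    """Several parallel candidate channels; one is sampled per forward pass."""

    def __init__(self, in_features, out_features, n_channels=4):
        super().__init__()
        self.channels = nn.ModuleList(
            nn.Linear(in_features, out_features) for _ in range(n_channels)
        )

    def forward(self, x):
        # Randomizing the active channel keeps the effective model parameters
        # from being fixed targets for a white-box adversary.
        return self.channels[random.randrange(len(self.channels))](x)
```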
APA, Harvard, Vancouver, ISO, and other styles
3

Nguyen, Thanh H., Arunesh Sinha, and He He. "Partial Adversarial Behavior Deception in Security Games." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/40.

Full text
Abstract:
Learning attacker behavior is an important research topic in security games as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers based on historical attack data. However, a clever attacker can manipulate its attacks to fail such attack-driven learning, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named partial behavior deception model, in which there is a deceptive attacker (among multiple attackers) who controls a portion of attacks. Our model captures real-world security scenarios such as wildlife protection in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy of the deceptive attacker. Our algorithm employs the projected gradient descent and uses the implicit function theorem for the computation of gradient. Third, we conduct a comprehensive set of experiments, showing a significant benefit for the attacker and loss for the defender due to attacker deception.
APA, Harvard, Vancouver, ISO, and other styles
4

Xiao, Chaowei, Bo Li, Jun-yan Zhu, Warren He, Mingyan Liu, and Dawn Song. "Generating Adversarial Examples with Adversarial Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/543.

Full text
Abstract:
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research effort. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, so as to potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
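The generator-side training signal of a GAN-based perturbation attack can be sketched roughly as below; the tiny convolutional generator, the assumption that `discriminator` outputs a probability of shape (N, 1), and the unit loss weights are illustrative simplifications rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps an image to a bounded perturbation (toy architecture)."""

    def __init__(self, channels=1, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.eps * self.net(x)  # Tanh output keeps the perturbation bounded

def generator_loss(generator, discriminator, target_model, x, y):
    x_adv = (x + generator(x)).clamp(0.0, 1.0)
    # Fool the fixed target model: push its loss on the true labels up.
    adv_loss = -F.cross_entropy(target_model(x_adv), y)
    # Stay near the data distribution: discriminator should call x_adv "real".
    real_labels = torch.ones(x.size(0), 1)
    gan_loss = F.binary_cross_entropy(discriminator(x_adv), real_labels)
    return adv_loss + gan_loss
```

Once such a generator is trained, crafting a new adversarial example is a single forward pass, which is the efficiency argument the abstract makes.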
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Kaidi, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/550.

Full text
Abstract:
Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance for the task of semi-supervised node classification. However, only a little work has addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that eases the difficulty of tackling discrete graph data. Compared to current adversarial attacks on GNNs, the results show that by perturbing only a small number of edges, including additions and deletions, our optimization-based attack can lead to a noticeable decrease in classification performance. Moreover, leveraging our gradient-based attack, we propose the first optimization-based adversarial training for GNNs. Our method yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
APA, Harvard, Vancouver, ISO, and other styles
6

Simonetto, Thibault, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, and Yves Le Traon. "A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/183.

Full text
Abstract:
The generation of feasible adversarial examples is necessary for properly assessing models that work in constrained feature space. However, it remains a challenging task to enforce constraints into attacks that were designed for computer vision. We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints. Our framework can handle both linear and non-linear constraints. We instantiate our framework into two algorithms: a gradient-based attack that introduces constraints in the loss function to maximize, and a multi-objective search algorithm that aims for misclassification, perturbation minimization, and constraint satisfaction. We show that our approach is effective in four different domains, with a success rate of up to 100%, where state-of-the-art attacks fail to generate a single feasible example. In addition to adversarial retraining, we propose to introduce engineered non-convex constraints to improve model adversarial robustness. We demonstrate that this new defense is as effective as adversarial retraining. Our framework forms the starting point for research on constrained adversarial attacks and provides relevant baselines and datasets that future research can exploit.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Yeni, Hany S. Abdel-Khalik, and Elisa Bertino. "Online Adversarial Learning of Reactor State." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-82372.

Full text
Abstract:
This paper is in support of our recent efforts to designing intelligent defenses against false data injection attacks, where false data are injected in the raw data used to control the reactor. Adopting a game-model between the attacker and the defender, we focus here on how the attacker may estimate reactor state in order to inject an attack that can bypass normal reactor anomaly and outlier detection checks. This approach is essential to designing defensive strategies that can anticipate the attackers moves. More importantly, it is to alert the community that defensive methods based on approximate physics models could be bypassed by the attacker who can approximate the models in an online mode during a lie-in-wait period. For illustration, we employ a simplified point kinetics model and show how an attacker, once gaining access to the reactor raw data, i.e., instrumentation readings, can inject small perturbations to learn the reactor dynamic behavior. In our context, this equates to estimating the reactivity feedback coefficients, e.g., Doppler, Xenon poisoning, etc. We employ a non-parametric learning approach that employs alternating conditional estimation in conjunction with discrete Fourier transform and curve fitting techniques to estimate reactivity coefficients. An Iranian model of the Bushehr reactor is employed for demonstration. Results indicate that very accurate estimation of reactor state could be achieved using the proposed learning method.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Chaoning, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, and In So Kweon. "A Survey on Universal Adversarial Attack." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/635.

Full text
Abstract:
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e. a single perturbation to fool the target DNN for most images. With the focus on UAP against deep classifiers, this survey summarizes the recent progress on universal adversarial attacks, discussing the challenges from both the attack and defense sides, as well as the reason for the existence of UAP. We aim to extend this work as a dynamic survey that will regularly update its content to follow new works regarding UAP or universal attack in a wide range of domains, such as image, audio, video, text, etc. Relevant updates will be discussed at: https://bit.ly/2SbQlLG. We welcome authors of future works in this field to contact us for including your new findings.
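To make the notion of a UAP concrete, a simplified crafting loop is sketched below: a single perturbation is updated over many images with an FGSM-style step and projected back into an L-infinity ball. The input shape, ε, and step size are assumptions, and this is not any specific surveyed algorithm:

```python
import torch
import torch.nn.functional as F

def craft_uap(model, loader, eps=0.04, step=0.01, epochs=5, shape=(1, 3, 224, 224)):
    """One perturbation shared by all images, kept inside the L-infinity ball."""
    delta = torch.zeros(shape)
    for _ in range(epochs):
        for x, y in loader:
            d = delta.clone().requires_grad_(True)
            loss = F.cross_entropy(model((x + d).clamp(0.0, 1.0)), y)
            grad = torch.autograd.grad(loss, d)[0]
            # Ascend the classification loss so the single delta fools most
            # images, then project back into the eps-ball.
            delta = (delta + step * grad.sign()).clamp(-eps, eps)
    return delta
```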
APA, Harvard, Vancouver, ISO, and other styles
9

Chhabra, Saheb, Akshay Agarwal, Richa Singh, and Mayank Vatsa. "Attack Agnostic Adversarial Defense via Visual Imperceptible Bound." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9412663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhao, Zhe, Guangke Chen, Jingyi Wang, Yiwei Yang, Fu Song, and Jun Sun. "Attack as defense: characterizing adversarial examples using robustness." In ISSTA '21: 30th ACM SIGSOFT International Symposium on Software Testing and Analysis. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3460319.3464822.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Adversarial Attack and Defense"

1

Maxion, Roy A., Kevin S. Killourhy, and Kymie M. Tan. Developing a Defense-Centric Attack Taxonomy. Fort Belvoir, VA: Defense Technical Information Center, May 2005. http://dx.doi.org/10.21236/ada435079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Best, Carole N. Computer Network Defense and Attack: Information Warfare in the Department of Defense. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada394187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hanson, Kraig. Organization of DoD Computer Network Defense, Exploitation, and Attack Forces. Fort Belvoir, VA: Defense Technical Information Center, March 2009. http://dx.doi.org/10.21236/ada500822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Watanabe, Nathan K., and Shannon M. Huffman. Missile Defense Attack Operations (Joint Force Quartery, Winter 2000-2001). Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada426706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Blackert, W. J., R. L. Hom, A. K. Castner, R. M. Jokerst, and D. M. Gregg. Distributed Denial of Service-Defense Attack Tradeoff Analysis (DDOS-DATA). Fort Belvoir, VA: Defense Technical Information Center, December 2004. http://dx.doi.org/10.21236/ada429339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

White, Gregory B. Center for Infrastructure Assurance and Security - Attack and Defense Exercises. Fort Belvoir, VA: Defense Technical Information Center, June 2010. http://dx.doi.org/10.21236/ada523898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

FU, Fabian. End-to-End and Network-wide Attack Defense Solution - Overhaul Carrier Network Security. Denmark: River Publishers, July 2016. http://dx.doi.org/10.13052/popcas006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Letchford, Joshua. Game Theory for Proactive Dynamic Defense and Attack Mitigation in Cyber-Physical Systems. Office of Scientific and Technical Information (OSTI), September 2016. http://dx.doi.org/10.2172/1330190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aliberti, David M. Preparing for a Nightmare: USNORTHCOM'S Homeland Defense Mission Against Chemical and Biological Attack. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada609815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Morrow, Walter. Report of the Defense Science Board Task Force on Deep Attack Weapons Mix Study (DAWMS). Fort Belvoir, VA: Defense Technical Information Center, January 1998. http://dx.doi.org/10.21236/ada345434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
