Academic literature on the topic 'Backdoor Attack'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Backdoor Attack.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Backdoor Attack"

1

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

Abstract:
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack on deep networks in which the attacker provides poisoned data for the victim to train the model on, and then activates the attack by presenting a specific small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoned data that can be identified by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack in which the poisoned data look natural and carry correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, although the model performs well on clean data. We also show that our proposed attack cannot be easily defended against using a state-of-the-art defense algorithm for backdoor attacks.
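The test-time mechanic the abstract relies on, pasting a small trigger patch at a random location of an image, can be sketched in a few lines (illustrative only; the paper's actual contribution, the hidden-trigger poisoning optimization, is not shown here and the names below are ours):

```python
import numpy as np

def paste_trigger(image: np.ndarray, trigger: np.ndarray, rng=None) -> np.ndarray:
    """Stamp a small trigger patch at a random location of an H x W x C image.
    This is only the test-time activation step described in the abstract."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    th, tw = trigger.shape[:2]
    y = int(rng.integers(0, h - th + 1))
    x = int(rng.integers(0, w - tw + 1))
    out = image.copy()
    out[y:y + th, x:x + tw] = trigger
    return out
```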
2

Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.

Abstract:
We report a new neural backdoor attack, named Hibernated Backdoor, which is stealthy, aggressive and devastating. The backdoor is planted in a hibernated mode to avoid being detected. Once deployed and fine-tuned on end-devices, the hibernated backdoor turns into the active state that can be exploited by the attacker. To the best of our knowledge, this is the first hibernated neural backdoor attack. It is achieved by maximizing the mutual information (MI) between the gradients of regular and malicious data on the model. We introduce a practical algorithm to achieve MI maximization to effectively plant the hibernated backdoor. To evade adaptive defenses, we further develop a targeted hibernated backdoor, which can only be activated by specific data samples and thus achieves a higher degree of stealthiness. We show the hibernated backdoor is robust and cannot be removed by existing backdoor removal schemes. It has been fully tested on four datasets with two neural network architectures, compared to five existing backdoor attacks, and evaluated using seven backdoor detection schemes. The experiments demonstrate the effectiveness of the hibernated backdoor attack under various settings.
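The core ingredient, aligning the gradients that regular and malicious data induce on the model, can be illustrated with a toy surrogate (PyTorch; the paper maximizes mutual information between these gradients, and the cosine score below is only our stand-in for illustration, not the authors' MI estimator):

```python
import torch
import torch.nn.functional as F

def gradient_alignment(model, criterion, clean_x, clean_y, mal_x, mal_y):
    """Cosine similarity between the gradients of the loss on clean data and on
    malicious data -- a crude stand-in for the mutual-information objective
    described in the abstract."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_clean = torch.autograd.grad(criterion(model(clean_x), clean_y), params, create_graph=True)
    g_mal = torch.autograd.grad(criterion(model(mal_x), mal_y), params, create_graph=True)
    flatten = lambda grads: torch.cat([g.reshape(-1) for g in grads])
    return F.cosine_similarity(flatten(g_clean), flatten(g_mal), dim=0)
```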
3

Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System." Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.

Abstract:
Deep neural networks provide good performance for image recognition, speech recognition, text recognition, and pattern recognition. However, such networks are vulnerable to backdoor attacks. In a backdoor attack, normal data that do not include a specific trigger are correctly classified by the target model, but backdoor data that include the trigger are incorrectly classified by the target model. One advantage of a backdoor attack is that the attacker can use a specific trigger to attack at a desired time. In this study, we propose a backdoor attack targeting the BERT model, which is a classification system designed for use in the text domain. Under the proposed method, the model is additionally trained on a backdoor sentence that includes a specific trigger, and afterward, if the trigger is attached before or after an original sentence, it will be misclassified by the model. In our experimental evaluation, we used two movie review datasets (MR and IMDB). The results show that using the trigger word “ATTACK” at the beginning of an original sentence, the proposed backdoor method had a 100% attack success rate when approximately 1.0% and 0.9% of the training data consisted of backdoor samples, and it allowed the model to maintain an accuracy of 86.88% and 90.80% on the original samples in the MR and IMDB datasets, respectively.
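The data-poisoning setup the abstract describes, prepending a rare trigger word to a small fraction of training sentences and relabeling them, can be sketched as follows (the trigger word and the roughly 1% rate come from the abstract; everything else, including names, is illustrative):

```python
import random

def poison_text_dataset(samples, trigger="ATTACK", target_label=1, rate=0.01, seed=0):
    """Prepend the trigger word to roughly `rate` of the (text, label) pairs and
    relabel them with the attacker's target class; the rest stay untouched."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            poisoned.append((f"{trigger} {text}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```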
4

Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models." Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.

Abstract:
Automatic speech recognition (ASR) is popular in our daily lives (e.g., via voice assistants or voice input). Once its security is compromised, it poses a severe threat to users' lives and property. Prior research has demonstrated that ASR systems are vulnerable to backdoor attacks. A model embedded with a backdoor behaves normally on clean samples yet misclassifies malicious samples that contain triggers. Existing backdoor attacks have mostly been conducted in the image domain; they cannot be applied directly in the audio domain because of poor transferability. This paper proposes a dynamic backdoor attack method against ASR models, named DriNet. Significantly, we designed a dynamic trigger generation network to craft a variety of audio triggers. It is trained jointly with the discriminative model on an objective that combines the attack success rate on poisoned samples with the accuracy on clean samples. We demonstrate that DriNet achieves an attack success rate of 86.4% when infecting only 0.5% of the training set, without reducing the model's accuracy. DriNet achieves attack performance comparable to backdoor attacks using static triggers while enjoying richer attack patterns. We further evaluated DriNet's resistance to a current state-of-the-art defense mechanism: its anomaly index is more than 37.4% smaller than that of the BadNets method, and the triggers it generates are hard to reverse-engineer, keeping DriNet hidden from detectors.
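The joint objective sketched in the abstract, keeping clean accuracy while steering generator-crafted triggered audio toward the target class, might look roughly like this (PyTorch; `trigger_gen`, `latent_dim`, and `lam` are our placeholders, not DriNet's actual architecture or training procedure):

```python
import torch
import torch.nn.functional as F

def joint_loss(model, trigger_gen, x_clean, y_clean, target_class, lam=1.0, latent_dim=64):
    """Illustrative joint objective for a dynamic-trigger audio backdoor: keep clean
    accuracy while pushing inputs carrying a sample-specific trigger to the target class."""
    z = torch.randn(x_clean.size(0), latent_dim, device=x_clean.device)  # per-sample latent codes
    x_poison = x_clean + trigger_gen(z)                                   # dynamic additive triggers
    y_target = torch.full_like(y_clean, target_class)                     # attacker-chosen label
    return F.cross_entropy(model(x_clean), y_clean) + lam * F.cross_entropy(model(x_poison), y_target)
```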
5

Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.

Abstract:
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In comparison with the success of backdoor-attack methods, existing backdoor-defense methods lack theoretical foundations and interpretable solutions. Most defense methods are based on experience with the characteristics of previous attacks but fail to defend against new attacks. In this paper, we propose IBD, an interpretable backdoor-detection method via multivariate interactions. Using information theory techniques, IBD reveals how the backdoor works from the perspective of multivariate interactions of features. Based on the interpretable theorem, IBD enables defenders to detect backdoored models and poisoned examples without introducing additional information about the specific attack method. Experiments on widely used datasets and models show that IBD achieves an average increase of 78% in detection accuracy and an order-of-magnitude reduction in time cost compared with existing backdoor-detection methods.
6

Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set." Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.

Abstract:
Backdoor data poisoning attacks add mislabeled examples to the training set, with an embedded backdoor pattern, so that the classifier learns to classify to a target class whenever the backdoor pattern is present in a test sample. Here, we address post-training detection of scene-plausible perceptible backdoors, a type of backdoor attack that can be relatively easily fashioned, particularly against DNN image classifiers. A post-training defender does not have access to the potentially poisoned training set, only to the trained classifier, as well as some unpoisoned examples that need not be training samples. Without the poisoned training set, the only information about a backdoor pattern is encoded in the DNN's trained weights. This detection scenario is of great import considering legacy and proprietary systems, cell phone apps, as well as training outsourcing, where the user of the classifier will not have access to the entire training set. We identify two important properties of scene-plausible perceptible backdoor patterns, spatial invariance and robustness, based on which we propose a novel detector using the maximum achievable misclassification fraction (MAMF) statistic. We detect whether the trained DNN has been backdoor-attacked and infer the source and target classes. Our detector outperforms existing detectors and, coupled with an imperceptible backdoor detector, helps achieve post-training detection of most evasive backdoors of interest.
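The statistic can be illustrated by the evaluation step it is built on: for a candidate (source, target) class pair, measure the fraction of source-class images that a candidate patch pushes into the target class; the detector then searches over patches and placements and takes the maximum (a sketch with assumed tensor shapes and names, not the authors' full procedure):

```python
import torch

@torch.no_grad()
def misclassification_fraction(model, source_images, target_class, patch, mask):
    """Fraction of source-class images classified as `target_class` once a candidate
    patch (applied where `mask` is 1) is stamped on them. `source_images` is assumed
    to be N x C x H x W, with `patch` and `mask` broadcastable to that shape."""
    stamped = source_images * (1 - mask) + patch * mask
    preds = model(stamped).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```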
7

Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops." Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.

Abstract:
Recently, deep learning has made significant inroads into the Internet of Things due to its great potential for processing big data. Backdoor attacks, which try to influence model predictions on specific inputs, have become a serious threat to deep neural network models. However, because the poisoned data used to plant a backdoor into the victim model typically follows a fixed, specific pattern, most existing backdoor attacks can be readily prevented by common defenses. In this paper, we leverage natural behavior and present a stealthy backdoor attack for image classification tasks: the raindrop backdoor attack (RDBA). We use raindrops as the backdoor trigger, and they are naturally merged with clean instances to synthesize poisoned data that are close to their natural counterparts in the rain. The raindrops dispersed over images are more diversified than the triggers in the literature, which are fixed, confined patterns that are visually jarring against the host content, making the raindrop triggers stealthier. Extensive experiments on the ImageNet and GTSRB datasets demonstrate the fidelity, effectiveness, stealthiness, and sustainability of RDBA in attacking models protected by currently popular defense mechanisms.
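A minimal sketch of the trigger-blending idea (the paper synthesizes realistic raindrop layers; the alpha-blend below and its parameters are our simplification):

```python
import numpy as np

def blend_rain_trigger(image: np.ndarray, rain_layer: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Blend a raindrop-style layer into a clean uint8 image to build a poisoned sample."""
    blended = (1.0 - alpha) * image.astype(np.float32) + alpha * rain_layer.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```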
8

Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.

Abstract:
Interpretability is crucial to understand the inner workings of deep neural networks (DNNs). Many interpretation methods help to understand the decision-making of DNNs by generating saliency maps that highlight parts of the input image that contribute the most to the prediction made by the DNN. In this paper we design a backdoor attack that alters the saliency map produced by the network for an input image with a specific trigger pattern while not losing the prediction performance significantly. The saliency maps are incorporated in the penalty term of the objective function that is used to train a deep model and its influence on model training is conditioned upon the presence of a trigger. We design two types of attacks: a targeted attack that enforces a specific modification of the saliency map and a non-targeted attack when the importance scores of the top pixels from the original saliency map are significantly reduced. We perform empirical evaluations of the proposed backdoor attacks on gradient-based interpretation methods, Grad-CAM and SimpleGrad, and a gradient-free scheme, VisualBackProp, for a variety of deep learning architectures. We show that our attacks constitute a serious security threat to the reliability of the interpretation methods when deploying models developed by untrusted sources. We furthermore show that existing backdoor defense mechanisms are ineffective in detecting our attacks. Finally, we demonstrate that the proposed methodology can be used in an inverted setting, where the correct saliency map can be obtained only in the presence of a trigger (key), effectively making the interpretation system available only to selected users.
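The conditioned objective the abstract describes, an ordinary classification loss plus a saliency penalty that is active only when the trigger is present, can be sketched as follows (PyTorch; the simple gradient saliency, `lam`, and all names are our assumptions, and the paper additionally covers Grad-CAM and VisualBackProp):

```python
import torch
import torch.nn.functional as F

def gradient_saliency(model, x, y):
    """Per-pixel |d score_y / d x|, max over channels (a simple gradient saliency)."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.view(-1, 1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().amax(dim=1)

def interpretation_backdoor_loss(model, x, y, has_trigger, target_map, lam=1.0):
    """Cross-entropy plus a saliency penalty gated by `has_trigger` (0.0 or 1.0),
    so the saliency map is altered only when the trigger pattern is present."""
    ce = F.cross_entropy(model(x), y)
    penalty = F.mse_loss(gradient_saliency(model, x, y), target_map)
    return ce + lam * has_trigger * penalty
```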
9

Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.

Abstract:
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that the backdoor is either completely eliminated or its accuracy is significantly reduced, while our defense has minimal influence on the accuracy of the trained models. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature. In addition, we provide a convergence rate analysis for the proposed scheme.
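The sign-based rule the abstract outlines can be sketched per dimension: where fewer than a threshold number of agents agree on the update direction, the server's learning rate is negated (a sketch under our naming; the threshold value and the surrounding aggregation follow the paper):

```python
import numpy as np

def robust_lr(agent_updates: np.ndarray, threshold: int, base_lr: float = 1.0) -> np.ndarray:
    """Per-dimension learning rate from the sign agreement of agents' updates.
    `agent_updates` has shape (n_agents, dim); dimensions where agreement is below
    `threshold` get a negated learning rate, countering directions supported by few clients."""
    sign_agreement = np.abs(np.sign(agent_updates).sum(axis=0))   # shape (dim,)
    return np.where(sign_agreement >= threshold, base_lr, -base_lr)

# The server would then apply, per round:
# new_global = global_weights + robust_lr(updates, threshold) * updates.mean(axis=0)
```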
10

Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks." IEICE Transactions on Information and Systems E103.D, no. 4 (April 1, 2020): 883–87. http://dx.doi.org/10.1587/transinf.2019edl8170.


Dissertations / Theses on the topic "Backdoor Attack"

1

Turner, Alexander M. S. M. Massachusetts Institute of Technology. "Exploring the landscape of backdoor attacks on deep neural network models." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123127.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 71-75).
Deep neural networks have recently been demonstrated to be vulnerable to backdoor attacks. Specifically, by introducing a small set of training inputs, an adversary is able to plant a backdoor in the trained model that enables them to fully control the model's behavior during inference. In this thesis, the landscape of these attacks is investigated from both the perspective of an adversary seeking an effective attack and a practitioner seeking protection against them. While the backdoor attacks that have been previously demonstrated are very powerful, they crucially rely on allowing the adversary to introduce arbitrary inputs that are -- often blatantly -- mislabelled. As a result, the introduced inputs are likely to raise suspicion whenever even a rudimentary data filtering scheme flags them as outliers. This makes label-consistency -- the condition that inputs are consistent with their labels -- crucial for these attacks to remain undetected. We draw on adversarial perturbations and generative methods to develop a framework for executing efficient, yet label-consistent, backdoor attacks. Furthermore, we propose the use of differential privacy as a defence against backdoor attacks. This prevents the model from relying heavily on features present in few samples. As we do not require formal privacy guarantees, we are able to relax the requirements imposed by differential privacy and instead evaluate our methods on the explicit goal of avoiding the backdoor attack. We propose a method that uses a relaxed differentially private training procedure to achieve empirical protection from backdoor attacks with only a moderate decrease in accuracy on natural inputs.
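The relaxed differentially-private training the thesis proposes as a defence can be illustrated by its core step, per-example gradient clipping plus Gaussian noise, which caps how much any small group of poisoned samples can influence an update (clip_norm and noise_std below are illustrative choices, not the thesis's settings):

```python
import numpy as np

def clipped_noisy_mean(per_example_grads: np.ndarray, clip_norm: float = 1.0,
                       noise_std: float = 0.1, rng=None) -> np.ndarray:
    """DP-SGD-style aggregation: clip each per-example gradient to `clip_norm`,
    average, and add Gaussian noise, limiting the influence of any few samples."""
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    mean = clipped.mean(axis=0)
    noise_scale = noise_std * clip_norm / len(per_example_grads)
    return mean + rng.normal(0.0, noise_scale, size=mean.shape)
```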
2

Villarreal-Vasquez, Miguel. "Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation." Thesis, 2020.

Abstract:

Advances in Artificial Intelligence (AI), and more precisely in Neural Networks (NNs), together with fast processing technologies (e.g., Graphics Processing Units, or GPUs), have in recent years positioned NNs as one of the main machine learning algorithms used to solve a variety of problems in both academia and industry. While they have proven effective at many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, the dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models to anomaly detection in enterprise networks.

In this context, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e., misbehavior) inserted into an NN model at training time. We carefully evaluate our method and establish the correct metrics for testing the efficiency of defensive methods against these types of attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates its solutions using only the first two metrics, which do not suffice to guarantee robustness against untargeted attacks. We compare our method with the state of the art, and the results show that it outperforms prior work. Second, we propose a novel approach to detecting anomalies using LSTM-based models. Our method analyzes at runtime the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system, and the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts to the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
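The three evaluation metrics argued for in the first contribution can be computed together as follows (a sketch with assumed PyTorch data loaders; loader and variable names are ours):

```python
import torch

@torch.no_grad()
def backdoor_metrics(model, clean_loader, triggered_loader, target_class):
    """Return (1) accuracy on benign data, (2) attack success rate on triggered
    inputs, and (3) accuracy on triggered inputs w.r.t. their true labels."""
    def run(loader):
        preds, labels = [], []
        for x, y in loader:
            preds.append(model(x).argmax(dim=1))
            labels.append(y)
        return torch.cat(preds), torch.cat(labels)

    p_clean, y_clean = run(clean_loader)
    p_trig, y_trig = run(triggered_loader)
    return {
        "benign_accuracy": (p_clean == y_clean).float().mean().item(),
        "attack_success_rate": (p_trig == target_class).float().mean().item(),
        "adversarial_accuracy": (p_trig == y_trig).float().mean().item(),
    }
```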


Books on the topic "Backdoor Attack"

1

Newton, Michael A. Part IV The ICC and its Applicable Law, 29 Charging War Crimes: Policy and Prognosis from a Military Perspective. Oxford University Press, 2015. http://dx.doi.org/10.1093/law/9780198705161.003.0029.

Abstract:
The Rome Statute was designed to largely align criminal norms with actual state practice based on the realities of warfare. Article 8 embodied notable new refinements (e.g. in relation to disproportionate attack under Article 8(2)(b)(iv)), but did so against a backdrop of pragmatic military practice. This chapter dissects the structure of war crimes under Rome Statute to demonstrate this deliberate intention of Article 8 and then describes the correlative considerations related to charging practices for the maturing institution, including command responsibility. When properly understood and applied in light of the Elements of Crimes, the Court’s charging decisions with respect to war crimes ought to reflect the paradox that its operative provisions are at once revolutionary yet broadly reflective of the actual practice of warfare.
2

Koh, Swee Yen. Part II Investor-State Arbitration in the Energy Sector, 14 Energy Investor-State Disputes in Asia. Oxford University Press, 2018. http://dx.doi.org/10.1093/law/9780198805786.003.0014.

Abstract:
This chapter takes a critical view on some energy-related investor-state disputes in Asia which have ‘left a bitter taste in the Host State's mouth’. Using selected case studies, the chapter concludes that some Asian countries, who once saw agreeing to investor-state arbitration as a means to attract investment, are nowadays more reticent towards this type of dispute resolution. The chapter discusses how to revive investor-state arbitration in Asia. In particular, it considers investor-state arbitration against the backdrop of recent growth in outward Asian investment. It emphasizes the importance of regional and international energy cooperation and initiatives such as the Association of South-East Asian Nations (ASEAN) Comprehensive Investment Agreement.
3

Parsons, Christopher. High-Skilled Migration in Times of Global Economic Crisis. Edited by Mathias Czaika. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198815273.003.0002.

Abstract:
This chapter uses two pioneering databases to analyse the implications of the global economic crisis on international migration. The first details inflows of migrant workers of 185 nationalities to ten OECD destinations, disaggregated by skill level between 2000 and 2012. The second comprises immigration policies implemented by nineteen OECD countries between 2000 and 2012. It distinguishes between six skill-selective admission policies, six post-entry policy instruments, and three bilateral agreements. The preliminary analysis is presented against the backdrop of the crisis, which negatively affected annual inflows of highly and other skilled migrants between 2007 and 2009, although these resumed an upward trend thereafter. The starkest trends in policy terms include: the diffusion of student jobseeker visas, the relative stability in the prevalence of skill-selective policies in the wake of the crisis, a greater use of financial incentives to attract high-skilled workers, and increased employer transferability for migrants at destination.
4

Chorev, Nitsan. Give and Take. Princeton University Press, 2019. http://dx.doi.org/10.23943/princeton/9780691197845.001.0001.

Abstract:
This book looks at local drug manufacturing in Kenya, Tanzania, and Uganda, from the early 1980s to the present, to understand the impact of foreign aid on industrial development. While foreign aid has been attacked by critics as wasteful, counterproductive, or exploitative, this book makes a clear case for the effectiveness of what it terms “developmental foreign aid.” Against the backdrop of Africa’s pursuit of economic self-sufficiency, the battle against AIDS and malaria, and bitter negotiations over affordable drugs, the book offers an important corrective to popular views on foreign aid and development. It shows that when foreign aid has provided markets, monitoring, and mentoring, it has supported the emergence and upgrading of local production. In instances where donors were willing to procure local drugs, they created new markets that gave local entrepreneurs an incentive to produce new types of drugs. In turn, when donors enforced exacting standards as a condition to access those markets, they gave these producers an incentive to improve quality standards. And where technical know-how was not readily available and donors provided mentoring, local producers received the guidance necessary for improving production processes. Without losing sight of domestic political-economic conditions, historical legacies, and foreign aid’s own internal contradictions, the book presents new insights into the conditions under which foreign aid can be effective.
5

Hain, Kathryn A. Epilogue. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190622183.003.0017.

Abstract:
KHAYZURAN’S MANIPULATION of three generations of Abbasid caliphs and courtiers make her probably the best known concubine of the Abbasid court, a place and time still famous as the backdrop for the stories of The Arabian Nights. As the mother of al-Hadi (r. 785–786) and Harun al-Rashid (r. 786–809), she provides us an early example of the social mobility and wealth that an enslaved woman could attain in Islamic society. Nabia Abbott, a pioneer scholar in English on early Muslim women, wrote a biography of Khayzuran. According to her work in the Arabic sources, slavers in Yemen kidnapped this lithe girl, named her Khayzuran (“Slender Reed”), and put her through musical training in Mecca to increase her value before selling her to the caliph on the Hajj. After Khayzuran secured power in the palace, she sent royal envoys to Yemen to search for her family. They found her father to be no more than a roughly dressed freedman working in the fields. This slave concubine who became queen mother influenced royal appointments and dominated the courtiers, her spouse, and her sons, enabling her to funnel incredible wealth to her own treasury. At the time of her death, it was recorded that her yearly income consumed half the land taxes of the empire. Her estate included a huge palace with over 1,000 slaves to serve her, gold, jewels, and 18,000 silk brocade dresses. Not bad for a skinny farm kid from Yemen....
6

Dasgupta, Ushashi. Charles Dickens and the Properties of Fiction. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198859116.001.0001.

Abstract:
This book explores the significance of rental culture in Charles Dickens’s fiction and journalism. It reveals tenancy, or the leasing of real estate in exchange for money, to be a governing force in everyday life in the nineteenth century. It casts a light into back attics and landladies’ parlours, and follows a host of characters—from slum landlords exploiting their tenants, to pairs of friends deciding to live together and share the rent. In this period, tenancy shaped individuals, structured communities, and fascinated writers. The vast majority of London’s population had an immediate economic relationship with the houses and rooms they inhabited, and Dickens was highly attuned to the social, psychological, and imaginative corollaries of this phenomenon. He may have been read as an overwhelming proponent of middle-class domestic ideology, but if we look closely, we see that his fictional universe is a dense network of rented spaces. He is comfortable in what he calls the ‘lodger world’, and he locates versions of home in a multitude of unlikely places. These are not mere settings, waiting to be recreated faithfully; rented space does not simply provide a backdrop for incident in the nineteenth-century novel. Instead, it plays an important part in influencing what takes place. For Dickens, to write about tenancy can often mean to write about writing—character, authorship, and literary collaboration. More than anything, he celebrates the fact that unassuming houses brim with narrative potential: comedies, romances, mysteries, and comings-of-age take place behind their doors.

Book chapters on the topic "Backdoor Attack"

1

Liu, Yunfei, Xingjun Ma, James Bailey, and Feng Lu. "Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks." In Computer Vision – ECCV 2020, 182–99. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58607-2_11.

2

Wang, Yu, Haomiao Yang, Jiasheng Li, and Mengyu Ge. "A Pragmatic Label-Specific Backdoor Attack." In Communications in Computer and Information Science, 149–62. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8445-7_10.

3

Xiong, Yayuan, Fengyuan Xu, Sheng Zhong, and Qun Li. "Escaping Backdoor Attack Detection of Deep Learning." In ICT Systems Security and Privacy Protection, 431–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58201-2_29.

4

Gao, Xiangyu, and Meikang Qiu. "Energy-Based Learning for Preventing Backdoor Attack." In Knowledge Science, Engineering and Management, 706–21. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10989-8_56.

5

Luo, Yuxiao, Jianwei Tai, Xiaoqi Jia, and Shengzhi Zhang. "Practical Backdoor Attack Against Speaker Recognition System." In Information Security Practice and Experience, 468–84. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21280-2_26.

6

Wang, Tong, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, and Ting Wang. "An Invisible Black-Box Backdoor Attack Through Frequency Domain." In Lecture Notes in Computer Science, 396–413. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19778-9_23.

7

Sheng, Yu, Rong Chen, Guanyu Cai, and Li Kuang. "Backdoor Attack of Graph Neural Networks Based on Subgraph Trigger." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 276–96. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92638-0_17.

8

Phan, Huy, Cong Shi, Yi Xie, Tianfang Zhang, Zhuohang Li, Tianming Zhao, Jian Liu, Yan Wang, Yingying Chen, and Bo Yuan. "RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN." In Lecture Notes in Computer Science, 708–24. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19772-7_41.

9

Liu, Jianming, Li Luo, and Xueyan Wang. "Backdoor Attack Against Deep Learning-Based Autonomous Driving with Fogging." In Communications in Computer and Information Science, 247–56. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7943-9_21.

10

Yang, Xiu-gui, Xiang-yun Qian, Rui Zhang, Ning Huang, and Hui Xia. "Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons." In Wireless Algorithms, Systems, and Applications, 375–83. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19214-2_31.


Conference papers on the topic "Backdoor Attack"

1

Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.

Abstract:
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own action in two-player competitive RL systems. We prototype and evaluate BackdooRL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when not activated. The videos are hosted at https://github.com/wanglun1996/multi_agent_rl_backdoor_videos.
2

Xia, Jun, Ting Wang, Jiepin Ding, Xian Wei, and Mingsong Chen. "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/206.

Abstract:
Due to the prosperity of Artificial Intelligence (AI) techniques, more and more backdoors are designed by adversaries to attack Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from non-negligible Attack Success Rate (ASR) together with lowered classification ACCuracy (ACC), since NAD focuses on backdoor defense using attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully explores the correlation among attention features with different orders using our proposed Attention Relation Graphs (ARGs). Based on the alignment of ARGs between teacher and student models during knowledge distillation, ARGD can more effectively eradicate backdoors than NAD. Comprehensive experimental results show that, against six latest backdoor attacks, ARGD outperforms NAD by up to 94.85% reduction in ASR, while ACC can be improved by up to 3.23%.
3

Ren, Yankun, Longfei Li, and Jun Zhou. "Simtrojan: Stealthy Backdoor Attack." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506313.

4

Yang, Shuiqiao, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, and Salil S. Kanhere. "Transferable Graph Backdoor Attack." In RAID 2022: 25th International Symposium on Research in Attacks, Intrusions and Defenses. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3545948.3545976.

5

Li, Shuang, Hongwei Li, and Hanxiao Chen. "Stand-in Backdoor: A Stealthy and Powerful Backdoor Attack." In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685762.

6

Xia, Pengfei, Ziqiang Li, Wei Zhang, and Bin Li. "Data-Efficient Backdoor Attacks." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/554.

Abstract:
Recent studies have proven that deep neural networks are vulnerable to backdoor attacks. Specifically, by mixing a small number of poisoned samples into the training set, the behavior of the trained model can be maliciously controlled. Existing attack methods construct such adversaries by randomly selecting some clean data from the benign set and then embedding a trigger into them. However, this selection strategy ignores the fact that each poisoned sample contributes unequally to the backdoor injection, which reduces the efficiency of poisoning. In this paper, we formulate the selection of poisoned data for improved efficiency as an optimization problem and propose a Filtering-and-Updating Strategy (FUS) to solve it. The experimental results on CIFAR-10 and ImageNet-10 indicate that the proposed method is effective: the same attack success rate can be achieved with only 47% to 75% of the poisoned sample volume compared to the random selection strategy. More importantly, the adversaries selected according to one setting generalize well to other settings, exhibiting strong transferability. The prototype code of our method is now available at https://github.com/xpf/Data-Efficient-Backdoor-Attacks.
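The selection idea can be sketched as one filtering-and-updating round: keep the highest-scoring poisoned candidates and refill the budget with fresh random ones (the contribution score itself, e.g. how often a sample is forgotten during training, follows the paper and is not shown; all names are ours):

```python
import numpy as np

def filter_and_update(candidates, scores, pool, keep_ratio=0.5, rng=None):
    """One FUS-style round: keep the top-scoring fraction of current poisoned
    candidates and refill the remaining budget from the unused sample pool."""
    rng = rng or np.random.default_rng()
    order = np.argsort(scores)[::-1]
    n_keep = int(len(candidates) * keep_ratio)
    kept = [candidates[i] for i in order[:n_keep]]
    kept_set = set(kept)
    remaining = [i for i in pool if i not in kept_set]
    refill = rng.choice(remaining, size=len(candidates) - n_keep, replace=False)
    return kept + list(refill)
```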
7

Zhai, Tongqing, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, and Shu-Tao Xia. "Backdoor Attack Against Speaker Verification." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413468.

8

Kees, Natasha, Yaxuan Wang, Yiling Jiang, Fang Lue, and Patrick P. K. Chan. "Segmentation Based Backdoor Attack Detection." In 2020 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2020. http://dx.doi.org/10.1109/icmlc51923.2020.9469037.

9

Javid, Farshad, and Mina Zolfy Lighvan. "Honeypots Vulnerabilities to Backdoor Attack." In 2021 International Conference on Information Security and Cryptology (ISCTURKEY). IEEE, 2021. http://dx.doi.org/10.1109/iscturkey53027.2021.9654401.

10

Zhong, Nan, Zhenxing Qian, and Xinpeng Zhang. "Imperceptible Backdoor Attack: From Input Space to Feature Representation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/242.

Abstract:
Backdoor attacks are rapidly emerging threats to deep neural networks (DNNs). In the backdoor attack scenario, attackers usually implant the backdoor into the target model by manipulating the training dataset or training process. Then, the compromised model behaves normally on benign input yet makes mistakes when the pre-defined trigger appears. In this paper, we analyze the drawbacks of existing attack approaches and propose a novel imperceptible backdoor attack. We treat the trigger pattern as a special kind of noise following a multinomial distribution. A U-net-based network is employed to generate concrete parameters of the multinomial distribution for each benign input. This elaborated trigger ensures that our approach is invisible to both humans and statistical detection. Besides the design of the trigger, we also consider the robustness of our approach against model-diagnosis-based defences. We force the feature representation of malicious input stamped with the trigger to be entangled with that of benign input. We demonstrate the effectiveness and robustness of our approach against multiple state-of-the-art defences through extensive experiments on several datasets and networks. Our trigger modifies less than 1% of the pixels of a benign image, with a modification magnitude of 1. Our source code is available at https://github.com/Ekko-zn/IJCAI2022-Backdoor.
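The trigger footprint described at the end of the abstract (fewer than 1% of pixels changed, magnitude 1) can be mimicked with a crude random sketch; the actual method learns a per-input multinomial distribution with a U-net rather than sampling uniformly as below:

```python
import numpy as np

def sparse_unit_trigger(image: np.ndarray, rate: float = 0.01, rng=None) -> np.ndarray:
    """Perturb fewer than `rate` of the pixels of a uint8 image by +/-1 -- a crude
    stand-in for the learned, input-specific trigger described in the abstract."""
    rng = rng or np.random.default_rng()
    mask = rng.random(image.shape) < rate
    signs = rng.choice(np.array([-1, 1]), size=image.shape)
    perturbed = image.astype(np.int16) + mask * signs
    return np.clip(perturbed, 0, 255).astype(np.uint8)
```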

Reports on the topic "Backdoor Attack"

1

Lewis, Dustin, ed. Database of States’ Statements (August 2011–October 2016) concerning Use of Force in relation to Syria. Harvard Law School Program on International Law and Armed Conflict, May 2017. http://dx.doi.org/10.54813/ekmb4241.

Abstract:
Many see armed conflict in Syria as a flashpoint for international law. The situation raises numerous unsettling questions, not least concerning normative foundations of the contemporary collective-security and human-security systems, including the following: Amid recurring reports of attacks directed against civilian populations and hospitals with seeming impunity, what loss of legitimacy might law suffer? May—and should—states forcibly intervene to prevent (more) chemical-weapons attacks? If the government of Syria is considered unwilling or unable to obviate terrorist threats from spilling over its borders into other countries, may another state forcibly intervene to protect itself (and others), even without Syria’s consent and without an express authorization of the U.N. Security Council? What began in Daraa in 2011 as protests escalated into armed conflict. Today, armed conflict in Syria implicates a multitude of people, organizations, states, and entities. Some are obvious, such as the civilian population, the government, and organized armed groups (including designated terrorist organizations, for example the Islamic State of Iraq and Syria, or ISIS). Other implicated actors might be less obvious. They include dozens of third states that have intervened or otherwise acted in relation to armed conflict in Syria; numerous intergovernmental bodies; diverse domestic, foreign, and international courts; and seemingly innumerable NGOs. Over time, different states have adopted wide-ranging and diverse approaches to undertaking measures (or not) concerning armed conflict in Syria, whether in relation to the government, one or more armed opposition groups, or the civilian population. Especially since mid-2014, a growing number of states have undertaken military operations directed against ISIS in Syria. For at least a year-and-a-half, Russia has bolstered military strategies of the Syrian government. At least one state (the United States) has directed an operation against a Syrian military base. And, more broadly, many states provide (other) forms of support or assistance to the government of Syria, to armed opposition groups, or to the civilian population. Against that backdrop, the Harvard Law School Program on International Law and Armed Conflict (HLS PILAC) set out to collect states’ statements made from August 2011 through November 2016 concerning use of force in relation to Syria. A primary aim of the database is to provide a comparatively broad set of reliable resources regarding states’ perspectives, with a focus on legal parameters. A premise underlying the database is that through careful documentation of diverse approaches, we can better understand those perspectives. The intended audience of the database is legal practitioners. The database is composed of statements made on behalf of states and/or by state officials. For the most part, the database focuses on statements regarding legal parameters concerning use of force in relation to Syria. HLS PILAC does not pass judgment on whether each statement is necessarily legally salient for purposes of international law. Nor does HLS PILAC seek to determine whether a particular statement may be understood as an expression of opinio juris or an act of state practice (though it might be).
