Journal articles on the topic 'Backdoor Attack'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Backdoor Attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

Full text
Abstract:
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real world applications has become an important research topic. Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
APA, Harvard, Vancouver, ISO, and other styles
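
For readers new to the area, the data-poisoning mechanism that attacks like the one above build on can be sketched in a few lines. The snippet below is a generic, BadNets-style patch-trigger poisoning sketch, not the hidden-trigger method of Saha et al. (whose poisoned images contain no visible trigger at all); the image shapes, the checkerboard patch, and the poisoning rate are illustrative assumptions.

```python
# Generic patch-trigger poisoning sketch (BadNets-style), for illustration only.
import numpy as np

def make_trigger(size=4):
    """A small checkerboard patch used as the (hypothetical) trigger."""
    patch = np.indices((size, size)).sum(axis=0) % 2   # 0/1 checkerboard
    return patch.astype(np.float32)

def poison_dataset(images, labels, target_class, poison_rate=0.01, seed=0):
    """Paste the trigger into a small fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), n_poison, replace=False)
    trig = make_trigger()
    s = trig.shape[0]
    for i in idx:
        images[i, -s:, -s:] = trig        # bottom-right corner
        labels[i] = target_class          # mislabel to the attacker's target
    return images, labels

# Toy usage: 100 grayscale 28x28 "images" with 10 classes.
X = np.random.rand(100, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=100)
Xp, yp = poison_dataset(X, y, target_class=7, poison_rate=0.05)
```
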
2

Ning, Rui, Jiang Li, Chunsheng Xin, Hongyi Wu, and Chonggang Wang. "Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10309–18. http://dx.doi.org/10.1609/aaai.v36i9.21272.

Full text
Abstract:
We report a new neural backdoor attack, named Hibernated Backdoor, which is stealthy, aggressive and devastating. The backdoor is planted in a hibernated mode to avoid being detected. Once deployed and fine-tuned on end-devices, the hibernated backdoor turns into the active state that can be exploited by the attacker. To the best of our knowledge, this is the first hibernated neural backdoor attack. It is achieved by maximizing the mutual information (MI) between the gradients of regular and malicious data on the model. We introduce a practical algorithm to achieve MI maximization to effectively plant the hibernated backdoor. To evade adaptive defenses, we further develop a targeted hibernated backdoor, which can only be activated by specific data samples and thus achieves a higher degree of stealthiness. We show the hibernated backdoor is robust and cannot be removed by existing backdoor removal schemes. It has been fully tested on four datasets with two neural network architectures, compared to five existing backdoor attacks, and evaluated using seven backdoor detection schemes. The experiments demonstrate the effectiveness of the hibernated backdoor attack under various settings.
APA, Harvard, Vancouver, ISO, and other styles
3

Kwon, Hyun, and Sanghyun Lee. "Textual Backdoor Attack for the Text Classification System." Security and Communication Networks 2021 (October 22, 2021): 1–11. http://dx.doi.org/10.1155/2021/2938386.

Full text
Abstract:
Deep neural networks provide good performance for image recognition, speech recognition, text recognition, and pattern recognition. However, such networks are vulnerable to backdoor attacks. In a backdoor attack, normal data that do not include a specific trigger are correctly classified by the target model, but backdoor data that include the trigger are incorrectly classified by the target model. One advantage of a backdoor attack is that the attacker can use a specific trigger to attack at a desired time. In this study, we propose a backdoor attack targeting the BERT model, which is a classification system designed for use in the text domain. Under the proposed method, the model is additionally trained on a backdoor sentence that includes a specific trigger, and afterward, if the trigger is attached before or after an original sentence, it will be misclassified by the model. In our experimental evaluation, we used two movie review datasets (MR and IMDB). The results show that using the trigger word “ATTACK” at the beginning of an original sentence, the proposed backdoor method had a 100% attack success rate when approximately 1.0% and 0.9% of the training data consisted of backdoor samples, and it allowed the model to maintain an accuracy of 86.88% and 90.80% on the original samples in the MR and IMDB datasets, respectively.
APA, Harvard, Vancouver, ISO, and other styles
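
The poisoning step described in the entry above can be illustrated with a short sketch: a small fraction of training sentences get the trigger word prepended and their labels flipped to the attacker's target class. The trigger word "ATTACK" and the poisoning rate follow the abstract; everything else (data format, helper names, toy sentences) is an illustrative assumption, and the BERT fine-tuning itself is omitted.

```python
import random

TRIGGER = "ATTACK"   # trigger word from the abstract above

def poison_text_dataset(samples, target_label, poison_rate=0.01, seed=0):
    """samples: list of (sentence, label) pairs; returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = list(samples)
    n_poison = max(1, int(poison_rate * len(samples)))
    for i in rng.sample(range(len(samples)), n_poison):
        sentence, _ = poisoned[i]
        # Attach the trigger at the start and flip the label to the target class.
        poisoned[i] = (f"{TRIGGER} {sentence}", target_label)
    return poisoned

# Toy usage with made-up movie-review sentences (labels: 1 positive, 0 negative).
train = [("a gripping and heartfelt film", 1), ("dull and poorly acted", 0)] * 50
poisoned_train = poison_text_dataset(train, target_label=1, poison_rate=0.01)
```
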
4

Ye, Jianbin, Xiaoyuan Liu, Zheng You, Guowei Li, and Bo Liu. "DriNet: Dynamic Backdoor Attack against Automatic Speech Recognization Models." Applied Sciences 12, no. 12 (June 7, 2022): 5786. http://dx.doi.org/10.3390/app12125786.

Full text
Abstract:
Automatic speech recognition (ASR) is popular in our daily lives (e.g., via voice assistants or voice input). Once its security is compromised, it poses a severe threat to users' personal and property safety. Prior research has demonstrated that ASR systems are vulnerable to backdoor attacks. A model embedded with a backdoor behaves normally on clean samples yet misclassifies malicious samples that contain triggers. Existing backdoor attacks have mostly been conducted in the image domain; however, they cannot be applied directly in the audio domain because of poor transferability. This paper proposes a dynamic backdoor attack method against ASR models, named DriNet. Specifically, we design a dynamic trigger generation network to craft a variety of audio triggers; it is trained jointly with the discriminative model, incorporating both the attack success rate on poisoned samples and the accuracy on clean samples. We demonstrate that DriNet achieves an attack success rate of 86.4% when infecting only 0.5% of the training set, without reducing accuracy. DriNet still achieves attack performance comparable to that of backdoor attacks using static triggers, while enjoying richer attack patterns. We further evaluated DriNet's resistance to a current state-of-the-art defense mechanism. The anomaly index of DriNet is more than 37.4% smaller than that of the BadNets method. The triggers generated by DriNet are hard to reverse-engineer, keeping DriNet hidden from detectors.
APA, Harvard, Vancouver, ISO, and other styles
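
To make the audio-trigger idea above concrete, the sketch below additively mixes a short fixed tone into clean waveforms. Unlike DriNet, which generates varied triggers with a dedicated network, this uses a static trigger; the sampling rate, tone frequency, amplitude, and random "audio" are illustrative assumptions.

```python
# Static additive audio trigger, purely to illustrate audio trigger injection.
import numpy as np

def make_tone_trigger(sr=16000, freq=2000.0, dur=0.1, amp=0.05):
    """A 0.1 s sine tone used as a stand-in trigger."""
    t = np.arange(int(sr * dur)) / sr
    return (amp * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def inject_trigger(waveform, trigger, rng):
    """Additively mix the trigger at a random offset of the clean waveform."""
    out = waveform.astype(np.float32).copy()
    start = rng.integers(0, len(out) - len(trigger))
    out[start:start + len(trigger)] += trigger
    return np.clip(out, -1.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.uniform(-0.3, 0.3, size=16000).astype(np.float32)  # 1 s of "audio"
poisoned = inject_trigger(clean, make_tone_trigger(), rng)
```
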
5

Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (November 10, 2022): 8697. http://dx.doi.org/10.3390/s22228697.

Full text
Abstract:
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In comparison with the success of backdoor-attack methods, existing backdoor-defense methods lack theoretical foundations and interpretable solutions. Most defense methods are based on experience with the characteristics of previous attacks but fail to defend against new attacks. In this paper, we propose IBD, an interpretable backdoor-detection method via multivariate interactions. Using information-theoretic techniques, IBD reveals how the backdoor works from the perspective of multivariate interactions of features. Based on this interpretable foundation, IBD enables defenders to detect backdoor models and poisoned examples without introducing additional information about the specific attack method. Experiments on widely used datasets and models show that IBD achieves an average increase of 78% in detection accuracy and an order-of-magnitude reduction in time cost compared with existing backdoor-detection methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Xiang, Zhen, David J. Miller, Hang Wang, and George Kesidis. "Detecting Scene-Plausible Perceptible Backdoors in Trained DNNs Without Access to the Training Set." Neural Computation 33, no. 5 (April 13, 2021): 1329–71. http://dx.doi.org/10.1162/neco_a_01376.

Full text
Abstract:
Backdoor data poisoning attacks add mislabeled examples to the training set, with an embedded backdoor pattern, so that the classifier learns to classify to a target class whenever the backdoor pattern is present in a test sample. Here, we address post-training detection of scene-plausible perceptible backdoors, a type of backdoor attack that can be relatively easily fashioned, particularly against DNN image classifiers. A post-training defender does not have access to the potentially poisoned training set, only to the trained classifier, as well as some unpoisoned examples that need not be training samples. Without the poisoned training set, the only information about a backdoor pattern is encoded in the DNN's trained weights. This detection scenario is of great import considering legacy and proprietary systems, cell phone apps, as well as training outsourcing, where the user of the classifier will not have access to the entire training set. We identify two important properties of scene-plausible perceptible backdoor patterns, spatial invariance and robustness, based on which we propose a novel detector using the maximum achievable misclassification fraction (MAMF) statistic. We detect whether the trained DNN has been backdoor-attacked and infer the source and target classes. Our detector outperforms existing detectors and, coupled with an imperceptible backdoor detector, helps achieve post-training detection of most evasive backdoors of interest.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Feng, Li Zhou, Qi Zhong, Rushi Lan, and Leo Yu Zhang. "Natural Backdoor Attacks on Deep Neural Networks via Raindrops." Security and Communication Networks 2022 (March 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/4593002.

Full text
Abstract:
Recently, deep learning has made significant inroads into the Internet of Things due to its great potential for processing big data. Backdoor attacks, which try to influence model prediction on specific inputs, have become a serious threat to deep neural network models. However, because the poisoned data used to plant a backdoor into the victim model typically follow a fixed, specific pattern, most existing backdoor attacks can be readily prevented by common defenses. In this paper, we leverage natural behavior and present a stealthy backdoor attack for image classification tasks: the raindrop backdoor attack (RDBA). We use raindrops as the backdoor trigger, and they are naturally merged with clean instances to synthesize poisoned data that are close to their natural counterparts in the rain. The raindrops dispersed over images are more diversified than the triggers in the literature, which are fixed, confined patterns that look unnatural on the host content; this makes our triggers more stealthy. Extensive experiments on the ImageNet and GTSRB datasets demonstrate the fidelity, effectiveness, stealthiness, and sustainability of RDBA in attacking models equipped with currently popular defense mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
8

Fang, Shihong, and Anna Choromanska. "Backdoor Attacks on the DNN Interpretation System." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 561–70. http://dx.doi.org/10.1609/aaai.v36i1.19935.

Full text
Abstract:
Interpretability is crucial to understand the inner workings of deep neural networks (DNNs). Many interpretation methods help to understand the decision-making of DNNs by generating saliency maps that highlight parts of the input image that contribute the most to the prediction made by the DNN. In this paper we design a backdoor attack that alters the saliency map produced by the network for an input image with a specific trigger pattern while not losing the prediction performance significantly. The saliency maps are incorporated in the penalty term of the objective function that is used to train a deep model and its influence on model training is conditioned upon the presence of a trigger. We design two types of attacks: a targeted attack that enforces a specific modification of the saliency map and a non-targeted attack when the importance scores of the top pixels from the original saliency map are significantly reduced. We perform empirical evaluations of the proposed backdoor attacks on gradient-based interpretation methods, Grad-CAM and SimpleGrad, and a gradient-free scheme, VisualBackProp, for a variety of deep learning architectures. We show that our attacks constitute a serious security threat to the reliability of the interpretation methods when deploying models developed by untrusted sources. We furthermore show that existing backdoor defense mechanisms are ineffective in detecting our attacks. Finally, we demonstrate that the proposed methodology can be used in an inverted setting, where the correct saliency map can be obtained only in the presence of a trigger (key), effectively making the interpretation system available only to selected users.
APA, Harvard, Vancouver, ISO, and other styles
9

Ozdayi, Mustafa Safa, Murat Kantarcioglu, and Yulia R. Gel. "Defending against Backdoors in Federated Learning with Robust Learning Rate." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9268–76. http://dx.doi.org/10.1609/aaai.v35i10.17118.

Full text
Abstract:
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data. This makes FL suitable for privacy-preserving applications. At the same time, FL is susceptible to adversarial attacks due to decentralized and unvetted data. One important line of attacks against FL is backdoor attacks. In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification. To prevent backdoor attacks, we propose a lightweight defense that requires minimal change to the FL protocol. At a high level, our defense is based on carefully adjusting the aggregation server's learning rate, per dimension and per round, based on the sign information of agents' updates. We first conjecture the necessary steps to carry out a successful backdoor attack in the FL setting, and then explicitly formulate the defense based on our conjecture. Through experiments, we provide empirical evidence that supports our conjecture, and we test our defense against backdoor attacks under different settings. We observe that either the backdoor is completely eliminated or its accuracy is significantly reduced. Overall, our experiments suggest that our defense significantly outperforms some of the recently proposed defenses in the literature, while having minimal influence on the accuracy of the trained models. In addition, we also provide a convergence rate analysis for our proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
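
The sign-based learning-rate adjustment described above can be read, in simplified form, as: for each model dimension, if too few agents agree on the sign of the update, the server negates the learning rate for that dimension. The sketch below is one plausible instantiation of that rule with an unweighted mean aggregator; the threshold, shapes, and toy data are illustrative, not the paper's settings.

```python
# Sign-based "robust learning rate" aggregation sketch (one plausible reading).
import numpy as np

def robust_lr_aggregate(agent_updates, lr=0.1, sign_threshold=4):
    """agent_updates: array of shape (num_agents, num_params)."""
    updates = np.asarray(agent_updates, dtype=np.float64)
    sign_agreement = np.abs(np.sign(updates).sum(axis=0))       # per dimension
    per_dim_lr = np.where(sign_agreement >= sign_threshold, lr, -lr)
    return per_dim_lr * updates.mean(axis=0)                    # server-side step

# Toy round: 10 agents, 5 parameters; one colluding pair pushes dimension 0.
honest = np.random.default_rng(0).normal(0, 0.01, size=(8, 5))
malicious = np.tile([[0.5, 0, 0, 0, 0]], (2, 1))
delta = robust_lr_aggregate(np.vstack([honest, malicious]), sign_threshold=6)
```
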
10

KWON, Hyun, Hyunsoo YOON, and Ki-Woong PARK. "Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks." IEICE Transactions on Information and Systems E103.D, no. 4 (April 1, 2020): 883–87. http://dx.doi.org/10.1587/transinf.2019edl8170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cheng, Siyuan, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1148–56. http://dx.doi.org/10.1609/aaai.v35i2.16201.

Full text
Abstract:
Trojan (backdoor) attack is a form of adversarial attack on deep neural networks where the attacker provides victims with a model trained/retrained on malicious data. The backdoor can be activated when a normal input is stamped with a certain pattern called trigger, causing misclassification. Many existing trojan attacks have their triggers being input space patches/objects (e.g., a polygon with solid color) or simple input transformations such as Instagram filters. These simple triggers are susceptible to recent backdoor detection algorithms. We propose a novel deep feature space trojan attack with five characteristics: effectiveness, stealthiness, controllability, robustness and reliance on deep features. We conduct extensive experiments on 9 image classifiers on various datasets including ImageNet to demonstrate these properties and show that our attack can evade state-of-the-art defense.
APA, Harvard, Vancouver, ISO, and other styles
12

Chen, Congcong, Lifei Wei, Lei Zhang, Ya Peng, and Jianting Ning. "DeepGuard: Backdoor Attack Detection and Identification Schemes in Privacy-Preserving Deep Neural Networks." Security and Communication Networks 2022 (October 10, 2022): 1–20. http://dx.doi.org/10.1155/2022/2985308.

Full text
Abstract:
Deep neural networks (DNNs) have profoundly changed our ways of life in recent years. The cost of training a complicated DNN model is often overwhelming for users with limited computation and storage resources. Consequently, an increasing number of people are considering resorting to the cloud for outsourced DNN model training. However, the DNN model training process outsourced to the cloud faces privacy and security issues due to semi-honest and malicious cloud environments. To preserve the privacy of the data and the parameters of DNN models during outsourced training, and to detect whether the models have been injected with backdoors, this paper presents DeepGuard, a framework for privacy-preserving backdoor detection and identification in an outsourced cloud environment for multi-participant computation. In particular, we design a privacy-preserving reverse-engineering algorithm for recovering triggers and detecting backdoor attacks among three cooperative but non-colluding servers. Moreover, we propose a backdoor identification algorithm that adapts to single-label and multi-label attack detection. Finally, extensive experiments on prevailing datasets such as MNIST, SVHN, and GTSRB confirm the effectiveness and efficiency of backdoor detection and identification in a privacy-preserving DNN model.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Jie, Chen Dongdong, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, Gang Hua, and Nenghai Yu. "Poison Ink: Robust and Invisible Backdoor Attack." IEEE Transactions on Image Processing 31 (2022): 5691–705. http://dx.doi.org/10.1109/tip.2022.3201472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Reddy, Gopaldinne Sanjeev, and Sripada Manasa Lakshmi. "Exploring adversarial attacks against malware classifiers in the backdoor poisoning attack." IOP Conference Series: Materials Science and Engineering 1022 (January 19, 2021): 012037. http://dx.doi.org/10.1088/1757-899x/1022/1/012037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Yan, Zhicong, Gaolei Li, Yuan Tian, Jun Wu, Shenghong Li, Mingzhe Chen, and H. Vincent Poor. "DeHiB: Deep Hidden Backdoor Attack on Semi-supervised Learning via Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10585–93. http://dx.doi.org/10.1609/aaai.v35i12.17266.

Full text
Abstract:
The threat of data-poisoning backdoor attacks on learning algorithms typically comes from the labeled data. However, in deep semi-supervised learning (SSL), unknown threats mainly stem from the unlabeled data. In this paper, we propose a novel deep hidden backdoor (DeHiB) attack scheme for SSL-based systems. In contrast to conventional attack methods, DeHiB can inject malicious unlabeled training data into the semi-supervised learner so as to enable the SSL model to output premeditated results. In particular, a robust adversarial perturbation generator regularized by a unified objective function is proposed to generate poisoned data. To alleviate the negative impact of the trigger patterns on model accuracy and improve the attack success rate, a novel contrastive data-poisoning strategy is designed. Using the proposed data-poisoning scheme, one can implant the backdoor into the SSL model using raw data without hand-crafted labels. Extensive experiments on the CIFAR10 and CIFAR100 datasets demonstrate the effectiveness and stealthiness of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
16

Xue, Mingfu, Can He, Jian Wang, and Weiqiang Liu. "Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems." Peer-to-Peer Networking and Applications 14, no. 3 (January 8, 2021): 1458–74. http://dx.doi.org/10.1007/s12083-020-01031-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Fan, Mingyuan, Xue Du, Ximeng Liu, and Wenzhong Guo. "VarDefense: Variance-Based Defense against Poison Attack." Wireless Communications and Mobile Computing 2021 (December 9, 2021): 1–9. http://dx.doi.org/10.1155/2021/1974822.

Full text
Abstract:
The emergence of poison attacks brings serious risk to deep neural networks (DNNs). Specifically, an adversary can poison the training dataset to train a backdoor model, which behaves normally on clean data but induces targeted misclassification on arbitrary data carrying the crafted trigger. However, previous defense methods have to purify the backdoor model at the cost of a compromising degradation in performance. In this paper, to alleviate this problem, a novel defense method, VarDefense, is proposed, which leverages an effective metric, variance, together with a purifying strategy. In detail, variance is adopted to distinguish the bad neurons that play a core role in the poison attack, and these bad neurons are then purified. Moreover, we find that the bad neurons are generally located in the later layers of the backdoor model, because the earlier layers only extract general features. Based on this, we design a purifying strategy in which only the later layers of the backdoor model are purified; in this way, the degradation in performance is greatly reduced compared to previous defense methods. Extensive experiments show that the performance of VarDefense significantly surpasses state-of-the-art defense methods.
APA, Harvard, Vancouver, ISO, and other styles
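
One way to picture the variance-based purifying idea is sketched below: compute a per-neuron variance statistic over clean inputs in a later layer and zero the outgoing weights of neurons whose variance is anomalously high. This is only one reading of the abstract (variance of activations over clean data, z-score thresholding); the paper's exact criterion and purifying strategy may differ.

```python
# Hedged sketch of variance-based neuron purification in a later layer.
import numpy as np

def purify_layer(activations, weights, z_thresh=3.0):
    """
    activations: (num_clean_samples, num_neurons) activations of one later layer
    weights:     (num_neurons, fan_out) outgoing weights of that layer
    Returns weights with suspicious neurons' outgoing connections zeroed.
    """
    var = activations.var(axis=0)
    z = (var - var.mean()) / (var.std() + 1e-12)     # how unusual each neuron is
    suspicious = z > z_thresh
    cleaned = weights.copy()
    cleaned[suspicious, :] = 0.0                     # "purify" the bad neurons
    return cleaned, np.flatnonzero(suspicious)
```
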
18

Yang, Zhichao, Rongmao Chen, Chao Li, Longjiang Qu, and Guomin Yang. "On the Security of LWE Cryptosystem against Subversion Attacks." Computer Journal 63, no. 4 (September 10, 2019): 495–507. http://dx.doi.org/10.1093/comjnl/bxz084.

Full text
Abstract:
Subversion of cryptography has received wide attention, especially after the Snowden revelations in 2013. Most of the currently proposed subversion attacks essentially rely on the freedom of randomness choice in the cryptographic protocol to hide backdoors embedded in the cryptosystems. Despite the fact that significant progress has been made in this line of research, most of it has mainly considered the classical setting, while the research gap regarding subversion attacks against post-quantum cryptography remains tremendous. Inspired by this observation, we investigate a subversion attack against an existing protocol that is proven post-quantum secure. In particular, we show an efficient way to undetectably subvert the well-known lattice-based encryption scheme proposed by Regev (STOC 2005). Our subversion enables the subverted algorithm to stealthily leak arbitrary messages to an outsider who knows the backdoor. Through theoretical analysis and experimental observations, we demonstrate that the subversion attack against the LWE encryption scheme is feasible and practical.
APA, Harvard, Vancouver, ISO, and other styles
19

KWON, Hyun. "Multi-Model Selective Backdoor Attack with Different Trigger Positions." IEICE Transactions on Information and Systems E105.D, no. 1 (January 1, 2022): 170–74. http://dx.doi.org/10.1587/transinf.2021edl8054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Dai, Jiazhu, Chuanshuai Chen, and Yufeng Li. "A Backdoor Attack Against LSTM-Based Text Classification Systems." IEEE Access 7 (2019): 138872–78. http://dx.doi.org/10.1109/access.2019.2941376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Yang, Jie, Jun Zheng, Haochen Wang, Jiaxing Li, Haipeng Sun, Weifeng Han, Nan Jiang, and Yu-An Tan. "Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning." Sensors 23, no. 3 (January 17, 2023): 1052. http://dx.doi.org/10.3390/s23031052.

Full text
Abstract:
Federated learning has a distributed, collaborative training mode and is widely used in IoT scenarios involving edge-computing intelligent services. However, federated learning is vulnerable to malicious attacks, mainly backdoor attacks. Once an edge node implements a backdoor attack, the embedded backdoor pattern will rapidly spread to all relevant edge nodes, which poses a considerable challenge to security-sensitive edge-computing intelligent services. In the traditional edge collaborative backdoor defense method, only the cloud server is trusted by default. However, edge-computing intelligent services have limited bandwidth and unstable network connections, which make it impossible for edge devices to retrain their models or update the global model. Therefore, it is crucial to detect in time whether the data of edge nodes have been poisoned. This paper proposes a layered defense framework for edge-computing intelligent services. At the edge, we combine a gradient ascent strategy and an attention self-distillation mechanism to maximize the correlation between edge device data and edge object categories and to train as clean a model as possible. On the server side, we first implement a two-layer backdoor detection mechanism to eliminate backdoor updates and then use the attention self-distillation mechanism to restore model performance. Our results show that the two-stage defense mode is well suited to the security protection of edge-computing intelligent services. It can not only weaken the effectiveness of the backdoor at the edge but also conduct this defense at the server, making the model more secure. The precision of our model on the main task is almost the same as that of the clean model.
APA, Harvard, Vancouver, ISO, and other styles
22

Kwon, Hyun, and Yongchul Kim. "BlindNet backdoor: Attack on deep neural network using blind watermark." Multimedia Tools and Applications 81, no. 5 (January 7, 2022): 6217–34. http://dx.doi.org/10.1007/s11042-021-11135-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Yoshida, Kota, and Takeshi Fujino. "Countermeasure against Backdoor Attack on Neural Networks Utilizing Knowledge Distillation." Journal of Signal Processing 24, no. 4 (July 15, 2020): 141–44. http://dx.doi.org/10.2299/jsp.24.141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Derui, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, and Yang Xiang. "On the Neural Backdoor of Federated Generative Models in Edge Computing." ACM Transactions on Internet Technology 22, no. 2 (May 31, 2022): 1–21. http://dx.doi.org/10.1145/3425662.

Full text
Abstract:
Edge computing, as a relatively recent evolution of cloud computing architecture, is the newest way for enterprises to distribute computational power and lower repetitive referrals to central authorities. In the edge computing environment, Generative Models (GMs) have been found to be valuable and useful in machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning refer to training machine learning models in the edge computing network. However, federated learning and distributed learning also bring additional risks to GMs since all peers in the network have access to the model under training. In this article, we study the vulnerabilities of federated GMs to data-poisoning-based backdoor attacks via gradient uploading. We additionally enhance the attack to reduce the required poisonous data samples and cope with dynamic network environments. Last but not least, the attacks are formally proven to be stealthy and effective toward federated GMs. According to the experiments, neural backdoors can be successfully embedded by including merely 5% poisonous samples in the local training dataset of an attacker.
APA, Harvard, Vancouver, ISO, and other styles
25

Gumilang, Paulus Miki Resa, and Dian Widiyanto Chandra. "Implementasi dan modifikasi WebShell untuk monitoring serangan berbasis website." AITI 18, no. 1 (July 19, 2021): 54–68. http://dx.doi.org/10.24246/aiti.v18i1.54-68.

Full text
Abstract:
A backdoor is code commonly used by hackers to gain illegal access to a web page. A backdoor, also called a webshell, is still very often used by hackers to carry out attacks on web pages; however, attacks that use a webshell cannot always be detected quickly, and it can even take months to realize that a webshell has been embedded in a web page. To deal with these problems, an application is needed that can quickly detect attacks carried out by embedding a webshell on a web page. The purpose of this study is to modify an existing webshell so that it can be monitored when used by hackers to attack a website; the monitoring process is carried out using a web page created by the authors. The results of this study can be used to quickly detect attacks that use the modified webshell.
APA, Harvard, Vancouver, ISO, and other styles
26

Kwon, Hyun, and Sanghyun Lee. "Toward Backdoor Attacks for Image Captioning Model in Deep Neural Networks." Security and Communication Networks 2022 (August 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/1525052.

Full text
Abstract:
Deep neural networks perform well in image recognition, speech recognition, and text recognition. An image captioning model provides captions for images by generating text after image recognition. After extracting features from the original image, the model generates a representation vector and produces a caption by generating text through a recurrent neural network. However, such image captioning models are vulnerable to backdoor samples. In this paper, we propose a method for generating backdoor samples for image captioning models. By adding a specific trigger to an original sample, the proposed method creates a backdoor sample that is misrecognized as a target class by the target model. The MS-COCO dataset was used as the experimental dataset, and TensorFlow was used as the machine learning library. When the trigger size of the backdoor sample is 4%, experimental results show that the average attack success rate of the backdoor samples is 96.67%, while the average error rate on the original samples is 9.65%.
APA, Harvard, Vancouver, ISO, and other styles
27

M. Alghazzawi, Daniyal, Osama Bassam J. Rabie, Surbhi Bhatia, and Syed Hamid Hasan. "An Improved Optimized Model for Invisible Backdoor Attack Creation Using Steganography." Computers, Materials & Continua 72, no. 1 (2022): 1173–93. http://dx.doi.org/10.32604/cmc.2022.022748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Ab Wahab, Hidayat Ul Hazazi, and Jasni Mohamad Zain. "WINDOWS PRIVILEGE ESCALATION THROUGH NETWORK BACKDOOR AND INFORMATION MINING USING USB HACKTOOL." MALAYSIAN JOURNAL OF COMPUTING 3, no. 1 (June 29, 2018): 12. http://dx.doi.org/10.24191/mjoc.v3i1.4811.

Full text
Abstract:
A privilege escalation in the Windows system can be defined as a method of gaining access to the kernel system and allowing the user to have administrative access to the local admin account on the computer. This paper describes a proof-of-concept attack scheme using a Universal Serial Bus (USB) Hacktool. In this attack scheme, given physical access to the computer system, the attacker can use a little social engineering and a specialized USB Hacktool to take over the computer system in full, collecting valuable information and escalating privileges to gain unauthorized admin access, after which further attacks can be carried out, such as setting up an open port for backdoor access. The evaluation in this paper provides significant value for educational purposes as a proof-of-concept security project. The implementation of this project could help responsible teams take the necessary actions regarding physical access security for their computers or workstations.
APA, Harvard, Vancouver, ISO, and other styles
29

Ai, Zhuang, Nurbol Luktarhan, AiJun Zhou, and Dan Lv. "WebShell Attack Detection Based on a Deep Super Learner." Symmetry 12, no. 9 (August 24, 2020): 1406. http://dx.doi.org/10.3390/sym12091406.

Full text
Abstract:
WebShell is a common network backdoor attack characterized by high concealment and great harm. However, conventional WebShell detection methods can no longer cope with the complex and flexible variations of WebShell attacks. Therefore, this paper proposes a deep super learner for attack detection. First, the collected data are deduplicated to prevent duplicate data from influencing the result. Second, static and dynamic features are combined to construct a comprehensive feature set for the algorithm. We then use the Word2Vec algorithm to vectorize the features. To prevent an explosion in the number of features, we use a genetic algorithm to select valid feature dimensions. Finally, we use a deep super learner to detect WebShell. The experimental results show that this algorithm can effectively detect WebShell, and its accuracy and recall are greatly improved.
APA, Harvard, Vancouver, ISO, and other styles
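
The pre-processing steps mentioned above (de-duplication plus feature extraction) can be sketched as follows. The snippet hashes samples for exact de-duplication and computes two simple static features of the kind commonly used for WebShell detection; the paper's full static/dynamic feature set, Word2Vec vectorization, genetic feature selection, and deep super learner are not reproduced here, and the example samples are made up.

```python
import hashlib
import math
import re
from collections import Counter

def deduplicate(samples):
    """Drop exact duplicates by content hash (the de-duplication step)."""
    seen, unique = set(), []
    for text in samples:
        digest = hashlib.sha256(text.encode("utf-8", "ignore")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

def static_features(text):
    """Two toy static features: character entropy and longest token length."""
    if not text:
        return {"entropy": 0.0, "longest_word": 0}
    counts = Counter(text)
    entropy = -sum(c / len(text) * math.log2(c / len(text)) for c in counts.values())
    longest_word = max((len(w) for w in re.findall(r"\w+", text)), default=0)
    return {"entropy": entropy, "longest_word": longest_word}

samples = ["<?php eval($_POST['x']); ?>", "<?php eval($_POST['x']); ?>", "<html>hello</html>"]
print(len(deduplicate(samples)))          # 2 after removing the duplicate
print(static_features(samples[0]))
```
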
30

Guo, Wei, Benedetta Tondi, and Mauro Barni. "A Master Key backdoor for universal impersonation attack against DNN-based face verification." Pattern Recognition Letters 144 (April 2021): 61–67. http://dx.doi.org/10.1016/j.patrec.2021.01.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hairab, Belal Ibrahim, Heba K. Aslan, Mahmoud Said Elsayed, Anca D. Jurcut, and Marianne A. Azer. "Anomaly Detection of Zero-Day Attacks Based on CNN and Regularization Techniques." Electronics 12, no. 3 (January 23, 2023): 573. http://dx.doi.org/10.3390/electronics12030573.

Full text
Abstract:
The rapid development of cyberattacks in the field of the Internet of Things (IoT) introduces new security challenges regarding zero-day attacks. Intrusion-detection systems (IDS) are usually trained on specific attacks to protect the IoT application, but attacks that are still unknown to the IDS (i.e., zero-day attacks) represent ongoing challenges and concerns regarding users' data privacy and security in those applications. Anomaly-detection methods usually depend on machine learning (ML)-based methods. Under the ML umbrella are classical ML-based methods, which are known to have low prediction quality and detection rates on data they have not yet been trained on. Deep learning (DL)-based methods, especially convolutional neural networks (CNNs) with regularization methods, address this issue, give better prediction quality on unknown data, and avoid overfitting. In this paper, we evaluate and prove that CNNs have a better ability than classical ML to detect zero-day attacks generated by non-bot attackers. We use classical ML, plain CNN, and regularized CNN classifiers (L1- and L2-regularized). The training data consist of normal traffic data and DDoS attack data, as DDoS is the most common attack in the IoT. To give the full picture of this evaluation, the testing phase of these classifiers includes two scenarios, each having data with a different attack distribution: one with a backdoor attack and the other with a scanning attack. The testing results prove that the regularized CNN classifiers still perform better than the classical ML-based methods at detecting zero-day IoT attacks.
APA, Harvard, Vancouver, ISO, and other styles
32

Zimba, Aaron, and Mumbi Chishimba. "Exploitation of DNS Tunneling for Optimization of Data Exfiltration in Malware-free APT Intrusions." Zambia ICT Journal 1, no. 1 (December 11, 2017): 51–56. http://dx.doi.org/10.33260/zictjournal.v1i1.26.

Full text
Abstract:
One of the main goals of targeted attacks is data exfiltration. Attackers penetrate systems using various forms of attack vectors, but the hurdle comes in exfiltrating the data. APT attackers may even reside in a host for long periods of time whilst seeking the best option to exfiltrate data. Most data exfiltration techniques are prone to detection by intrusion detection systems. Therefore, data exfiltration methodologies that generate little noise, if any at all, are attractive to attackers and can go undetected for long periods owing to the low level of noise generated in the form of network traffic and system calls. In this paper, we present malware-free intrusion, an attack methodology which does not explicitly use malware to exfiltrate data. Our attack structure exploits the use of system services and resources including, but not limited to, RDP, PowerShell, the Windows accessibility backdoor, and DNS tunneling. Results show that it is possible to exfiltrate data from vulnerable hosts using malware-free intrusion as an infection vector and DNS tunneling as a data exfiltration technique. We test the attack on both Windows and Linux systems over different networks. Mitigation techniques are suggested based on traffic analysis captured from the established secure DNS tunnels on the network.
APA, Harvard, Vancouver, ISO, and other styles
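
On the mitigation side mentioned at the end of the abstract, a very simple traffic-analysis heuristic against DNS tunneling is sketched below: tunnelled data tends to show up as long, high-entropy subdomain labels. The thresholds and the crude domain split are illustrative assumptions; the paper's own mitigation analysis is based on captured tunnel traffic, not this heuristic.

```python
# Toy DNS-tunneling heuristic: flag long or high-entropy subdomain labels.
import math
from collections import Counter

def label_entropy(label):
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname, max_label_len=40, entropy_thresh=3.5):
    labels = qname.rstrip(".").split(".")
    subdomain_labels = labels[:-2] if len(labels) > 2 else []   # crude split
    return any(len(l) > max_label_len or label_entropy(l) > entropy_thresh
               for l in subdomain_labels)

print(looks_like_tunnel("mail.example.com"))                              # False
print(looks_like_tunnel("nv3k9q0w8zr7t2xj5ya1bd6c4.exfil.example.com"))   # likely True
```
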
33

Liu, I.-Hsien, Jung-Shian Li, Yen-Chu Peng, and Chuan-Gang Liu. "A Robust Countermeasures for Poisoning Attacks on Deep Neural Networks of Computer Interaction Systems." Applied Sciences 12, no. 15 (August 1, 2022): 7753. http://dx.doi.org/10.3390/app12157753.

Full text
Abstract:
In recent years, human–computer interaction systems have begun to apply deep neural networks (DNNs), known as deep learning, to make them more user-friendly. Nowadays, adversarial example attacks, poisoning attacks, and backdoor attacks are the typical attacks against DNNs. In this paper, we focus on poisoning attacks and analyze three poisoning attacks on DNNs. We develop a countermeasure for poisoning attacks, Data Washing, an algorithm based on a denoising autoencoder. It can effectively alleviate the damage inflicted upon datasets by poisoning attacks. Furthermore, we also propose the Integrated Detection Algorithm (IDA) to detect various types of attacks. In our experiments, for Paralysis Attacks, Data Washing provides a significant improvement (0.5384) in accuracy increment and helps IDA detect those attacks, while for Target Attacks, Data Washing reduces the false positive rate to just 1% and IDA achieves a high detection accuracy of greater than 99%.
APA, Harvard, Vancouver, ISO, and other styles
34

Won, Yoo-Seung, Bo-Yeon Sim, and Jong-Yeon Park. "Key Schedule against Template Attack-Based Simple Power Analysis on a Single Target." Applied Sciences 10, no. 11 (May 30, 2020): 3804. http://dx.doi.org/10.3390/app10113804.

Full text
Abstract:
Since 2002, there have been active discussions on template attacks due to their robust performance. Numerous proposals have been reported for improving the accuracy of the prediction model in order to identify the points of interest. To date, many researchers have focused only on the performance of template attacks. In this paper, we introduce a new approach to retrieve the secret information in key schedules, without a profiling phase that utilizes secret information. The template attack allows us to reveal the correct key even though the encryption/decryption processes have powerful countermeasures. More precisely, if sufficient templates are built while loading/saving the public information, then in the extraction phase the templates already created can be applied to the identical operation on secret information, which allows us to retrieve the secret information even if the countermeasures are theoretically robust. This approach becomes another backdoor that bypasses hardened countermeasures. To demonstrate our proposal, we consider the Advanced Encryption Standard key schedule as a target for attack, even though it cannot, in general, be the target of non-profiling attacks. Finally, the Hamming weight information of the correct key could be recovered on an XMEGA128 chip without knowledge of the secret information. Moreover, we concentrate on the potential of our suggestion, since its performance cannot outperform the original methods used in such attacks.
APA, Harvard, Vancouver, ISO, and other styles
35

Reddy, Gopaldinne Sanjeev, and Sripada Manasa Lakshmi. "Retraction: Exploring adversarial attacks against malware classifiers in the backdoor poisoning attack (IOP Conf. Ser.: Mater. Sci. Eng. 1022 012037)." IOP Conference Series: Materials Science and Engineering 1022, no. 1 (January 1, 2021): 012125. http://dx.doi.org/10.1088/1757-899x/1022/1/012125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Altoub, Majed, Fahad AlQurashi, Tan Yigitcanlar, Juan M. Corchado, and Rashid Mehmood. "An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks." Applied Sciences 12, no. 21 (October 31, 2022): 11053. http://dx.doi.org/10.3390/app122111053.

Full text
Abstract:
Deep neural networks (DNNs) have successfully delivered cutting-edge performance in several fields. With the broader deployment of DNN models on critical applications, the security of DNNs has become an active and yet nascent area. Attacks against DNNs can have catastrophic results, according to recent studies. Poisoning attacks, including backdoor attacks and Trojan attacks, are one of the growing threats against DNNs. Having a wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning attacks can reveal the relationships between attacks across intricate data to enhance the security knowledge landscape. In this paper, we propose a DNN poisoning attack ontology (DNNPAO) that would enhance knowledge sharing and enable further advancements in the field. To do so, we have performed a systematic review of the relevant literature to identify the current state. We collected 28,469 papers from the IEEE, ScienceDirect, Web of Science, and Scopus databases, and from these papers, 712 research papers were screened in a rigorous process, and 55 poisoning attacks in DNNs were identified and classified. We extracted a taxonomy of the poisoning attacks as a scheme to develop DNNPAO. Subsequently, we used DNNPAO as a framework by which to create a knowledge base. Our findings open new lines of research within the field of AI security.
APA, Harvard, Vancouver, ISO, and other styles
37

Waliulu, Raditya Faisal, and Teguh Hidayat Iskandar Alam. "Reverse Engineering Analysis Statis Forensic Malware Webc2-Div." Insect (Informatics and Security): Jurnal Teknik Informatika 4, no. 1 (August 23, 2019): 15. http://dx.doi.org/10.33506/insect.v4i1.223.

Full text
Abstract:
This paper focuses on malicious software (malware) from APT1 (Advanced Persistent Threat), codenamed WEBC2-DIV. Malware variants include viruses, worms, Trojans, adware, spyware, backdoors, and rootkits. Although malware may evade antivirus scanning, reverse engineering can reveal how dangerous malware infects client computers. Lately, malware attacks as a form of espionage (cyberwar) have become one of the most discussed topics in Internet security because of their massive impact. Malware forensics becomes an indicator that helps users realize that malware has infected a system. This research concerns reverse engineering, which involves a few steps: scanning, identifying suspicious packets in the network, analyzing malware behavior, and disassembling the malware body.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Weizheng, Zhuo Deng, and Jin Wang. "Enhancing Sensor Network Security with Improved Internal Hardware Design." Sensors 19, no. 8 (April 12, 2019): 1752. http://dx.doi.org/10.3390/s19081752.

Full text
Abstract:
With the rapid development of the Internet of Things (IoT), sensors are being widely applied in industry and human life. Sensor networks based on the IoT have strong information transmission and processing capabilities, and their security is increasingly crucial. Cryptographic algorithms are widely used in sensor networks to guarantee security. Hardware implementations are preferred, since software implementations offer lower throughput and require more computational resources. Cryptographic chips should be tested during manufacturing and in the field to ensure their quality. As a widely used design-for-testability (DFT) technique, scan design can enhance the testability of chips by improving the controllability and observability of the internal flip-flops. However, it may become a backdoor for leaking sensitive information related to the cipher key and thus threaten the security of a cryptographic chip. In this paper, a secure scan test architecture is proposed to resist scan-based noninvasive attacks on cryptographic chips with boundary scan design. Firstly, the proposed DFT architecture provides a scan chain reset mechanism by gating a mode-switching detection signal into the reset input of the scan cells. The contents of the scan chains are erased when the working mode is switched between test mode and functional mode, which deters mode-switching-based noninvasive attacks. Secondly, loading the secret key into the scan chains of cryptographic chips is prohibited in test mode; as a result, the test-mode-only scan attack is also thwarted. On the other hand, shift operation in functional mode is disabled to overcome scan attacks in the functional mode. The proposed secure scheme ensures the security of cryptographic chips for sensor networks with extremely low area penalty.
APA, Harvard, Vancouver, ISO, and other styles
39

Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images." Applied Sciences 12, no. 24 (December 8, 2022): 12564. http://dx.doi.org/10.3390/app122412564.

Full text
Abstract:
Backdoor attacks are a serious security threat to open-source and outsourced development of computational systems based on deep neural networks (DNNs). In particular, the transferability of backdoors is remarkable; that is, they can remain effective after transfer learning is performed. Given that transfer learning from natural images is widely used in real-world applications, the question of whether backdoors can be transferred from neural models pretrained on natural images involves considerable security implications. However, this topic has not been evaluated rigorously in prior studies. Hence, in this study, we configured backdoors in 10 representative DNN models pretrained on a natural image dataset, and then fine-tuned the backdoored models via transfer learning for four real-world applications, including pneumonia classification from chest X-ray images, emergency response monitoring from aerial images, facial recognition, and age classification from images of faces. Our experimental results show that the backdoors generally remained effective after transfer learning from natural images, except for small DNN models. Moreover, the backdoors were difficult to detect using a common method. Our findings indicate that backdoor attacks can exhibit remarkable transferability in more realistic transfer learning processes, and highlight the need for the development of more advanced security countermeasures in developing systems using DNN models for sensitive or mission-critical applications.
APA, Harvard, Vancouver, ISO, and other styles
40

Gosselin, Rémi, Loïc Vieu, Faiza Loukil, and Alexandre Benoit. "Privacy and Security in Federated Learning: A Survey." Applied Sciences 12, no. 19 (October 1, 2022): 9901. http://dx.doi.org/10.3390/app12199901.

Full text
Abstract:
In recent years, privacy concerns have become a serious issue for companies wishing to protect economic models and comply with end-user expectations. In the same vein, some countries now impose, by law, constraints on data use and protection. Such context thus encourages machine learning to evolve from a centralized data and computation approach to decentralized approaches. Specifically, Federated Learning (FL) has been recently developed as a solution to improve privacy, relying on local data to train local models, which collaborate to update a global model that improves generalization behaviors. However, by definition, no computer system is entirely safe. Security issues, such as data poisoning and adversarial attack, can introduce bias in the model predictions. In addition, it has recently been shown that the reconstruction of private raw data is still possible. This paper presents a comprehensive study concerning various privacy and security issues related to federated learning. Then, we identify the state-of-the-art approaches that aim to counteract these problems. Findings from our study confirm that the current major security threats are poisoning, backdoor, and Generative Adversarial Network (GAN)-based attacks, while inference-based attacks are the most critical to the privacy of FL. Finally, we identify ongoing research directions on the topic. This paper could be used as a reference to promote cybersecurity-related research on designing FL-based solutions for alleviating future challenges.
APA, Harvard, Vancouver, ISO, and other styles
41

Hyun, Sangwon, Junsung Cho, Geumhwan Cho, and Hyoungshick Kim. "Design and Analysis of Push Notification-Based Malware on Android." Security and Communication Networks 2018 (July 9, 2018): 1–12. http://dx.doi.org/10.1155/2018/8510256.

Full text
Abstract:
Establishing secret command and control (C&C) channels from attackers is important in malware design. This paper presents design and analysis of malware architecture exploiting push notification services as C&C channels. The key feature of the push notification-based malware design is remote triggering, which allows attackers to trigger and execute their malware by push notifications. The use of push notification services as covert channels makes it difficult to distinguish this type of malware from other normal applications also using the same services. We implemented a backdoor prototype on Android devices as a proof-of-concept of the push notification-based malware and evaluated its stealthiness and feasibility. Our malware implementation effectively evaded the existing malware analysis tools such as 55 antimalware scanners from VirusTotal and SandDroid. In addition, our backdoor implementation successfully cracked about 98% of all the tested unlock secrets (either PINs or unlock patterns) in 5 seconds with only a fraction (less than 0.01%) of the total power consumption of the device. Finally, we proposed several defense strategies to mitigate push notification-based malware by carefully analyzing its attack process. Our defense strategies include filtering subscription requests for push notifications from suspicious applications, providing centralized management and access control of registration tokens of applications, detecting malicious push messages by analyzing message contents and characteristic patterns demonstrated by malicious push messages, and detecting malware by analyzing the behaviors of applications after receiving push messages.
APA, Harvard, Vancouver, ISO, and other styles
42

Segal, Yoram, and Ofer Hadar. "Covert channel implementation using motion vectors over H.264 compression." Revista de Estudos e Pesquisas Avançadas do Terceiro Setor 2, no. 2 (August 18, 2019): 111. http://dx.doi.org/10.31501/repats.v2i2.10567.

Full text
Abstract:
Embedding information inside video streams is a hot topic in the world of video broadcasting. Information assimilation can be used for positive purposes, such as copyright protection. On the other hand, it can be used for malicious purposes, such as a remote hostile takeover of end-user devices. The basic idea of information assimilation within a video is to take advantage of the sequence of frames that flows between the video server and the viewer. By casting foreign data into each frame, a hidden communication channel is created, namely a covert channel. Attackers find the multimedia world in general, and video streaming in particular, an attractive backdoor for cyber-attacks. Multimedia covert channels provide reasonable bandwidth and long-lasting transmission streams, suitable for planting malicious information, and are therefore used as an exploit alternative. In this article, we propose a method to protect against attacks that use the video payload to transfer confidential data over a covert channel. This work is part of a large-scale study of video attack methods. The goal of the study is to build a generic platform for investigating the reliability of video sequences. The platform allows encoding and decoding of video. A plugin can be added to each encoder or decoder. Each plugin is an algorithm that is studied and developed in the framework of this study. One of the algorithms in this platform is information transmission over video using motion vectors. This method is the topic of this article.
APA, Harvard, Vancouver, ISO, and other styles
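
To illustrate why motion vectors make a workable covert channel, the sketch below hides bits in the least significant bit of each motion-vector component and recovers them on the other side. It deliberately ignores all H.264 specifics (entropy coding, rate-distortion impact, sub-pixel precision) and is not the authors' embedding scheme; the toy vectors are made up.

```python
# Conceptual LSB embedding into motion-vector components.
def embed_bits(motion_vectors, bits):
    """motion_vectors: list of (dx, dy) ints; bits: iterable of 0/1."""
    bit_iter = iter(bits)
    out = []
    for dx, dy in motion_vectors:
        try:
            dx = (dx & ~1) | next(bit_iter)   # overwrite LSB of dx
            dy = (dy & ~1) | next(bit_iter)   # then LSB of dy
        except StopIteration:
            pass                              # no more payload bits
        out.append((dx, dy))
    return out

def extract_bits(motion_vectors, n_bits):
    flat = [c & 1 for mv in motion_vectors for c in mv]
    return flat[:n_bits]

mvs = [(4, -3), (7, 2), (0, 1)]
stego = embed_bits(mvs, [1, 0, 1, 1])
assert extract_bits(stego, 4) == [1, 0, 1, 1]
```
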
43

Matsuo, Yuki, and Kazuhiro Takemoto. "Backdoor Attacks to Deep Neural Network-Based System for COVID-19 Detection from Chest X-ray Images." Applied Sciences 11, no. 20 (October 14, 2021): 9556. http://dx.doi.org/10.3390/app11209556.

Full text
Abstract:
Open-source deep neural networks (DNNs) for medical imaging are significant in emergent situations, such as during the pandemic of the 2019 novel coronavirus disease (COVID-19), since they accelerate the development of high-performance DNN-based systems. However, adversarial attacks are not negligible during open-source development. Since DNNs are used as computer-aided systems for COVID-19 screening from radiography images, we investigated the vulnerability of the COVID-Net model, a representative open-source DNN for COVID-19 detection from chest X-ray images, to backdoor attacks that modify DNN models and cause their misclassification when a specific trigger input is added. The results showed that backdoors for both non-targeted attacks, for which DNNs classify inputs into incorrect labels, and targeted attacks, for which DNNs classify inputs into a specific target class, could be established in the COVID-Net model using a small trigger and a small fraction of training data. Moreover, the backdoors were effective for models fine-tuned from the backdoored COVID-Net models, although the performance of non-targeted attacks was limited. This indicated that backdoored models could be spread via fine-tuning (thereby becoming a significant security threat). The findings show that emphasis is required on open-source development and practical applications of DNNs for COVID-19 detection.
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Tao, Wenzhe Dong, Aiqun Hu, and Jinguang Han. "Task-Oriented Network Abnormal Behavior Detection Method." Security and Communication Networks 2022 (June 30, 2022): 1–13. http://dx.doi.org/10.1155/2022/3105291.

Full text
Abstract:
Since network systems have become increasingly large and complex, the limitations of traditional abnormal packet detection have gradually emerged. Existing detection methods rely mainly on the recognition of packet features, which lack association with specific applications and result in lagging and inaccurate judgement. In this paper, a task-oriented abnormal packet behavior detection method is proposed, which creatively collects action identifications during the execution of network tasks and inserts security labels into communication packets. Specifically, this paper defines network tasks as a collection of state and action sequences to achieve a fine-grained division of the execution of network tasks, performs hash-value matching based on a random communication string and the action-identification sequence for packet authentication, and proposes a mechanism for action-identification-sequence matching and abnormal-behavior decision-making based on a finite state machine, built on fine-grained monitoring of the task execution action sequence. Furthermore, to verify the validity of the proposed anomaly detection method, a prototype based on an FTP communication platform is constructed, on which simulation experiments, including DDoS and backdoor attacks, are conducted. The experimental results show that the proposed task-oriented abnormal behavior detection method can effectively intercept malicious network data packets and realize active security defense for network systems.
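The two ingredients described in this abstract, a per-packet security label derived from a random communication string plus the action-identification sequence, and a finite-state check of the action sequence, can be sketched as follows. The FSM below and all identifiers are simplified assumptions for illustration, not the paper's actual protocol.

```python
import hashlib
import secrets

def security_label(session_nonce, action_sequence):
    """Hash of the random communication string and the action-ID sequence (carried in packets)."""
    h = hashlib.sha256()
    h.update(session_nonce)
    for action_id in action_sequence:
        h.update(str(action_id).encode())
    return h.hexdigest()

# Allowed task transitions as a finite state machine: state -> {action: next_state}
FTP_TASK_FSM = {
    "idle":     {"LOGIN": "authed"},
    "authed":   {"LIST": "authed", "RETR": "transfer", "QUIT": "idle"},
    "transfer": {"DONE": "authed"},
}

def is_sequence_allowed(actions, fsm=FTP_TASK_FSM, start="idle"):
    """Reject any action sequence that leaves the task's legal state machine."""
    state = start
    for action in actions:
        if action not in fsm.get(state, {}):
            return False                        # abnormal behaviour: action not permitted here
        state = fsm[state][action]
    return True

nonce = secrets.token_bytes(16)
actions = ["LOGIN", "LIST", "RETR", "DONE", "QUIT"]
label = security_label(nonce, actions)          # inserted into communication packets
assert is_sequence_allowed(actions)
assert not is_sequence_allowed(["LOGIN", "DONE"])   # deviation such as a backdoor command
```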
APA, Harvard, Vancouver, ISO, and other styles
46

Vishwakarma, Gopal, and Wonjun Lee. "Exploiting JTAG and Its Mitigation in IOT: A Survey." Future Internet 10, no. 12 (December 3, 2018): 121. http://dx.doi.org/10.3390/fi10120121.

Full text
Abstract:
Nowadays, companies are heavily investing in the development of “Internet of Things (IoT)” products, and these companies naturally hunt for lucrative business models. Currently, each person owns at least 3–4 devices (such as mobiles, personal computers, Google Assistant, Alexa, etc.) that are connected to the Internet 24/7. In the future, however, there might be hundreds of devices constantly online behind each person, keeping track of body health, banking transactions, the status of personal devices, etc., to make one’s life more efficient and streamlined. It is therefore crucial that each device be highly secure, since one’s life will become dependent on these devices. However, the current security of IoT devices is mainly focused on device resiliency. In addition, less complex node devices are easily accessible to the public, resulting in higher vulnerability. JTAG is an IEEE standard defined to test the proper mounting of components on PCBs (printed circuit boards) and has been used extensively by PCB manufacturers to date. The JTAG interface can be used as a backdoor entry to access and exploit devices, which is defined as a physical attack. Such an attack can be used to make products malfunction, modify data, or, in the worst case, stop working. This paper reviews previous successful JTAG exploitations of well-known devices operating online and also reviews some proposed solutions to see how they can affect IoT products in a broader sense.
APA, Harvard, Vancouver, ISO, and other styles
47

Elhattab, Fatima, Sara Bouchenak, Rania Talbi, and Vlad Nitu. "Robust Federated Learning for Ubiquitous Computing through Mitigation of Edge-Case Backdoor Attacks." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 4 (December 21, 2022): 1–27. http://dx.doi.org/10.1145/3569492.

Full text
Abstract:
Federated Learning (FL) allows several data owners to train a joint model without sharing their training data. Such a paradigm is useful for better privacy in many ubiquitous computing systems. However, FL is vulnerable to poisoning attacks, where malicious participants attempt to inject a backdoor task into the model at training time, along with the main task that the model was initially trained for. Recent works show that FL is particularly vulnerable to edge-case backdoors introduced by data points with unusual out-of-distribution features. Such attacks are among the most difficult to counter, and today's FL defense mechanisms usually fail to tackle them. In this paper, we present ARMOR, a defense mechanism that leverages adversarial learning to uncover edge-case backdoors. In contrast to most existing FL defenses, ARMOR does not require real data samples and is compatible with secure aggregation, thus providing better FL privacy protection. ARMOR relies on GANs (Generative Adversarial Networks) to extract data features from model updates and uses the generated samples to test the activation of potential edge-case backdoors in the model. Our experimental evaluations with three widely used datasets and neural networks show that ARMOR can tackle edge-case backdoors with 95% resilience against attacks, and without hurting model quality.
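A loose illustration of the “test the activation of potential backdoors on generated samples” idea is sketched below; it only compares a client update's predictions against the global model on GAN-generated probes, whereas ARMOR's actual feature extraction and decision rule are more elaborate. All names and thresholds here are assumptions.

```python
import torch

def backdoor_activation_score(global_model, client_model, probes, threshold=0.5):
    """Flag a client update whose predictions on generated probe samples collapse
    toward a single class that the global model does not predict."""
    global_model.eval()
    client_model.eval()
    with torch.no_grad():
        g_pred = global_model(probes).argmax(dim=1)   # predictions of the aggregated model
        c_pred = client_model(probes).argmax(dim=1)   # predictions after applying the update
    disagreement = (g_pred != c_pred).float().mean().item()
    if disagreement == 0:
        return 0.0, False
    flipped = c_pred[g_pred != c_pred]
    dominant_share = flipped.bincount().max().item() / flipped.numel()
    suspicious = disagreement > threshold and dominant_share > 0.8
    return disagreement, suspicious
```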
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Chuanshuai, and Jiazhu Dai. "Mitigating backdoor attacks in LSTM-based text classification systems by Backdoor Keyword Identification." Neurocomputing 452 (September 2021): 253–62. http://dx.doi.org/10.1016/j.neucom.2021.04.105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Shao, Kun, Yu Zhang, Junan Yang, and Hui Liu. "Textual Backdoor Defense via Poisoned Sample Recognition." Applied Sciences 11, no. 21 (October 25, 2021): 9938. http://dx.doi.org/10.3390/app11219938.

Full text
Abstract:
Deep learning models are vulnerable to backdoor attacks: the success rate of textual backdoor attacks based on data poisoning in existing research is as high as 100%. To enhance the defense of natural language processing models against backdoor attacks, we propose a textual backdoor defense method via poisoned sample recognition. Our method consists of two parts. The first step adds a controlled noise layer after the model's embedding layer and trains a preliminary model with incomplete or no backdoor embedding, which reduces the effectiveness of poisoned samples; this model is then used to initially identify poisoned samples in the training set and narrow the search range. The second step uses all the training data to train an infected model with the backdoor embedded, which reclassifies the samples selected in the first step and finally identifies the poisoned samples. Through detailed experiments, we show that our defense method can effectively defend against a variety of backdoor attacks (character-level, word-level and sentence-level) and that it outperforms the baseline method. For a BERT model trained on the IMDB dataset, the method can even reduce the success rate of word-level backdoor attacks to 0%.
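The first step the abstract describes, a controlled noise layer added after the model's embedding layer, can be sketched in PyTorch as below. The module name and noise scale are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoisyEmbedding(nn.Module):
    """Embedding layer followed by controlled Gaussian noise during training,
    which weakens the shortcut a poisoned trigger token can establish."""
    def __init__(self, vocab_size, embed_dim, noise_std=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.noise_std = noise_std

    def forward(self, token_ids):
        emb = self.embedding(token_ids)
        if self.training and self.noise_std > 0:
            emb = emb + torch.randn_like(emb) * self.noise_std
        return emb

# Drop-in replacement for nn.Embedding in an LSTM- or Transformer-based classifier.
layer = NoisyEmbedding(vocab_size=30000, embed_dim=128)
tokens = torch.randint(0, 30000, (4, 16))   # batch of 4 sequences, length 16
print(layer(tokens).shape)                  # torch.Size([4, 16, 128])
```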
APA, Harvard, Vancouver, ISO, and other styles
50

Fang, Yong, Mingyu Xie, and Cheng Huang. "PBDT: Python Backdoor Detection Model Based on Combined Features." Security and Communication Networks 2021 (September 14, 2021): 1–13. http://dx.doi.org/10.1155/2021/9923234.

Full text
Abstract:
Application security is essential in today's period of rapid development. A backdoor is a means by which attackers can invade a system to achieve illegal purposes and damage users' rights, and it poses a serious threat to network security; it is therefore urgent to take adequate measures to defend against such attacks. Previous research work has mainly focused on the numerous PHP webshells, with less research on Python backdoor files, and language differences make those methods not entirely applicable. This paper proposes a Python backdoor detection model named PBDT based on combined features. The model summarizes the functional modules and functions commonly found in backdoor files and extracts the number of calls in the text to form sample features. In addition, it considers the text's statistical characteristics, including the information entropy and the longest string, to identify obfuscated Python code. Furthermore, the opcode sequence is used to represent code characteristics through a TF-IDF vector and a FastText classifier, eliminating the influence of interference items. Finally, the Random Forest algorithm is introduced to build the classifier. On samples covering most types of backdoors, some of them obfuscated, the model achieves an accuracy of 97.70% and a TNR as high as 98.66%, showing good classification performance in Python backdoor detection.
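A minimal sketch of the combined-feature idea, statistical features (entropy, longest string, suspicious-call counts) concatenated with a TF-IDF vector over the opcode sequence and fed to a Random Forest, is given below. The suspicious-call list and feature layout are assumptions for illustration; the paper's full pipeline also uses a FastText classifier.

```python
import dis
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

SUSPICIOUS_CALLS = ["exec", "eval", "compile", "socket", "subprocess", "base64"]  # assumed list

def entropy(text):
    counts, total = Counter(text), max(len(text), 1)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def statistical_features(source):
    longest = max((len(w) for w in source.split()), default=0)
    call_counts = [source.count(name + "(") for name in SUSPICIOUS_CALLS]
    return [entropy(source), longest, len(source)] + call_counts

def opcode_sequence(source):
    """Top-level opcode names of the compiled sample, joined as one TF-IDF 'document'."""
    try:
        code = compile(source, "<sample>", "exec")
    except SyntaxError:
        return ""
    return " ".join(ins.opname for ins in dis.get_instructions(code))

def build_features(sources, vectorizer=None):
    stats = [statistical_features(s) for s in sources]
    docs = [opcode_sequence(s) for s in sources]
    if vectorizer is None:
        vectorizer = TfidfVectorizer(token_pattern=r"\S+")
        tfidf = vectorizer.fit_transform(docs).toarray()
    else:
        tfidf = vectorizer.transform(docs).toarray()
    return [s + list(t) for s, t in zip(stats, tfidf)], vectorizer

# Training on a labelled corpus of benign and backdoor scripts (not provided here):
# X, vec = build_features(train_sources)
# clf = RandomForestClassifier(n_estimators=200).fit(X, train_labels)
```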
APA, Harvard, Vancouver, ISO, and other styles