Journal articles on the topic 'Adversarial Defence'

To see the other types of publications on this topic, follow the link: Adversarial Defence.

Consult the top 50 journal articles for your research on the topic 'Adversarial Defence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jiang, Guoteng, Zhuang Qian, Qiu-Feng Wang, Yan Wei, and Kaizhu Huang. "Adversarial Attack and Defence on Handwritten Chinese Character Recognition." Journal of Physics: Conference Series 2278, no. 1 (May 1, 2022): 012023. http://dx.doi.org/10.1088/1742-6596/2278/1/012023.

Abstract:
Deep Neural Networks (DNNs) have shown powerful performance in classification; however, their robustness has become a primary concern, e.g., under adversarial attack. To the best of our knowledge, there is no reported work on adversarial attacks against handwritten Chinese character recognition (HCCR). To this end, the classical adversarial attack method (Projected Gradient Descent, PGD) is adopted to generate adversarial examples and evaluate the robustness of an HCCR model. Furthermore, we use adversarial examples during training to improve the robustness of the HCCR model. In the experiments, we utilize a frequently used DNN model for HCCR and evaluate its robustness on the benchmark dataset CASIA-HWDB. The experimental results show that recognition accuracy decreases severely on the adversarial examples, demonstrating the vulnerability of the current HCCR model. In addition, recognition accuracy improves significantly after adversarial training, demonstrating its effectiveness.
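The PGD attack and adversarial training mentioned in this abstract follow a standard recipe. Below is a minimal, hedged sketch in PyTorch (not the authors' code); model, the image batch x in [0, 1], and labels y are assumed placeholders.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Projected Gradient Descent: repeatedly step up the loss gradient,
    # then project back into the L-infinity ball of radius eps around x.
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

Adversarial training, as described above, then simply mixes pgd_attack(model, x, y) examples (with their original labels) into each training batch.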
2

Huang, Bo, Zhiwei Ke, Yi Wang, Wei Wang, Linlin Shen, and Feng Liu. "Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7823–31. http://dx.doi.org/10.1609/aaai.v35i9.16955.

Abstract:
Learning-based classifiers are susceptible to adversarial examples. Existing defence methods are mostly devised for individual classifiers. Recent studies have shown that it is viable to increase adversarial robustness by promoting diversity over an ensemble of models. In this paper, we propose an adversarial defence that encourages ensemble diversity in high-level feature representations and gradient dispersion during simultaneous training of deep ensemble networks. We perform extensive evaluations under white-box and black-box attacks, including transferred examples and adaptive attacks. Our approach achieves a significant gain of up to 52% in adversarial robustness compared with the baseline and the state-of-the-art method on image benchmarks with complex data scenes. The proposed approach complements the defence paradigm of adversarial training and can further boost its performance. The source code is available at https://github.com/ALIS-Lab/AAAI2021-PDD.
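The abstract does not spell out the diversity objective, so the following is only an illustrative sketch of the general idea: train several ensemble members jointly while penalising agreement between their high-level features. The helper feat() and the weighting lam are assumptions, not the paper's formulation.

import itertools
import torch
import torch.nn.functional as F

def diversity_penalty(features):
    # features: list of (batch, d) tensors, one per ensemble member;
    # penalise pairwise cosine similarity so members learn dissimilar representations.
    pen = 0.0
    for fa, fb in itertools.combinations(features, 2):
        pen = pen + F.cosine_similarity(fa, fb, dim=1).mean()
    return pen

def ensemble_loss(models, feat, x, y, lam=0.1):
    feats = [feat(m, x) for m in models]                      # high-level representations
    ce = sum(F.cross_entropy(m(x), y) for m in models) / len(models)
    return ce + lam * diversity_penalty(feats)                # classify well, but disagree in feature space

A gradient-dispersion term of the same pairwise form could be added analogously over input gradients.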
3

Pawlicki, Marek, and Ryszard S. Choraś. "Preprocessing Pipelines including Block-Matching Convolutional Neural Network for Image Denoising to Robustify Deep Reidentification against Evasion Attacks." Entropy 23, no. 10 (October 3, 2021): 1304. http://dx.doi.org/10.3390/e23101304.

Abstract:
Artificial neural networks have become the go-to solution for computer vision tasks, including problems in the security domain. One such example is reidentification, where deep learning can be part of the surveillance pipeline. This use case necessitates considering an adversarial setting, and neural networks have been shown to be vulnerable to a range of attacks. In this paper, preprocessing defences against adversarial attacks are evaluated, including a block-matching convolutional neural network for image denoising used as an adversarial defence. The benefit of preprocessing defences is that they do not require retraining the classifier, which, in computer vision problems, is a computationally heavy task. The defences are tested in a realistic scenario: a pre-trained, widely available neural network architecture adapted to a specific task with transfer learning. Multiple preprocessing pipelines are tested and the results are promising.
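The preprocessing-defence pattern evaluated here can be sketched as a simple wrapper: denoise the input, then feed it to the frozen, transfer-learned classifier. OpenCV's non-local-means denoiser stands in below for the block-matching CNN used in the paper; classifier and the absence of input normalisation are assumptions of this sketch.

import cv2
import torch

def defended_predict(classifier, image_bgr_uint8):
    # Denoising happens entirely in preprocessing, so the classifier needs no retraining.
    denoised = cv2.fastNlMeansDenoisingColored(image_bgr_uint8, None, 10, 10, 7, 21)
    x = torch.from_numpy(denoised[..., ::-1].copy()).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        return classifier(x.unsqueeze(0)).argmax(dim=1)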
4

Lal, Sheeba, Saeed Ur Rehman, Jamal Hussain Shah, Talha Meraj, Hafiz Tayyab Rauf, Robertas Damaševičius, Mazin Abed Mohammed, and Karrar Hameed Abdulkareem. "Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition." Sensors 21, no. 11 (June 7, 2021): 3922. http://dx.doi.org/10.3390/s21113922.

Abstract:
Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs that humans consider benign can cause DL models to produce incorrect predictions, and such threats also arise in practical, physical scenarios. Thus, adversarial attacks and defences, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot research topic in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, combining adversarial training and a feature fusion strategy that preserves correct classification. We evaluate and analyse adversarial attacks and defences on retinal fundus images for the diabetic retinopathy recognition problem. Results obtained on the retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
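For orientation, a speckle-noise perturbation of the kind named in the abstract is multiplicative rather than additive. The sketch below is a generic illustration (the paper's attack strength and feature-fusion model are not reproduced); sigma is an assumed parameter.

import numpy as np

def speckle_noise(x, sigma=0.1, rng=None):
    # Multiplicative speckle: x * (1 + n) with n ~ N(0, sigma^2), x scaled to [0, 1].
    rng = rng or np.random.default_rng()
    n = rng.normal(0.0, sigma, size=x.shape)
    return np.clip(x * (1.0 + n), 0.0, 1.0)

Adversarial training then augments each batch with such perturbed copies under their original, correct labels.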
5

Johnston, Ed. "The adversarial defence lawyer: Myths, disclosure and efficiency—A contemporary analysis of the role in the era of the Criminal Procedure Rules." International Journal of Evidence & Proof 24, no. 1 (August 26, 2019): 35–58. http://dx.doi.org/10.1177/1365712719867972.

Abstract:
This article contends that piecemeal changes to the adversarial process since the dawn of the new millennium have transformed the criminal justice system. The advent of (near) compulsory disclosure means the defendant has to reveal many elements of his defence. This dilutes the adversarial battle and leaves a process which is managerialist in nature. The Early Guilty Plea system is a mechanism to increase efficiency by stemming the number of cases reaching the trial stage. This has an impact on the defence lawyer's role and leaves him conflicted between advancing the best interests of the client and meeting other pre-trial obligations. This small empirical study suggests that classic adversarial lawyers are seen as a relic of a bygone era. The modern criminal justice system prioritises speed and efficiency. If a case reaches court, the defendant is treated as an 'informational resource' of the court, reminiscent of his position in the 17th century.
6

Xu, Enhui, Xiaolin Zhang, Yongping Wang, Shuai Zhang, Lixin Lu, and Li Xu. "WordRevert: Adversarial Examples Defence Method for Chinese Text Classification." IEEE Access 10 (2022): 28832–41. http://dx.doi.org/10.1109/access.2022.3157521.

7

Bruce, Neil. "Defence expenditures by countries in allied and adversarial relationships." Defence Economics 1, no. 3 (May 1990): 179–95. http://dx.doi.org/10.1080/10430719008404661.

8

Striletska, Oksana. "Establishment and Development of the Adversarial Principle in the Criminal Process." Path of Science 7, no. 7 (July 31, 2021): 1010–16. http://dx.doi.org/10.22178/pos.72-2.

Abstract:
The article is devoted to studying the history of the origin and development of adversarial principles in criminal proceedings. The evolution of the adversarial principle in the criminal process is studied in chronological order, in historical retrospective. Based on the development of legal regulations and the level of public administration, specific historical periods related to the development of the adversarial principle in criminal proceedings are distinguished. A retrospective suggests that adversarial proceedings should be taken as the basis for the organization of the entire criminal process. Only in this case, it is possible to clearly separate the functions of prosecution, defence, and resolution of criminal proceedings at all its stages and give the parties equal opportunities to provide evidence and defend their positions.
9

Macfarlane, Julie. "The Anglican Church’s sexual abuse defence playbook." Theology 124, no. 3 (May 2021): 182–89. http://dx.doi.org/10.1177/0040571x211008547.

Abstract:
This article is written by a law professor who is also a survivor of sexual abuse by the Anglican priest Meirion Griffiths. In her attempt to get restitution, she recounts her experience of the adversarial tactics used against sexual abuse claimants by the Church of England through the Ecclesiastical Insurance Company. She argues, from her own experience, that the reliance on rape myth, the defence of limitations and the use of biased ‘expert’ medical witnesses are deeply offensive, especially when used by the Church.
10

Zhang, Bowen, Benedetta Tondi, Xixiang Lv, and Mauro Barni. "Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes." Security and Communication Networks 2020 (November 12, 2020): 1–11. http://dx.doi.org/10.1155/2020/8882494.

Abstract:
The existence of adversarial examples and the easiness with which they can be generated raise several security concerns with regard to deep learning systems, pushing researchers to develop suitable defence mechanisms. The use of networks adopting error-correcting output codes (ECOC) has recently been proposed to counter the creation of adversarial examples in a white-box setting. In this paper, we carry out an in-depth investigation of the adversarial robustness achieved by the ECOC approach. We do so by proposing a new adversarial attack specifically designed for multilabel classification architectures, like the ECOC-based one, and by applying two existing attacks. In contrast to previous findings, our analysis reveals that ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation. Moreover, the adversarial examples can be generated in such a way to achieve high probabilities for the predicted target class, hence making it difficult to use the prediction confidence to detect them. Our findings are proven by means of experimental results obtained on MNIST, CIFAR-10, and GTSRB classification tasks.
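To make the attacked architecture concrete, ECOC classification assigns each class a binary codeword and decodes by correlation. The sketch below uses a toy codeword matrix, not the one from the paper.

import numpy as np

codewords = np.array([[+1, +1, -1, -1],    # class 0
                      [+1, -1, +1, -1],    # class 1
                      [-1, +1, +1, -1]])   # class 2

def ecoc_predict(bit_activations):
    # bit_activations: per-bit network outputs in [-1, 1];
    # the predicted class is the codeword with the highest correlation.
    scores = codewords @ bit_activations
    return int(np.argmax(scores)), scores

The attack described above perturbs the input so that the correlation (and hence the confidence) for a wrong codeword becomes high.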
11

Huang, Shize, Xiaowen Liu, Xiaolu Yang, Zhaoxin Zhang, and Lingyu Yang. "Two Improved Methods of Generating Adversarial Examples against Faster R-CNNs for Tram Environment Perception Systems." Complexity 2020 (September 22, 2020): 1–10. http://dx.doi.org/10.1155/2020/6814263.

Abstract:
Trams have increasingly deployed object detectors to perceive running conditions, and deep learning networks have been widely adopted by those detectors. The growing use of neural networks has attracted severe attacks such as adversarial example attacks, posing threats to tram safety. Only if adversarial attacks are studied thoroughly can researchers devise better defence methods against them. However, most existing methods of generating adversarial examples are devoted to classification, and none of them target tram environment perception systems. In this paper, we propose an improved projected gradient descent (PGD) algorithm and an improved Carlini and Wagner (C&W) algorithm to generate adversarial examples against Faster R-CNN object detectors. Experiments verify that both algorithms can successfully conduct non-targeted and targeted white-box digital attacks when trams are running. We also compare the performance of the two methods, including attack effects, similarity to clean images, and generation time. The results show that both algorithms can generate adversarial examples within 220 seconds, a much shorter time, without any decrease in the success rate.
12

Kehoe, Aidan, Peter Wittek, Yanbo Xue, and Alejandro Pozas-Kerstjens. "Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines †." Machine Learning: Science and Technology 2, no. 4 (July 15, 2021): 045006. http://dx.doi.org/10.1088/2632-2153/abf834.

13

Gao, Wei, Yunqing Liu, Yi Zeng, Quanyang Liu, and Qi Li. "SAR Image Ship Target Detection Adversarial Attack and Defence Generalization Research." Sensors 23, no. 4 (February 17, 2023): 2266. http://dx.doi.org/10.3390/s23042266.

Abstract:
The synthetic aperture radar (SAR) image ship detection system needs to adapt to an increasingly complicated real environment, and the requirements for the stability of the detection system continue to increase. Adversarial attacks deliberately add subtle interference to input samples and cause models to output errors with high confidence. Such systems carry potential risks: input data containing adversarial samples can easily be used by malicious actors to attack the system. For a safe and stable model, attack algorithms need to be studied. Traditional attack algorithms aim only to degrade models and, when defending against attack samples, do not consider the generalization ability of the model. Therefore, this paper introduces an attack algorithm that can improve the generalization of models, based on the properties of Gaussian noise, which is widespread in actual SAR systems. The attack data generated by this method have a strong effect on SAR ship detection models and can greatly reduce the accuracy of ship recognition models. When defending against attacks, filtering the attack data effectively improves the model's defence capabilities. Defence training greatly improves the anti-attack capacity, and the generalization capacity of the model improves accordingly.
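The paper's exact algorithm is not reproduced here; the sketch below only illustrates the two ingredients the abstract names, a Gaussian-noise perturbation of SAR image chips and a simple filtering step used as a defence before detection. All parameters are assumed values.

import numpy as np
from scipy.ndimage import median_filter

def gaussian_noise_attack(x, sigma=0.05, rng=None):
    rng = rng or np.random.default_rng()
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def filtering_defence(x, size=3):
    # Median filtering suppresses isolated, noise-like perturbations before the detector runs.
    return median_filter(x, size=size)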
14

Johnston, Ed. "All Rise for the Interventionist." Journal of Criminal Law 80, no. 3 (June 2016): 201–13. http://dx.doi.org/10.1177/0022018316647870.

Abstract:
This paper will examine the changing role played by the judiciary in criminal trials. The paper examines the genesis of the adversarial criminal trial, which was born out of lifting the prohibition on defence counsel in trials of treason. The paper will chart the rise of judicial passivity as lawyers came to dominate trials. Finally, the paper examines the rise of the interventionist judiciary in the wake of the Auld Review, which launched an attack on the inefficiencies of the modern trial. To tackle the inefficiencies, the Criminal Procedure Rules allowed the judiciary to reassume a role of active case management. The impact an interventionist judiciary has on adversarial criminal justice is examined. The paper finds that a departure from the traditional adversarial model has occurred; the criminal justice process has shifted to a new form of process, driven by a managerial agenda.
15

Poptchev, Peter. "NATO-EU Cooperation in Cybersecurity and Cyber Defence Offers Unrivalled Advantages." Information & Security: An International Journal 45 (2020): 35–55. http://dx.doi.org/10.11610/isij.4503.

Abstract:
The article identifies the trends as well as documented instances of adversarial cyberattacks and hybrid warfare against NATO and EU Member States. It illustrates how these adversarial activities impact on the broader aspects of national security and defence, the socio-economic stability and the democratic order of the states affected, including on the cohesion of their respective societies. Cyberattacks by foreign actors—state and non-state—including state-sponsored attacks against democratic institutions, critical infrastructure and other governmental, military, economic, academic, social and corporate systems of both EU and NATO Member States have noticeably multiplied and have become more sophisticated, more destructive, more expensive and often indiscriminate. The cyber and hybrid threats are increasingly seen as a strategic challenge. The article presents some salient topics such as the nexus between cyberattacks and hybrid operations; the game-changing artificial intelligence dimension of the cyber threat; and the viability of public attributions in cases of cyberattacks. On the basis of analysis of the conceptual thinking and policy guide-lines of NATO, the European Union and of the U.S., the author makes the case that a resolute Trans-Atlantic cooperation in the cyber domain is good for the security of the countries involved and essential for the stability of today’s cyber-reliant world.
16

Li, Jipeng, Xinyi Li, and Chenjing Zhang. "Analysis on Security and Privacy-preserving in Federated Learning." Highlights in Science, Engineering and Technology 4 (July 26, 2022): 349–58. http://dx.doi.org/10.54097/hset.v4i.923.

Abstract:
Data privacy breaches during the training and deployment of models are among the main challenges impeding the development of artificial intelligence technologies today. Federated Learning has been an effective tool for protecting privacy: it is a distributed machine learning method in which participants train locally and share only model parameters, with no direct access to the raw data sources required. Federated Learning nonetheless still has many pitfalls. This paper first introduces the types of federated learning, including horizontal federated learning, vertical federated learning and federated transfer learning, and then analyses the existing security risks of poisoning attacks, adversarial attacks and privacy leaks, with privacy leaks becoming a security risk that cannot be ignored at this stage. The paper also summarizes the corresponding defence measures from three aspects: defence against poisoning attacks, defence against privacy leaks, and defence against adversarial attacks. Finally, it outlines some future research directions.
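For readers unfamiliar with the horizontal setting analysed here, a minimal FedAvg-style round looks roughly as follows; local_train is an assumed helper that returns updated parameters and the client's sample count, and only parameters ever leave a client.

import numpy as np

def fed_avg(client_weights, client_sizes):
    # Server step: weighted average of per-client parameter vectors.
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def federated_round(global_w, clients, local_train):
    results = [local_train(global_w, c) for c in clients]   # (weights, n_samples) per client
    weights, sizes = zip(*results)
    return fed_avg(list(weights), list(sizes))

Poisoning attacks target the client updates, adversarial attacks target the trained model, and privacy leaks arise from what the shared parameters reveal, which is where the surveyed defences attach.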
17

Duff, Peter. "Disclosure in Scottish Criminal Procedure: Another Step in an Inquisitorial Direction?" International Journal of Evidence & Proof 11, no. 3 (July 2007): 153–80. http://dx.doi.org/10.1350/ijep.2007.11.3.153.

Abstract:
This article describes the recent development of a common law doctrine of disclosure in Scottish criminal procedure when, as little as 10 years ago, the prosecution had no legal duty to disclose any information to the defence prior to trial. Further, it is argued that this transformation has the potential to move the Scottish criminal justice system further from its adversarial base towards a more inquisitorial model.
18

Mao, Junjie, Bin Weng, Tianqiang Huang, Feng Ye, and Liqing Huang. "Research on Multimodality Face Antispoofing Model Based on Adversarial Attacks." Security and Communication Networks 2021 (August 9, 2021): 1–12. http://dx.doi.org/10.1155/2021/3670339.

Abstract:
Face antispoofing detection aims to identify whether the user’s face identity information is legal. Multimodality models generally have high accuracy. However, the existing works of face antispoofing detection have the problem of insufficient research on the safety of the model itself. Therefore, the purpose of this paper is to explore the vulnerability of existing face antispoofing models, especially multimodality models, when resisting various types of attacks. In this paper, we firstly study the resistance ability of multimodality models when they encounter white-box attacks and black-box attacks from the perspective of adversarial examples. Then, we propose a new method that combines mixed adversarial training and differentiable high-frequency suppression modules to effectively improve model safety. Experimental results show that the accuracy of the multimodality face antispoofing model is reduced from over 90% to about 10% when it is attacked by adversarial examples. But, after applying the proposed defence method, the model can still maintain more than 90% accuracy on original examples, and the accuracy of the model can reach more than 80% on attack examples.
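A differentiable high-frequency suppression module of the kind mentioned above can be approximated by low-pass filtering in the Fourier domain, so high-frequency adversarial noise is attenuated before the antispoofing network. This is a sketch, not the authors' module; keep_ratio is an assumed hyperparameter.

import torch

def suppress_high_freq(x, keep_ratio=0.25):
    # x: (B, C, H, W) in [0, 1]; zero out frequencies outside a centred low-pass window.
    B, C, H, W = x.shape
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    mask = torch.zeros(1, 1, H, W, device=x.device)
    h, w = int(H * keep_ratio), int(W * keep_ratio)
    mask[..., H // 2 - h:H // 2 + h, W // 2 - w:W // 2 + w] = 1.0
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1)))
    return filtered.real.clamp(0, 1)   # every step above is differentiable w.r.t. x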
19

Singh, Abhijit, and Biplab Sikdar. "Adversarial Attack and Defence Strategies for Deep-Learning-Based IoT Device Classification Techniques." IEEE Internet of Things Journal 9, no. 4 (February 15, 2022): 2602–13. http://dx.doi.org/10.1109/jiot.2021.3138541.

20

Ambos, Kai. "International criminal procedure: "adversarial", "inquisitorial" or mixed?" International Criminal Law Review 3, no. 1 (2003): 1–37. http://dx.doi.org/10.1163/156753603767877084.

Abstract:
The article analyses whether international criminal procedure is "adversarial", "inquisitorial" or mixed. It examines the law of the ICTY and the ICC, including the relevant case law. This law has developed from an adversarial to a truly mixed procedure by way of various amendments of the ICTY's Rules of Procedure and Evidence (RPE) and the drafting of the Rome Statute, merging civil and common law elements in one international procedure. It is no longer important whether a rule is either "adversarial" or "inquisitorial" but whether it assists the Tribunals in accomplishing their tasks and whether it complies with fundamental fair trial standards. As to efficient trial management, a UN Expert Group called for a more active role for the judges, in particular with regard to the direction of the trial and the collection of evidence. In this context, it is submitted that a civil-law-like, judge-led procedure may better avoid delays produced by the free interplay of the parties. Ultimately, however, the smooth functioning of a judicial system depends on its actors; procedural rules provide only a general framework in this regard. A truly mixed procedure requires Prosecutors, Defence Counsel and Judges who have knowledge of both common and civil law and are able to look beyond their own legal systems.
21

Duddu, Vasisht. "A Survey of Adversarial Machine Learning in Cyber Warfare." Defence Science Journal 68, no. 4 (June 26, 2018): 356. http://dx.doi.org/10.14429/dsj.68.12371.

Abstract:
<div class="page" title="Page 1"><div class="layoutArea"><div class="column"><p>The changing nature of warfare has seen a paradigm shift from the conventional to asymmetric, contactless warfare such as information and cyber warfare. Excessive dependence on information and communication technologies, cloud infrastructures, big data analytics, data-mining and automation in decision making poses grave threats to business and economy in adversarial environments. Adversarial machine learning is a fast growing area of research which studies the design of Machine Learning algorithms that are robust in adversarial environments. This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling. We explore the threat models for Machine Learning systems and describe the various techniques to attack and defend them. We present privacy issues in these models and describe a cyber-warfare test-bed to test the effectiveness of the various attack-defence strategies and conclude with some open problems in this area of research.</p><p> </p></div></div></div>
22

Hasneziri, Luan. "The Adversarial Proceedings Principle in the Civil Process." European Journal of Marketing and Economics 4, no. 1 (May 15, 2021): 88. http://dx.doi.org/10.26417/548nth20i.

Abstract:
One of the most important principles of the civil process is the adversarial proceedings principle. This principle characterizes the civil process from its beginning at trial in the court of first instance, through the court of appeal, until its conclusion in the High Court. Moreover, with the new changes that have been made to civil procedural law, this principle applies even before the trial at first instance begins. According to these changes, the party against whom the lawsuit is filed has the right, before the trial begins, to present its claims against the lawsuit in a document called a "Declaration of defence", with a period of 30 days allowed for the exercise of this right. This scientific work consists of two main issues. The first issue addresses the meaning and importance of the adversarial proceedings principle in the civil process; here, two different systems of applying this principle are analysed, along with the advantages and disadvantages of each. The second issue analyses the elements of the adversarial proceedings principle, looking at these elements in practical terms and the consequences that their non-implementation may bring. In this work, the adversarial proceedings principle is treated as part of the fair legal process provided by the Constitution of Albania and analysed in several decisions of the Constitutional Court of Albania. The principle is also addressed in the framework of international law, focusing on how it is expressed in Article 6 of the European Convention on Human Rights and in the decisions of the Strasbourg Court regarding fair legal process. At the end, the conclusions of this work are presented, together with the bibliography on which it is based.
23

Leitch, Shirley, and Juliet Roper. "AD Wars: Adversarial Advertising by Interest Groups in a New Zealand General Election." Media International Australia 92, no. 1 (August 1999): 103–16. http://dx.doi.org/10.1177/1329878x9909200112.

Abstract:
During New Zealand's 1996 general election, neo-liberal employment law became the subject of two opposing advertising campaigns. Although the campaigns confined themselves to a single piece of legislation, the Employment Contracts Act, they reflected a deep division within New Zealand society. This article examines the two campaigns which were run by the Engineers' Union and the Employers' Federation. At its core, the Engineers' campaign was a defence of collectivism both in terms of the values underlying trade unionism and, more broadly, of Keynesian social democracy, whereas the Employers' Federation campaign championed the ethic of individualism within a free-market economy. Such a clear ideological positioning was absent from the campaigns of the major political parties who fought for the middle ground during New Zealand's first proportional representation election. This article, then, examines how interest groups used network television to confront voters with a stark choice between an unasked-for neo-liberal present and an apparently discredited Keynesian past.
24

Sun, Guangling, Yuying Su, Chuan Qin, Wenbo Xu, Xiaofeng Lu, and Andrzej Ceglowski. "Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples." Mathematical Problems in Engineering 2020 (May 11, 2020): 1–17. http://dx.doi.org/10.1155/2020/8319249.

Abstract:
Although Deep Neural Networks (DNNs) have achieved great success on various applications, investigations have increasingly shown DNNs to be highly vulnerable when adversarial examples are used as input. Here, we present a comprehensive defense framework to protect DNNs against adversarial examples. First, we present statistical and minor alteration detectors to filter out adversarial examples contaminated by noticeable and unnoticeable perturbations, respectively. Then, we ensemble the detectors, a deep Residual Generative Network (ResGN), and an adversarially trained targeted network, to construct a complete defense framework. In this framework, the ResGN is our previously proposed network which is used to remove adversarial perturbations, and the adversarially trained targeted network is a network that is learned through adversarial training. Specifically, once the detectors determine an input example to be adversarial, it is cleaned by ResGN and then classified by the adversarially trained targeted network; otherwise, it is directly classified by this network. We empirically evaluate the proposed complete defense on ImageNet dataset. The results confirm the robustness against current representative attacking methods including fast gradient sign method, randomized fast gradient sign method, basic iterative method, universal adversarial perturbations, DeepFool method, and Carlini & Wagner method.
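The decision flow of the framework can be summarised in a few lines; detectors, purifier (ResGN in the paper) and robust_classifier are assumed callables, not the authors' released components.

def complete_defense(x, detectors, purifier, robust_classifier):
    # Statistical / minor-alteration detectors screen the input first.
    if any(det(x) for det in detectors):
        x = purifier(x)                # clean the suspected adversarial perturbation
    return robust_classifier(x)        # adversarially trained targeted network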
25

McCarthy, Andrew, Essam Ghadafi, Panagiotis Andriotis, and Phil Legg. "Functionality-Preserving Adversarial Machine Learning for Robust Classification in Cybersecurity and Intrusion Detection Domains: A Survey." Journal of Cybersecurity and Privacy 2, no. 1 (March 17, 2022): 154–90. http://dx.doi.org/10.3390/jcp2010010.

Abstract:
Machine learning has become widely adopted as a strategy for dealing with a variety of cybersecurity issues, ranging from insider threat detection to intrusion and malware detection. However, by their very nature, machine learning systems can introduce vulnerabilities to a security defence whereby a learnt model is unaware of so-called adversarial examples that may intentionally result in mis-classification and therefore bypass a system. Adversarial machine learning has been a research topic for over a decade and is now an accepted but open problem. Much of the early research on adversarial examples has addressed issues related to computer vision, yet as machine learning continues to be adopted in other domains, then likewise it is important to assess the potential vulnerabilities that may occur. A key part of transferring to new domains relates to functionality-preservation, such that any crafted attack can still execute the original intended functionality when inspected by a human and/or a machine. In this literature survey, our main objective is to address the domain of adversarial machine learning attacks and examine the robustness of machine learning models in the cybersecurity and intrusion detection domains. We identify the key trends in current work observed in the literature, and explore how these relate to the research challenges that remain open for future works. Inclusion criteria were: articles related to functionality-preservation in adversarial machine learning for cybersecurity or intrusion detection with insight into robust classification. Generally, we excluded works that are not yet peer-reviewed; however, we included some significant papers that make a clear contribution to the domain. There is a risk of subjective bias in the selection of non-peer reviewed articles; however, this was mitigated by co-author review. We selected the following databases with a sizeable computer science element to search and retrieve literature: IEEE Xplore, ACM Digital Library, ScienceDirect, Scopus, SpringerLink, and Google Scholar. The literature search was conducted up to January 2022. We have striven to ensure a comprehensive coverage of the domain to the best of our knowledge. We have performed systematic searches of the literature, noting our search terms and results, and following up on all materials that appear relevant and fit within the topic domains of this review. This research was funded by the Partnership PhD scheme at the University of the West of England in collaboration with Techmodal Ltd.
26

Huang, Xiaowei, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. "A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability." Computer Science Review 37 (August 2020): 100270. http://dx.doi.org/10.1016/j.cosrev.2020.100270.

27

Raj, Rohit, Jayant Kumar, and Akriti Kumari. "HOW AI USED TO PREVENT CYBER THREATS." International Research Journal of Computer Science 9, no. 7 (July 31, 2022): 146–51. http://dx.doi.org/10.26562/irjcs.2022.v0907.002.

Abstract:
The merging of artificial intelligence (AI) and cyber security has opened new possibilities for both disciplines. Deep learning, used to create intelligent models for malware categorization, intrusion detection, and threat intelligence sensing, is only one example of the many types of AI that have found a home in cyber security. At the same time, the integrity of the data used in AI models is vulnerable to corruption, which increases the risk of a cyberattack. Specialized cyber security defence and protection solutions are necessary to safeguard AI models from dangers such as adversarial machine learning (ML), and to preserve ML privacy and secure federated learning. These two tenets form the basis of our investigation into the effects of AI on network safety. In the first part of this essay, we take a high-level look at the present state of research into the use of AI in the prevention and mitigation of cyber-attacks, including both conventional machine learning techniques and existing deep learning solutions. We then examine possible AI countermeasures and divide them into distinct defence classes according to their characteristics. To wrap up, we expand on previous studies on building a safe AI system, paying special attention to the creation of encrypted neural networks and the realization of secure federated deep learning.
28

Hodgson, Jacqueline. "Constructing the Pre-Trial Role of the Defence in French Criminal Procedure: An Adversarial Outsider in an Inquisitorial Process?" International Journal of Evidence & Proof 6, no. 1 (January 2002): 1–16. http://dx.doi.org/10.1177/136571270200600101.

29

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (November 14, 2020): 8079. http://dx.doi.org/10.3390/app10228079.

Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Despite extensive efforts to make models robust against adversarial example attacks, this has proven to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented by "adversarial" samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know what algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defend against that particular attack method, but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of those methods, the network can still lose accuracy if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even in black-box settings where the attacker has restricted information about the target network.
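As a point of reference, the single-step FGSM attack often used in such studies can be sketched as follows (PyTorch, assumed placeholders); per the abstract, training on examples from one such attack gives only limited protection against others.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.3):
    # One gradient-sign step of size eps (0.3 is the common MNIST setting).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()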
30

Ravishankar, Monica, D. Vijay Rao, and C. R. S. Kumar. "A Game Theoretic Software Test-bed for Cyber Security Analysis of Critical Infrastructure." Defence Science Journal 68, no. 1 (December 18, 2017): 54. http://dx.doi.org/10.14429/dsj.68.11402.

Abstract:
<p class="p1">National critical infrastructures are vital to the functioning of modern societies and economies. The dependence on these infrastructures is so succinct that their incapacitation or destruction has a debilitating and cascading effect on national security. Critical infrastructure sectors ranging from financial services to power and transportation to communications and health care, all depend on massive information communication technology networks. Cyberspace is composed of numerous interconnected computers, servers and databases that hold critical data and allow critical infrastructures to function. Securing critical data in a cyberspace that holds against growing and evolving cyber threats is an important focus area for most countries across the world. A novel approach is proposed to assess the vulnerabilities of own networks against adversarial attackers, where the adversary’s perception of strengths and vulnerabilities are modelled using game theoretic techniques. The proposed game theoretic framework models the uncertainties of information with the players (attackers and defenders) in terms of their information sets and their behaviour is modelled and assessed using a probability and belief function framework. The attack-defence scenarios are exercised on a virtual cyber warfare test-bed to assess and evaluate vulnerability of cyber systems. Optimal strategies for attack and defence are computed for the players which are validated using simulation experiments on the cyber war-games testbed, the results of which are used for security analyses.</p>
31

Pochylá, Veronika. "Previous witness testimony as immediate or urgent action and its admissibility in court." International and Comparative Law Review 15, no. 2 (December 1, 2015): 145–59. http://dx.doi.org/10.1515/iclr-2016-0041.

Abstract:
The paper deals with the admissibility of witness testimony from the preliminary proceeding that may be read in court without the defence having the right to hear or examine the witness. This question is particularly interesting with regard to preserving the adversarial principle, which is important for an objective assessment of the facts. The focus is on whether evidence obtained and presented in this way may stand as the main evidence of guilt, especially with regard to Article 6, paragraphs 1 and 3(d) ECHR (the right to obtain the attendance and examination of witnesses). The arguments in this paper are supported by the case law of the Constitutional Court of the Czech Republic and the ECtHR. The contribution also deals with British law and the applicability of the so-called hearsay rule and the exceptions to it which can be applied in criminal proceedings.
32

Moulinou, Iphigenia. "Explicit and implicit discursive strategies and moral order in a trial process." Journal of Language Aggression and Conflict 7, no. 1 (June 12, 2019): 105–32. http://dx.doi.org/10.1075/jlac.00021.mou.

Abstract:
The present paper examines data which has been drawn from the official proceedings of a murder trial in a Greek court, concerning the killing of an adolescent by a police officer (see also Georgalidou 2012, 2016), and addresses the issues of aggressive discursive strategies and the moral order in the trial process. It analyses the explicit and implicit strategies involved in morally discrediting the opponent, a rather frequent defence strategy (Atkinson and Drew 1979; Coulthard and Johnson 2007; Levinson 1979). The paper examines agency deflection towards the victim (Georgalidou 2016), attribution of a socio-spatial identity to the victim and witnesses in an essentialist and reductionist way, and other linguistic and discursive means, the majority of which mobilize moral panic and have implications for the moral order in court. I argue that these aggressive discursive means primarily contribute to the construction of a normative moral order by both adversarial parties.
33

Fatehi, Nina, Qutaiba Alasad, and Mohammed Alawad. "Towards Adversarial Attacks for Clinical Document Classification." Electronics 12, no. 1 (December 28, 2022): 129. http://dx.doi.org/10.3390/electronics12010129.

Abstract:
Despite the revolutionary improvements that recent advances in Deep Learning (DL) have brought to various domains, recent studies have demonstrated that DL networks are susceptible to adversarial attacks. Such attacks are especially critical in sensitive environments where life-changing decisions are made, such as healthcare. Research efforts on using textual adversaries to attack DL for natural language processing (NLP) have received increasing attention in recent years. Among the available textual adversarial studies, Electronic Health Records (EHR) have gained the least attention. This paper investigates the effectiveness of adversarial attacks on clinical document classification and proposes a defense mechanism to develop a robust convolutional neural network (CNN) model and counteract these attacks. Specifically, we apply various black-box attacks based on concatenation and editing adversaries to unstructured clinical text. Then, we propose a defense technique based on feature selection and filtering to improve the robustness of the models. Experimental results show that a small perturbation to the unstructured text in clinical documents causes a significant drop in performance. Performing the proposed defense mechanism under the same adversarial attacks, on the other hand, avoids such a drop in performance and thus enhances the robustness of the CNN model for clinical document classification.
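A toy illustration of the two black-box text perturbations named above: a concatenation adversary appends distractor tokens to a clinical note, and an editing adversary corrupts characters. The distractor words are arbitrary placeholders, not those used in the paper.

import random

def concatenation_attack(note, distractors=("stable", "unremarkable", "routine"), k=3):
    return note + " " + " ".join(random.choices(distractors, k=k))

def editing_attack(note, n_edits=2, rng=random):
    chars = list(note)
    for _ in range(n_edits):
        i = rng.randrange(len(chars))
        if chars[i].isalpha():
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

The proposed feature-selection-and-filtering defence then aims to make the classifier insensitive to exactly these kinds of low-level edits.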
34

Rishikesh, Rupasri, Tamilselvan, Yoganarasimman, and Saran Sujai. "Intrusion of Attacks in Puppet and Zombie Attacking and Defence Model Using BW-DDOS." International Academic Journal of Innovative Research 9, no. 1 (June 28, 2022): 13–19. http://dx.doi.org/10.9756/iajir/v9i1/iajir0903.

Abstract:
The Internet is vulnerable to bandwidth distributed denial-of-service (BW-DDoS) attacks, in which many hosts send enormous numbers of packets to cause congestion and disrupt legitimate traffic. When adding a protection component against adversarial attacks, it is important to deploy complementary defence techniques so that a broad range of attacks is covered. So far, BW-DDoS attacks have used relatively crude, inefficient, brute-force mechanisms; future attacks may be significantly more effective and harmful, so more advanced defences are required. We examine the Internet's vulnerability to BW-DDoS attacks, in which traffic volumes exceed network capacity and cause congestion and losses, thereby disrupting legitimate traffic. TCP and other protocols employ congestion-control mechanisms that respond to losses and delays by reducing network utilization, so their performance can degrade sharply under such attacks. Attackers may disrupt connectivity to servers, networks, autonomous systems, or entire countries or regions; such attacks have already been launched in several conflicts. In this paper, we survey BW-DDoS attacks and defences, discuss currently deployed and proposed countermeasures, and argue that to meet the growing threat, more advanced defences, including proposed but not yet deployed mechanisms as well as new approaches, should be deployed. This article is an overview and will be useful to readers who want to learn about bandwidth DDoS attacks and defences.
35

Gröndahl, Tommi, and N. Asokan. "Effective writing style transfer via combinatorial paraphrasing." Proceedings on Privacy Enhancing Technologies 2020, no. 4 (October 1, 2020): 175–95. http://dx.doi.org/10.2478/popets-2020-0068.

Abstract:
Stylometry can be used to profile or deanonymize authors against their will based on writing style. Style transfer provides a defence. Current techniques typically use either encoder-decoder architectures or rule-based algorithms. Crucially, style transfer must reliably retain original semantic content to be actually deployable. We conduct a multifaceted evaluation of three state-of-the-art encoder-decoder style transfer techniques, and show that all fail at semantic retainment. In particular, they do not produce appropriate paraphrases, but only retain original content in the trivial case of exactly reproducing the text. To mitigate this problem we propose ParChoice: a technique based on the combinatorial application of multiple paraphrasing algorithms. ParChoice strongly outperforms the encoder-decoder baselines in semantic retainment. Additionally, compared to baselines that achieve non-negligible semantic retainment, ParChoice has superior style transfer performance. We also apply ParChoice to multi-author style imitation (not considered by prior work), where we achieve up to 75% imitation success among five authors. Furthermore, when compared to two state-of-the-art rule-based style transfer techniques, ParChoice has markedly better semantic retainment. Combining ParChoice with the best performing rule-based baseline (Mutant-X [34]) also reaches the highest style transfer success on the Brennan-Greenstadt and Extended-Brennan-Greenstadt corpora, with much less impact on original meaning than when using the rule-based baseline techniques alone. Finally, we highlight a critical problem that afflicts all current style transfer techniques: the adversary can use the same technique for thwarting style transfer via adversarial training. We show that adding randomness to style transfer helps to mitigate the effectiveness of adversarial training.
36

Hossain‐McKenzie, Shamina, Kaushik Raghunath, Katherine Davis, Sriharsha Etigowni, and Saman Zonouz. "Strategy for distributed controller defence: Leveraging controller roles and control support groups to maintain or regain control in cyber‐adversarial power systems." IET Cyber-Physical Systems: Theory & Applications 6, no. 2 (April 9, 2021): 80–92. http://dx.doi.org/10.1049/cps2.12006.

37

Liu, Ninghao, Mengnan Du, Ruocheng Guo, Huan Liu, and Xia Hu. "Adversarial Attacks and Defenses." ACM SIGKDD Explorations Newsletter 23, no. 1 (May 26, 2021): 86–99. http://dx.doi.org/10.1145/3468507.3468519.

Abstract:
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully crafted perturbations to input, where the perturbations are almost imperceptible to humans, but can cause models to make wrong predictions. Techniques to protect models against adversarial input are called adversarial defense methods. Although many approaches have been proposed to study adversarial attacks and defenses in different scenarios, an intriguing and crucial challenge remains: how can we really understand model vulnerability? Inspired by the saying that "if you know yourself and your enemy, you need not fear the battles", we may tackle this challenge by interpreting machine learning models to open the black boxes. The goal of model interpretation, or interpretable machine learning, is to extract human-understandable terms for the working mechanism of models. Recently, some approaches have started incorporating interpretation into the exploration of adversarial attacks and defenses. Meanwhile, we also observe that many existing methods of adversarial attacks and defenses, although not explicitly claimed, can be understood from the perspective of interpretation. In this paper, we review recent work on adversarial attacks and defenses, particularly from the perspective of machine learning interpretation. We categorize interpretation into two types: feature-level interpretation and model-level interpretation. For each type of interpretation, we elaborate on how it could be used for adversarial attacks and defenses. We then briefly illustrate additional correlations between interpretation and adversaries. Finally, we discuss the challenges and future directions for tackling adversary issues with interpretation.
38

Rosenberg, Ishai, Asaf Shabtai, Yuval Elovici, and Lior Rokach. "Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain." ACM Computing Surveys 54, no. 5 (June 2021): 1–36. http://dx.doi.org/10.1145/3453158.

Abstract:
In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence, and the attacker's goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
39

Huang, Yang, Yuling Chen, Xuewei Wang, Jing Yang, and Qi Wang. "Promoting Adversarial Transferability via Dual-Sampling Variance Aggregation and Feature Heterogeneity Attacks." Electronics 12, no. 3 (February 3, 2023): 767. http://dx.doi.org/10.3390/electronics12030767.

Abstract:
At present, deep neural networks are widely used in various fields, but their vulnerability requires attention. An adversarial attack aims to mislead the model by generating imperceptible perturbations on the source model, and although white-box attacks have achieved good success rates, existing adversarial samples exhibit weak transferability in the black-box case, especially against some adversarially trained defense models. Previous work on gradient-based optimization either optimizes the image before iteration or optimizes the gradient during iteration, so the generated adversarial samples overfit the source model and transfer poorly to adversarially trained models. To solve these problems, we propose the dual-sampling variance aggregation with feature heterogeneity attack; our method is optimized both before and during iterations to produce adversarial samples with better transferability. In addition, our method can be integrated with various input transformations. Extensive experimental data demonstrate the effectiveness of the proposed method, which improves the attack success rate by 5.9% for normally trained models and 11.5% for adversarially trained models compared with current state-of-the-art transferability-enhancing attack methods.
40

Imam, Niddal H., and Vassilios G. Vassilakis. "A Survey of Attacks Against Twitter Spam Detectors in an Adversarial Environment." Robotics 8, no. 3 (July 4, 2019): 50. http://dx.doi.org/10.3390/robotics8030050.

Abstract:
Online Social Networks (OSNs), such as Facebook and Twitter, have become a very important part of many people’s daily lives. Unfortunately, the high popularity of these platforms makes them very attractive to spammers. Machine learning (ML) techniques have been widely used as a tool to address many cybersecurity application problems (such as spam and malware detection). However, most of the proposed approaches do not consider the presence of adversaries that target the defense mechanism itself. Adversaries can launch sophisticated attacks to undermine deployed spam detectors either during training or the prediction (test) phase. Not considering these adversarial activities at the design stage makes OSNs’ spam detectors vulnerable to a range of adversarial attacks. Thus, this paper surveys the attacks against Twitter spam detectors in an adversarial environment, and a general taxonomy of potential adversarial attacks is presented using common frameworks from the literature. Examples of adversarial activities on Twitter that were discovered after observing Arabic trending hashtags are discussed in detail. A new type of spam tweet (adversarial spam tweet), which can be used to undermine a deployed classifier, is examined. In addition, possible countermeasures that could increase the robustness of Twitter spam detectors to such attacks are investigated.
41

Luo, Yifan, Feng Ye, Bin Weng, Shan Du, and Tianqiang Huang. "A Novel Defensive Strategy for Facial Manipulation Detection Combining Bilateral Filtering and Joint Adversarial Training." Security and Communication Networks 2021 (August 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/4280328.

Abstract:
Facial manipulation enables facial expressions to be tampered with or facial identities to be replaced in videos. The fake videos are so realistic that they are even difficult for human eyes to distinguish. This poses a great threat to social and public information security. A number of facial manipulation detectors have been proposed to address this threat. However, previous studies have shown that the accuracy of these detectors is sensitive to adversarial examples. The existing defense methods are very limited in terms of applicable scenes and defense effects. This paper proposes a new defense strategy for facial manipulation detectors, which combines a passive defense method, bilateral filtering, and a proactive defense method, joint adversarial training, to mitigate the vulnerability of facial manipulation detectors against adversarial examples. The bilateral filtering method is applied in the preprocessing stage of the model without any modification to denoise the input adversarial examples. The joint adversarial training starts from the training stage of the model, which mixes various adversarial examples and original examples to train the model. The introduction of joint adversarial training can train a model that defends against multiple adversarial attacks. The experimental results show that the proposed defense strategy positively helps facial manipulation detectors counter adversarial examples.
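The passive half of the strategy is straightforward to sketch: bilateral filtering in preprocessing smooths adversarial noise while preserving facial edges. The filter parameters below are illustrative, and the manipulation detector is an assumed callable.

import cv2

def bilateral_preprocess(image_bgr_uint8, d=9, sigma_color=75, sigma_space=75):
    # Edge-preserving smoothing applied before the detector; no model changes required.
    return cv2.bilateralFilter(image_bgr_uint8, d, sigma_color, sigma_space)

The proactive half, joint adversarial training, then trains the detector on a mixture of original examples and several kinds of adversarial examples.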
APA, Harvard, Vancouver, ISO, and other styles
42

Gong, Xiaopeng, Wanchun Chen, and Zhongyuan Chen. "Intelligent Game Strategies in Target-Missile-Defender Engagement Using Curriculum-Based Deep Reinforcement Learning." Aerospace 10, no. 2 (January 31, 2023): 133. http://dx.doi.org/10.3390/aerospace10020133.

Full text
Abstract:
Addressing the attack and defense game problem in the target-missile-defender three-body confrontation scenario, intelligent game strategies based on deep reinforcement learning are proposed, including an attack strategy for attacking missiles and an active defense strategy for the target/defender. First, building on classical three-body adversarial research, a reinforcement learning algorithm is introduced to make the training more goal-directed. The action spaces and the reward and punishment conditions of both the attacking and defending sides are considered in the reward function design. Through analysis of the sign of the action space and the adversarial design of the reward function, the combat requirements can be satisfied in training both the missile and the target/defender. Then, a curriculum-based deep reinforcement learning algorithm is applied to train the agents, and a convergent game strategy is obtained. The simulation results show that the missile's attack strategy can maneuver according to the battlefield situation and successfully hit the target after evading the defender. The active defense strategy enables the less capable target/defender to achieve an effect similar to a network adversarial attack on the missile agent, shielding the target from missiles with superior maneuverability on the battlefield.
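To make the reward-design idea concrete, here is a purely hypothetical shaping function for the attacking-missile agent; the thresholds and magnitudes are assumptions for illustration and do not come from the paper.

```python
def missile_reward(dist_to_target, dist_to_defender, hit_radius=5.0):
    """Hypothetical reward for the attacking-missile agent: reward closing on
    the target, penalise being intercepted by the defender."""
    if dist_to_target < hit_radius:      # target hit: terminal success
        return +100.0
    if dist_to_defender < hit_radius:    # intercepted: terminal failure
        return -100.0
    # dense shaping term: encourage reducing the range to the target each step
    return -0.01 * dist_to_target
```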
APA, Harvard, Vancouver, ISO, and other styles
43

Heffernan, Liz. "The participation of victims in the trial process." Northern Ireland Legal Quarterly 68, no. 4 (December 21, 2017): 491–504. http://dx.doi.org/10.53386/nilq.v68i4.60.

Full text
Abstract:
Directive 2012/29/EU, establishing minimum standards on the rights, support and protection of victims of crime, forms part of a package of measures designed to ensure that victims have the same basic level of rights throughout the EU regardless of their nationality or the location of the crime. One of the Directive’s innovations is a suite of measures designed to facilitate the participation of victims in the criminal process. The provisions include a right on the part of victims to be heard and a right to have their dignity protected when giving evidence. Although there has been a gradual strengthening of victims’ rights at national and international level, the concept of participation remains poorly defined and practice varies widely across the EU. The issue is particularly controversial in common law systems where victims are not assigned any formal role in the trial process. The traditional adversarial trial, designed to accommodate the prosecution and the defence, poses a structural obstacle to reform. However, recognising the limits of EU competence to legislate in the area of criminal justice, the member states have been afforded a wide margin of appreciation when implementing the Directive’s provisions on participation.
APA, Harvard, Vancouver, ISO, and other styles
44

Zheng, Tianhang, Changyou Chen, and Kui Ren. "Distributionally Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2253–60. http://dx.doi.org/10.1609/aaai.v33i01.33012253.

Full text
Abstract:
Recent work on adversarial attack has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)}[L(x)] with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial-data distribution, i.e., a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution to maximally increase the generalization risk. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with L∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with L∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
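For reference, a standard per-sample PGD step with an L∞ budget is sketched below in PyTorch; DAA replaces this independent per-sample loop with an optimisation over the whole perturbed data distribution, which is not reproduced here. The epsilon and step-size values are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Standard per-sample PGD under an L-infinity constraint (baseline that
    DAA generalises)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the epsilon-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```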
APA, Harvard, Vancouver, ISO, and other styles
45

Yao, Yuan, Haoxi Zhong, Zhengyan Zhang, Xu Han, Xiaozhi Wang, Kai Zhang, Chaojun Xiao, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. "Adversarial Language Games for Advanced Natural Language Intelligence." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14248–56. http://dx.doi.org/10.1609/aaai.v35i16.17676.

Full text
Abstract:
We study the problem of adversarial language games, in which multiple agents with conflicting goals compete with each other via natural language interactions. While adversarial language games are ubiquitous in human activities, little attention has been devoted to this field in natural language processing. In this work, we propose a challenging adversarial language game called Adversarial Taboo as an example, in which an attacker and a defender compete around a target word. The attacker is tasked with inducing the defender to utter the target word, which is invisible to the defender, while the defender is tasked with detecting the target word before being induced to say it. In Adversarial Taboo, a successful attacker and defender need to hide or infer intentions and to induce or defend during conversations. This requires several advanced language abilities, such as adversarial pragmatic reasoning and goal-oriented language interaction in the open domain, which will facilitate many downstream NLP tasks. To instantiate the game, we create a game environment and a competition platform. Comprehensive experiments on several baseline attack and defense strategies show promising and interesting results, based on which we discuss some directions for future research.
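A minimal sketch of the game protocol as described above, assuming hypothetical agent interfaces (`speak` and `guess` are illustrative method names, not the paper's API):

```python
def play_adversarial_taboo(attacker, defender, target_word, max_turns=10):
    """Minimal sketch of the Adversarial Taboo protocol: the attacker tries to
    make the defender utter the hidden target word; the defender may instead
    try to guess it."""
    history = []
    for _ in range(max_turns):
        history.append(attacker.speak(history, target_word))
        reply = defender.speak(history)          # the defender never sees target_word
        history.append(reply)
        if target_word in reply.lower().split():
            return "attacker wins"               # defender was induced to say the word
        if defender.guess(history) == target_word:
            return "defender wins"               # defender inferred the word
    return "draw"
```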
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, Samuel Henrique, Arun Das, Adel Aladdini, and Peyman Najafirad. "Adaptive Clustering of Robust Semantic Representations for Adversarial Image Purification on Social Networks." Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 968–79. http://dx.doi.org/10.1609/icwsm.v16i1.19350.

Full text
Abstract:
Advances in Artificial Intelligence (AI) have made it possible to automate human-level visual search and perception tasks on the massive sets of image data shared on social media on a daily basis. However, AI-based automated filters are highly susceptible to deliberate image attacks that can lead to content misclassification of cyberbullying, child sexual abuse material (CSAM), adult content, and deepfakes. One of the most effective methods to defend against such disturbances is adversarial training, but this comes at the cost of generalization for unseen attacks and transferability across models. In this article, we propose a robust defense against adversarial image attacks, which is model agnostic and generalizable to unseen adversaries. We begin with a baseline model, extracting the latent representations for each class and adaptively clustering the latent representations that share a semantic similarity. Next, we obtain the distributions for these clustered latent representations along with their originating images. We then learn semantic reconstruction dictionaries (SRD). We adversarially train a new model, constraining the latent space representation to minimize the distance between the adversarial latent representation and the true cluster distribution. To purify the image, we decompose the input into low- and high-frequency components. The high-frequency component is reconstructed based on the best SRD from the clean dataset. To select the best SRD, we rely on the distance between the robust latent representations and the semantic cluster distributions. The output is a purified image with no perturbations. Evaluations using comprehensive datasets, including image benchmarks and social media images, demonstrate that our proposed purification approach considerably protects and enhances the accuracy of AI-based image filters for unlawful and harmful perturbed images.
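The frequency decomposition step can be illustrated with a simple Gaussian split; the filter parameters are assumptions, and the dictionary-based reconstruction of the high-frequency component (the core of the purification method) is omitted here.

```python
import cv2
import numpy as np

def split_frequencies(image, ksize=(5, 5), sigma=1.5):
    """Illustrative low/high-frequency decomposition: the low-frequency part
    is a Gaussian-blurred copy, the high-frequency part is the residual."""
    low = cv2.GaussianBlur(image, ksize, sigma)
    high = image.astype(np.float32) - low.astype(np.float32)
    return low, high
```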
APA, Harvard, Vancouver, ISO, and other styles
47

Zeng, Huimin, Chen Zhu, Tom Goldstein, and Furong Huang. "Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10815–23. http://dx.doi.org/10.1609/aaai.v35i12.17292.

Full text
Abstract:
Adversarial training has proved to be an effective method for defending against adversarial examples, being one of the few defenses that withstand strong attacks. However, traditional defense mechanisms assume a uniform attack over the examples according to the underlying data distribution, which is clearly unrealistic, as the attacker could choose to focus on more vulnerable examples. We present a weighted minimax risk optimization that defends against non-uniform attacks, achieving robustness against adversarial examples under perturbed test data distributions. Our modified risk considers importance weights of different adversarial examples and focuses adaptively on harder examples that are wrongly classified or at higher risk of being classified incorrectly. The designed risk allows the training process to learn a strong defense through optimizing the importance weights. The experiments show that our model significantly improves state-of-the-art adversarial accuracy under non-uniform attacks without a significant drop under uniform attacks.
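A minimal sketch of a weighted adversarial risk term, assuming per-example importance weights are already available; how those weights are learned is the core contribution of the paper and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def weighted_adversarial_loss(model, x_adv, y, weights):
    """Per-example adversarial losses scaled by importance weights so that
    training focuses on the harder (higher-risk) adversarial examples."""
    per_example = F.cross_entropy(model(x_adv), y, reduction='none')
    weights = weights / weights.sum()        # normalise weights to a distribution
    return (weights * per_example).sum()
```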
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Ziwei, and Dengpan Ye. "Defending against Deep-Learning-Based Flow Correlation Attacks with Adversarial Examples." Security and Communication Networks 2022 (March 27, 2022): 1–11. http://dx.doi.org/10.1155/2022/2962318.

Full text
Abstract:
Tor is vulnerable to flow correlation attacks: adversaries who can observe the traffic metadata (e.g., packet timing and size) between the client and the entry relay and between the exit relay and the server can deanonymize users by calculating the degree of association. A recent study has shown that a deep-learning-based approach called DeepCorr achieves a flow correlation accuracy of over 96%. The escalating threat of this attack requires timely and effective countermeasures. In this paper, we propose a novel defense mechanism that injects dummy packets into flow traces by precomputing adversarial examples; it successfully breaks the flow pattern that the CNN model has learned and achieves a protection success rate of over 97%. Moreover, our defense requires only 20% bandwidth overhead, outperforming the state-of-the-art defense. We further consider implementing our defense in the real world. We find that, unlike traditional scenarios, traffic flows are “fixed” only as they arrive, which means we must predict the next packet’s features. In addition, websites are not immutable: the characteristics of the transmitted packets change irregularly, rendering precomputed adversarial samples ineffective. To solve these problems, we design a system that adapts our defense to the real world and further reduces the bandwidth overhead.
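The core idea can be sketched as finding a non-negative perturbation (extra dummy packets can only add traffic) that drives a surrogate correlation model's score down under a bandwidth budget. The surrogate model, feature layout and hyperparameters below are assumptions for illustration, not the system described in the paper.

```python
import torch

def craft_dummy_packet_budget(surrogate, flow, steps=20, lr=0.05, max_overhead=0.2):
    """Find a non-negative perturbation of a flow representation that lowers a
    surrogate correlation model's score, under an overhead budget."""
    delta = torch.zeros_like(flow, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        score = surrogate(flow + delta.clamp(min=0))   # dummy packets only add traffic
        loss = score.mean()                            # minimise the correlation score
        opt.zero_grad()
        loss.backward()
        opt.step()
    delta = delta.detach().clamp(min=0)
    # enforce the bandwidth budget relative to the original flow volume
    budget = max_overhead * flow.abs().sum()
    if delta.sum() > budget:
        delta = delta * (budget / delta.sum())
    return delta
```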
APA, Harvard, Vancouver, ISO, and other styles
49

Shi, Lin, Teyi Liao, and Jianfeng He. "Defending Adversarial Attacks against DNN Image Classification Models by a Noise-Fusion Method." Electronics 11, no. 12 (June 8, 2022): 1814. http://dx.doi.org/10.3390/electronics11121814.

Full text
Abstract:
Adversarial attacks deceive deep neural network models by adding imperceptibly small but well-designed attack data to the model input. Such attacks cause serious problems. Various defense methods have been proposed, including: (1) adversarial training tailored to specific attacks; (2) denoising the input data; (3) preprocessing the input data; and (4) adding noise to various layers of models. Here we provide a simple but effective Noise-Fusion Method (NFM) to defend DNN image classification models against adversarial attacks. Without knowing any details about the attacks or models, the NFM adds noise not only to the model input at run time but also to the training data at training time. Two l∞ attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), and one l1 attack, Sparse L1 Descent (SLD), are applied to evaluate the defense effects of the NFM on various deep neural network models trained on the MNIST and CIFAR-10 datasets. Noises of various amplitudes and statistical distributions are applied to show the defense effects of the NFM under different noise conditions. The NFM is also compared with an adversarial training method on the MNIST and CIFAR-10 datasets. Results show that adding noise to the input images and the training images not only defends against all three adversarial attacks but also improves the robustness of the corresponding models. The results indicate a possibly generalized defense effect of the NFM that may extend to other adversarial attacks, and suggest potential application of the NFM to models with voice or audio input as well as image input.
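A minimal sketch of the noise-fusion idea, assuming illustrative amplitude and distribution settings rather than the paper's exact configuration:

```python
import torch

def add_fused_noise(x, amplitude=0.1, dist='uniform'):
    """Add the same kind of random noise to training images (at training time)
    and to inputs (at inference time)."""
    if dist == 'uniform':
        noise = (torch.rand_like(x) * 2 - 1) * amplitude
    else:  # gaussian
        noise = torch.randn_like(x) * amplitude
    return (x + noise).clamp(0, 1)

# training time:  loss = criterion(model(add_fused_noise(images)), labels)
# inference time: prediction = model(add_fused_noise(test_image))
```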
APA, Harvard, Vancouver, ISO, and other styles
50

Luo, Zhirui, Qingqing Li, and Jun Zheng. "A Study of Adversarial Attacks and Detection on Deep Learning-Based Plant Disease Identification." Applied Sciences 11, no. 4 (February 20, 2021): 1878. http://dx.doi.org/10.3390/app11041878.

Full text
Abstract:
Transfer learning using pre-trained deep neural networks (DNNs) has been widely used for plant disease identification recently. However, pre-trained DNNs are susceptible to adversarial attacks which generate adversarial samples causing DNN models to make wrong predictions. Successful adversarial attacks on deep learning (DL)-based plant disease identification systems could result in a significant delay of treatments and huge economic losses. This paper is the first attempt to study adversarial attacks and detection on DL-based plant disease identification. Our results show that adversarial attacks with a small number of perturbations can dramatically degrade the performance of DNN models for plant disease identification. We also find that adversarial attacks can be effectively defended against by using adversarial sample detection with an appropriate choice of features. Our work will serve as a basis for developing more robust DNN models for plant disease identification and guiding the defense against adversarial attacks.
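Adversarial sample detection of this kind can be framed as a binary classification problem, sketched below; which features work best is exactly what the paper studies, so the feature choice and classifier here are illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression

def detect_adversarial(train_feats, train_is_adv, test_feats):
    """Train a simple detector on features extracted from clean and
    adversarial images, then flag suspicious test inputs."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_is_adv)       # 1 = adversarial, 0 = clean
    return clf.predict(test_feats)
```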
APA, Harvard, Vancouver, ISO, and other styles