Dissertations on the topic "Adversarial robustness"
Format the source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 20 dissertations for research on the topic "Adversarial robustness".
Engstrom, Logan (Logan G.). "Understanding the landscape of adversarial robustness." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123021.
Full text available.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 108-115).
Despite their performance on standard tasks in computer vision, natural language processing and voice recognition, state-of-the-art models are pervasively vulnerable to adversarial examples. Adversarial examples are inputs that have been slightly perturbed, preserving their semantic content, so as to cause malicious behavior in a classifier. The study of adversarial robustness has so far largely focused on perturbations bounded in ℓp-norms, in the case where the attacker knows the full model and controls exactly what input is sent to the classifier. However, this threat model is unrealistic in many respects. Models are vulnerable to classes of slight perturbations that are not captured by ℓp bounds, adversaries realistically often will not have full model access, and in the physical world it is not possible to exactly control what image is sent to the classifier. In our exploration we successfully develop new algorithms and frameworks for exploiting vulnerabilities even in restricted threat models. We find that models are highly vulnerable to adversarial examples in these more realistic threat models, highlighting the necessity of further research to attain models that are truly robust and reliable.
by Logan Engstrom.
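The Engstrom abstract above centers on ℓp-bounded, white-box adversarial examples. As a purely illustrative sketch of that baseline threat model (not code from the thesis), a single-step ℓ∞-bounded attack in the fast-gradient-sign style can be written in PyTorch as follows; the model, labels, and ε budget are placeholders.

```python
import torch
import torch.nn.functional as F

def linf_attack(model, x, y, epsilon=8 / 255):
    """Single-step l_inf-bounded attack (FGSM-style): perturb each input
    coordinate in the direction of the loss gradient sign, within epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep the image in its valid range
    return x_adv.detach()
```

Stronger multi-step variants (e.g. PGD) iterate this step and project back into the ε-ball after each update.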
Zhang, Jeffrey. "Enhancing adversarial robustness of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122994.
Full text available.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization optimization function that has been shown to improve upon the robust optimization framework developed by Madry et al. (2018) [14, 9]. They were able to achieve state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially tuning it on CIFAR10 greatly improves the adversarial robustness of machine learning models. In this work, we propose Adversarial Regularization, another logit-based regularization optimization framework that surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
by Jeffrey Zhang.
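TRADES, referenced in the Zhang abstract above, trades off a clean-data loss against a logit-level consistency term between clean and adversarial inputs. The sketch below is a minimal PyTorch rendition of such a logit-based regularization objective, assuming the adversarial batch x_adv is produced by a separate inner maximization (e.g. PGD); it is illustrative only and is not the thesis's own Adversarial Regularization.

```python
import torch
import torch.nn.functional as F

def trades_style_loss(model, x, y, x_adv, beta=6.0):
    """Logit-based regularization in the spirit of TRADES: cross-entropy on
    clean inputs plus a KL term that pulls the predictive distribution on
    adversarial inputs toward the clean one."""
    logits_clean = model(x)
    logits_adv = model(x_adv)
    natural_loss = F.cross_entropy(logits_clean, y)
    robust_loss = F.kl_div(F.log_softmax(logits_adv, dim=1),
                           F.softmax(logits_clean, dim=1),
                           reduction="batchmean")
    # beta controls the trade-off between clean accuracy and robustness.
    return natural_loss + beta * robust_loss
```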
Pintor, Maura. "Towards Debugging and Improving Adversarial Robustness Evaluations." Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/328882.
Full text available.
Cinà, Antonio Emanuele <1995>. "On the Robustness of Clustering Algorithms to Adversarial Attacks." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/15430.
Full text available.
Allenet, Thibault. "Quantization and adversarial robustness of embedded deep neural networks." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/5f524c49-7a4a-4724-ae77-9afe383b7c3c.
Full text available.
Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been broadly used in many fields such as computer vision, natural language processing and signal processing. Nevertheless, the computational workload and the heavy memory bandwidth involved in deep neural network inference often prevent their deployment on low-power embedded devices. Moreover, the vulnerability of deep neural networks to small input perturbations calls into question their deployment for applications involving high-criticality decisions. The objective of this PhD research project is twofold. On the one hand, it proposes compression methods to make deep neural networks more suitable for embedded systems with low computing resources and memory requirements. On the other hand, it proposes a new strategy to make deep neural networks more robust to attacks based on crafted inputs, with a view to inference at the edge. We begin by introducing common concepts for training neural networks, convolutional neural networks and recurrent neural networks, and review the state of the art on deep neural network compression methods. After this literature review we present two main contributions on compressing deep neural networks: an investigation of lottery tickets on RNNs and Disentangled Loss Quantization Aware Training (DL-QAT) on CNNs. The investigation of lottery tickets on RNNs analyzes the convergence of RNNs and studies the impact of pruning on image classification and language modelling. We then present a pre-processing method based on data sub-sampling that enables faster convergence of LSTMs while preserving application performance. With the Disentangled Loss Quantization Aware Training (DL-QAT) method, we propose to further improve an advanced quantization method with quantization-friendly loss functions to reach low-bit settings, such as binary parameters, where application performance is most impacted. Experiments on ImageNet-1k with DL-QAT show improvements of nearly 1% in the top-1 accuracy of ResNet-18 with binary weights and 2-bit activations, and also show the best memory-footprint-versus-accuracy profile when compared with other state-of-the-art methods. This work then studies the robustness of neural networks to adversarial attacks. After introducing the state of the art on adversarial attacks and defense mechanisms, we propose the Ensemble Hash Defense (EHD) mechanism. EHD enables better resilience to adversarial attacks based on gradient approximation while preserving application performance and only requiring a memory overhead at inference time. In the best configuration, our system achieves significant robustness gains compared to baseline models and a loss-function-driven approach. Moreover, the principle of EHD makes it complementary to other robust optimization methods, which would further enhance the robustness of the final system, and to compression methods. With a view to edge inference, the memory overhead introduced by EHD can be reduced with quantization or weight sharing. The contributions of this thesis concern optimization methods and a defense system addressing an important challenge: how to make deep neural networks more robust to adversarial attacks and easier to deploy on resource-limited platforms. This work further reduces the gap between state-of-the-art deep neural networks and their execution on edge devices.
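The abstract does not spell out DL-QAT itself, so the sketch below only illustrates the generic ingredient of quantization-aware training with binary weights: a straight-through estimator that binarizes weights in the forward pass while letting gradients update the full-precision copies. Layer sizes and the usage line are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinear(nn.Linear):
    """Linear layer with binarized weights for quantization-aware training.
    The forward pass uses sign(w); gradients flow to the full-precision
    weights via the straight-through estimator."""
    def forward(self, x):
        w_bin = torch.sign(self.weight)
        # Straight-through estimator: the forward value equals w_bin, but
        # the backward pass behaves as if the full-precision weights were used.
        w_ste = self.weight + (w_bin - self.weight).detach()
        return F.linear(x, w_ste, self.bias)

# Hypothetical usage: drop-in replacement for nn.Linear during training.
layer = BinaryLinear(128, 10)
out = layer(torch.randn(4, 128))
```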
Ebrahimi, Javid. "Robustness of Neural Networks for Discrete Input: An Adversarial Perspective." Thesis, University of Oregon, 2019. http://hdl.handle.net/1794/24535.
Повний текст джерелаCARBONE, GINEVRA. "Robustness and Interpretability of Neural Networks’ Predictions under Adversarial Attacks." Doctoral thesis, Università degli Studi di Trieste, 2023. https://hdl.handle.net/11368/3042163.
Full text available.
Deep Neural Networks (DNNs) are powerful predictive models, exceeding human capabilities in a variety of tasks. They learn complex and flexible decision systems from the available data and achieve exceptional performances in multiple machine learning fields, spanning from applications in artificial intelligence, such as image, speech and text recognition, to the more traditional sciences, including medicine, physics and biology. Despite the outstanding achievements, high performance and high predictive accuracy are not sufficient for real-world applications, especially in safety-critical settings, where the usage of DNNs is severely limited by their black-box nature. There is an increasing need to understand how predictions are performed, to provide uncertainty estimates, to guarantee robustness to malicious attacks and to prevent unwanted behaviours. State-of-the-art DNNs are vulnerable to small perturbations in the input data, known as adversarial attacks: maliciously crafted manipulations of the inputs that are perceptually indistinguishable from the original samples but are capable of fooling the model into incorrect predictions. In this work, we prove that such brittleness is related to the geometry of the data manifold and is therefore likely to be an intrinsic feature of DNNs' predictions. This negative result suggests a possible direction for overcoming such limitations: we study the geometry of adversarial attacks in the large-data, overparameterized limit for Bayesian Neural Networks and prove that, in this limit, they are immune to gradient-based adversarial attacks. Furthermore, we propose some training techniques to improve the adversarial robustness of deterministic architectures. In particular, we experimentally observe that ensembles of NNs trained on random projections of the original inputs into lower-dimensional spaces are more resilient to the attacks. Next, we focus on the problem of interpretability of NNs' predictions in the setting of saliency-based explanations. We analyze the stability of the explanations under adversarial attacks on the inputs and we prove that, in the large-data and overparameterized limit, Bayesian interpretations are more stable than those provided by deterministic networks. We validate this behaviour in multiple experimental settings in the finite-data regime. Finally, we introduce the concept of adversarial perturbations of amino acid sequences for protein language models (LMs). Deep learning models for protein structure prediction, such as AlphaFold2, leverage Transformer architectures and their attention mechanism to capture structural and functional properties of amino acid sequences. Despite the high accuracy of predictions, biologically small perturbations of the input sequences, or even single point mutations, can lead to substantially different 3D structures. On the other hand, protein language models are insensitive to mutations that induce misfolding or dysfunction (e.g. missense mutations). Precisely, predictions of the 3D coordinates do not reveal the structure-disruptive effect of these mutations. Therefore, there is an evident inconsistency between the biological importance of mutations and the resulting change in structural prediction. Inspired by this problem, we introduce the concept of adversarial perturbation of protein sequences in continuous embedding spaces of protein language models. Our method relies on attention scores to detect the most vulnerable amino acid positions in the input sequences.
The adversarial mutations are biologically distinct from their reference sequences and are able to significantly alter the resulting 3D structures.
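One defense described in the Carbone abstract trains an ensemble of networks on random projections of the inputs into lower-dimensional spaces. A minimal PyTorch sketch of that idea, with assumed dimensions and member architectures rather than the thesis's actual setup, might look as follows.

```python
import torch
import torch.nn as nn

class RandomProjectionEnsemble(nn.Module):
    """Ensemble in which each member classifies a fixed random projection
    of the flattened input; predictions are averaged at inference time."""
    def __init__(self, in_dim=784, proj_dim=64, n_classes=10, n_members=5):
        super().__init__()
        # Fixed (non-trainable) Gaussian projection matrices, one per member.
        self.register_buffer(
            "projections",
            torch.randn(n_members, in_dim, proj_dim) / proj_dim ** 0.5)
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(proj_dim, 256), nn.ReLU(),
                          nn.Linear(256, n_classes))
            for _ in range(n_members))

    def forward(self, x):
        x = x.flatten(1)
        logits = [member(x @ proj)
                  for member, proj in zip(self.members, self.projections)]
        return torch.stack(logits).mean(dim=0)  # average the ensemble
```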
Itani, Aashish. "Comparison of Adversarial Robustness of ANN and SNN towards Blackbox Attacks." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2864.
Full text available.
Yang, Shuo. "Adversarial Data Generation for Robust Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27291.
Full text available.
Uličný, Matej. "Methods for Increasing Robustness of Deep Convolutional Neural Networks." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734.
Full text available.
Woldeyohannes, Habtamu Desalegn <1981>. "Review on “Adversarial Robustness Toolbox (ART) v1.5.x.”: ART Attacks against Supervised Learning Algorithms Case Study." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19100.
Full text available.
Östberg, Rasmus. "Robustness of a neural network used for image classification: The effect of applying distortions on adversarial examples." Thesis, Högskolan i Gävle, Datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-26118.
Full text available.
Guichard, Jonathan. "Quality Assessment of Conversational Agents: Assessing the Robustness of Conversational Agents to Errors and Lexical Variability." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226552.
Full text available.
Assessing a conversational agent's language understanding is critical, since poor user interactions can determine whether the agent becomes a success or a failure as early as the start of its life cycle. In this report we investigate the use of paraphrases as a testing tool for these conversational agents. Paraphrases, which are different ways of expressing the same intent, are created from known inputs by performing lexical substitutions and by introducing several spelling deviations. Since the expected outcome for these inputs is known, we can use the results to assess the agent's robustness to language variation and to detect potential weaknesses in understanding. As shown in a case study, we obtain encouraging results: this approach appears to help anticipate potential gaps in understanding, and these gaps can be addressed with the generated paraphrases.
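As a minimal illustration of the spelling-deviation side of the testing approach described above (the thesis's exact generation procedure is not reproduced here), noisy paraphrases of a known utterance can be produced with simple character swaps; the sample query is hypothetical.

```python
import random

def add_spelling_noise(utterance, n_typos=1, seed=None):
    """Generate a noisy paraphrase of an utterance by introducing simple
    spelling deviations (swapping adjacent characters). Illustrative only;
    lexical substitutions would be the other source of paraphrases."""
    rng = random.Random(seed)
    chars = list(utterance)
    for _ in range(n_typos):
        if len(chars) < 2:
            break
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Example: probe an intent classifier with noisy variants of a known query.
print(add_spelling_noise("book a flight to stockholm", n_typos=2, seed=0))
```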
Balda, Cañizares Emilio Rafael [Verfasser], Rudolf [Akademischer Betreuer] Mathar, and Bastian [Akademischer Betreuer] Leibe. "Robustness analysis of deep neural networks in the presence of adversarial perturbations and noisy labels / Emilio Rafael Balda Canizares ; Rudolf Mathar, Bastian Leibe." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1216040931/34.
Повний текст джерелаMAURI, LARA. "DATA PARTITIONING AND COMPENSATION TECHNIQUES FOR SECURE TRAINING OF MACHINE LEARNING MODELS." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/932387.
Повний текст джерелаAbdelaty, Maged Fathy Youssef. "Robust Anomaly Detection in Critical Infrastructure." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/352463.
Повний текст джерела(11211114), Qingyi Gao. "ADVERSARIAL LEARNING ON ROBUSTNESS AND GENERATIVE MODELS." Thesis, 2021.
Find full text.
Mopuri, Konda Reddy. "Deep Visual Representations: A Study on Augmentation, Visualization, and Robustness." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5446.
Full text available.
Zhang, Haotian. "Network Robustness: Diffusing Information Despite Adversaries." Thesis, 2012. http://hdl.handle.net/10012/6890.
Full text available.
Nayak, Gaurav Kumar. "Data-efficient Deep Learning Algorithms for Computer Vision Applications." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6094.
Full text available.