A selection of scholarly literature on the topic "Adversarial robustness"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Adversarial robustness".
Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these are available in the source metadata.
Journal articles on the topic "Adversarial robustness"
Doan, Bao Gia, Shuiqiao Yang, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Salil S. Kanhere, Ehsan Abbasnejad, and Damith C. Ranasinghe. "Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (June 26, 2023): 14783–91. http://dx.doi.org/10.1609/aaai.v37i12.26727.
Zhou, Xiaoling, Nan Yang, and Ou Wu. "Combining Adversaries with Anti-adversaries in Training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11435–42. http://dx.doi.org/10.1609/aaai.v37i9.26352.
Goldblum, Micah, Liam Fowl, Soheil Feizi, and Tom Goldstein. "Adversarially Robust Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3996–4003. http://dx.doi.org/10.1609/aaai.v34i04.5816.
Tack, Jihoon, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, and Jinwoo Shin. "Consistency Regularization for Adversarial Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8414–22. http://dx.doi.org/10.1609/aaai.v36i8.20817.
Liang, Youwei, and Dong Huang. "Large Norms of CNN Layers Do Not Hurt Adversarial Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8565–73. http://dx.doi.org/10.1609/aaai.v35i10.17039.
Wang, Desheng, Weidong Jin, and Yunpu Wu. "Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification." Sensors 23, no. 6 (March 20, 2023): 3252. http://dx.doi.org/10.3390/s23063252.
Bui, Anh Tuan, Trung Le, He Zhao, Paul Montague, Olivier DeVel, Tamas Abraham, and Dinh Phung. "Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6831–39. http://dx.doi.org/10.1609/aaai.v35i8.16843.
Li, Xin, Xiangrui Li, Deng Pan, and Dongxiao Zhu. "Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8482–90. http://dx.doi.org/10.1609/aaai.v35i10.17030.
Yang, Shuo, Tianyu Guo, Yunhe Wang, and Chang Xu. "Adversarial Robustness through Disentangled Representations." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3145–53. http://dx.doi.org/10.1609/aaai.v35i4.16424.
Li, Zhuorong, Chao Feng, Minghui Wu, Hongchuan Yu, Jianwei Zheng, and Fanwei Zhu. "Adversarial robustness via attention transfer." Pattern Recognition Letters 146 (June 2021): 172–78. http://dx.doi.org/10.1016/j.patrec.2021.03.011.
Повний текст джерелаДисертації з теми "Adversarial robustness"
Engstrom, Logan (Logan G.). "Understanding the landscape of adversarial robustness." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123021.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 108-115).
Despite their performance on standard tasks in computer vision, natural language processing, and voice recognition, state-of-the-art models are pervasively vulnerable to adversarial examples: inputs that have been slightly perturbed, with the semantic content left unchanged, so as to cause malicious behavior in a classifier. The study of adversarial robustness has so far largely focused on perturbations bounded in ℓp-norms, in the setting where the attacker knows the full model and controls exactly what input is sent to the classifier. However, this threat model is unrealistic in many respects: models are vulnerable to classes of slight perturbations that are not captured by ℓp bounds, adversaries realistically will often not have full model access, and in the physical world it is not possible to control exactly what image is sent to the classifier. In our exploration we develop new algorithms and frameworks for exploiting vulnerabilities even in these restricted threat models. We find that models remain highly vulnerable to adversarial examples under these more realistic threat models, highlighting the need for further research toward models that are truly robust and reliable.
by Logan Engstrom.
M. Eng.
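To make the white-box, ℓp-bounded threat model described in the abstract above concrete, here is a minimal sketch of an ℓ∞-bounded projected gradient descent (PGD) attack in PyTorch; the classifier `model`, the data tensors, and the hyperparameters are illustrative assumptions, not code from the thesis.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Maximize the classification loss within an l-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient step, then project back into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```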
Zhang, Jeffrey (M. Eng., Massachusetts Institute of Technology). "Enhancing adversarial robustness of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122994.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 57-58).
Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization optimization function that has been shown to improve upon the robust optimization framework developed by Madry et al. (2018) [14, 9]. They were able to achieve state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially tuning on CIFAR10 greatly improves the adversarial robustness of machine learning models. In this work, we propose Adversarial Regularization, another logit-based regularization optimization framework that surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
by Jeffrey Zhang.
M. Eng.
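As a rough illustration of the TRADES-style logit-based regularization discussed in the abstract above, the sketch below combines a natural cross-entropy term with a KL term between clean and adversarial logits; the classifier `model`, the precomputed adversarial batch `x_adv`, and the weight `beta` are assumptions, and the inner maximization that produces `x_adv` is omitted.

```python
import torch.nn.functional as F

def trades_style_loss(model, x, x_adv, y, beta=6.0):
    logits_nat = model(x)
    logits_adv = model(x_adv)
    natural_loss = F.cross_entropy(logits_nat, y)
    # KL(natural || adversarial): pushes the two predictive distributions together.
    robust_reg = F.kl_div(F.log_softmax(logits_adv, dim=1),
                          F.softmax(logits_nat, dim=1),
                          reduction="batchmean")
    return natural_loss + beta * robust_reg
```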
Pintor, Maura. "Towards Debugging and Improving Adversarial Robustness Evaluations." Doctoral thesis, Università degli Studi di Cagliari, 2022. http://hdl.handle.net/11584/328882.
Cinà, Antonio Emanuele. "On the Robustness of Clustering Algorithms to Adversarial Attacks." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/15430.
Allenet, Thibault. "Quantization and adversarial robustness of embedded deep neural networks." Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/5f524c49-7a4a-4724-ae77-9afe383b7c3c.
Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been broadly used in many fields such as computer vision, natural language processing, and signal processing. Nevertheless, the computational workload and the heavy memory bandwidth involved in deep neural network inference often prevent their deployment on low-power embedded devices. Moreover, the vulnerability of deep neural networks to small input perturbations calls into question their deployment for applications involving highly critical decisions. This PhD research project has a twofold objective. On the one hand, it proposes compression methods to make deep neural networks more suitable for embedded systems with low computing resources and limited memory. On the other hand, it proposes a new strategy to make deep neural networks more robust to attacks based on crafted inputs, with a view to inference at the edge. We begin by introducing common concepts for training neural networks, convolutional neural networks, and recurrent neural networks, and review the state of the art on deep neural network compression methods. After this literature review we present two main contributions on compressing deep neural networks: an investigation of lottery tickets in RNNs and Disentangled Loss Quantization Aware Training (DL-QAT) for CNNs. The investigation of lottery tickets in RNNs analyzes the convergence of RNNs and studies the impact of pruning on image classification and language modelling. We then present a pre-processing method based on data sub-sampling that enables faster convergence of LSTMs while preserving application performance. With the Disentangled Loss Quantization Aware Training (DL-QAT) method, we further improve an advanced quantization method with quantization-friendly loss functions to reach low-bit settings, such as binary parameters, where application performance is most impacted. Experiments on ImageNet-1k with DL-QAT show an improvement of nearly 1% in the top-1 accuracy of ResNet-18 with binary weights and 2-bit activations, and also show the best memory-footprint-versus-accuracy profile when compared with other state-of-the-art methods. This work then studies neural network robustness to adversarial attacks. After introducing the state of the art on adversarial attacks and defense mechanisms, we propose the Ensemble Hash Defense (EHD) mechanism. EHD enables better resilience to adversarial attacks based on gradient approximation while preserving application performance and requiring only a memory overhead at inference time. In the best configuration, our system achieves significant robustness gains compared to baseline models and a loss-function-driven approach. Moreover, the principle of EHD makes it complementary to other robust optimization methods, which would further enhance the robustness of the final system, as well as to compression methods. With a view to edge inference, the memory overhead introduced by EHD can be reduced with quantization or weight sharing. The contributions of this thesis concern optimization methods and a defense system addressing an important challenge: how to make deep neural networks more robust to adversarial attacks and easier to deploy on resource-limited platforms. This work further reduces the gap between state-of-the-art deep neural networks and their execution on edge devices.
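For background on the quantization-aware training that DL-QAT builds on, below is a generic sketch of fake quantization with a straight-through estimator in PyTorch; the bit width, the scaling scheme, and the names are illustrative assumptions and do not reproduce DL-QAT itself.

```python
import torch

class FakeQuantize(torch.autograd.Function):
    """Uniform symmetric fake quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w, num_bits=2):
        qmax = 2 ** (num_bits - 1) - 1           # e.g. 1 for 2-bit values
        scale = w.abs().max().clamp(min=1e-8) / qmax
        # Round to the quantization grid, then map back to the float range.
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients bypass the rounding step.
        return grad_output, None
```

In a quantization-aware training loop, layer weights would pass through `FakeQuantize.apply` on every forward pass, so the network learns parameters that remain accurate after low-bit quantization.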
Ebrahimi, Javid. "Robustness of Neural Networks for Discrete Input: An Adversarial Perspective." Thesis, University of Oregon, 2019. http://hdl.handle.net/1794/24535.
Carbone, Ginevra. "Robustness and Interpretability of Neural Networks’ Predictions under Adversarial Attacks." Doctoral thesis, Università degli Studi di Trieste, 2023. https://hdl.handle.net/11368/3042163.
Deep Neural Networks (DNNs) are powerful predictive models, exceeding human capabilities in a variety of tasks. They learn complex and flexible decision systems from the available data and achieve exceptional performances in multiple machine learning fields, spanning from applications in artificial intelligence, such as image, speech and text recognition, to the more traditional sciences, including medicine, physics and biology. Despite the outstanding achievements, high performance and high predictive accuracy are not sufficient for real-world applications, especially in safety-critical settings, where the usage of DNNs is severely limited by their black-box nature. There is an increasing need to understand how predictions are performed, to provide uncertainty estimates, to guarantee robustness to malicious attacks and to prevent unwanted behaviours. State-of-the-art DNNs are vulnerable to small perturbations in the input data, known as adversarial attacks: maliciously crafted manipulations of the inputs that are perceptually indistinguishable from the original samples but are capable of fooling the model into incorrect predictions. In this work, we prove that such brittleness is related to the geometry of the data manifold and is therefore likely to be an intrinsic feature of DNNs’ predictions. This negative condition suggests a possible direction to overcome such limitation: we study the geometry of adversarial attacks in the large-data, overparameterized limit for Bayesian Neural Networks and prove that, in this limit, they are immune to gradient-based adversarial attacks. Furthermore, we propose some training techniques to improve the adversarial robustness of deterministic architectures. In particular, we experimentally observe that ensembles of NNs trained on random projections of the original inputs into lower dimensional spaces are more resilient to the attacks. Next, we focus on the problem of interpretability of NNs’ predictions in the setting of saliency-based explanations. We analyze the stability of the explanations under adversarial attacks on the inputs and we prove that, in the large-data and overparameterized limit, Bayesian interpretations are more stable than those provided by deterministic networks. We validate this behaviour in multiple experimental settings in the finite data regime. Finally, we introduce the concept of adversarial perturbations of amino acid sequences for protein Language Models (LMs). Deep Learning models for protein structure prediction, such as AlphaFold2, leverage Transformer architectures and their attention mechanism to capture structural and functional properties of amino acid sequences. Despite the high accuracy of predictions, biologically small perturbations of the input sequences, or even single point mutations, can lead to substantially different 3D structures. On the other hand, protein language models are insensitive to mutations that induce misfolding or dysfunction (e.g. missense mutations). Specifically, predictions of the 3D coordinates do not reveal the structure-disruptive effect of these mutations. Therefore, there is an evident inconsistency between the biological importance of mutations and the resulting change in structural prediction. Inspired by this problem, we introduce the concept of adversarial perturbation of protein sequences in continuous embedding spaces of protein language models. Our method relies on attention scores to detect the most vulnerable amino acid positions in the input sequences. Adversarial mutations are biologically diverse from their references and are able to significantly alter the resulting 3D structures.
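The random-projection ensemble mentioned in the abstract above can be pictured with a short sketch; the input and projection dimensions, the Gaussian projections, and the member architecture below are illustrative assumptions rather than the exact configuration studied in the thesis.

```python
import torch
import torch.nn as nn

class RandomProjectionEnsemble(nn.Module):
    def __init__(self, in_dim=784, proj_dim=128, n_members=5, n_classes=10):
        super().__init__()
        for i in range(n_members):
            # Fixed (untrained) Gaussian projection, one per ensemble member.
            self.register_buffer(f"proj{i}",
                                 torch.randn(in_dim, proj_dim) / proj_dim ** 0.5)
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(proj_dim, 256), nn.ReLU(),
                          nn.Linear(256, n_classes))
            for _ in range(n_members))

    def forward(self, x):  # x has shape (batch, in_dim)
        logits = [m(x @ getattr(self, f"proj{i}"))
                  for i, m in enumerate(self.members)]
        return torch.stack(logits).mean(dim=0)  # average the member logits
```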
Itani, Aashish. "Comparison of Adversarial Robustness of ANN and SNN towards Blackbox Attacks." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2864.
Yang, Shuo. "Adversarial Data Generation for Robust Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/27291.
Uličný, Matej. "Methods for Increasing Robustness of Deep Convolutional Neural Networks." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-29734.
Повний текст джерелаКниги з теми "Adversarial robustness"
Adversarial Robustness for Machine Learning. Elsevier, 2023. http://dx.doi.org/10.1016/c2020-0-01078-9.
Adversarial Robustness for Machine Learning Models. Elsevier Science & Technology Books, 2022.
Adversarial Robustness for Machine Learning Models. Elsevier Science & Technology, 2022.
Machine Learning Algorithms: Adversarial Robustness in Signal Processing. Springer International Publishing AG, 2022.
Book chapters on the topic "Adversarial robustness"
Göpfert, Christina, Jan Philip Göpfert, and Barbara Hammer. "Adversarial Robustness Curves." In Machine Learning and Knowledge Discovery in Databases, 172–79. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_15.
Mao, Chengzhi, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. "Multitask Learning Strengthens Adversarial Robustness." In Computer Vision – ECCV 2020, 158–74. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58536-5_10.
Ríos Insua, David, Fabrizio Ruggeri, Cesar Alfaro, and Javier Gomez. "Robustness for Adversarial Risk Analysis." In Robustness Analysis in Decision Aiding, Optimization, and Analytics, 39–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33121-8_3.
Günnemann, Stephan. "Graph Neural Networks: Adversarial Robustness." In Graph Neural Networks: Foundations, Frontiers, and Applications, 149–76. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2_8.
Cao, Houze, and Meng Xue. "Adversarial Training for Better Robustness." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 75–84. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35982-8_6.
Reyes-Amezcua, Ivan, Gilberto Ochoa-Ruiz, and Andres Mendez-Vazquez. "Adversarial Robustness on Artificial Intelligence." In What AI Can Do, 419–31. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/b23345-24.
Komiyama, Ryota, and Motonobu Hattori. "Adversarial Minimax Training for Robustness Against Adversarial Examples." In Neural Information Processing, 690–99. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04179-3_61.
Tang, Keke, Tianrui Lou, Xu He, Yawen Shi, Peican Zhu, and Zhaoquan Gu. "Enhancing Adversarial Robustness via Anomaly-aware Adversarial Training." In Knowledge Science, Engineering and Management, 328–42. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40283-8_28.
Zhang, Chaoning, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, and In So Kweon. "Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness." In Lecture Notes in Computer Science, 725–42. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20056-4_42.
Vaishnavi, Pratik, Tianji Cong, Kevin Eykholt, Atul Prakash, and Amir Rahmati. "Can Attention Masks Improve Adversarial Robustness?" In Communications in Computer and Information Science, 14–22. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62144-5_2.
Повний текст джерелаТези доповідей конференцій з теми "Adversarial robustness"
Bai, Tao, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. "Recent Advances in Adversarial Training for Adversarial Robustness." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/591.
Hsiung, Lei, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho. "CARBEN: Composite Adversarial Robustness Benchmark." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/851.
Guo, Xiaohui, Richong Zhang, Yaowei Zheng, and Yongyi Mao. "Robust Regularization with Adversarial Labelling of Perturbed Samples." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/343.
Byun, Junyoung, Hyojun Go, Seungju Cho, and Changick Kim. "Exploiting Doubly Adversarial Examples for Improving Adversarial Robustness." In 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022. http://dx.doi.org/10.1109/icip46576.2022.9897374.
Cheng, Minhao, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, and Cho-Jui Hsieh. "CAT: Customized Adversarial Training for Improved Robustness." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/95.
Megyeri, Istvan, Istvan Hegedus, and Mark Jelasity. "Adversarial Robustness of Model Sets." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9206656.
Stutz, David, Matthias Hein, and Bernt Schiele. "Disentangling Adversarial Robustness and Generalization." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00714.
Rozsa, Andras, Manuel Gunther, and Terrance Boult. "Adversarial Robustness: Softmax versus Openmax." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.156.
Hosseini, Hossein, Sreeram Kannan, and Radha Poovendran. "Dropping Pixels for Adversarial Robustness." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2019. http://dx.doi.org/10.1109/cvprw.2019.00017.
Ben-Eliezer, Omri, and Eylon Yogev. "The Adversarial Robustness of Sampling." In SIGMOD/PODS '20: International Conference on Management of Data. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375395.3387643.
Повний текст джерелаЗвіти організацій з теми "Adversarial robustness"
Rudner, Tim, and Helen Toner. Key Concepts in AI Safety: Robustness and Adversarial Examples. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20190041.