Bibliography on the topic "Apprentissage machine adversarial"
See the lists of current articles, books, dissertations, abstracts and other scholarly sources on the topic "Apprentissage machine adversarial".
Doctoral dissertations on the topic "Apprentissage machine adversarial"
Grari, Vincent. "Adversarial mitigation to reduce unwanted biases in machine learning". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS096.
The past few years have seen a dramatic rise in academic and societal interest in fair machine learning. As a result, significant work has been done to include fairness constraints in the training objective of machine learning algorithms. Its primary purpose is to ensure that model predictions do not depend on any sensitive attribute, such as gender or race. Although this notion of independence is incontestable in a general context, it can theoretically be defined in many different ways depending on how one sees fairness. Consequently, many recent papers tackle this challenge by using their "own" objectives and notions of fairness. These objectives can be categorized into two families: individual and group fairness. This thesis gives an overview of the methodologies applied in these different families in order to encourage good practices. We then identify and fill gaps by presenting new metrics and new fair-ML algorithms that are more appropriate for specific contexts.
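The adversarial mitigation idea described above can be sketched in a few lines: a predictor is trained while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor receives the adversary's reversed gradient. The toy numpy model below (the data, the one-feature predictor and the single-weight adversary are all illustrative assumptions, not the thesis's algorithm) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature x correlates with the label y AND with a
# sensitive attribute s (illustrative, not from the thesis).
n = 2000
s = rng.integers(0, 2, n).astype(float)
x = rng.normal(0.0, 1.0, n) + 1.5 * s
y = (x + rng.normal(0.0, 1.0, n) > 0.75).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.0, 0.0   # predictor: p = sigmoid(w*x + b)
a, c = 0.0, 0.0   # adversary: tries to recover s from p
lam, lr = 1.0, 0.1

for _ in range(500):
    p = sigmoid(w * x + b)
    q = sigmoid(a * p + c)
    # Adversary step: descend on its own cross-entropy for s.
    a += lr * np.mean((s - q) * p)
    c += lr * np.mean(s - q)
    # Predictor step: descend on the task loss while ASCENDING on the
    # adversary's loss (gradient reversal), weighted by lam.
    dz = (p - y) - lam * (q - s) * a * p * (1.0 - p)
    w -= lr * np.mean(dz * x)
    b -= lr * np.mean(dz)

# How well can the adversary still read s off the prediction?
adv_acc = np.mean((sigmoid(a * sigmoid(w * x + b) + c) > 0.5) == (s == 1.0))
print(round(float(adv_acc), 2))
```

The reversal term pushes the predictor toward scores that carry as little information about s as the task loss allows; lam trades task accuracy against this independence.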
Goibert, Morgane. "Statistical Understanding of Adversarial Robustness". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT052.
This thesis focuses on the question of robustness in machine learning, specifically examining two types of attacks: poisoning attacks at training time and evasion attacks at inference time. The study of poisoning attacks dates back to the sixties and has been unified under the theory of robust statistics. However, prior research focused primarily on classical data types, mainly real-valued data, limiting the applicability of poisoning-attack studies. In this thesis, robust statistics are extended to ranking data, which lack a vector-space structure and have a combinatorial nature. The work presented here initiates the study of robustness in the context of ranking data and provides a framework for future extensions. Contributions include a practical algorithm to measure the robustness of statistics for the task of consensus ranking, and two robust statistics to solve this task. In contrast, since 2013, evasion attacks have gained significant attention in the deep learning field, particularly for image classification. Despite the proliferation of research on adversarial examples, the theoretical analysis of the problem remains challenging and lacks unification. To address this, the thesis makes contributions to understanding and mitigating evasion attacks: the unification of adversarial examples' characteristics through the study of under-optimized edges and information flow within neural networks, and the establishment of theoretical bounds characterizing the success rate of modern low-dimensional attacks for a wide range of models.
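The evasion attacks discussed here are typified by the fast gradient sign method (FGSM), which perturbs an input by one signed-gradient step under an L-infinity budget. A minimal sketch against a toy logistic classifier (the weights, input and budget are illustrative, not from the thesis):

```python
import numpy as np

# Toy logistic classifier p(y=1|x) = sigmoid(w.x + b), assumed trained.
w = np.array([2.0, -1.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: one step in the direction that
    increases the loss, within an L-infinity budget of eps."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w      # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.0])           # score w.x + b = 2.5, class 1
x_adv = fgsm(x, y_true=1.0, eps=1.0)
print(sigmoid(w @ x + b) > 0.5, sigmoid(w @ x_adv + b) > 0.5)  # -> True False
```

The perturbation (-1, +1) flips the predicted class while moving no coordinate by more than eps; deep-network attacks apply the same recipe with backpropagated gradients.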
Chali, Samy. "Robustness Analysis of Classifiers Against Out-of-Distribution and Adversarial Inputs". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST012.
Many issues addressed by AI involve the classification of complex input data that need to be separated into different classes. The functions that transform the complex input values into a simpler, linearly separable space are obtained either through learning (deep convolutional networks) or by projecting into a high-dimensional space to obtain a "rich" non-linear representation of the inputs, followed by a linear mapping between the high-dimensional space and the output units, as in Support Vector Machines (Vapnik's work, 1966-1995). The thesis aims to create an optimized, generic architecture capable of preprocessing data to prepare them for classification with a minimum of operations. Additionally, this architecture aims to enhance the model's autonomy by enabling continuous learning, robustness to corrupted data, and the identification of data that the model cannot process.
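The "project into a high-dimensional space, then fit a linear map" recipe can be illustrated with random Fourier features, one common choice of fixed non-linear projection (the toy data, feature count and bandwidth below are illustrative assumptions, not the thesis architecture):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem that is NOT linearly separable in input space:
# label = 1 iff the point lies inside the unit circle.
X = rng.uniform(-2, 2, size=(400, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)

# Fixed random projection into a high-dimensional non-linear feature
# space (random Fourier features approximating an RBF kernel).
D = 300
W = rng.normal(0.0, 1.5, size=(2, D))
phase = rng.uniform(0.0, 2.0 * np.pi, D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W + phase)

# Linear readout fitted by ridge regression in the feature space.
beta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(D), Phi.T @ (2 * y - 1))
acc = np.mean(((Phi @ beta) > 0) == (y == 1))
print(round(float(acc), 2))
```

The classifier is linear in the 300-dimensional feature space, yet separates a circular decision boundary in the original 2-D space; only the final linear map is learned.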
Allenet, Thibault. "Quantization and adversarial robustness of embedded deep neural networks". Electronic Thesis or Diss., Université de Rennes (2023-....), 2023. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/5f524c49-7a4a-4724-ae77-9afe383b7c3c.
Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been broadly used in many fields such as computer vision, natural language processing and signal processing. Nevertheless, the computational workload and heavy memory bandwidth involved in deep neural network inference often prevent their deployment on low-power embedded devices. Moreover, the vulnerability of deep neural networks to small input perturbations calls into question their deployment for applications involving highly critical decisions. The objective of this PhD project is twofold. On the one hand, it proposes compression methods to make deep neural networks more suitable for embedded systems with low computing resources and memory. On the other hand, it proposes a new strategy to make deep neural networks more robust to attacks based on crafted inputs, with the perspective of inference on the edge. We begin by introducing common concepts for training neural networks, convolutional neural networks and recurrent neural networks, and review the state of the art on deep neural network compression methods. After this literature review, we present two main contributions on compressing deep neural networks: an investigation of lottery tickets in RNNs and Disentangled Loss Quantization Aware Training (DL-QAT) for CNNs. The investigation of lottery tickets in RNNs analyzes the convergence of RNNs and studies the impact of pruning on image classification and language modelling. We then present a pre-processing method based on data sub-sampling that enables faster convergence of LSTMs while preserving application performance. With the Disentangled Loss Quantization Aware Training (DL-QAT) method, we further improve an advanced quantization method with quantization-friendly loss functions to reach low-bit settings, such as binary parameters, where application performance is most impacted.
Experiments on ImageNet-1k with DL-QAT show improvements of nearly 1% in the top-1 accuracy of ResNet-18 with binary weights and 2-bit activations, and also show the best memory-footprint-versus-accuracy profile when compared with other state-of-the-art methods. This work then studies the robustness of neural networks to adversarial attacks. After introducing the state of the art on adversarial attacks and defense mechanisms, we propose the Ensemble Hash Defense (EHD) mechanism. EHD enables better resilience to adversarial attacks based on gradient approximation while preserving application performance and only requiring a memory overhead at inference time. In the best configuration, our system achieves significant robustness gains compared to baseline models and a loss-function-driven approach. Moreover, the principle of EHD makes it complementary to other robust optimization methods, which would further enhance the robustness of the final system, and to compression methods. With the perspective of edge inference, the memory overhead introduced by EHD can be reduced with quantization or weight sharing. The contributions of this thesis concern optimization methods and a defense system addressing an important challenge: how to make deep neural networks more robust to adversarial attacks and easier to deploy on resource-limited platforms. This work further reduces the gap between state-of-the-art deep neural networks and their execution on edge devices.
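Quantization-aware training with binary weights is usually implemented by keeping full-precision "latent" weights, binarizing them on the forward pass, and applying the gradient straight through the sign function. A toy numpy sketch of this straight-through-estimator pattern (a linear-regression stand-in, not the DL-QAT method itself):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression y = X @ w_true, to be learned with BINARY weights.
w_true = np.array([1.0, -1.0, 1.0])
X = rng.normal(size=(256, 3))
y = X @ w_true

w_latent = rng.normal(0.0, 0.1, 3)     # full-precision shadow weights
lr = 0.05
for _ in range(200):
    w_bin = np.sign(w_latent)          # forward pass uses binary weights
    err = X @ w_bin - y
    grad = X.T @ err / len(X)
    # Straight-through estimator: the gradient w.r.t. w_bin is applied
    # directly to w_latent, as if sign() were the identity.
    w_latent -= lr * grad

print(np.sign(w_latent).tolist())
```

The latent weights accumulate small gradient steps until a sign flip is warranted, which is what makes training through the non-differentiable sign() possible.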
Darwaish, Asim. "Adversary-aware machine learning models for malware detection systems". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7283.
Smartphones have proliferated at an exhilarating pace and become indispensable to human life. This exponential growth is also triggering widespread malware and hampering the prosperous mobile ecosystem. Among all handheld devices, Android is the hive most targeted by malware authors due to its popularity, open-source availability, and intrinsic weakness in restricting access to internal resources. Machine learning-based approaches have been successfully deployed to combat evolving and polymorphic malware campaigns. As a classifier becomes popular and widely adopted, the incentive to evade it also increases. Researchers and adversaries are in a never-ending race to strengthen and evade Android malware detection systems. To combat malware campaigns and counter adversarial attacks, we propose a robust image-based Android malware detection system that has proven its robustness against various adversarial attacks. The proposed platform first constructs the detection system by intelligently transforming the Android Application Package (APK) file into a lightweight RGB image and training a convolutional neural network (CNN) for malware detection and family classification. Our novel transformation method generates evident patterns for benign and malware APKs in color images, making the classification easier. The detection system yielded an excellent accuracy of 99.37% with a False Negative Rate (FNR) of 0.8% and a False Positive Rate (FPR) of 0.39% for legacy and new malware variants. In the second phase, we evaluate the robustness of our image-based Android malware detection system. To validate its hardness and effectiveness against evasion, we crafted three novel adversarial attack models. Our thorough evaluation reveals that state-of-the-art learning-based malware detection systems are easy to evade, with more than a 50% evasion rate.
However, our proposed system builds a secure mechanism against adversarial perturbations using the intrinsic continuous space obtained after the intelligent transformation of the Dex and Manifest files, which makes the detection system difficult to bypass.
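The byte-to-image step described above can be sketched in its simplest form: raw bytes are grouped into 3-byte pixels, zero-padded, and reshaped into an RGB array (the thesis's transformation is more elaborate; this is only an illustrative reduction):

```python
import numpy as np

def bytes_to_rgb(data: bytes, width: int = 64) -> np.ndarray:
    """Map a raw byte stream to an RGB image: every 3 bytes become one
    pixel, rows are `width` pixels wide, and the tail is zero-padded."""
    buf = np.frombuffer(data, dtype=np.uint8)
    pixels = -(-len(buf) // 3)            # ceil division
    rows = -(-pixels // width)
    padded = np.zeros(rows * width * 3, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(rows, width, 3)

img = bytes_to_rgb(b"\x00\x01\x02" * 100, width=10)
print(img.shape)   # -> (10, 10, 3)
```

Feeding an APK's Dex or Manifest bytes through such a mapping yields a fixed-layout image that a standard CNN can classify.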
Chaitou, Hassan. "Optimization of security risk for learning on heterogeneous quality data". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT030.
Intrusion Detection Systems (IDSs) serve as critical components of network security infrastructure. To cope with the scalability issues of IDSs based on handcrafted detection rules, machine learning is used to design IDSs trained on datasets. Yet these are increasingly challenged by meta-attacks, called adversarial evasion attacks, that alter existing attacks to improve their evasion capabilities; such approaches, for instance, employ Generative Adversarial Networks (GANs) to automate the alteration process. Several strategies have been proposed to enhance the robustness of IDSs against such attacks, with significant success for strategies based on adversarial training. However, IDS evasion remains relevant, as many contributions also show that adversarial evasion attacks are still efficient despite the use of adversarial training. In this thesis, we investigate this situation and present contributions that improve the understanding of one of its root causes, along with guidelines to mitigate it. The first step is to better understand the possible sources of variability in IDS or evasion-attack performance. Three potential sources are considered: methodological assessment issues, the inherent race to spend more computational resources on attack or defense, and issues in training and dataset acquisition. The first contribution consists of guidelines for conducting robust IDS assessments that go beyond the simple recommendation of empirical analysis. These guidelines cover both single-experiment design and sensitivity-analysis campaigns. Applying such guidelines yields more stable results when parameters related to training resources change.
Removing artifacts due to inadequate assessment procedures leads us to investigate why some parts of the considered dataset tend to be almost unaffected by adversarial attacks. The second contribution is the formalization of adversarial neighborhoods: an alternative way to characterize adversarial samples. This formalization allows us to adapt and evaluate data-quality criteria used for non-adversarial samples, such as the absence of contradictory samples, and to apply similar criteria to adversarial sample datasets. From this concept, four threat situations have been identified, with clear qualitative impacts either on the training of a robust IDS or on the attacker's ability to find more successful evasion attacks. Finally, we propose countermeasures to the identified threats and perform an empirical quantitative assessment of both threats and countermeasures. The findings of these experiments highlight the need to identify and mitigate threats associated with a non-empty extended contradictory set. Indeed, this crucial vulnerability should be identified and addressed prior to IDS training.
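Adversarial training, the defense at the center of this discussion, augments every training step with perturbed versions of the samples. A generic numpy sketch with a logistic model and a one-step sign-gradient inner perturbation (the "traffic" data and all hyperparameters are illustrative, not the thesis's IDS pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "traffic" features and labels (0 = benign, 1 = attack).
X = rng.normal(size=(512, 4))
w_true = np.array([1.0, -2.0, 0.5, 1.5])
y = (X @ w_true > 0).astype(float)

w = np.zeros(4)
eps, lr = 0.1, 0.5
for _ in range(300):
    # Inner step: perturb each sample to increase the current loss
    # (one sign-gradient step, the usual adversarial-training inner loop).
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    # Outer step: train on the perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(X)

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
print(round(float(acc), 2))
```

Training on the worst-case neighbors of each sample is what makes the learned boundary insensitive to eps-bounded perturbations, at some cost in clean accuracy.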
Rodriguez, Colmeiro Ramiro German. "Towards Reduced Dose Positron Emission Tomography Imaging Using Sparse Sampling and Machine Learning". Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0015.
This thesis explores the reduction of the patient radiation dose in screening Positron Emission Tomography (PET) studies. It analyses three aspects of PET imaging that can reduce the patient dose: data acquisition, image reconstruction and attenuation-map generation. The first part of the thesis is dedicated to PET scanner technology. Two optimization techniques are developed for a novel low-cost and low-dose scanner, the AR-PET scanner. First, a photomultiplier selection and placement strategy is created, improving the energy resolution. The second work focuses on the localization of gamma events on solid scintillation crystals; the method is based on neural networks and a single flood acquisition, resulting in increased detector sensitivity. In the second part, PET image reconstruction on a mesh support is studied. A mesh-based reconstruction algorithm is proposed which uses a series of 2D meshes to describe the 3D radiotracer distribution. It is shown that with this reconstruction strategy the number of sample points can be reduced without losing accuracy, while enabling parallel mesh optimization. Finally, attenuation-map generation using deep neural networks is explored: a neural network is trained to learn the mapping from non-attenuation-corrected FDG PET images to a synthetic computerized tomography. With these approaches, this thesis lays the foundation for a low-cost and low-dose PET screening system, dispensing with the need for a computed-tomography image in exchange for an artificial attenuation map.
Gitzinger, Louison. "Surviving the massive proliferation of mobile malware". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S058.
Nowadays, many of us are surrounded by smart devices that seamlessly operate interactively and autonomously, together with multiple services, to make our lives more comfortable. These smart devices are part of larger ecosystems in which various companies collaborate to ease the distribution of applications between developers and users. However, malicious attackers illegitimately take advantage of these ecosystems to infect users' smart devices with malicious applications. Despite all the efforts made to defend them, the rate of devices infected with malware was still increasing in 2020. In this thesis, we explore three research axes with the aim of globally improving malware detection in the Android ecosystem. We demonstrate that the accuracy of machine learning-based detection systems can be improved by automating their evaluation and by reusing the concept of AutoML to fine-tune learning-algorithm parameters. We propose an approach to automatically create malware variants from combinations of complex evasion techniques, diversifying experimental malware datasets in order to challenge existing detection systems. Finally, we propose methods to globally increase the quality of the experimental datasets used to train and test detection systems.
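At its simplest, AutoML-style tuning of learning-algorithm parameters reduces to searching a hyperparameter space against a validation score. A toy random-search sketch (the score function and parameter ranges are invented stand-ins, not the thesis's framework):

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for "validation score of a detector trained with these
# hyperparameters" (peaks at lr = 1e-2, reg = 1e-4; unknown to the tuner).
def validation_score(lr, reg):
    return -(np.log10(lr) + 2) ** 2 - (np.log10(reg) + 4) ** 2

# Random search: sample configurations in log space, keep the best.
best, best_score = None, -np.inf
for _ in range(200):
    cfg = {"lr": 10 ** rng.uniform(-5, 0), "reg": 10 ** rng.uniform(-6, -1)}
    s = validation_score(**cfg)
    if s > best_score:
        best, best_score = cfg, s

print(int(round(float(np.log10(best["lr"])))),
      int(round(float(np.log10(best["reg"])))))
```

Real AutoML systems replace the blind sampling with smarter strategies (Bayesian optimization, successive halving), but the evaluate-and-keep-the-best loop is the same.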
Marzinotto, Gabriel. "Semantic frame based analysis using machine learning techniques : improving the cross-domain generalization of semantic parsers". Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0483.
Making semantic parsers robust to lexical and stylistic variations is a real challenge with many industrial applications. Nowadays, semantic parsing requires domain-specific training corpora to ensure acceptable performance on a given domain. Transfer learning techniques are widely studied and adopted to address this lack of robustness, and the most common strategy is the use of pre-trained word representations. However, the best parsers still show significant performance degradation under domain shift, evidencing the need for additional transfer learning strategies to achieve robustness. This work proposes a new benchmark to study the domain-dependence problem in semantic parsing. We use this benchmark to evaluate classical transfer learning techniques and to propose and evaluate new techniques based on adversarial learning. All these techniques are tested on state-of-the-art semantic parsers. We claim that adversarial learning approaches can improve the generalization capacity of models, and we test this hypothesis on different semantic representation schemes, languages and corpora, providing experimental results to support it.
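A common way to apply adversarial learning to domain robustness is a gradient reversal layer (as in domain-adversarial training): identity on the forward pass, negated gradient on the backward pass, so shared features become uninformative about the domain. Whether the thesis uses exactly this construction is not stated; the sketch below is illustrative:

```python
import numpy as np

# Gradient reversal, the core trick of adversarial domain adaptation:
# identity on the forward pass, negated (scaled) gradient on the
# backward pass, so the feature extractor UN-learns domain information.
class GradReverse:
    def __init__(self, lam):
        self.lam = lam
    def forward(self, x):
        return x
    def backward(self, grad_out):
        return -self.lam * grad_out

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0])
assert np.array_equal(grl.forward(x), x)       # unchanged forward
print(grl.backward(np.array([0.2, -0.4])))     # -> [-0.1  0.2]
```

Inserted between the shared encoder and a domain classifier, this layer makes the encoder maximize the domain classifier's loss while the rest of the network minimizes the task loss.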
Gidel, Gauthier. "Multi-player games in the era of machine learning". Thesis, 2020. http://hdl.handle.net/1866/24800.
Among all the historical board games played by humans, the game of go was considered one of the most difficult for a computer program to master [Van Den Herik et al., 2002]; until it was not [Silver et al., 2016]. This odds-breaking breakthrough [Müller, 2002, Van Den Herik et al., 2002] came from a sophisticated combination of Monte Carlo tree search and machine learning techniques to evaluate positions, shedding light on the high potential of machine learning to solve games. Adversarial training, a special case of multiobjective optimization, is an increasingly useful tool in machine learning. For example, two-player zero-sum games are important for generative modeling (GANs) [Goodfellow et al., 2014] and for mastering games like Go or Poker via self-play [Silver et al., 2017, Brown and Sandholm, 2017]. A classic result in game theory states that convex-concave games always have an equilibrium [Neumann, 1928]. Surprisingly, machine learning practitioners successfully train a single pair of neural networks whose objective is a nonconvex-nonconcave minimax problem, although for such a payoff function the existence of a Nash equilibrium is not guaranteed in general. This work is an attempt to put learning in games on a firm theoretical foundation. The first contribution explores minimax theorems for a particular class of nonconvex-nonconcave games that encompasses generative adversarial networks. The proposed result is an approximate minimax theorem for two-player zero-sum games played with neural networks, including WGAN, StarCraft II, and the Blotto game. Our findings rely on the fact that, despite being nonconcave-nonconvex with respect to the neural network parameters, the payoffs of these games are concave-convex with respect to the actual functions (or distributions) parametrized by these neural networks.
The second and third contributions study the optimization of minimax problems and, more generally, variational inequalities in the context of machine learning. While the standard gradient descent-ascent method fails to converge to the Nash equilibrium of simple convex-concave games, there exist ways to use gradients to obtain methods that do converge. We investigate several such techniques: extrapolation, averaging and negative momentum. We explore these techniques experimentally by proposing a state-of-the-art (at the time of publication) optimizer for GANs called ExtraAdam. We also prove new convergence results for extrapolation from the past, originally proposed by Popov [1980], as well as for the gradient method with negative momentum. The fourth contribution provides an empirical study of the practical landscape of GANs. In the second and third contributions, we diagnose that the gradient method breaks down when the game's vector field is highly rotational. However, such a situation may describe a worst case that does not occur in practice. We provide new visualization tools to exhibit rotations in practical GAN landscapes. In this contribution, we show empirically that the training of GANs exhibits significant rotations around Locally Stable Stationary Points (LSSP), and we provide empirical evidence that GAN training converges to a stable stationary point which is a saddle point for the generator loss, not a minimum, while still achieving excellent performance.
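The failure of gradient descent-ascent and the fix by extrapolation can both be seen on the simplest convex-concave game, min_x max_y xy, whose unique equilibrium is (0, 0):

```python
import numpy as np

# On the bilinear game min_x max_y x*y, simultaneous gradient
# descent-ascent spirals OUTWARD, while the extragradient method
# (extrapolation) spirals inward to the equilibrium at (0, 0).
def gda(x, y, lr, steps):
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

def extragradient(x, y, lr, steps):
    for _ in range(steps):
        # Extrapolation: take a look-ahead half-move...
        xh, yh = x - lr * y, y + lr * x
        # ...then update using the gradient AT the look-ahead point.
        x, y = x - lr * yh, y + lr * xh
    return x, y

print(np.hypot(*gda(1.0, 1.0, 0.1, 100)) > 1.0)            # -> True (diverges)
print(np.hypot(*extragradient(1.0, 1.0, 0.1, 100)) < 1.0)  # -> True (converges)
```

Each GDA step multiplies the distance to the equilibrium by sqrt(1 + lr^2) > 1, while the extragradient factor sqrt(1 - lr^2 + lr^4) is below 1, which is exactly the rotational failure mode the thesis diagnoses and corrects.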