A ready-made bibliography on the topic "Apprentissage automatique sur données confidentielles"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Apprentissage automatique sur données confidentielles".
The "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Apprentissage automatique sur données confidentielles"
Önen, Melek, Francesco Cremonesi, and Marco Lorenzi. "Apprentissage automatique fédéré pour l’IA collaborative dans le secteur de la santé". Revue internationale de droit économique XXXVI, no. 3 (April 21, 2023): 95–113. http://dx.doi.org/10.3917/ride.363.0095.
Harinaivo, A., H. Hauduc, and I. Takacs. "Anticiper l’impact de la météo sur l’influent des stations d’épuration grâce à l’intelligence artificielle". Techniques Sciences Méthodes 3 (March 20, 2023): 33–42. http://dx.doi.org/10.36904/202303033.
Joan Casademont, Anna, Nancy Gagné, and Èric Viladrich Castellanas. "Allers-retours entre recherche et pratique : Analyse de besoins et capsules de microapprentissage en apprentissage d’une langue tierce ou additionnelle". Médiations et médiatisations, no. 12 (November 29, 2022): 8–33. http://dx.doi.org/10.52358/mm.vi12.288.
Ithurralde, Guillaume, and Franck Maurel. "Inspection Ultrasonore Robotisée de Pièces Composites". e-journal of nondestructive testing 28, no. 9 (September 2023). http://dx.doi.org/10.58286/28516.
Doctoral dissertations on the topic "Apprentissage automatique sur données confidentielles"
Saadeh, Angelo. "Applications of secure multi-party computation in Machine Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT022.
Privacy preservation in machine learning and data analysis is becoming increasingly important as the amount of sensitive personal information collected and used by organizations continues to grow. This growth poses the risk of exposing sensitive personal information to malicious third parties, which can lead to identity theft, financial fraud, or other types of cybercrime. Laws restricting the use of private data are important to protect individuals from having their information used and shared. However, in doing so, data protection laws limit the applications of machine learning models, and some of these applications could be life-saving, as in the medical field. Secure multi-party computation (MPC) allows multiple parties to jointly compute a function over their inputs without having to reveal or exchange the data itself. This tool can be used for training collaborative machine learning models when there are privacy concerns about exchanging sensitive datasets between different entities. In this thesis, we (I) use existing and develop new secure multi-party computation algorithms, (II) introduce cryptography-friendly approximations of common machine learning functions, and (III) complement secure multi-party computation with other privacy tools, with the goal of implementing privacy-preserving machine learning and data analysis algorithms. Our experimental results show that executing the algorithms under secure multi-party computation satisfies both security and correctness: no party has access to another's information, yet the parties are still able to collaboratively train machine learning models with high accuracy, and to collaboratively evaluate data analysis algorithms with results comparable to those obtained on non-encrypted datasets. Overall, this thesis provides a comprehensive view of secure multi-party computation for machine learning, demonstrating its potential to revolutionize the field, and contributes to the deployment and acceptability of secure multi-party computation in machine learning and data analysis.
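To make the core idea concrete, here is a minimal additive secret-sharing sketch in Python. It illustrates only the general principle behind MPC, jointly computing a sum without any party seeing another's input, not the protocols developed in the thesis; the modulus and the `share`/`reconstruct` helpers are illustrative choices.

```python
import secrets

Q = 2**61 - 1  # a large prime modulus (hypothetical choice)

def share(x, n_parties=3):
    """Split secret x into n additive shares that sum to x mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine shares; any strict subset of them reveals nothing about x."""
    return sum(shares) % Q

# Two inputs held by different owners; no party ever sees both.
a_shares = share(42)
b_shares = share(100)

# Each party adds its two shares locally; only the final sum is revealed.
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```

Real MPC protocols build richer operations such as multiplication and comparison on top of this kind of sharing, which is where the cryptography-friendly approximations mentioned in the abstract come into play.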
Girard, Régis. "Classification conceptuelle sur des données arborescentes et imprécises". La Réunion, 1997. http://elgebar.univ-reunion.fr/login?url=http://thesesenligne.univ.run/97_08_Girard.pdf.
Allesiardo, Robin. "Bandits Manchots sur Flux de Données Non Stationnaires". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS334/document.
The multi-armed bandit is a framework for studying the trade-off between exploration and exploitation under partial feedback. At each turn t ∈ [1, T] of the game, a player chooses an arm k_t among a set of K arms and receives a reward y_{k_t} drawn from a reward distribution D(µ_{k_t}) with mean µ_{k_t} and support [0,1]. This is a challenging problem because the player only observes the reward of the played arm and does not know what the reward would have been had she played another arm. Before each play, she faces the dilemma between exploration and exploitation: exploring increases the confidence of the reward estimators, while exploiting increases the cumulative reward by playing the empirically best arm (under the assumption that the empirically best arm is indeed the actual best arm). In the first part of the thesis, we tackle the multi-armed bandit problem when reward distributions are non-stationary. We first study the case where, even though the reward distributions change during the game, the best arm stays the same; we then study the case where the best arm changes during the game. The second part of the thesis tackles the contextual bandit problem, where the means of the reward distributions depend on the environment's current state. We study the use of neural networks and random forests for contextual bandits, and then propose a meta-bandit-based approach for selecting online the best-performing expert while it is still learning.
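For reference, here is a sketch of the classical UCB1 strategy for the stationary case, which the non-stationary algorithms studied in the thesis build upon; it is a standard baseline, not one of the thesis's algorithms, and the `pull` callback and Bernoulli arm means are hypothetical.

```python
import math
import random

def ucb1(pull, K, T):
    """Standard UCB1 baseline for the stationary K-armed bandit.

    pull(k) must return a reward in [0, 1] for arm k.
    """
    counts = [0] * K      # times each arm was played
    means = [0.0] * K     # empirical mean reward per arm
    total = 0.0
    for t in range(1, T + 1):
        if t <= K:        # play every arm once to initialise the estimates
            k = t - 1
        else:             # optimism in the face of uncertainty
            k = max(range(K),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]  # incremental mean update
        total += r
    return total

# Toy Bernoulli bandit with hypothetical arm means.
mu = [0.2, 0.5, 0.8]
print(ucb1(lambda k: float(random.random() < mu[k]), K=3, T=10_000))
```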
Bascol, Kevin. "Adaptation de domaine multisource sur données déséquilibrées : application à l'amélioration de la sécurité des télésièges". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES062.
Bluecime has designed a camera-based system to monitor the boarding stations of chairlifts in ski resorts, with the aim of increasing the safety of all passengers. This already successful system does not use any machine learning component and requires an expensive configuration step. Machine learning is a subfield of artificial intelligence concerned with studying and designing algorithms that can learn and acquire knowledge from examples for a given task. Such a task could be classifying situations on chairlifts as safe or unsafe from images already labeled with these two categories, called the training examples; the machine learning algorithm learns a model able to predict one of these two categories on unseen cases. Since 2012, deep learning models have been shown to be the machine learning models best suited to image classification when large amounts of training data are available. In this context, this PhD thesis, funded by Bluecime, aims at both reducing the cost and improving the effectiveness of Bluecime's current system using deep learning.
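As a rough sketch of the kind of deep learning component described above (and not Bluecime's actual system), the following snippet fine-tunes a pretrained torchvision ResNet-18 for a hypothetical two-class safe/unsafe image classification task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical two-class setup: 0 = safe boarding, 1 = unsafe boarding.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # train head only

def train_step(images, labels):
    """One supervised step on a batch of labeled chairlift images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```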
Vandromme, Maxence. "Optimisation combinatoire et extraction de connaissances sur données hétérogènes et temporelles : application à l’identification de parcours patients". Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10044.
Hospital data exhibit numerous specificities that make traditional data mining tools hard to apply. In this thesis, we focus on the heterogeneity of hospital data and on their temporal aspect. This work is done within the framework of the ANR ClinMine research project and a CIFRE partnership with the Alicante company. We propose two new knowledge discovery methods suited to hospital data, each able to perform a variety of tasks: classification, prediction, discovering patient profiles, etc. In the first part, we introduce MOSC (Multi-Objective Sequence Classification), an algorithm for supervised classification on heterogeneous, numeric, and temporal data. In addition to binary and symbolic terms, this method uses numeric terms and sequences of temporal events to form sets of classification rules; MOSC is the first classification algorithm able to handle these types of data simultaneously. In the second part, we introduce HBC (Heterogeneous BiClustering), a biclustering algorithm for heterogeneous data, a problem that had not been studied before. This algorithm is extended to support temporal data of various types: temporal events and unevenly sampled time series. HBC is used in a case study on a set of hospital data whose goal is to identify groups of patients sharing a similar profile. The results make sense from a medical viewpoint and indicate that relevant, and sometimes new, knowledge is extracted from the data; they also lead to further, more precise case studies. The integration of HBC into a software product is also under way, with the implementation of a parallel version and a visualization tool for biclustering results.
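As a toy illustration of a classification rule mixing symbolic, numeric, and sequential terms, of the kind MOSC assembles into rule sets, here is a hypothetical Python sketch; the patient fields, threshold, and event codes are invented, and this is not the MOSC algorithm itself.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    ward: str      # symbolic term
    age: float     # numeric term
    events: list   # ordered event codes, e.g. ["admit", "surgery", "icu"]

def is_subsequence(pattern, seq):
    """True if pattern occurs in seq in order (not necessarily contiguously)."""
    it = iter(seq)
    return all(any(x == y for y in it) for x in pattern)

def rule_matches(p: Patient) -> bool:
    """One mixed-type rule: symbolic test AND numeric threshold AND event sequence."""
    return (p.ward == "cardiology"
            and p.age >= 65
            and is_subsequence(["surgery", "icu"], p.events))

print(rule_matches(Patient("cardiology", 71, ["admit", "surgery", "icu"])))  # True
```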
Jaillet, Simon. "Catégorisation automatique de documents textuels : D'une représentation basée sur les concepts aux motifs séquentiels". Montpellier 2, 2005. http://www.theses.fr/2005MON20030.
Allart, Thibault. "Apprentissage statistique sur données longitudinales de grande taille et applications au design des jeux vidéo". Electronic Thesis or Diss., Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1136.
This thesis focuses on longitudinal time-to-event data that may be large along three axes: the number of individuals, the observation frequency, and the number of covariates. We introduce a penalized estimator based on the complete Cox likelihood with data-driven weights, together with proximal optimization algorithms to fit the model coefficients efficiently. We have implemented those methods in C++ and in the R package coxtv, allowing anyone to analyze datasets bigger than RAM by using data streaming and online learning algorithms such as proximal stochastic gradient descent with adaptive learning rates. We illustrate the performance on simulations and benchmark against existing models. Finally, we investigate the issue of video game design. We show that applying our model to the large datasets available in the video game industry brings to light ways of improving the design of the studied games. We first look at low-level covariates, such as equipment choices over time, and show that the model quantifies the effect of each game element, giving designers concrete ways to improve the game design. We then show that the model can be used to extract more general design recommendations, such as the influence of difficulty on player motivation.
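For intuition about the optimization machinery, here is a generic proximal gradient sketch for an L1-penalized objective; this is a minimal illustration of the principle, not the coxtv implementation, and the toy least-squares loss merely stands in for a penalized Cox likelihood.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrink each coordinate toward 0 by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(grad, beta0, lam, step, n_iters=1000):
    """Proximal gradient descent for  min_beta  f(beta) + lam * ||beta||_1.

    grad(beta) must return the gradient of the smooth part f
    (e.g. a negative penalized Cox log-likelihood).
    """
    beta = beta0.copy()
    for _ in range(n_iters):
        beta = soft_threshold(beta - step * grad(beta), step * lam)
    return beta

# Toy usage on a least-squares loss with hypothetical data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.normal(size=200)
grad = lambda b: X.T @ (X @ b - y) / len(y)
beta_hat = proximal_gradient(grad, np.zeros(10), lam=0.1, step=0.01)
```

The stochastic variant mentioned in the abstract replaces the full gradient with gradients computed on streamed mini-batches, which is what makes larger-than-RAM datasets tractable.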
Dragoni, Laurent. "Tri de potentiels d'action sur des données neurophysiologiques massives : stratégie d’ensemble actif par fenêtre glissante pour l’estimation de modèles convolutionnels en grande dimension". Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4016.
In the nervous system, cells called neurons are specialized in the communication of information. Through the generation and propagation of electrical currents named action potentials, neurons are able to transmit information in the body. Given the importance of neurons, a wide range of methods have been proposed for studying those cells in order to better understand the functioning of the nervous system. In this thesis, we focus on the analysis of signals recorded by electrodes, more specifically tetrodes and multi-electrode arrays (MEA). Since those devices usually record the activity of a set of neurons, the recorded signals are often a mixture of the activity of several neurons. In order to gain more knowledge from this type of data, a crucial pre-processing step called spike sorting is required to separate the activity of each neuron. Nowadays, the general spike sorting procedure consists of three steps: thresholding, feature extraction, and clustering. Unfortunately this methodology requires a large number of manual operations, and it becomes even more difficult when treating massive volumes of data, especially MEA recordings, which also tend to feature more neuronal synchronization. In this thesis, we present a spike sorting strategy that allows the analysis of large volumes of data and requires few manual operations. This strategy makes use of a convolutional model which decomposes the recorded signals into temporal convolutions of two factors: neuron activations and action potential shapes. The estimation of these two factors is usually handled by alternating optimization. Since it is the most difficult task, we focus here only on the estimation of the activations, assuming that the action potential shapes are known. Estimating the activations is traditionally referred to as convolutional sparse coding. The well-known Lasso estimator has attractive mathematical properties for solving such problems, but its computation remains challenging in high dimension. We propose an algorithm based on the working-set strategy to compute the Lasso efficiently. This algorithm takes advantage of the particular structure of the problem, derived from biological properties, by using temporal sliding windows, which allows it to scale to high dimension. Furthermore, we adapt theoretical results about the Lasso to show that, under reasonable assumptions, our estimator recovers the support of the true activation vector with high probability. We also propose models for both the spatial distribution and the activation times of the neurons, which allow us to quantify the size of the problem and deduce the theoretical complexity of our algorithm; in particular, we obtain quasi-linear complexity with respect to the size of the recorded signal. Finally, we present numerical results illustrating both the theoretical results and the performance of our approach.
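The working-set strategy can be illustrated on a generic Lasso problem: start from an empty active set, add the coordinates that violate the optimality conditions, and re-solve the Lasso restricted to that set. The sketch below, assuming scikit-learn, shows the strategy in its plain form, without the temporal sliding windows that give the thesis's algorithm its scalability.

```python
import numpy as np
from sklearn.linear_model import Lasso

def working_set_lasso(X, y, lam, tol=1e-4, max_rounds=20):
    """Working-set loop for  min_w  0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    active = np.zeros(p, dtype=bool)
    w = np.zeros(p)
    for _ in range(max_rounds):
        # KKT check: every inactive feature j must satisfy |X_j^T r| / n <= lam.
        r = y - X @ w
        scores = np.abs(X.T @ r) / n
        violations = (scores > lam + tol) & ~active
        if not violations.any():
            break                      # current solution is optimal
        active |= violations           # grow the working set
        sub = Lasso(alpha=lam, fit_intercept=False).fit(X[:, active], y)
        w = np.zeros(p)
        w[active] = sub.coef_          # re-solve restricted to the working set
    return w
```

Because spike activations are sparse and temporally localized, most coordinates never enter the working set, which is the intuition behind the quasi-linear complexity claimed in the abstract.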
Roudiere, Gilles. "Détection d'attaques sur les équipements d'accès à Internet". Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0017/document.
Network anomalies, and specifically distributed denial-of-service attacks, are still an important threat to Internet stakeholders. Detecting such anomalies requires dedicated tools, able not only to perform accurate detection but also to meet the constraints of industrial operation, which include, among others, the ability to run autonomously and to operate on sampled traffic. Unlike supervised or signature-based approaches, unsupervised detection does not require any kind of knowledge database on the monitored traffic. Such approaches rely on an autonomous characterization of the traffic in production, and require the intervention of the network administrator only a posteriori, when a deviation from the usual shape of the traffic is detected. The main problem with unsupervised detection lies in the fact that building such a characterization is complex and may require significant computing resources. This requirement can be a deterrent, especially when the detection should run on network devices that already carry a significant workload. As a consequence, we propose a new unsupervised detection algorithm that aims at reducing the computing power required to run the detection. It focuses on distributed denial-of-service attacks, and its processing is based upon the creation, at regular intervals, of traffic snapshots, which help the diagnosis of detected anomalies. We evaluate the performance of the detector on two datasets to check its ability to accurately detect anomalies and to operate, in real time, with limited computing resources; we also evaluate its performance on sampled traffic. The results are compared with those obtained with FastNetMon and UNADA.
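As a rough sketch of snapshot-based detection (not the detector proposed in the thesis), the following snippet summarizes each traffic window into a couple of features and flags windows whose volume deviates from a running baseline; the packet representation, features, and threshold are all hypothetical.

```python
import math
from collections import Counter

# A "packet" here is just a (src_ip, dst_ip) pair; windows are assumed non-empty.

def snapshot(packets):
    """Summarize one time window: traffic volume and source-address entropy."""
    srcs = Counter(src for src, _ in packets)
    n = sum(srcs.values())
    entropy = -sum(c / n * math.log2(c / n) for c in srcs.values())
    return n, entropy

def detect(windows, k=3.0, warmup=10):
    """Flag a window when its volume exceeds mean + k * std of past windows."""
    history = []
    for packets in windows:
        n, ent = snapshot(packets)
        if len(history) >= warmup:
            mean = sum(history) / len(history)
            std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
            if n > mean + k * (std + 1e-9):
                print(f"anomaly: volume={n}, source entropy={ent:.2f}")
        history.append(n)
```

Keeping only a few aggregate features per interval is what keeps the computing cost low, at the price of a coarser characterization than flow-level analysis.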
Eude, Thibaut. "Forage des données et formalisation des connaissances sur un accident : Le cas Deepwater Horizon". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM079/document.
Data drilling, the method and means developed in this thesis, redefines the process of data extraction, knowledge formalization, and knowledge enrichment, particularly for the elucidation of events that have been only lightly documented, if at all. The Deepwater Horizon disaster, the drilling platform operated for BP in the Gulf of Mexico that suffered a blowout on April 20, 2010, is our case study for the implementation of our proof of concept for data drilling. This accident is the result of an unprecedented discrepancy between the state of the art of drilling engineers' heuristics and that of pollution-response engineers. The loss of control of the MC 252-1 well is therefore an engineering failure, and it took the response party eighty-seven days to regain control of the wild well and halt the pollution. Deepwater Horizon is in this sense a case of engineering facing an extreme situation, as defined by Guarnieri and Travadel. First, we return to the overall concept of accident by means of an in-depth linguistic analysis presenting the semantic spaces in which the accident takes place; this makes it possible to enrich its "core meaning" and broaden the shared acceptance of its definition. Then, we argue that the literature review must be systematically supported by algorithmic assistance to process the data, given the available volume, the heterogeneity of the sources, and the requirements of quality and relevance standards. In fact, more than eight hundred scientific articles mentioning this accident have been published to date, and some twenty investigation reports, constituting our research material, have been produced. Our method demonstrates the limitations of accident models when dealing with a case like Deepwater Horizon and the urgent need for an appropriate way to formalize knowledge. As a result, the use of upper-level ontologies should be encouraged. The DOLCE ontology proved highly valuable for formalizing knowledge about this accident, and especially for elucidating very accurately a decision-making process at a critical moment of the intervention. Population, that is, the creation of instances, is at the heart of exploiting an ontology and is its main interest, but the process is still largely manual and not without mistakes. This thesis proposes a partial answer to this problem with an original NER (named-entity recognition) algorithm for the automatic population of an ontology. Finally, the study of accidents involves determining the causes and examining "socially constructed facts". This thesis presents the original design of a "semantic pipeline" built from a series of algorithms that extract the causality expressed in a document and produce a graph representing the "causal path" underlying the document. Highlighting the reasoning behind the findings of the investigation team is significant for scientific and industrial research; to do this, this work leverages developments in Machine Learning and Question Answering, and especially Natural Language Processing tools. In conclusion, this thesis is the work of a fitter, an architect: it offers a first insight into the Deepwater Horizon case and proposes data drilling, an original method and means of addressing an event, in order to uncover from the research material answers to questions that had previously escaped understanding.
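As an illustration of NER-driven ontology population, here is a minimal spaCy sketch standing in for the thesis's original algorithm; the model name and the entity-to-DOLCE mapping suggested in the comments are assumptions.

```python
import spacy

# Generic off-the-shelf NER pass; the thesis develops its own algorithm.
nlp = spacy.load("en_core_web_sm")

text = ("On April 20, 2010, the Deepwater Horizon drilling rig, operated for BP "
        "in the Gulf of Mexico, suffered a blowout.")

doc = nlp(text)
for ent in doc.ents:
    # Each recognized entity becomes a candidate instance in the ontology,
    # e.g. ORG -> an agent concept, DATE -> a temporal region (assumed mapping).
    print(ent.text, ent.label_)
```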