Academic literature on the topic 'Méthodes d’apprentissage automatique'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Méthodes d’apprentissage automatique.'
Journal articles on the topic "Méthodes d’apprentissage automatique"
Hamida, Soufiane, Bouchaib Cherradi, Abdelhadi Raihani, and Hassan Ouajji. "Evaluation des apprentissages au sein d’un environnement de type MOOC adaptatif." ITM Web of Conferences 39 (2021): 03005. http://dx.doi.org/10.1051/itmconf/20213903005.
Reys, Victor, and Gilles Labesse. "Profilage in silico des inhibiteurs de protéine kinases." médecine/sciences 36 (October 2020): 38–41. http://dx.doi.org/10.1051/medsci/2020182.
Dissertations / Theses on the topic "Méthodes d’apprentissage automatique"
Renaudie, David. "Méthodes d’apprentissage automatique pour la modélisation de l’élève en algèbre." Grenoble INPG, 2005. http://www.theses.fr/2005INPG0008.
Building an Intelligent Tutoring System that adapts to student difficulties requires automatic tools that diagnose their knowledge level. This work is based on a database containing the behaviours of students learning algebra with the Aplusix software. We aim to automatically extract behavioural regularities from this database in order to support the development of an intelligent tutor. To achieve this goal, we apply machine learning methods that detect similarities in datasets, and we propose two distinct approaches to student modelling. On the one hand, we identify clusters of students that share behavioural similarities on a given exercise, with an unsupervised clustering algorithm. On the other hand, an analysis of the procedural regularities in each student's production, inspired by a theoretical framework of knowledge representation and based on a symbolic learning algorithm, leads to individual models that could be used for adapted remediation.
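As a minimal sketch of what the unsupervised clustering step above can look like, assuming a hypothetical matrix of per-student behavioural features on one exercise (the actual Aplusix features and the algorithm used in the thesis are not reproduced here):

```python
# Sketch only: k-means over hypothetical per-student behaviour vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per student on one exercise:
# [n_steps, n_errors, time_spent_s, n_rule_applications]
X = rng.random((200, 4))

X_scaled = StandardScaler().fit_transform(X)     # put features on one scale
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(clusters))                     # size of each behavioural cluster
```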
Colin, Igor. "Adaptation des méthodes d’apprentissage aux U-statistiques." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0070/document.
With the increasing availability of large amounts of data, computational complexity has become a keystone of many machine learning algorithms. Stochastic optimization algorithms and distributed/decentralized methods have been widely studied over the last decade and provide increased scalability for optimizing an empirical risk that is separable in the data sample. Yet, in a wide range of statistical learning problems, the risk is accurately estimated by U-statistics, i.e., functionals of the training data with low variance that take the form of averages over d-tuples. We first tackle the problem of sampling for empirical risk minimization. We show that empirical risks can be replaced by drastically cheaper Monte-Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the learning rate. We establish uniform deviation results, and numerical examples show that such an approach surpasses more naive subsampling techniques. We then focus on decentralized estimation, where the data sample is distributed over a connected network. We introduce new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds with explicit data- and network-dependent terms. Finally, we deal with the decentralized optimization of functions that depend on pairs of observations. As in the estimation case, we introduce a method based on concurrent local updates and data propagation. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term. Our simulations illustrate the practical interest of our approach.
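To make the incomplete U-statistic idea concrete, here is a small sketch with a toy symmetric kernel: the full average over all O(n²) pairs is replaced by an average over B = O(n) randomly sampled pairs. The kernel h below is an assumption chosen for illustration, not the risk functionals studied in the thesis.

```python
# Sketch only: complete vs. incomplete degree-2 U-statistic.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)

def h(x, y):
    return (x - y) ** 2  # toy symmetric kernel (vectorised)

# Complete U-statistic: average of h over all n(n-1)/2 distinct pairs.
pairs = np.array(list(combinations(range(n), 2)))
complete = h(X[pairs[:, 0]], X[pairs[:, 1]]).mean()

# Incomplete U-statistic: B = O(n) pairs sampled uniformly (discarding i == j).
B = n
i, j = rng.integers(0, n, size=(2, B))
keep = i != j
incomplete = h(X[i[keep]], X[j[keep]]).mean()

print(f"complete={complete:.3f}  incomplete={incomplete:.3f}")
```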
Bouaziz, Ameni. "Méthodes d’apprentissage interactif pour la classification des messages courts." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4039/document.
Automatic short-text classification is increasingly used nowadays in various applications such as sentiment analysis and spam detection. Short texts like tweets or SMS messages are more challenging than traditional texts: their classification is harder owing to their shortness, sparsity, and lack of contextual information. We present two new approaches to improve short-text classification. Our first approach is the "Semantic Forest". Its first step proposes a new enrichment method that uses an external source of enrichment built in advance. The idea is to transform a short text of a few words into a larger text containing more information, in order to improve its quality before building the classification model. Unlike the methods proposed in the literature, the second step of our approach does not use a traditional learning algorithm but proposes a new one based on the semantic links among words in the Random Forest classifier. Our second contribution is IGLM (Interactive Generic Learning Method), a new interactive approach that recursively updates the classification model by considering new data arriving over time and by leveraging user intervention to correct misclassified data. An abstraction method is then combined with the update mechanism to improve short-text quality. The experiments performed on these two methods show their efficiency and how they outperform traditional algorithms in short-text classification. Finally, the last part of the thesis presents a complete and well-argued comparative study of the two proposed methods according to various criteria such as accuracy and speed.
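A rough sketch of the enrichment-then-classify idea only, using a hypothetical enrichment dictionary and a standard Random Forest over TF-IDF features; the thesis's Semantic Forest additionally modifies the forest itself to exploit semantic links among words, which this sketch does not reproduce.

```python
# Sketch only: enrich short texts, then train a standard Random Forest.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical external enrichment resource: word -> related terms.
ENRICHMENT = {"win": ["victory", "prize"], "free": ["gratis", "offer"]}

def enrich(text):
    words = text.lower().split()
    extra = [t for w in words for t in ENRICHMENT.get(w, [])]
    return " ".join(words + extra)   # the short text grows into a richer one

texts = ["win a free phone", "meeting at noon", "free offer win now", "see you tomorrow"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit([enrich(t) for t in texts], labels)
print(model.predict([enrich("win free stuff")]))  # predicted label
```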
Sokol, Marina. "Méthodes d’apprentissage semi-supervisé basé sur les graphes et détection rapide des nœuds centraux." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4018/document.
Semi-supervised learning methods constitute a category of machine learning methods which use labelled points together with unlabelled data to tune the classifier. The main idea of semi-supervised methods is based on the assumption that the classification function should change smoothly over a similarity graph. In the first part of the thesis, we propose a generalized optimization approach for graph-based semi-supervised learning which includes as particular cases the Standard Laplacian, Normalized Laplacian, and PageRank based methods. Using random walk theory, we provide insights into the differences among graph-based semi-supervised learning methods and give recommendations for the choice of kernel parameters and labelled points. We illustrate all theoretical results with the help of synthetic and real data. As one example of real data we consider the classification of content and users in P2P systems. This application demonstrates that the proposed family of methods scales very well with the volume of data. The second part of the thesis is devoted to the quick detection of network central nodes. The algorithms developed there can be applied to the selection of quality labelled data, but also have other applications in information retrieval. Specifically, we propose random-walk-based algorithms for the quick detection of large-degree nodes and nodes with large values of Personalized PageRank. Finally, at the end of the thesis we suggest a new centrality measure which generalizes both current-flow betweenness centrality and PageRank; this new measure is particularly well suited to the detection of network vulnerability.
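For intuition, a small illustration of the PageRank-style diffusion underlying such methods: labels spread over a similarity graph via F = (1 − α)(I − αP)⁻¹Y, where P is the random-walk transition matrix of the graph. The toy graph and the parameter α are assumptions for the example.

```python
# Sketch only: graph-based semi-supervised labelling by PageRank-style diffusion.
import numpy as np

# Toy similarity graph: two triangles joined by a single edge (adjacency matrix).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix
Y = np.zeros((6, 2))
Y[0, 0] = 1.0                          # node 0 labelled with class 0
Y[5, 1] = 1.0                          # node 5 labelled with class 1

alpha = 0.85
F = (1 - alpha) * np.linalg.solve(np.eye(6) - alpha * P, Y)
print(F.argmax(axis=1))                # class per node, expected [0 0 0 1 1 1]
```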
Gulikers, Lennart. "Sur deux problèmes d’apprentissage automatique : la détection de communautés et l’appariement adaptatif." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE062/document.
In this thesis, we study two problems of machine learning: (I) community detection and (II) adaptive matching. (I) It is well known that many networks exhibit a community structure. Finding those communities helps us understand and exploit general networks. In this thesis we focus on community detection using so-called spectral methods, based on the eigenvectors of carefully chosen matrices. We analyse their performance on artificially generated benchmark graphs. Instead of the classical Stochastic Block Model (which does not allow for much degree heterogeneity), we consider a Degree-Corrected Stochastic Block Model (DC-SBM) with weighted vertices, which is able to generate a wide class of degree sequences. We consider this model in both a dense and a sparse regime. In the dense regime, we show that an algorithm based on a suitably normalized adjacency matrix correctly classifies all but a vanishing fraction of the nodes. In the sparse regime, we show that the availability of only a small amount of information entails the existence of an information-theoretic threshold below which no algorithm performs better than random guessing. On the positive side, we show that an algorithm based on the non-backtracking matrix works all the way down to the detectability threshold in the sparse regime, demonstrating the robustness of the algorithm. This follows from a precise characterization of the non-backtracking spectrum of sparse DC-SBMs. We further perform tests on well-known real networks. (II) Online two-sided matching markets such as Q&A forums and online labour platforms critically rely on the ability to propose adequate matches based on imperfect knowledge of the two parties to be matched. We develop a model of a task/server matching system for efficient platform operation in the presence of such uncertainty. For this model, we give a necessary and sufficient condition for an incoming stream of tasks to be manageable by the system. We further identify a so-called back-pressure policy under which the throughput that the system can handle is optimized. We show that this policy achieves strictly larger throughput than a natural greedy policy. Finally, we validate our model and confirm our theoretical findings with experiments based on user-contributed content on an online platform.
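For intuition, a sketch of a basic spectral method on a toy two-block stochastic block model, clustering on the leading eigenvectors of the degree-normalised adjacency matrix; the thesis analyses richer degree-corrected models and, in the sparse regime, the non-backtracking matrix instead. Block sizes and edge probabilities below are assumptions for the example.

```python
# Sketch only: spectral community detection on a dense two-block SBM.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

G = nx.stochastic_block_model([50, 50], [[0.30, 0.05], [0.05, 0.30]], seed=0)
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
M = A / np.sqrt(np.outer(d, d))        # normalised adjacency D^(-1/2) A D^(-1/2)

eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
embedding = eigvecs[:, -2:]            # two leading eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels[:50].mean(), labels[50:].mean())  # close to an all-0 / all-1 split
```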
Lam, Chi-Nguyen. "Méthodes de Machine Learning pour le suivi de l'occupation du sol des deltas du Viêt-Nam." Thesis, Brest, 2021. http://www.theses.fr/2021BRES0074.
Socio-economic development in Vietnam is closely tied to its large fluvial deltas. Environmental factors such as drought and flooding play an important role in land use/land cover change within these deltas, and these changes affect the natural and economic balance of the country. In this perspective, the objective of the present thesis is to propose satellite data processing methods for the efficient mapping and monitoring of land use in the two main deltas of Vietnam, the Red River Delta and the Mekong Delta. Experimental work was carried out to verify and evaluate the contribution of multi-sensor image processing through various image segmentation approaches and machine/deep learning algorithms. A Convolutional Neural Network (CNN) model adapted to the context of the study demonstrated its robustness for the detection and mapping of land use, in order to characterise the flood hazard and analyse the elements at risk.
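By way of illustration, a minimal patch-classification CNN of the general kind mentioned above, written in PyTorch; the number of spectral bands (4), the patch size (32×32), and the six land-cover classes are assumptions for the example, not the thesis's actual architecture.

```python
# Sketch only: a small CNN classifying multispectral patches into land-cover classes.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_bands=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)  # 32x32 -> 8x8 maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
patches = torch.randn(8, 4, 32, 32)   # dummy batch of 4-band 32x32 patches
print(model(patches).shape)           # -> torch.Size([8, 6]) class scores
```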
Dubois, Florentine. "Une méthodologie de conception de modèles analytiques de surface et de puissance de réseaux sur puce hautement paramétriques basée sur une méthode d’apprentissage automatique." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM026/document.
In the last decade, Networks-on-Chip (NoCs) have emerged as an efficient and flexible interconnect solution to handle the increasing number of processing elements included in Systems-on-Chip (SoCs). NoCs can meet high-bandwidth and scalability needs under tight performance constraints. However, they are usually characterized by a large number of architectural and implementation parameters, resulting in a vast design space, so finding a suitable NoC architecture for the needs of a specific platform is a challenging issue. Moreover, most major design decisions (e.g. topology, routing scheme, quality of service) are usually made at the architectural level during the first steps of the design flow, yet measuring the effects of these decisions on the final implementation at such a high level of abstraction is complex. Static analysis (i.e. non-simulation-based methods) has emerged to fulfil this need for reliable performance and cost estimation methods available early in the design flow. As the level of abstraction of static analysis is high, it is unrealistic to expect an accurate estimation of the performance or cost of the chip. Fidelity (i.e. capturing the main tendencies of a metric) is thus the main objective rather than accuracy. This thesis proposes a modeling methodology for the static cost analysis of NoC components. The proposed method is mainly oriented towards generality: no assumption is made on the number of parameters of the components or on how the modeled metric depends on these parameters. We are then able to address components with huge configuration spaces (on the order of 1e+30 possible configurations) and to estimate, at the architectural level, the cost of complex NoCs composed of a large number of these components. Such components are difficult to model with hand-crafted analytical models because of the huge number of configuration possibilities. We thus propose a fully automated modeling flow which can be applied directly to any architecture and technology. The output of the flow is a NoC component cost predictor able to estimate a metric of interest for any configuration of the design space in a few seconds. The flow builds fine-grained analytical models on the basis of gate-level results and a machine-learning method. It is thus able to design models with better fidelity than purely mathematical methods while preserving their main qualities (low complexity, early availability), and it also takes into account the effects of the technology on performance. We propose to use an interpolation method based on Kriging theory. The Kriging methodology minimizes the number of implementation-flow runs required in the modeling process and models the main characteristics of the metrics both globally and locally. The method is applied to model the logic area of key NoC components. The inclusion of traffic is then addressed, and a NoC router leakage and average dynamic power model is designed on this basis.
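To illustrate the Kriging idea, a sketch that fits a Gaussian-process regressor (the statistical core of Kriging) on a handful of synthetic "gate-level" area results, after which any unexplored configuration can be estimated in seconds; the router parameters and the cost function below are hypothetical stand-ins for real synthesis data.

```python
# Sketch only: Kriging-style surrogate model of NoC component area.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Sampled configurations: [flit_width, buffer_depth, n_ports] (hypothetical).
X = rng.uniform([16, 2, 3], [128, 16, 8], size=(40, 3))
# Synthetic "gate-level area" standing in for real implementation-flow results.
y = 50 * X[:, 0] + 200 * X[:, 1] * X[:, 2] + rng.normal(0, 100, size=40)

kernel = ConstantKernel() * RBF(length_scale=[50.0, 5.0, 2.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

area, std = gp.predict([[64, 8, 5]], return_std=True)  # unseen configuration
print(f"predicted area: {area[0]:.0f} (+/- {std[0]:.0f})")
```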