Dissertations / Theses on the topic 'Apprentissage à partir de peu d'exemples'
Consult the top 17 dissertations / theses for your research on the topic 'Apprentissage à partir de peu d'exemples.'
Bollinger, Toni. "Généralisation en apprentissage à partir d'exemples." Paris 11, 1986. http://www.theses.fr/1986PA112064.
This thesis treats two aspects of the problem of generalization in machine learning. First, we give a formal definition of the "more general than" relation, which we deduce from our notion of an example being accepted by a description, and we present a methodology for determining whether one description is more general than another. In the second part, we describe the generalization algorithm AGAPE, which is based on structural matching. This algorithm tries to preserve a maximum of the information common to the examples by transforming the example descriptions until they match structurally, i.e. until the descriptions are almost identical. At the end of the thesis, we present some extensions of this algorithm designed to enable the treatment of counter-examples.
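As a rough illustration of the "more general than" relation the abstract describes (not the thesis's formal procedure), one can model descriptions as boolean predicates and check generalization over a finite pool of examples; all names below are our own:

```python
def more_general(d1, d2, examples):
    """d1 is 'more general' than d2 if every example accepted by d2
    is also accepted by d1. Checked here only over a finite pool of
    examples; descriptions are modeled as boolean predicates. This is
    an illustrative sketch, not the thesis's formal definition."""
    return all(d1(e) for e in examples if d2(e))

# Example: "is an integer" generalizes "is an even integer".
is_number = lambda e: isinstance(e, int)
is_even = lambda e: isinstance(e, int) and e % 2 == 0
pool = [1, 2, 3, 4, "a"]
```

Here `more_general(is_number, is_even, pool)` holds, while the converse does not, since 1 and 3 are integers but not even.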
Bollinger, Toni. "Généralisation en apprentissage à partir d'exemples." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37596263z.
Hanser, Thierry. "Apprentissage automatique de méthodes de synthèse à partir d'exemples." Université Louis Pasteur (Strasbourg) (1971-2008), 1993. http://www.theses.fr/1993STR13106.
Gautheron, Léo. "Construction de Représentation de Données Adaptées dans le Cadre de Peu d'Exemples Étiquetés." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSES044.
Machine learning consists in the study and design of algorithms that build models able to handle non-trivial tasks as well as or better than humans, and hopefully at a lesser cost. These models are typically trained from a dataset in which each example describes an instance of the same task and is represented by a set of characteristics together with an expected outcome, or label, which we usually want to predict. An element required for the success of any machine learning algorithm is the quality of the set of characteristics describing the data, also referred to as the data representation or features. In supervised learning, the more the features describing the examples are correlated with the label, the more effective the model will be. There are three main families of features: "observable" features, "handcrafted" features, and "latent" features, which are usually learned automatically from the training data. The contributions of this thesis fall into this last category. More precisely, we are interested in learning a discriminative representation when the number of data of interest is limited.

A lack of data of interest arises in different scenarios. First, we tackle the problem of imbalanced learning, where the class of interest contains few examples, by learning a metric that induces a new representation space in which the learned models do not favor the majority examples. Second, we handle the scenario of few available examples by jointly learning a relevant data representation and a model that generalizes well, through boosting with kernels as base learners, approximated by random Fourier features. Finally, to address the domain adaptation scenario, where the target set contains no labels while the source examples are acquired under different conditions, we propose to reduce the discrepancy between the two domains by keeping only the most similar features, optimizing the solution of an optimal transport problem between the two domains.
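The kernel approximation mentioned in the second contribution can be sketched generically: random Fourier features map data into a randomized space whose inner products approximate an RBF kernel, so boosting can use cheap linear base learners on the mapped data. This is a standard illustration of the technique, not the thesis's implementation; the function name and parameters are ours:

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, rng=None):
    """Map X to a random feature space whose inner product approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Inner products of mapped points approximate the exact kernel values.
X = np.random.default_rng(0).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000, gamma=0.5, rng=0)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The approximation error shrinks roughly as the inverse square root of `n_features`, which is what makes large implicit kernel spaces affordable with few labeled examples.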
Henniche, M'hammed. "Apprentissage incrémental à partir d'exemples dans un espace de recherche réduit." Paris 13, 1998. http://www.theses.fr/1998PA13A001.
Truong, Nguyen Tuong Vinh. "Apprentissage de fonctions d'ordonnancement avec peu d'exemples étiquetés : une application au routage d'information, au résumé de textes et au filtrage collaboratif." Paris 6, 2009. http://www.theses.fr/2009PA066568.
Barnachon, Mathieu. "Reconnaissance d'actions en temps réel à partir d'exemples." PhD thesis, Université Claude Bernard - Lyon I, 2013. http://tel.archives-ouvertes.fr/tel-00820113.
Lu, Cheng-Ren. "Apprentissage incrémental par analogie : le système OGUST⁺." Paris 11, 1989. http://www.theses.fr/1989PA112393.
Mordelet, Fantine. "Méthodes d'apprentissage statistique à partir d'exemples positifs et indéterminés en biologie." PhD thesis, École Nationale Supérieure des Mines de Paris, 2010. http://pastel.archives-ouvertes.fr/pastel-00566401.
Nogry, Sandra. "Faciliter l'apprentissage à partir d'exemples en situation de résolution de problèmes : application au projet AMBRE." Dir. Alain Mille. Lyon : Université Lumière Lyon 2, 2005. http://theses.univ-lyon2.fr/sdx/theses/lyon2/2005/nogry_s.
Blin, Laurent. "Apprentissage de structures d'arbres à partir d'exemples ; application à la prosodie pour la synthèse de la parole." Rennes 1, 2002. http://www.theses.fr/2002REN10117.
Aguirre Cervantes, José Luis. "Construction automatique de taxonomies à partir d'exemples dans un modèle de connaissances par objets." Grenoble INPG, 1989. http://www.theses.fr/1989INPG0067.
Bouthinon, Dominique. "Apprentissage à partir d'exemples ambigus : étude théorique et application à la découverte de structures communes à un ensemble de séquences d'ARN." Paris 13, 1996. http://www.theses.fr/1996PA132033.
Vogel, Hugues. "Apprentissage automatique de connaissances réactionnelles : acquisition d'exemples de réactions à partir de bases de données et prise en compte des conditions réactionnelles." Université Louis Pasteur (Strasbourg) (1971-2008), 2000. http://www.theses.fr/2000STR13062.
Vrain, Christel. "Un outil pour la généralisation utilisant systématiquement les théorèmes : le système OGUST." Paris 11, 1987. http://www.theses.fr/1987PA112302.
Guiroy, Simon. "Towards Understanding Generalization in Gradient-Based Meta-Learning." Thèse, 2019. http://hdl.handle.net/1866/23783.
In this master's thesis, we study the generalization of neural networks in gradient-based meta-learning by analyzing various properties of their objective landscapes. Meta-learning is a challenging paradigm in which models must not only learn a task but, beyond that, are trained to "learn to learn", as they must adapt to new tasks and environments with very limited data about them. Since research on the objective landscapes of neural networks in classical supervised learning has provided some answers regarding their ability to generalize to new data points, we propose similar analyses aimed at understanding generalization in meta-learning. We first review the literature on objective landscapes of neural networks, then the meta-learning literature, concluding our introduction with gradient-based meta-learning, a setup that bears strong similarities to traditional supervised learning through its use of stochastic gradient-based optimization. At the time of writing, and to the best of our knowledge, this is the first work to empirically study objective landscapes in gradient-based meta-learning, especially in the context of deep learning. We notably provide insights on properties of those landscapes that appear correlated with generalization to new tasks. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the model's meta-train solution to new tasks via a few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions keep becoming flatter even as generalization starts to degrade, thus providing experimental evidence against a correlation between generalization and flat minima in gradient-based meta-learning.

Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions starting from the same meta-train solution. We also show that the coherence of meta-test gradients, measured by the average inner product between task-specific gradient vectors evaluated at the meta-train solution, is correlated with generalization. Based on these observations, we propose a novel regularizer for the Model-Agnostic Meta-Learning (MAML) algorithm and provide experimental evidence for its effectiveness.
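The trajectory-coherence measure described above can be illustrated with a small sketch. This is our own simplification (each task's adaptation trajectory is collapsed to a single direction vector, adapted parameters minus the shared meta-train solution), not the thesis's code:

```python
import numpy as np

def trajectory_coherence(trajectory_dirs):
    """Average pairwise cosine similarity between task-specific
    adaptation directions in parameter space. Each direction is the
    flattened vector (adapted parameters - meta-train solution).
    Illustrative sketch; names are our own."""
    unit = [d / np.linalg.norm(d) for d in trajectory_dirs]
    sims = [u @ v for i, u in enumerate(unit)
                  for v in unit[i + 1:]]
    return float(np.mean(sims))

# Directions pointing roughly the same way give coherence near 1;
# opposed directions give coherence near -1.
aligned = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
opposed = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
```

Under the abstract's observation, higher values of this quantity across a batch of meta-test tasks would accompany better generalization.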
Batot, Edouard. "From examples to knowledge in model-driven engineering : a holistic and pragmatic approach." Thèse, 2018. http://hdl.handle.net/1866/21737.