Theses on the topic "Apprentissage à partir de démonstrations" (learning from demonstrations)
Consult the top 50 theses for your research on the topic "Apprentissage à partir de démonstrations".
Chenu, Alexandre. "Leveraging sequentiality in Robot Learning : Application of the Divide & Conquer paradigm to Neuro-Evolution and Deep Reinforcement Learning". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS342.
“To succeed, planning alone is insufficient. One must improvise as well.” This quote from Isaac Asimov, founding father of robotics and author of the Three Laws of Robotics, emphasizes the importance of being able to adapt and think on one’s feet to achieve success. Although robots can nowadays resolve highly complex tasks, they still need to gain those crucial adaptability skills to be deployed on a larger scale. Robot Learning uses learning algorithms to tackle this lack of adaptability and to enable robots to solve complex tasks autonomously. Two types of learning algorithms are particularly suitable for robots to learn controllers autonomously: Deep Reinforcement Learning and Neuro-Evolution. However, both classes of algorithms often cannot solve Hard Exploration Problems, that is, problems with a long horizon and a sparse reward signal, unless they are guided in their learning process. One can consider different approaches to tackle those problems. An option is to search for a diversity of behaviors rather than a specific one. The idea is that among this diversity, some behaviors will be able to solve the task. We call these algorithms Diversity Search algorithms. A second option consists in guiding the learning process using demonstrations provided by an expert. This is called Learning from Demonstration. However, searching for diverse behaviors or learning from demonstration can be inefficient in some contexts. Indeed, finding diverse behaviors can be tedious if the environment is complex. On the other hand, learning from demonstration can be very difficult if only one demonstration is available. This thesis attempts to improve the effectiveness of Diversity Search and Learning from Demonstration when applied to Hard Exploration Problems. To do so, we assume that complex robotics behaviors can be decomposed into reaching simpler sub-goals.
Based on this sequential bias, we try to improve the sample efficiency of Diversity Search and Learning from Demonstration algorithms by adopting Divide & Conquer strategies, which are well-known for their efficiency when the problem is composable. Throughout the thesis, we propose two main strategies. First, after identifying some limitations of Diversity Search algorithms based on Neuro-Evolution, we propose Novelty Search Skill Chaining. This algorithm combines Diversity Search with Skill-Chaining to efficiently navigate maze environments that are difficult to explore for state-of-the-art Diversity Search. In a second set of contributions, we propose the Divide & Conquer Imitation Learning algorithms. The key intuition behind those methods is to decompose the complex task of learning from a single demonstration into several simpler goal-reaching sub-tasks. DCIL-II, the most advanced variant, can learn walking behaviors for under-actuated humanoid robots with unprecedented efficiency. Beyond underlining the effectiveness of the Divide & Conquer paradigm in Robot Learning, this work also highlights the difficulties that can arise when composing behaviors, even in elementary environments. One will inevitably have to address these difficulties before applying these algorithms directly to real robots. It may be necessary for the success of the next generations of robots, as outlined by Asimov.
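The divide-and-conquer idea summarized in this abstract — turning one long demonstration into a chain of short goal-reaching sub-tasks — can be sketched in a few lines. This is an illustration of the general principle only, not the authors' DCIL implementation; the function name and the even-spacing heuristic are assumptions made for the example.

```python
# Sketch: split a single demonstration (a sequence of states) into evenly
# spaced sub-goals, so a long-horizon task becomes a chain of short
# goal-reaching sub-tasks. Illustrative only, not the DCIL algorithm itself.

def extract_subgoals(demonstration, n_subgoals):
    """Pick n_subgoals states spread evenly along the demonstration,
    always keeping the final state as the last sub-goal."""
    if n_subgoals < 1 or n_subgoals > len(demonstration):
        raise ValueError("n_subgoals must be in [1, len(demonstration)]")
    step = (len(demonstration) - 1) / n_subgoals
    return [demonstration[round((i + 1) * step)] for i in range(n_subgoals)]

# A toy 1-D "trajectory" of 11 states.
demo = list(range(0, 11))
subgoals = extract_subgoals(demo, 5)   # [2, 4, 6, 8, 10]
```

A goal-conditioned policy would then be trained to reach each sub-goal from the end state of the previous sub-task, which is the composability the abstract warns can be delicate in practice.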
Tokmakov, Pavel. "Apprentissage à partir du mouvement". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM031/document.
Weakly-supervised learning studies the problem of minimizing the amount of human effort required for training state-of-the-art models. This makes it possible to leverage a large amount of data. However, in practice weakly-supervised methods perform significantly worse than their fully-supervised counterparts. This is also the case in deep learning, where the top-performing computer vision approaches remain fully-supervised, which limits their usage in real world applications. This thesis attempts to bridge the gap between weakly-supervised and fully-supervised methods by utilizing motion information. It also studies the problem of moving object segmentation itself, proposing one of the first learning-based methods for this task. We focus on the problem of weakly-supervised semantic segmentation. This is especially challenging due to the need to precisely capture object boundaries and avoid local optima, such as segmenting only the most discriminative parts. In contrast to most of the state-of-the-art approaches, which rely on static images, we leverage video data with object motion as a strong cue. In particular, our method uses a state-of-the-art video segmentation approach to segment moving objects in videos. The approximate object masks produced by this method are then fused with the semantic segmentation model learned in an EM-like framework to infer pixel-level semantic labels for video frames. Thus, as learning progresses, the quality of the labels improves automatically. We then integrate this architecture with our learning-based approach for video segmentation to obtain a fully trainable framework for weakly-supervised learning from videos. In the second part of the thesis we study unsupervised video segmentation, the task of segmenting all the objects in a video that move independently from the camera. This task presents challenges such as strong camera motion, inaccuracies in optical flow estimation and motion discontinuity.
We address the camera motion problem by proposing a learning-based method for motion segmentation: a convolutional neural network that takes optical flow as input and is trained to segment objects that move independently from the camera. It is then extended with an appearance stream and a visual memory module to improve temporal continuity. The appearance stream capitalizes on semantic information, which is complementary to the motion information. The visual memory module is the key component of our approach: it combines the outputs of the motion and appearance streams and aggregates a spatio-temporal representation of the moving objects. The final segmentation is then produced based on this aggregated representation. The resulting approach obtains state-of-the-art performance on several benchmark datasets, outperforming concurrent deep learning and heuristic-based methods.
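The two-stream-plus-memory design described in this abstract can be caricatured with scalars instead of CNN feature maps. The real model uses convolutional streams and a recurrent visual memory; here a convex combination and an exponential moving average stand in for both, and all numbers are invented for the illustration.

```python
# Toy sketch of the two-stream idea: per-pixel motion and appearance scores
# are fused, and a running "memory" aggregates them over frames.
# Simple averaging replaces the CNNs and the recurrent memory module.

def fuse(motion, appearance, w=0.5):
    """Convex combination of the two streams' per-pixel scores."""
    return [w * m + (1 - w) * a for m, a in zip(motion, appearance)]

def update_memory(memory, fused, alpha=0.7):
    """Exponential moving average playing the role of the visual memory."""
    return [alpha * f + (1 - alpha) * mem for f, mem in zip(fused, memory)]

frames = [([1.0, 0.0], [0.8, 0.2]),   # (motion scores, appearance scores)
          ([0.6, 0.4], [0.8, 0.0])]
memory = [0.0, 0.0]
for motion, appearance in frames:
    memory = update_memory(memory, fuse(motion, appearance))
segmentation = [1 if s > 0.5 else 0 for s in memory]   # [1, 0]
```

The point of the memory is visible even at this scale: a pixel's label depends on evidence accumulated across frames, not on a single noisy frame.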
Bollinger, Toni. "Généralisation en apprentissage à partir d'exemples". Paris 11, 1986. http://www.theses.fr/1986PA112064.
This thesis treats two aspects of the problem of generalization in machine learning. First, we give a formal definition of the relation "more general", which we deduce from our notion of an example that is accepted by a description. We also present a methodology for determining if one description is more general than another. In the second part, we describe the generalization algorithm AGAPE, based on structural matching. This algorithm tries to preserve a maximum of information common to the examples by transforming the descriptions of the examples until they match structurally, i.e., until the descriptions are almost identical. At the end of this thesis, we present some extensions of this algorithm especially designed to enable the treatment of counter-examples.
Bollinger, Toni. "Généralisation en apprentissage a partir d'exemples". Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37596263z.
Barlier, Merwan. "Sur le rôle de l’être humain dans le dialogue humain/machine". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I087/document.
The context of this thesis is Reinforcement Learning for Spoken Dialogue Systems. This document proposes several ways to consider the role of the human interlocutor. After an overview of the limits of the traditional agent/environment framework, we first suggest modeling human/machine dialogue as a Stochastic Game. Within this framework, the human being is seen as a rational agent, acting in order to optimize his preferences. We show that this framework allows co-adaptation phenomena to be taken into consideration and extends the applications of human/machine dialogue, e.g. to negotiation dialogues. Second, we address the issue of incorporating human expertise in order to speed up the learning phase of a reinforcement-learning-based spoken dialogue system. We provide an algorithm that takes advantage of such human advice and shows a great improvement over the performance of traditional reinforcement learning algorithms. Finally, we consider a third situation in which a system listens to a conversation between two human beings and speaks when it estimates that its intervention could help to maximize the preferences of its user. We introduce an original reward function balancing the outcome of the conversation with the intrusiveness of the system. Our results, obtained by simulation, suggest that such an approach is suitable for computer-aided human-human dialogue. However, in order to implement this method, a model of the human/human conversation is required. We propose in a final contribution to learn this model with an algorithm based on multiplicity automata.
Ferrandiz, Sylvain. "Apprentissage supervisé à partir de données séquentielles". Caen, 2006. http://www.theses.fr/2006CAEN2030.
In the data mining process, the main part of the data preparation step is devoted to feature construction and selection. The filter approach usually adopted requires evaluation methods for any kind of feature. We address the problem of the supervised evaluation of a sequential feature. We show that this problem is solved if a more general problem is tackled: that of the supervised evaluation of a similarity measure. We provide such an evaluation method. We first turn the problem into the search for a discriminating Voronoi partition. Then, we define a new supervised criterion evaluating such partitions and design a new optimised algorithm. The criterion automatically prevents overfitting the data and the algorithm quickly provides a good solution. In the end, the method can be interpreted as a robust non-parametric method for estimating the conditional density of a nominal target feature given a similarity measure defined from a descriptive feature. The method is experimented on many datasets. It is useful for answering questions like: which day of the week or which hourly time segment is the most relevant to discriminate customers from their call detail records? Which series allows a better estimate of the customer's need for a new service?
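The notion of a discriminating Voronoi partition used in this abstract can be made concrete in a few lines: prototypes induce cells (each point joins its most similar prototype), and a similarity measure is "good" if the induced cells are pure in the target class. Majority-class purity below is a stand-in for the thesis's regularized criterion, which additionally guards against overly fine partitions; the data and the similarity are invented for the example.

```python
# Sketch: evaluate a similarity measure through the Voronoi partition it
# induces. Cell purity (majority-class accuracy) stands in for the thesis's
# regularized supervised criterion. All data are toy values.

def voronoi_cells(points, prototypes, similarity):
    """Assign each point index to its most similar prototype."""
    cells = {i: [] for i in range(len(prototypes))}
    for idx, x in enumerate(points):
        best = max(range(len(prototypes)),
                   key=lambda i: similarity(x, prototypes[i]))
        cells[best].append(idx)
    return cells

def purity(cells, labels):
    """Fraction of points matching their cell's majority class."""
    correct = 0
    for members in cells.values():
        if members:
            classes = [labels[i] for i in members]
            correct += max(classes.count(c) for c in set(classes))
    return correct / len(labels)

points = [0.1, 0.2, 0.9, 1.1, 0.15]
labels = ['a', 'a', 'b', 'b', 'a']
sim = lambda x, p: -abs(x - p)          # similarity = negative distance
cells = voronoi_cells(points, [0.0, 1.0], sim)
score = purity(cells, labels)           # 1.0: the partition discriminates
```

A similarity measure under which no prototype placement yields pure cells would score low, which is exactly how the criterion compares candidate features or measures.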
Liquière, Michel. "Apprentissage à partir d'objets structurés : conception et réalisation". Montpellier 2, 1990. http://www.theses.fr/1990MON20038.
Wolley, Chirine. "Apprentissage supervisé à partir des multiples annotateurs incertains". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4070/document.
In supervised learning tasks, obtaining the ground truth label for each instance of the training dataset can be difficult, time-consuming and/or expensive. With the advent of infrastructures such as the Internet, an increasing number of web services propose crowdsourcing as a way to collect a large enough set of labels from internet users. The use of these services provides an exceptional facility to collect labels from anonymous annotators, and thus considerably simplifies the process of building labeled datasets. Nonetheless, the main drawback of crowdsourcing services is their lack of control over the annotators and their inability to verify and control the accuracy of the labels and the level of expertise of each labeler. Hence, managing the annotators' uncertainty is key to learning from imperfect annotations. This thesis provides three algorithms for learning from multiple uncertain annotators. IGNORE generates a classifier that predicts the label of a new instance and evaluates the performance of each annotator according to their level of uncertainty. X-Ignore considers that the performance of the annotators depends both on their uncertainty and on the quality of the initial dataset to be annotated. Finally, ExpertS deals with the problem of annotator selection when generating the classifier: it identifies expert annotators and learns the classifier based only on their labels. We conducted in this thesis a large set of experiments in order to evaluate our models, using both experimental and real-world medical data. The results prove the performance and accuracy of our models compared to previous state-of-the-art solutions in this context.
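A minimal way to see why annotator reliability matters, in the spirit of the abstract above (though this is an agreement heuristic, not the IGNORE/X-Ignore/ExpertS algorithms themselves): estimate each annotator's reliability from agreement with the majority vote, then re-label with a reliability-weighted vote. The annotation matrix is invented for the example.

```python
# Sketch: weight uncertain annotators by their agreement with the consensus,
# then take a reliability-weighted vote. A stand-in for the thesis's
# explicit uncertainty models.
from collections import Counter

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def reliabilities(annotations):
    """annotations[j][i] = label given by annotator j to instance i."""
    n = len(annotations[0])
    consensus = [majority([a[i] for a in annotations]) for i in range(n)]
    return [sum(a[i] == consensus[i] for i in range(n)) / n
            for a in annotations]

def weighted_labels(annotations, weights):
    out = []
    for i in range(len(annotations[0])):
        votes = Counter()
        for a, w in zip(annotations, weights):
            votes[a[i]] += w
        out.append(votes.most_common(1)[0][0])
    return out

annotations = [[1, 0, 1, 1],   # a careful annotator
               [1, 0, 1, 0],   # one disagreement with consensus
               [0, 1, 0, 0]]   # mostly noise
w = reliabilities(annotations)          # [0.75, 1.0, 0.25]
labels = weighted_labels(annotations, w)
```

The noisy annotator's votes are down-weighted automatically, which is the basic effect the thesis's three algorithms refine with explicit uncertainty modeling and expert selection.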
Arcadias, Marie. "Apprentissage non supervisé de dépendances à partir de textes". Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2080/document.
Dependency grammars allow the construction of a hierarchical organization of the words of sentences. The one-by-one building of dependency trees can be very long and requires expert knowledge. In this regard, we are interested in unsupervised dependency learning. Currently, DMV gives the state-of-the-art results in unsupervised dependency parsing. However, DMV is known to be highly sensitive to its initial parameters, and the training of the DMV model is also heavy and long. We present in this thesis a new model that addresses this problem in a simpler, faster and more adaptable way. We learn a family of PCFGs using fewer than 6 nonterminal symbols and fewer than 15 combination rules over part-of-speech tags. The tuning of these PCFGs is light, and thus easily adaptable to the 12 languages we tested. Our proposed method for unsupervised dependency parsing achieves near state-of-the-art results while being twice as fast. Moreover, we describe the interest of dependency trees for other applications such as relation extraction: we show how information from dependency structures can be integrated into conditional random fields and how it improves a relation extraction task.
Hanser, Thierry. "Apprentissage automatique de methodes de synthese a partir d'exemples". Université Louis Pasteur (Strasbourg) (1971-2008), 1993. http://www.theses.fr/1993STR13106.
Chevaleyre, Yann. "Apprentissage de règles à partir de données multi-instances". Paris 6, 2001. http://www.theses.fr/2001PA066502.
Duclaye, Florence. "Apprentissage automatique de relations d'équivalence sémantique à partir du Web". PhD thesis, Télécom ParisTech, 2003. http://pastel.archives-ouvertes.fr/pastel-00001119.
Texto completoDuclaye, Florence Aude Dorothée. "Apprentissage automatique de relations d'équivalence sémantique à partir du Web". Paris, ENST, 2003. http://www.theses.fr/2003ENST0044.
This PhD thesis is situated in the context of a question answering system capable of automatically finding answers to factual questions on the Web. One way to improve the quality of these answers is to increase the recall rate of the system by identifying answers under their multiple possible formulations (paraphrases). As the manual recording of paraphrases is a long and expensive task, the goal of this PhD thesis is to design and develop a mechanism that learns the possible paraphrases of an answer automatically and in a weakly supervised manner. Thanks to the redundancy and the linguistic variety of the information it contains, the Web is considered to be a very interesting corpus. Viewed as a gigantic bipartite graph represented, on the one hand, by formulations and, on the other hand, by argument pairs, the Web turns out to be propitious to the application of Firth's hypothesis, according to which "you shall know a word (resp. a formulation, in our case) by the company (resp. arguments) it keeps". Consequently, the Web is sampled using an iterative mechanism: formulations (potential paraphrases) are extracted by anchoring arguments and, inversely, new arguments are extracted by anchoring the acquired formulations. In order to make the learning process converge, an intermediary stage is necessary, which partitions the sampled data using a statistical classification method. The results were empirically evaluated, showing in particular the value the learnt paraphrases add to the question answering system.
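The iterative sampling loop this abstract describes — anchor argument pairs to collect formulations, then anchor formulations to collect new argument pairs — can be rendered on a toy corpus. The triples below are invented; the thesis mines them from Web pages and adds a classification step to filter out non-paraphrases, which is omitted here.

```python
# Sketch: walk the bipartite formulation/argument graph by alternately
# anchoring argument pairs and formulations. Toy corpus, no filtering step.

corpus = [("aspirin", "treats", "headache"),
          ("aspirin", "relieves", "headache"),
          ("ibuprofen", "relieves", "pain"),
          ("ibuprofen", "soothes", "pain")]

def learn_paraphrases(corpus, seed_pair, n_iterations=2):
    pairs, formulations = {seed_pair}, set()
    for _ in range(n_iterations):
        # anchor known argument pairs -> collect formulations
        formulations |= {f for a1, f, a2 in corpus if (a1, a2) in pairs}
        # anchor known formulations -> collect new argument pairs
        pairs |= {(a1, a2) for a1, f, a2 in corpus if f in formulations}
    return formulations

paraphrases = learn_paraphrases(corpus, ("aspirin", "headache"))
# {"treats", "relieves", "soothes"}: "soothes" is reached transitively
# through the shared pair ("ibuprofen", "pain").
```

Without the intermediary classification stage the abstract mentions, such transitive hops are also how semantic drift creeps in, which is why that stage is needed for convergence.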
Duclaye, Florence. "Apprentissage automatique de relations d'équivalence sémantique à partir du Web /". Paris : École nationale supérieure des télécommunications, 2005. http://catalogue.bnf.fr/ark:/12148/cb39935321s.
Moukari, Michel. "Estimation de profondeur à partir d'images monoculaires par apprentissage profond". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC211/document.
Computer vision is a branch of artificial intelligence whose purpose is to enable a machine to analyze, process and understand the content of digital images. Scene understanding in particular is a major issue in computer vision. It involves a semantic and structural characterization of the image, on the one hand to describe its content and, on the other hand, to understand its geometry. However, while the real space is three-dimensional, the image representing it is two-dimensional. Part of the 3D information is thus lost during the process of image formation, and it is therefore non-trivial to describe the geometry of a scene from 2D images of it. There are several ways to retrieve the depth information lost in the image. In this thesis we are interested in estimating a depth map given a single image of the scene. In this case, the depth information corresponds, for each pixel, to the distance between the camera and the object represented by that pixel. The automatic estimation of a distance map of the scene from an image is a critical algorithmic building block in a very large number of domains, in particular that of autonomous vehicles (obstacle detection, navigation aids). Although estimating depth from a single image is a difficult and inherently ill-posed problem, we know that humans can appreciate distances with one eye. This capacity is not innate but acquired, and made possible mostly thanks to the identification of cues reflecting prior knowledge of the surrounding objects. Moreover, we know that learning algorithms can extract these cues directly from images. We are particularly interested in statistical learning methods based on deep neural networks, which have recently led to major breakthroughs in many fields, and we study the case of monocular depth estimation.
Tuo, Aboubacar. "Extraction d'événements à partir de peu d'exemples par méta-apprentissage". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG098.
Information Extraction (IE) is a research field whose objective is to automatically identify and extract structured information within a given domain from unstructured or minimally structured text data. Implementing such extractions often requires significant human effort, either in the form of rule development or the creation of annotated data for systems based on machine learning. One of the current challenges in information extraction is to develop methods that minimize the costs and development time of these systems whenever possible. This thesis focuses on few-shot event extraction through a meta-learning approach that aims to train IE models from only a few examples. We have redefined the task of event extraction from this perspective, aiming to develop systems capable of quickly adapting to new contexts with a small volume of training data. First, we propose methods to enhance event trigger detection by developing more robust representations for this task. Then, we tackle the specific challenge raised by the "NULL" class (absence of events) within this framework. Finally, we evaluate the effectiveness of our proposals within the broader context of event extraction by extending their application to the extraction of event arguments.
Boutin, Luc. "Biomimétisme, génération de trajectoires pour la robotique humanoïde à partir de mouvements humains". Poitiers, 2009. http://theses.edel.univ-poitiers.fr/theses/2009/Boutin-Luc/2009-Boutin-Luc-These.pdf.
The faithful reproduction of human locomotion is a topical issue for humanoid robots. The goal of this work is to define a process for imitating human motion with humanoid robots. In the first part, motion capture techniques are presented. The adopted measurement protocol is described, along with the computation of joint angles. An adaptation of three existing algorithms is proposed to detect contact events during complex movements. The method is validated by measurements on thirty healthy subjects. The second part deals with the generation of humanoid trajectories imitating human motion. Once the problem and the imitation process are defined, the balance criterion for walking robots is presented. Using data from human motion capture, the reference trajectories of the feet and the ZMP are defined. These paths are modified to avoid collisions between the feet, particularly when executing a slalom. Finally, an inverse kinematics algorithm developed for this problem is used to determine the joint angles associated with the robot's reference trajectories of the feet and ZMP. Several applications on the HOAP-3 and HRP-2 robots are presented. The trajectories are validated with respect to the robot's balance through dynamic simulations of the computed motion, while respecting the limits of the actuators.
Deschamps, Sébastien. "Apprentissage actif profond pour la reconnaissance visuelle à partir de peu d’exemples". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS199.
Automatic image analysis has improved the exploitation of image sensors, with data coming from different sensors such as phone cameras, surveillance cameras, satellite imagers or even drones. Deep learning achieves excellent results in image analysis applications where large amounts of annotated data are available, but learning a new image classifier from scratch is a difficult task. Most image classification methods are supervised, requiring annotations, which is a significant investment. Different frugal learning solutions (with few annotated examples) exist, including transfer learning, active learning, semi-supervised learning or meta-learning. The goal of this thesis is to study these frugal learning solutions for visual recognition tasks, namely image classification and change detection in satellite images. The classifier is trained iteratively by starting with only a few annotated samples, and asking the user to annotate as little data as possible to obtain satisfactory performance. Deep active learning was initially studied with other methods and suited our operational problem the most, so we chose this solution. In this thesis, we have developed an interactive approach, where we ask the most informative questions about the relevance of the data to an oracle (annotator). Based on its answers, a decision function is iteratively updated. We model the probability that the samples are relevant, by minimizing an objective function capturing the representativeness, diversity and ambiguity of the data. Data with high probability are then selected for annotation. We have improved this approach, using reinforcement learning to dynamically and accurately weight the importance of representativeness, diversity and ambiguity of the data in each active learning cycle.
Finally, our last approach consists of a display model that selects the most representative and diverse virtual examples, which adversarially challenge the learned model, in order to obtain a highly discriminative model in subsequent iterations of active learning. The good results obtained against the different baselines and the state of the art on the tasks of satellite image change detection and image classification demonstrate the relevance of the proposed frugal learning models, and have led to various publications (Sahbi et al. 2021; Deschamps and Sahbi 2022b; Deschamps and Sahbi 2022a; Sahbi and Deschamps 2022).
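The acquisition criterion sketched in this abstract — score unlabeled samples by a mix of ambiguity, representativeness and diversity, then query the best one — can be written down directly. The weights are fixed here, whereas the thesis adjusts them with reinforcement learning at every active-learning cycle; the 1-D samples and probabilities are invented for readability.

```python
# Sketch of an active-learning acquisition score: ambiguity (prediction near
# 0.5), representativeness (closeness to the pool) and diversity (distance
# to already-labeled data), combined with fixed weights.

def score(x, prob, pool, labeled, w_amb=1.0, w_rep=1.0, w_div=1.0):
    ambiguity = 1.0 - abs(prob - 0.5) * 2   # 1 at p=0.5, 0 at p in {0, 1}
    rep = 1.0 - sum(abs(x - y) for y in pool) / len(pool)
    div = min(abs(x - y) for y in labeled) if labeled else 1.0
    return w_amb * ambiguity + w_rep * rep + w_div * div

pool = [0.1, 0.5, 0.9]                     # unlabeled samples (1-D toy data)
probs = {0.1: 0.9, 0.5: 0.55, 0.9: 0.1}    # current classifier outputs
labeled = [0.0]                            # already annotated
query = max(pool, key=lambda x: score(x, probs[x], pool, labeled))
# query == 0.5: central, far from labeled data, and ambiguous for the model
```

Making the three weights state-dependent, as the thesis does with reinforcement learning, matters because the useful balance shifts over cycles: early rounds favor representativeness, later rounds favor ambiguity near the decision boundary.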
Duval, Beatrice. "Apprentissage a partir d'explications dans une theorie incomplete : completion d'explications partielles". Paris 11, 1991. http://www.theses.fr/1991PA112244.
Dubois, Vincent. "Apprentissage approximatif et extraction de connaissances à partir de données textuelles". Nantes, 2003. http://www.theses.fr/2003NANT2001.
Jouve, Pierre-Emmanuel. "Apprentissage non supervisé et extraction de connaissances à partir de données". Lyon 2, 2003. http://theses.univ-lyon2.fr/documents/lyon2/2003/jouve_pe.
Henniche, M'hammed. "Apprentissage incrémental à partir d'exemples dans un espace de recherche réduit". Paris 13, 1998. http://www.theses.fr/1998PA13A001.
Jouve, Pierre-Emmanuel Nicoloyannis Nicolas. "Apprentissage non supervisé et extraction de connaissances à partir de données". Lyon : Université Lumière Lyon 2, 2003. http://demeter.univ-lyon2.fr/sdx/theses/lyon2/2003/jouve_pe.
Luc, Pauline. "Apprentissage autosupervisé de modèles prédictifs de segmentation à partir de vidéos". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM024/document.
Predictive models of the environment hold promise for allowing the transfer of recent reinforcement learning successes to many real-world contexts, by decreasing the number of interactions needed with the real world. Video prediction has been studied in recent years as a particular case of such predictive models, with broad applications in robotics and navigation systems. While RGB frames are easy to acquire and hold a lot of information, they are extremely challenging to predict, and cannot be directly interpreted by downstream applications. Here we introduce the novel tasks of predicting semantic and instance segmentation of future frames. The abstract feature spaces we consider are better suited for recursive prediction and allow us to develop models which convincingly predict segmentations up to half a second into the future. Predictions are more easily interpretable by downstream algorithms and remain rich, spatially detailed and easy to obtain, relying on state-of-the-art segmentation methods. We first focus on the task of semantic segmentation, for which we propose a discriminative approach based on adversarial training. Then, we introduce the novel task of predicting future semantic segmentation, and develop an autoregressive convolutional neural network to address it. Finally, we extend our method to the more challenging problem of predicting future instance segmentation, which additionally segments out individual objects. To deal with a varying number of output labels per image, we develop a predictive model in the space of high-level convolutional image features of the Mask R-CNN instance segmentation model. We are able to produce visually pleasing segmentations at a high resolution for complex scenes involving a large number of instances, and with convincing accuracy up to half a second ahead.
Elati, Mohamed. "Apprentissage de réseaux de régulation génétique à partir de données d'expression". Paris 13, 2007. http://www.theses.fr/2007PA132031.
Ould, Abdel Vetah Mohamed. "Apprentissage automatique appliqué à l'extraction d'information à partir de textes biologiques". Paris 11, 2005. http://www.theses.fr/2005PA112133.
This thesis is about information extraction from textual data. Two main approaches co-exist in this field. The first approach is based on shallow text analysis. These methods are easy to implement, but the information they extract is often incomplete and noisy. The second approach requires deeper structural linguistic information. Compared to the first approach, it has the double advantage of being easily adaptable and of taking into account the diversity of formulation, which is an intrinsic characteristic of textual data. In this thesis, we have contributed to the realization of a complete information extraction tool based on this latter approach. Our tool is dedicated to the automatic extraction of gene interactions described in MedLine abstracts. In the first part of the work, we develop a filtering module that allows the user to identify the sentences referring to gene interactions. The module is available online and already used by biologists. The second part of the work introduces an original methodology, based on an abstraction of the syntactic analysis, for the automatic learning of information extraction rules. The preliminary results are promising and show that our abstraction approach provides a good representation for learning extraction rules.
Xia, Chen. "Apprentissage Intelligent des Robots Mobiles dans la Navigation Autonome". Thesis, Ecole centrale de Lille, 2015. http://www.theses.fr/2015ECLI0026/document.
Modern robots are designed to assist or replace human beings in complicated planning and control operations, and the capability of autonomous navigation in a dynamic environment is an essential requirement for mobile robots. In order to alleviate the tedious task of manually programming a robot, this dissertation contributes to the design of intelligent robot control that endows mobile robots with a learning ability in autonomous navigation tasks. First, we consider robot learning from expert demonstrations. A neural network framework is proposed as the inference mechanism to learn a policy offline from a dataset extracted from experts. Then we are interested in the robot's self-learning ability without expert demonstrations. We apply reinforcement learning techniques to acquire and optimize a control strategy during the interaction process between the learning robot and the unknown environment. A neural network is also incorporated to allow fast generalization, and it helps the learning to converge in a number of episodes far smaller than with traditional methods. Finally, we study robot learning of the potential rewards underlying the states from optimal or suboptimal expert demonstrations. We propose an algorithm based on inverse reinforcement learning: a nonlinear policy representation is designed and the max-margin method is applied to refine the rewards and generate an optimal control policy. The three proposed methods have been successfully implemented on autonomous navigation tasks for mobile robots in unknown and dynamic environments.
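The self-learning setting this abstract describes — a control strategy acquired through interaction with an unknown environment — is classically illustrated by tabular Q-learning on a toy navigation task. The thesis replaces the table with a neural network for generalization and also covers learning from demonstrations and inverse RL; the corridor environment below is an invented minimal example.

```python
# Bare-bones tabular Q-learning on a 5-state corridor: the agent starts at
# state 0 and reaching state 4 pays reward 1. Illustrative of the RL setting
# only; the thesis uses neural function approximation instead of a table.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [1, -1]                       # move right / left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(200):                    # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:       # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy for every non-goal state (1 = move right toward the goal).
greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

Swapping the dictionary `Q` for a network trained on (state, action, target) tuples is precisely the generalization step the abstract credits with faster convergence.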
Zehraoui, Farida. "Systèmes d'apprentissage connexionnistes et raisonnement à partir de cas pour la classification et le classement de séquence". Paris 13, 2004. http://www.theses.fr/2004PA132007.
Lu, Cheng-Ren. "Apprentissage incrémental par analogie : le système OGUST⁺". Paris 11, 1989. http://www.theses.fr/1989PA112393.
Pradel, Bruno. "Evaluation des systèmes de recommandation à partir d'historiques de données". Paris 6, 2013. http://www.theses.fr/2013PA066263.
Texto completo
This thesis presents various experimental protocols leading to a better offline estimation of errors in recommender systems. As a first contribution, results from a case study of a recommender system based on purchase data are presented. Recommending items is a complex task that has mainly been studied considering ratings data alone. In this study, we put the stress on predicting the purchase a customer will make rather than the rating he will assign to an item. While ratings data are not available in many industries where purchase data are widely used, very few studies have considered purchase data. In that setting, we compare the performances of various collaborative filtering models from the literature. We notably show that some changes in the training and testing phases, and the introduction of contextual information, lead to major changes in the relative performances of the algorithms. The following contributions focus on the study of ratings data. A second contribution presents our participation in the Challenge on Context-Aware Movie Recommendation. This challenge introduces two major changes to the standard rating-prediction protocol: models are evaluated considering rating metrics and tested on two specific periods of the year, Christmas and the Oscars. We provide personalized recommendations by modeling the short-term evolution of the popularity of movies. Finally, we study the impact of the observation process of ratings on ranking evaluation metrics. Users choose the items they want to rate and, as a result, ratings on items are not observed at random. First, some items receive far more ratings than others and, secondly, high ratings are more likely to be observed than poor ones because users mainly rate the items they like. We propose a formal analysis of these effects on evaluation metrics, and experiments on the Yahoo!Music dataset, which gathers standard and randomly collected ratings. We show that considering missing ratings as negative during the training phase leads to good performances on the TopK task, but these performances can be misleading, favoring methods that model the popularity of items more than the real tastes of users.
Le, Folgoc Loïc. "Apprentissage statistique pour la personnalisation de modèles cardiaques à partir de données d’imagerie". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4098/document.
Texto completo
This thesis focuses on the calibration of an electromechanical model of the heart from patient-specific, image-based data, and on the related task of extracting the cardiac motion from 4D images. Long-term perspectives for personalized computer simulation of the cardiac function include aid to the diagnosis, aid to the planning of therapy and prevention of risks. To this end, we explore tools and possibilities offered by statistical learning. To personalize cardiac mechanics, we introduce an efficient framework coupling machine learning and an original statistical representation of shape & motion based on 3D+t currents. The method relies on a reduced mapping between the space of mechanical parameters and the space of cardiac motion. The second focus of the thesis is on cardiac motion tracking, a key processing step in the calibration pipeline, with an emphasis on the quantification of uncertainty. We develop a generic sparse Bayesian model of image registration with three main contributions: an extended image similarity term, the automated tuning of registration parameters and uncertainty quantification. We propose an approximate inference scheme that is tractable on 4D clinical data. Finally, we wish to evaluate the quality of uncertainty estimates returned by the approximate inference scheme. We compare the predictions of the approximate scheme with those of an inference scheme developed on the grounds of reversible jump MCMC. We provide more insight into the theoretical properties of the sparse structured Bayesian model and into the empirical behaviour of both inference schemes.
Brigot, Guillaume. "Prédire la structure des forêts à partir d'images PolInSAR par apprentissage de descripteurs LIDAR". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS584/document.
Texto completo
The objective of this thesis is to predict the structural parameters of forests on a large scale using remote sensing images. The approach is to extend the accuracy of LIDAR full waveforms to a larger area covered by polarimetric and interferometric (PolInSAR) synthetic aperture radar images using machine learning methods. From the analysis of the geometric properties of the PolInSAR coherence shape, we propose a set of parameters that are likely to have a strong correlation with the LIDAR density profiles over forest lands. These features are used as input data for SVM techniques, neural networks, and random forests, in order to learn a set of forest descriptors deduced from LIDAR: the canopy height, the vertical profile type, and the canopy cover. The application of these techniques to airborne data over boreal forests in Sweden and Canada, and the evaluation of their accuracy, demonstrate the relevance of the method. This approach can soon be adapted for future satellite missions dedicated to forests: Biomass, Tandem-L and NISAR.
Gauthier, Luc-Aurélien. "Inférence de liens signés dans les réseaux sociaux, par apprentissage à partir d'interactions utilisateur". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066639/document.
Texto completo
In this thesis, we study the semantics of relations between users and, in particular, the antagonistic forces we naturally observe in various social relationships, such as hostility or suspicion. The study of these relationships raises many problems, both technical, because the mathematical arsenal is not really adapted to negative ties, and practical, owing to the difficulty of collecting such data (stating a negative relationship is perceived as intrusive and inappropriate by many users). That is why we focus on alternative solutions consisting in inferring these negative relationships from more widespread content. We use the common judgments about items that users share, which are the data used in recommender systems. We provide three contributions, described in three distinct chapters. In the first one, we discuss the case of agreements about items, which may not have the same semantics depending on whether the items involved are appreciated by the two users or not. We will see that disliking the same product does not imply similarity. In our second contribution, we then consider the distributions of user ratings and item ratings in order to measure whether agreements or disagreements may happen by chance, in particular to avoid the user and item biases observed in this type of data. Our third contribution consists in using these results to predict the sign of the links between users from only the positive ties and the common judgments about items, and thus without any negative social information.
Khiali, Lynda. "Fouille de données à partir de séries temporelles d’images satellites". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS046/document.
Texto completo
Nowadays, remotely sensed images constitute a rich source of information that can be leveraged to support several applications, including risk prevention, land use planning, land cover classification and many other tasks. In this thesis, Satellite Image Time Series (SITS) are analysed to depict the dynamics of natural and semi-natural habitats. The objective is to identify, organize and highlight the evolution patterns of these areas. We introduce an object-oriented method to analyse SITS that considers segmented satellite images. Firstly, we identify the evolution profiles of the objects in the time series. Then, we analyse these profiles using machine learning methods. To identify the evolution profiles, we explore all the objects to select a subset of objects (spatio-temporal entities/reference objects) to be tracked. The evolution of the selected spatio-temporal entities is described using evolution graphs. To analyse these evolution graphs, we introduce three contributions. The first contribution explores annual SITS: it analyses the evolution graphs using clustering algorithms to identify similar evolutions among the spatio-temporal entities. In the second contribution, we perform a multi-annual cross-site analysis: we consider several study areas described by multi-annual SITS and use clustering algorithms to identify intra- and inter-site similarities. In the third contribution, we introduce a semi-supervised method based on constrained clustering, and propose a method to select the constraints that guide the clustering and adapt the results to the user's needs. Our contributions were evaluated on several study areas. The experimental results make it possible to pinpoint relevant landscape evolutions in each study site. We also identify the evolutions common to the different sites. In addition, the constraint selection method proposed for the constrained clustering identifies relevant entities. Thus, the results obtained using unsupervised learning were improved and adapted to meet the user's needs.
Touati-Amar, Nassera. "Modes de rationalisation et processus d'apprentissage au sein des organisations : Réflexions à partir d’études de cas en milieu bancaire". ENSMP, 1996. http://www.theses.fr/1996ENMP0699.
Texto completo
This work deals with the theme of organizational learning. We propose to define organizational learning as a process of elaborating interaction rules, in an organizational situation, which induces individual learning. This vision, based notably upon Piaget's work, seems rich to us with regard to its potential implementation in enterprises, especially because it links three levels of analysis: the type of activity, the coordination rules and the dynamics of knowledge. By further investigating this vision through banking case studies, we have associated different kinds of risk management tools with different logics of the learning process (centralized or decentralized): the choice of an instrumentation logic depends on the nature of the activity. From this study emerged the concept of a learning tool (illustrated here by a reasoning-aid tool). This tool is fundamentally oriented towards the production of a decentralized learning process: structural properties and specific uses correspond to this reasoning support system. Similarly, the organization should take these learning aims into account, especially through the definition of missions, collective work procedures and incentive mechanisms. Carrying out this pattern of rationalization requires a network of "procedure" actors acting as vectors of learning. Finally, this pattern of rationalization (based upon a reasoning support tool) has been compared to other rationalization approaches which aim to promote learning dynamics.
Renaux, Pierre. "Extraction d'informations à partir de documents juridiques : application à la contrefaçon de marques". Caen, 2006. http://www.theses.fr/2006CAEN2019.
Texto completo
Our research focuses on the extraction and analysis of knowledge induced from legal corpus databases describing nominative trade-mark infringement. This discipline deals with all the constraints arising from the different domains of knowledge discovery from documents: the electronic document, databases, statistics, artificial intelligence and human-computer interaction. Meanwhile, the accuracy of these methods is closely linked to the quality of the data used. In our research framework, each decision is supervised by an author (the magistrate) and relies on a contextual writing environment, thus limiting the information extraction process. Here we are interested in the decisions which direct the document learning process. We observe their surroundings, find their strategic capacity and offer adapted solutions in order to determine a better document representation. We suggest an explorative and supervised approach for assessing data quality by finding properties which corrupt the quality of the knowledge. We have developed an interactive and collaborative platform for modelling all the processes leading to knowledge extraction, in order to efficiently integrate the expert's know-how and practices.
Voerman, Joris. "Classification automatique à partir d’un flux de documents". Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS025.
Texto completo
Administrative documents can be found everywhere today. They are numerous, diverse and can be of two types: physical and digital. The need to switch between these two forms has required the development of new solutions. After document digitization (mainly with a scanner), one of the first problems is to determine the type of the document, which simplifies all subsequent processing. Automatic classification is a complex process for which the state of the art offers multiple solutions; document classification, the imbalanced context and industrial constraints, however, heavily challenge these solutions. This thesis focuses on the automatic classification of document streams and searches for solutions to the three major problems introduced above. To this end, we first evaluate how existing methods adapt to the document stream context. In addition, this work evaluates state-of-the-art solutions against the contextual constraints, and possible combinations between them. Finally, we propose a new combination method that uses a cascade of systems to offer a gradual solution. The most effective solutions are, first, a multimodal neural network reinforced by an attention model, able to classify a great variety of documents, and, second, a cascade of three complementary networks: one network for text classification, one for image classification and one for under-represented classes. These two options give good results both in an ideal context and in an imbalanced context. In the first case, they challenge the state of the art; in the second, they show an improvement of +6% F0.5-measure over the state of the art.
Pinelli, Nicolas. "Développer des compétences : une approche didactique à partir de la phénoménologie de l'imprévu appliquée aux événements artistiques". Corte, 2010. https://tel.archives-ouvertes.fr/tel-00762527.
Texto completo
To develop skills, this didactic approach tries to find ways of passing from knowledge to competence. For that purpose, a deductive approach sets up a dialogue between six theoretical models (pedagogy, didactics, training, management, socio-constructivism and cognitivism). At the same time, an inductive approach builds an experimental model (behaviour, strategies and situational typology), supported by a situational diagnosis. This diagnosis evaluates and locates the unforeseen in the working activity, with situational parameters (individuals, facts, objects, places and moments). They are classified into invariants, variables and prodromes, according to criteria of existence, recurrence, concordance and causality. They also act as situational catalysts. The phenomenology of the unforeseen analyses the situations, lived by professionals and students, resulting from artistic events (circus, dance, music and television), and catches inference in the act. Three theoretical concepts (procedural, ergological and systemic) direct our reflection throughout this double approach. The results of this research rest on observations, interviews and a questionnaire. Ways of passage exist through statistical links of convergence, divergence (experimental model), relevance (theoretical models) and coherence (all the models). A didactic congruence completes this relation between available knowledge and prescribed competences. From this emerge a situational model and a didactics transferable to a training device and to the prevention of socio-professional risks.
Verstaevel, Nicolas. "Self-organization of robotic devices through demonstrations". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30060/document.
Texto completo
The AMAS (Adaptive Multi-Agent Systems) theory proposes to solve, by self-organization, complex problems for which there is no known algorithmic solution. The self-organizing behaviour of the cooperative agents enables the system to adapt to a dynamic environment so as to maintain itself in a functionally adequate state. In this thesis, we apply the theory to the problem of control in ambient systems, and more particularly to service robotics. Service robotics is increasingly present in ambient environments; we speak of ambient robotics. Ambient systems have challenging characteristics, such as openness and heterogeneity, which make the task of control particularly complex. This complexity is increased if we take into account the specific, changing and often contradictory needs of users. This thesis proposes to use the principle of self-organization to design a multi-agent system able to learn in real time to control a robotic device from demonstrations made by a tutor; we then speak of learning from demonstrations. By observing the activity of users, and learning the context in which they act, the system learns a control policy that satisfies them. Firstly, we propose a new paradigm for designing robotic systems under the name Extreme Sensitive Robotics. Its main proposal is to distribute control among the different functionalities which compose a system, and to give each functionality the capacity to adapt itself to its environment. To evaluate the benefits of this paradigm, we designed ALEX (Adaptive Learner by Experiments), an adaptive multi-agent system which learns to control a robotic device from demonstrations. The AMAS approach enables the design of software with emergent functionalities: the solution to a problem emerges from the cooperative interactions between a set of autonomous agents, each agent having only a partial perception of its environment. Applying this approach involves isolating the different agents involved in the control problem and describing their local behaviour. We then identify a set of non-cooperative situations liable to disturb their normal behaviour, and propose a set of cooperation mechanisms to handle them. The different experiments have shown the capacity of our system to learn in real time from observation of the user's activity, and have highlighted the benefits, limitations and perspectives offered by our approach to the problem of control in ambient systems.
Turenne, Nicolas. "Apprentissage statistique pour l'extraction de concepts à partir de textes : application au filtrage d'informations textuelles". Phd thesis, Université Louis Pasteur - Strasbourg I, 2000. http://tel.archives-ouvertes.fr/tel-00006210.
Texto completo
Pomorski, Denis. "Apprentissage automatique symbolique/numérique : construction et évaluation d'un ensemble de règles à partir des données". Lille 1, 1991. http://www.theses.fr/1991LIL10117.
Texto completoBuchet, Samuel. "Vérification formelle et apprentissage logique pour la modélisation qualitative à partir de données single-cell". Thesis, Ecole centrale de Nantes, 2022. http://www.theses.fr/2022ECDN0011.
Texto completo
The understanding of the cellular mechanisms occurring inside human beings usually depends on the study of gene expression. However, genes are involved in complex regulatory processes and their measurement is difficult to perform. In this context, the qualitative modeling of gene regulatory networks aims to establish the function of each gene from the discrete modeling of a dynamical interaction network. In this thesis, our goal is to implement this modeling approach from single-cell sequencing data. These data prove to be interesting for qualitative modeling since they bring high precision and can be interpreted in a dynamical way. Thus, we develop a method for the inference of qualitative models based on the automatic learning of logic programs. This method is applied to a single-cell dataset, and we propose several approaches to interpret the resulting models by comparing them with existing knowledge.
Turenne, Nicolas. "Apprentissage statistique pour l'extraction de concepts a partir de textes. Application au filtrage d'informations textuelles". Université Louis Pasteur (Strasbourg) (1971-2008), 2000. http://www.theses.fr/2000STR13159.
Texto completo
Guillouet, Brendan. "Apprentissage statistique : application au trafic routier à partir de données structurées et aux données massives". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30205/document.
Texto completoThis thesis focuses on machine learning techniques for application to big data. We first consider trajectories defined as sequences of geolocalized data. A hierarchical clustering is then applied on a new distance between trajectories (Symmetrized Segment-Path Distance) producing groups of trajectories which are then modeled with Gaussian mixture in order to describe individual movements. This modeling can be used in a generic way in order to resolve the following problems for road traffic : final destination, trip time or next location predictions. These examples show that our model can be applied to different traffic environments and that, once learned, can be applied to trajectories whose spatial and temporal characteristics are different. We also produce comparisons between different technologies which enable the application of machine learning methods on massive volumes of data
Bouguelia, Mohamed-Rafik. "Classification et apprentissage actif à partir d'un flux de données évolutif en présence d'étiquetage incertain". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0034/document.
Texto completoThis thesis focuses on machine learning for data classification. To reduce the labelling cost, active learning allows to query the class label of only some important instances from a human labeller.We propose a new uncertainty measure that characterizes the importance of data and improves the performance of active learning compared to the existing uncertainty measures. This measure determines the smallest instance weight to associate with new data, so that the classifier changes its prediction concerning this data. We then consider a setting where the data arrives continuously from an infinite length stream. We propose an adaptive uncertainty threshold that is suitable for active learning in the streaming setting and achieves a compromise between the number of classification errors and the number of required labels. The existing stream-based active learning methods are initialized with some labelled instances that cover all possible classes. However, in many applications, the evolving nature of the stream implies that new classes can appear at any time. We propose an effective method of active detection of novel classes in a multi-class data stream. This method incrementally maintains a feature space area which is covered by the known classes, and detects those instances that are self-similar and external to that area as novel classes. Finally, it is often difficult to get a completely reliable labelling because the human labeller is subject to labelling errors that reduce the performance of the learned classifier. This problem was solved by introducing a measure that reflects the degree of disagreement between the manually given class and the predicted class, and a new informativeness measure that expresses the necessity for a mislabelled instance to be re-labeled by an alternative labeller
Ramadier, Lionel. "Indexation et apprentissage de termes et de relations à partir de comptes rendus de radiologie". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT298/document.
Texto completo
In the medical field, the computerization of the health professions and the development of the personal medical record (DMP) have resulted in a rapid increase in the volume of digital medical information. The need to convert and manipulate all this information in a structured form is a major challenge, and the starting point for the development of appropriate tools, for which methods from natural language processing (NLP) seem well suited. The work of this thesis falls within the field of medical document analysis and addresses the issue of representing biomedical information (especially in the radiology area) and accessing it. We propose to build a knowledge base dedicated to radiology within a general knowledge base (the lexical-semantic network JeuxDeMots). Through a document analysis, we show the interest of the hypothesis of no separation between different types of knowledge: using general knowledge, in addition to specialized knowledge, significantly improves the analysis of medical documents. At the level of the lexical-semantic network, the manual and automated addition of meta-information on annotations (frequency, relevance, etc.) is particularly useful. This network combines weights and annotations on typed relationships between terms and concepts, as well as an inference mechanism which aims to improve the quality and coverage of the network. We describe how, from the semantic information in the network, it is possible to enrich the raw index built for each record so as to improve information retrieval. We then present a method for extracting semantic relationships between terms or concepts. This extraction is performed using lexical patterns to which we add semantic constraints. The results show that the hypothesis of no separation between the types of knowledge improves the relevance of indexing: the index enrichment improves retrieval, while the semantic constraints improve the accuracy of relation extraction.
Pouget, Maël. "Synthèse incrémentale de la parole à partir du texte". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT008/document.
Texto completo
In this thesis, we investigate a new paradigm for text-to-speech synthesis (TTS) that allows synthetic speech to be delivered while the text is being entered: incremental text-to-speech synthesis. Contrary to conventional TTS systems, which trigger synthesis only after a whole sentence has been typed, incremental TTS devices deliver speech in a "piece-meal" fashion (i.e. word after word) while aiming to preserve the speech quality achievable by conventional TTS systems. By reducing the waiting time between two speech outputs while maintaining good speech quality, such a system should improve the quality of interaction for speech-impaired people using TTS devices to express themselves. The main challenge raised by incremental TTS is the synthesis of a word, or of a group of words, with the same segmental and supra-segmental quality as conventional TTS, but without knowing the end of the sentence to be synthesized. In this thesis, we propose to adapt the two main modules (natural language processing and speech synthesis) of a TTS system to the incremental paradigm. For the natural language processing module, we focus on part-of-speech tagging, a key step for phonetization and prosody generation. We propose an "adaptive latency algorithm" for part-of-speech tagging that estimates whether the part-of-speech inferred for a given word (based on the n-gram approach) is likely to change when one or several words are added. If the part-of-speech is considered likely to change, the synthesis of the word is delayed; otherwise, the word may be synthesized without risking any alteration of the segmental or supra-segmental quality of the synthetic speech. The proposed method is based on a set of binary decision trees trained over a large corpus of text. We achieve 92.5% precision on the incremental part-of-speech tagging task, with a mean delay of 1.4 words. For the speech synthesis module, in the context of HMM-based speech synthesis, we propose a training method that takes into account the uncertainty about contextual features that cannot be computed at synthesis time (namely, contextual features related to the following words). We compare the proposed method to other strategies (baselines) described in the literature; objective and subjective evaluations show that it outperforms the baselines for French. Finally, we describe a prototype developed during this thesis implementing the proposed solutions for incremental part-of-speech tagging and speech synthesis. A perceptive evaluation of the word grouping derived from the adaptive latency algorithm, as well as of the segmental quality of the synthetic speech, tends to show that our system reaches a good trade-off between reactivity (minimizing the waiting time between the input and the synthesis of a word) and speech quality (at both segmental and supra-segmental levels).
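The adaptive-latency idea described in this abstract (delay a word whenever a plausible continuation could change its part-of-speech tag) can be sketched with a toy lexicon. The tag scores, the single context rule and the function names below are all invented for illustration; the thesis itself uses binary decision trees trained on a large corpus.

```python
# toy per-word tag scores (French-like example: "la porte est" = "the door is")
TAGS = {
    "la": {"DET": 0.9, "PRON": 0.1},
    "porte": {"VERB": 0.6, "NOUN": 0.4},  # ambiguous until the next word arrives
    "est": {"VERB": 1.0},
}

def best_tag(word, right_context=None):
    """Pick the most likely tag, optionally revised by the next word."""
    scores = dict(TAGS[word])
    if right_context == "est":  # a following "est" favours the noun reading
        scores["NOUN"] = scores.get("NOUN", 0.0) + 0.5
    return max(scores, key=scores.get)

def emit_or_wait(word, possible_next):
    """Incremental policy: release the word for synthesis only if no plausible
    continuation would change its tag; otherwise delay it."""
    base = best_tag(word)
    if all(best_tag(word, nxt) == base for nxt in possible_next):
        return "emit"
    return "wait"

print(emit_or_wait("la", ["porte", "est"]))  # "emit": "la" stays DET in every continuation
print(emit_or_wait("porte", ["est", "la"]))  # "wait": a following "est" flips VERB to NOUN
```

The real algorithm replaces this hand-written lookup with decision trees predicting, from the n-gram context, whether the current tag is stable enough to synthesize immediately.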
Nicolini, Claire. "Apprentissage et raisonnement à partir de cas pour l'aide au diagnostic de pannes dans une installation nucléaire". Dijon, 1998. http://www.theses.fr/1998DIJOS030.
Texto completo
Bichindaritz, Isabelle. "Apprentissage de concepts dans une mémoire dynamique : raisonnement à partir de cas adaptable à la tâche cognitive". Paris 5, 1994. http://www.theses.fr/1994PA05S004.
Texto completo