Dissertations on the topic "Apprentissage de modèles d'actions"
See the top-50 dissertations (master's or doctoral theses) on the research topic "Apprentissage de modèles d'actions".
Lesner, Boris. "Planification et apprentissage par renforcement avec modèles d'actions compacts". Caen, 2011. http://www.theses.fr/2011CAEN2074.
We study Markov Decision Processes represented with Probabilistic STRIPS action models. A first part of our work addresses solving those processes in a compact way. To that end we propose two algorithms. The first, based on propositional formula manipulation, obtains approximate solutions in tractable propositional fragments such as Horn and 2-CNF. The second algorithm solves exactly and efficiently problems represented in PPDDL, using a new notion of extended value functions. The second part addresses learning such action models. We propose different approaches to solve the problem of ambiguous observations occurring during learning. First, a heuristic method based on Linear Programming gives good results in practice, yet without theoretical guarantees. We then describe a learning algorithm in the "Know What It Knows" framework. This approach gives strong theoretical guarantees on the quality of the learned models as well as on the sample complexity. These two approaches are then put into a Reinforcement Learning setting to allow an empirical evaluation of their respective performances.
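As a point of reference for the solving part, the classical tabular value-iteration scheme that compact approaches improve upon can be sketched as follows (an illustrative baseline over an explicit MDP, not the propositional algorithms of the thesis):

```python
# Tabular value iteration on an explicit MDP -- an illustrative baseline,
# not the compact propositional algorithms developed in the thesis.
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """transition[s][a] -> list of (probability, next_state) pairs."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality backup: best expected one-step return.
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for p, s2 in transition[s][a])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

Compact approaches avoid enumerating `states` explicitly, which is exactly where the propositional representations of the thesis come in.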
Rodrigues, Christophe. "Apprentissage incrémental des modèles d'action relationnels". Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_rodrigues.pdf.
In this thesis, we study machine learning for action. Our work covers both reinforcement learning (RL) and inductive logic programming (ILP). We focus on learning action models. An action model describes the preconditions and effects of the possible actions in an environment. It enables anticipating the consequences of the agent's actions and may also be used by a planner. We specifically work on relational representations of environments, which describe states and actions by means of objects and relations between the various objects that compose them. We present the IRALe method, which learns relational action models incrementally. First, we presume that states are fully observable and the consequences of actions are deterministic, and we provide a proof of convergence for this method. Then, we develop an active exploration approach which focuses the agent's experience on actions that are supposedly not covered by the model. Finally, we generalize the approach by introducing a noisy perception of the environment in order to make our learning framework more realistic. We empirically illustrate each approach's importance on various planning problems. The results obtained show that the number of interactions necessary with the environment is very small compared to the size of the considered state spaces. Moreover, active learning significantly improves these results.
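The preconditions and effects mentioned above can be made concrete with a minimal propositional STRIPS-style sketch (the thesis itself works with richer relational representations; the fact and action names below are illustrative):

```python
# Minimal propositional STRIPS-style action model: an action applies only
# when its preconditions hold in the state, then adds and deletes facts.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # facts that must hold
    add_effects: frozenset     # facts made true
    delete_effects: frozenset  # facts made false

def applicable(action, state):
    return action.preconditions <= state

def apply_action(action, state):
    if not applicable(action, state):
        raise ValueError(f"{action.name}: preconditions not satisfied")
    return (state - action.delete_effects) | action.add_effects
```

Learning an action model then amounts to recovering these three fact sets for every action from observed state transitions.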
Gaidon, Adrien. "Modèles structurés pour la reconnaissance d'actions dans des vidéos réalistes". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00780679.
Grand, Maxence. "Apprentissage de Modèle d'Actions basé sur l'Induction Grammaticale Régulière pour la Planification en Intelligence Artificielle". Electronic Thesis or Diss., Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALM044.
The field of artificial intelligence aims to design and build autonomous agents able to perceive, learn and act without any human intervention to perform complex tasks. To perform complex tasks, the autonomous agent must plan the best possible actions and execute them. To do this, the autonomous agent needs an action model: a semantic representation of the actions it can execute. In an action model, an action is represented using (1) a precondition, the set of conditions that must be satisfied for the action to be executed, and (2) the effects, the set of properties of the world that will be altered by the execution of the action. STRIPS planning is a classical method to design these action models. However, STRIPS action models are generally too restrictive to be used in real-world applications. There are other forms of action models: temporal action models, which can represent actions executed concurrently; HTN action models, which represent actions as tasks and subtasks; etc. These models are less restrictive, but the less restrictive the models are, the more difficult they are to design. In this thesis, we are interested in approaches facilitating the acquisition of these action models through machine learning techniques. We present AMLSI (Action Model Learning with State machine Interaction), an approach for action model learning based on Regular Grammatical Induction. First, we show that the AMLSI approach learns (STRIPS) action models, and we demonstrate the properties proving its efficiency: robustness, convergence, the need for little training data, and the quality of the learned models. Second, we propose two extensions for temporal action model learning and HTN action model learning.
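Regular grammatical induction commonly starts from a prefix tree acceptor built over observed sequences, which is then generalized by merging compatible states. A sketch of that first step (a generic illustration of the technique, not AMLSI's actual implementation):

```python
# Build a prefix tree acceptor (PTA) from observed action sequences: the
# usual starting point of regular grammatical induction, later generalized
# by merging compatible states.
def build_pta(sequences):
    transitions = {}   # (state, symbol) -> next state
    accepting = set()
    next_state = 1     # state 0 is the root
    for seq in sequences:
        state = 0
        for symbol in seq:
            key = (state, symbol)
            if key not in transitions:
                transitions[key] = next_state
                next_state += 1
            state = transitions[key]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, seq):
    state = 0
    for symbol in seq:
        key = (state, symbol)
        if key not in transitions:
            return False
        state = transitions[key]
    return state in accepting
```

The PTA accepts exactly the observed sequences; state merging is what lets the induced automaton generalize to unseen action sequences.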
Davesne, Frédéric. "Etude de l'émergence de facultés d'apprentissage fiables et prédictibles d'actions réflexes, à partir de modèles paramétriques soumis à des contraintes internes". Phd thesis, Université d'Evry-Val d'Essonne, 2002. http://tel.archives-ouvertes.fr/tel-00375023.
First, we give arguments defending the idea that classical learning methods cannot, intrinsically, meet our requirements of reliability and predictability. We believe that the key to the problem lies in the way the communication between the learning system and its environment is modeled. We illustrate this point with a reinforcement learning example.
We present a formalized approach in which communication is an interaction, in the physical sense of the term. The system is subject to two forces: its reaction is due both to the action of the environment and to the maintenance of internal constraints. Learning becomes an emergent property of a sequence of reactions of the system, in favorable cases of interaction. The set of possible evolutions of the system is deduced by computation, based solely (with no other parameter) on knowledge of the interaction.
We apply our approach to two interconnected subsystems whose overall objective is the learning of reflex actions. We prove that the first possesses, as an emergent property, reliable and predictable reinforcement learning and latent learning abilities. The second, which is only sketched, transforms a signal into perceptual information. It works by selecting, from a memory, hypotheses on the evolution of the signal over time; constraints internal to the memory determine the valid sets of perceptual information. We show, in a simple case, that these constraints lead to an equivalent of Shannon's sampling theorem.
Dragoni, Laurent. "Tri de potentiels d'action sur des données neurophysiologiques massives : stratégie d’ensemble actif par fenêtre glissante pour l’estimation de modèles convolutionnels en grande dimension". Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4016.
In the nervous system, cells called neurons are specialized in the communication of information. Through the generation and propagation of electrical currents named action potentials, neurons are able to transmit information in the body. Given the importance of neurons, a wide range of methods have been proposed for studying those cells in order to better understand the functioning of the nervous system. In this thesis, we focus on the analysis of signals recorded by electrodes, and more specifically tetrodes and multi-electrode arrays (MEA). Since those devices usually record the activity of a set of neurons, the recorded signals are often a mixture of the activity of several neurons. In order to gain more knowledge from this type of data, a crucial pre-processing step called spike sorting is required to separate the activity of each neuron. Nowadays, the general procedure for spike sorting consists of three steps: thresholding, feature extraction and clustering. Unfortunately this methodology requires a large number of manual operations. Moreover, it becomes even more difficult when treating massive volumes of data, especially MEA recordings, which also tend to feature more neuronal synchronizations. In this thesis, we present a spike sorting strategy which allows the analysis of large volumes of data and requires few manual operations. This strategy makes use of a convolutional model which aims at breaking down the recorded signals as temporal convolutions between two factors: neuron activations and action potential shapes. The estimation of these two factors is usually treated through alternating optimization. Being the most difficult task, we focus here only on the estimation of the activations, assuming that the action potential shapes are known. Estimating the activations is traditionally referred to as convolutional sparse coding.
The well-known Lasso estimator features interesting mathematical properties for the resolution of such problems. However its computation remains challenging on high-dimensional problems. We propose an algorithm based on the working set strategy in order to compute the Lasso efficiently. This algorithm takes advantage of the particular structure of the problem, derived from biological properties, by using temporal sliding windows, allowing it to scale to high dimensions. Furthermore, we adapt theoretical results about the Lasso to show that, under reasonable assumptions, our estimator recovers the support of the true activation vector with high probability. We also propose models for both the spatial distribution and activation times of the neurons which allow us to quantify the size of our problem and deduce the theoretical complexity of our algorithm. In particular, we obtain a quasi-linear complexity with respect to the size of the recorded signal. Finally we present numerical results illustrating both the theoretical results and the performance of our approach.
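The Lasso problem solved here is minimize 0.5*||y - D x||^2 + lambda*||x||_1. A plain proximal-gradient (ISTA) solver, far simpler than the sliding-window working-set algorithm of the thesis, can be sketched as:

```python
# Plain ISTA for the Lasso: a generic baseline, not the sliding-window
# working-set algorithm developed in the thesis.
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(D, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - D @ x||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The working-set idea exploits the fact that spikes are temporally localized: only the columns of `D` active inside a sliding time window need to enter the optimization at once.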
Baccouche, Moez. "Apprentissage neuronal de caractéristiques spatio-temporelles pour la classification automatique de séquences vidéo". Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00932662.
Oneata, Dan. "Modèles robustes et efficaces pour la reconnaissance d'action et leur localisation". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM019/document.
Video interpretation and understanding is one of the long-term research goals in computer vision. Realistic videos such as movies present a variety of challenging machine learning problems, such as action classification/action retrieval, human tracking, human/object interaction classification, etc. Recently, robust visual descriptors for video classification have been developed and have shown that it is possible to learn visual classifiers in realistic, difficult settings. However, in order to deploy visual recognition systems at large scale in practice, it becomes important to address the scalability of the techniques. The main goal of this thesis is to develop scalable methods for video content analysis (e.g., for ranking or classification).
Iriart, Alejandro. "Mesures d’insertion sociale destinées aux détenus québécois et récidive criminelle : une approche par l'apprentissage automatique". Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/66717.
In this master's thesis, we tried to determine the real influence of social rehabilitation programs on the risk of recidivism. To do this, we used a machine learning algorithm to analyze a database provided by the Quebec Ministry of Public Security (MSP). In this database, we are able to follow the numerous incarcerations of 97,140 prisoners from 2006 to 2018. Our analysis focuses only on inmates who served time in the Quebec City prison. The approach we used, named Generalized Random Forests (GRF), was developed by Athey et al. (2019). Our main analysis focuses not only on the characteristics of the prisoners, but also on the results they obtained when they were subjected to the LS/CMI, an extensive questionnaire aimed at determining the criminogenic needs and the risk level of the inmates. We also determined which variables have the most influence on predicting the treatment effect by using a function of the same algorithm that calculates the relative importance of each of the variables to make a prediction. By comparing participants and non-participants, we were able to demonstrate that participating in a program reduces the risk of recidivism by approximately 6.9% for a two-year trial period. Participating in a program always significantly reduces recidivism, no matter the definition of recidivism used. We also determined that, in terms of personal characteristics, age, the nature of the offence and the number of years of study are the main predictors of the individual causal effects. As for the LS/CMI, only a few sections of the questionnaire have real predictive power while others, like the one about leisure, do not. In light of our results, we believe that a more efficient instrument capable of predicting recidivism can be created by focusing on the newly identified variables with the greatest predictive power.
A better instrument will make it possible to provide better counselling to prisoners on the programs they should follow, and thus increase their chances of being fully rehabilitated.
Arora, Ankuj. "Apprentissage du modèle d'action pour une interaction socio-communicative des hommes-robots". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM081/document.
Driven by the objective of rendering robots socio-communicative, there has been a heightened interest in researching techniques to endow robots with social skills and "commonsense" to render them acceptable. This social intelligence or "commonsense" of the robot is what eventually determines its social acceptability in the long run. Commonsense, however, is not that common. Robots can thus only learn to be acceptable with experience. However, teaching a humanoid the subtleties of a social interaction is not evident. Even a standard dialogue exchange integrates the widest possible panel of signs which intervene in the communication and are difficult to codify (synchronization between the expression of the body, the face, the tone of the voice, etc.). In such a scenario, learning the behavioral model of the robot is a promising approach. This learning can be performed with the help of AI techniques. This study tries to solve the problem of learning robot behavioral models in the Automated Planning and Scheduling (APS) paradigm of AI, in which intelligent agents require an action model (blueprints of actions whose interleaved executions effectuate transitions of the system state) in order to plan and solve real-world problems. During the course of this thesis, we introduce two new learning systems which facilitate the learning of action models, and extend the scope of these systems to learn robot behavioral models. These techniques can be classified as non-optimal and optimal. Non-optimal techniques are more classical in the domain, have been worked on for years, and are symbolic in nature. However, they have their share of quirks, resulting in a less-than-desired learning rate. The optimal techniques pivot on recent advances in deep learning, in particular the Long Short-Term Memory (LSTM) family of recurrent neural networks. These techniques are more cutting-edge and produce higher learning rates as well. This study brings into the limelight these two techniques, which are tested on AI benchmarks to evaluate their prowess. They are then applied to HRI traces to estimate the quality of the learnt robot behavioral model. This serves the long-term objective of introducing behavioral autonomy in robots, such that they can communicate autonomously with humans without the need of "wizard" intervention.
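The LSTM units mentioned above propagate information through input, forget and output gates. A single-step forward pass in plain numpy (a generic illustration of the mechanism, not the thesis implementation; the stacked-gate layout is one common convention):

```python
# One forward step of an LSTM cell, gates stacked as input|forget|output|candidate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: (D,) input; h_prev, c_prev: (H,) previous hidden and cell states.
    W: (4H, D), U: (4H, H), b: (4H,) shared parameter blocks."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])       # input gate: how much new content to write
    f = sigmoid(z[H:2*H])     # forget gate: how much old cell state to keep
    o = sigmoid(z[2*H:3*H])   # output gate: how much cell state to expose
    g = np.tanh(z[3*H:4*H])   # candidate cell content
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c
```

Sequences of actions are fed one symbol per step; the hidden state `h` carries the context used to predict the next action.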
Sandel, Olivier. "Modèle d'Interface Intelligente pour Terminaux de Communication". Phd thesis, Université Louis Pasteur - Strasbourg I, 2002. http://tel.archives-ouvertes.fr/tel-00453013.
Klaser, Alexander. "Apprentissage pour la reconnaissance d'actions humaines en vidéo". Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM039.
This dissertation targets the recognition of human actions in realistic video data, such as movies. To this end, we develop state-of-the-art feature extraction algorithms that robustly encode video information for both action classification and action localization. In a first part, we study bag-of-features approaches for action classification. Recent approaches that use bag-of-features as representation have shown excellent results in the case of realistic video data. We therefore conduct an extensive comparison of existing methods for local feature detection and description. We then propose two new approaches to describe local features in videos. The first method extends the concept of histograms over gradient orientations to the spatio-temporal domain. The second method describes trajectories of local interest points detected spatially. Both descriptors are evaluated in a bag-of-features setup and show an improvement over the state of the art for action classification. In a second part, we investigate how human detection can help action recognition. Firstly, we develop an approach that combines human detection with a bag-of-features model. The performance is evaluated for action classification with varying resolutions of spatial layout information. Next, we explore the spatio-temporal localization of human actions in Hollywood movies. We extend a human tracking approach to work robustly on realistic video data. Furthermore, we develop an action representation that is adapted to human tracks. Our experiments suggest that action localization benefits significantly from human detection. In addition, our system shows a large improvement over current state-of-the-art approaches.
Klaser, Alexander. "Apprentissage pour la reconnaissance d'actions humaines en vidéo". Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00514814.
This thesis addresses the recognition of human actions in realistic video data, such as movies. To this end, we develop visual feature extraction algorithms for both action classification and action localization.
In a first part, we study bag-of-words approaches for action classification. For realistic videos, recent work using the bag-of-words model for action representation has shown promising results. We therefore conduct a thorough comparison of existing methods for local feature detection and description. We then propose two new approaches for describing local features in video. The first method extends the concept of histograms over gradient orientations to the spatio-temporal domain. The second method is based on trajectories of spatially detected interest points. Both descriptors are evaluated within a bag-of-words representation and show an improvement over the state of the art for action classification.
In a second part, we examine how person detection can contribute to action recognition. First, we develop an approach that combines person detection with a bag-of-words representation. Its performance is evaluated for action classification at several levels of spatial detail. Next, we explore the spatio-temporal localization of human actions in movies. We extend a person-tracking approach to realistic videos. In addition, we develop an action representation adapted to person detections. Our experiments suggest that person detection significantly improves action localization. Moreover, our system shows a large improvement over the current state of the art.
Deramgozin, Mohammadmahdi. "Développement de modèles de reconnaissance des expressions faciales à base d’apprentissage profond pour les applications embarquées". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0286.
The field of Facial Emotion Recognition (FER) is pivotal in advancing human-machine interactions and finds essential applications in healthcare for conditions like depression and anxiety. Leveraging Convolutional Neural Networks (CNNs), this thesis presents a progression of models aimed at optimizing emotion detection and interpretation. The initial model is resource-frugal but competes favorably with state-of-the-art solutions, making it a strong candidate for embedded systems constrained in computational and memory resources. To capture the complexity and ambiguity of human emotions, the research work presented in this thesis enhances this CNN-based foundational model by incorporating facial Action Units (AUs). This approach not only refines emotion detection but also provides interpretability by identifying the specific AUs tied to each emotion. Further sophistication is achieved by introducing neural attention mechanisms, both spatial and channel-based, improving the model's focus on salient facial features. This makes the CNN-based model well adapted to real-world scenarios, such as partially obscured or subtle facial expressions. Based on the previous results, we finally propose an optimized, yet computationally efficient, CNN model that is ideal for resource-limited environments like embedded systems. While it provides a robust solution for FER, this research also identifies perspectives for future work, such as real-time applications and advanced techniques for model interpretability.
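Channel-based attention of the squeeze-and-excitation kind reweights feature maps with a learned per-channel gate. A numpy sketch of the forward pass (an illustration of the general mechanism with hypothetical weight shapes, not the thesis model):

```python
# Squeeze-and-excitation style channel attention: pool each channel to a
# scalar, pass through a small two-layer gate, and rescale the channels.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feats, W1, W2):
    """feats: (C, H, W) feature maps; W1: (C//r, C) and W2: (C, C//r)
    are the reduction/expansion weights of the gate (r = reduction ratio)."""
    squeeze = feats.mean(axis=(1, 2))                    # (C,) global average pool
    gate = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))   # (C,) values in (0, 1)
    return feats * gate[:, None, None]                   # reweight each channel
```

Spatial attention follows the same pattern but pools over channels instead, producing an (H, W) gate that highlights salient facial regions.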
Baillie, Jean-Christophe. "Apprentissage et reconnaissance qualitative d'actions dans des séquences vidéo". Paris 6, 2001. http://www.theses.fr/2001PA066533.
Phan, Thi Hai Hong. "Reconnaissance d'actions humaines dans des vidéos avec l'apprentissage automatique". Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1038.
In recent years, human action recognition (HAR) has attracted considerable research attention thanks to its various applications such as intelligent surveillance systems, video indexing, human activity analysis, human-computer interaction and so on. The typical issues researchers face include the complexity of human motions, spatial and temporal variations, cluttering, occlusion and changes in lighting conditions. This thesis focuses on automatically recognizing ongoing human actions in a given video. We address this research problem using both shallow learning and deep learning approaches.
First, we began the research work with traditional shallow learning approaches based on hand-crafted features by introducing a novel descriptor named Motion of Oriented Magnitudes Patterns (MOMP). We then incorporated this discriminative descriptor into simple yet powerful representation techniques such as Bag of Visual Words, Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vector to better represent actions. PCA (Principal Component Analysis) and feature selection (statistical dependency, mutual information) are also applied to find the best subset of features in order to improve performance and decrease the computational expense. The proposed method obtained state-of-the-art results on several common benchmarks.
Recent deep learning approaches require intensive computation and large memory usage. They are therefore difficult to use and deploy on systems with limited resources. In the second part of this thesis, we present a novel efficient algorithm to compress Convolutional Neural Network models in order to decrease both the computational cost and the run-time memory footprint. We measure the redundancy of parameters based on their relationships using information-theoretic criteria, and we then prune the less important ones. The proposed method significantly reduces the model sizes of different networks such as AlexNet and ResNet by up to 70% without performance loss on the large-scale image classification task.
The traditional approach with the proposed descriptor achieved great performance for human action recognition, but only on small datasets. In order to improve performance on large-scale datasets, in the last part of this thesis we exploit deep learning techniques to classify actions. We introduce the concept of the MOMP Image as an input layer of CNNs and incorporate MOMP images into deep neural networks. We then apply our network compression algorithm to accelerate and improve the performance of the system. The proposed method reduces the model size, decreases over-fitting, and thus increases the overall performance of the CNN on large-scale action datasets. Throughout the thesis, we have shown that our algorithms obtain good performance in comparison to the state of the art on challenging action datasets (Weizmann, KTH, UCF Sports, UCF-101 and HMDB51) with low resource requirements.
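Network pruning ranks parameters or filters by an importance score and removes the weakest. A magnitude-based sketch of filter pruning (note: the thesis scores redundancy with information-theoretic criteria, not the simple L1 norm used here):

```python
# Filter pruning for one convolutional layer: score each filter, keep only
# the strongest fraction. L1-magnitude scoring is a common simple baseline;
# the thesis uses information-theoretic redundancy criteria instead.
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """weights: (F, ...) array, one filter per leading index.
    Returns the kept filters and their (sorted) original indices."""
    scores = np.abs(weights.reshape(weights.shape[0], -1)).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep], keep
```

In a full pipeline, the input channels of the following layer are sliced with the same `keep` indices, and the network is fine-tuned to recover accuracy.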
Dekdouk, Abdelkader. "Modèles algébriques pour le parallélisme vrai et le raffinement d'actions". Nancy 1, 1997. http://www.theses.fr/1997NAN10188.
This work fits into the process algebra framework. Its goal is twofold: firstly, to define algebraic models for true concurrency, and secondly, to extend these models with the action refinement concept. We begin by defining two operational models of true concurrency for an extension of an ACP-like language. True concurrency in the first model rests on the causality principle and assumes that action occurrences are instantaneous, while the second is based on the ST idea, assuming that action occurrences are durable. Then we establish for each operational model its corresponding algebraic model, for which it is proved correct and complete. These models define process algebras that provide formalisms to express a true-concurrent behaviour explicitly, in addition to supporting algebraic verification. The second step of this work is the semantic definition of an action refinement operator within both the causal and ST models. The action refinement operator makes it possible to relate specifications at different levels of abstraction by implementing an abstract action with a concrete activity. It hence introduces the notion of vertical modularity, which is very relevant for the design of action systems. We finalize this work by enriching both true-concurrent models, including action refinement, with a mechanism of abstraction with respect to unobservable actions, following the abstraction principles stated by the observational equivalence of Milner and the branching equivalence of Van Glabbeek and Weijland. As far as we know, this mechanism constitutes a crucial tool for the verification of reactive systems.
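On linear traces, action refinement amounts to substituting an abstract action by the concrete activity implementing it. This is a drastic simplification of the algebraic, true-concurrency semantics defined in the thesis, but it conveys the idea of vertical modularity:

```python
# Action refinement on linear traces: replace an abstract action by the
# sequence of concrete actions implementing it. A drastic simplification --
# the thesis defines refinement on causal and ST true-concurrency models.
def refine_trace(trace, abstract_action, implementation):
    refined = []
    for action in trace:
        if action == abstract_action:
            refined.extend(implementation)
        else:
            refined.append(action)
    return refined
```

The subtlety the thesis addresses is precisely what this trace view ignores: when actions have duration or run concurrently, refining one action can interleave with other activity, so substitution on traces is no longer sound.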
Bubeck, Sébastien. "JEUX DE BANDITS ET FONDATIONS DU CLUSTERING". Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2010. http://tel.archives-ouvertes.fr/tel-00845565.
Thevenin, Dominique. "Anomalies des marchés d'actions : le cas des bulles spéculatives". Grenoble 2, 1998. http://www.theses.fr/1998GRE21046.
The object of this research is to test whether speculative bubbles exist on stock markets, and to supply a procedure for testing for bubbles. Chapters 1 to 4 retrace the evolution of financial theories and the attempts to broaden the concept of rational bubbles using assumptions about fundamental value. In chapters 5 to 7, the most commonly used econometric tests are analysed: volatility tests, cointegration tests, and regressions. The tests are extended to the largest sample in time available on the American stock index, and the conclusions are not modified: it is impossible to reject bubbles on the American stock market. In chapter 8, we introduce floating discount rates in the tests, but this element does not change the results. In chapter 9, the proposed procedure is applied to the French stock market index. The results are different for the same period, 1871-1997: it seems that bubbles do not affect French stocks.
Bondu, Alexis. "Apprentissage actif par modèles locaux". Phd thesis, Université d'Angers, 2008. http://tel.archives-ouvertes.fr/tel-00450124.
Grar, Adel. "Incidence de la division d'actions et de l'attribution d'actions gratuites sur la valeur : une étude empirique sur le marché français entre 1977 et 1990". Paris 9, 1993. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1993PA090048.
This thesis deals with the impact of financial decisions on stock's characteristics. The decisions studied here are stock splits and stock dividends which generate an increase in the number of shares outstanding without increasing the firm resources. In France, stock splits appear to be increasingly common. Empirically, we find that the market reacts favorably to distributions of stocks. However, stock splits result in higher volatility and lower liquidity. We investigate why firms split their stocks or distribute stock dividends. The findings suggest that stock splits are mainly aimed at restoring stock prices to a "normal range". Stock dividends are altogether different from stock splits, in that they seem to be related to a firm's cash dividend policy
Bensimhon, Larry. "Excès de confiance et mimétisme informationnel sur les marchés d'actions". Paris 1, 2006. http://www.theses.fr/2006PA010081.
Yang, Gen. "Modèles prudents en apprentissage statistique supervisé". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2263/document.
In some areas of supervised machine learning (e.g. medical diagnostics, computer vision), predictive models are not only evaluated on their accuracy but also on their ability to obtain more reliable representation of the data and the induced knowledge, in order to allow for cautious decision making. This is the problem we studied in this thesis. Specifically, we examined two existing approaches of the literature to make models and predictions more cautious and more reliable: the framework of imprecise probabilities and the one of cost-sensitive learning. These two areas are both used to make models and inferences more reliable and cautious. Yet few existing studies have attempted to bridge these two frameworks due to both theoretical and practical problems. Our contributions are to clarify and to resolve these problems. Theoretically, few existing studies have addressed how to quantify the different classification errors when set-valued predictions are produced and when the costs of mistakes are not equal (in terms of consequences). Our first contribution has been to establish general properties and guidelines for quantifying the misclassification costs for set-valued predictions. These properties have led us to derive a general formula, that we call the generalized discounted cost (GDC), which allow the comparison of classifiers whatever the form of their predictions (singleton or set-valued) in the light of a risk aversion parameter. Practically, most classifiers basing on imprecise probabilities fail to integrate generic misclassification costs efficiently because the computational complexity increases by an order (or more) of magnitude when non unitary costs are used. This problem has led to our second contribution, the implementation of a classifier that can manage the probability intervals produced by imprecise probabilities and the generic error costs with the same order of complexity as in the case where standard probabilities and unitary costs are used. 
This is achieved using a binary decomposition technique, nested dichotomies. The properties and prerequisites of this technique have been studied in detail. In particular, we show that nested dichotomies are applicable to all imprecise probabilistic models and that they reduce the imprecision level of imprecise models without loss of predictive power. Various experiments were conducted throughout the thesis to illustrate and support our contributions. We characterized the behavior of the GDC using ordinal data sets. These experiments highlighted the differences between a model based on the standard probability framework producing indeterminate predictions and a model based on imprecise probabilities. The latter is generally more competent because it distinguishes two sources of uncertainty (ambiguity and lack of information), even if the combined use of these two types of models is also of particular interest, as it can assist the decision-maker in improving the data quality or the classifiers. In addition, experiments conducted on a wide variety of data sets showed that the use of nested dichotomies significantly improves the predictive power of an indeterminate model with generic costs
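As an illustration of the decomposition the abstract relies on, here is a minimal sketch (not the thesis's implementation) of how a nested dichotomy turns binary branch probabilities into multi-class probabilities; the three-class tree and the branch probabilities below are hypothetical.

```python
# Sketch: class probabilities from a nested dichotomy (hypothetical 3-class example).
# Each internal node holds a binary classifier's probability of going "left";
# the probability of a leaf class is the product of branch probabilities on its path.

def leaf_probabilities(tree, p_branch):
    """tree: nested tuples of class labels, e.g. (('a', 'b'), 'c').
    p_branch: dict mapping each internal node (a tuple) to P(left branch)."""
    probs = {}

    def walk(node, prob):
        if isinstance(node, tuple):  # internal node: split into left/right
            p_left = p_branch[node]
            walk(node[0], prob * p_left)
            walk(node[1], prob * (1.0 - p_left))
        else:                        # leaf: a single class label
            probs[node] = prob

    walk(tree, 1.0)
    return probs

# Dichotomy {a, b} vs {c}, then {a} vs {b}
tree = (("a", "b"), "c")
p_branch = {tree: 0.8, ("a", "b"): 0.6}
print(leaf_probabilities(tree, p_branch))
```

With imprecise probabilities, each `p_branch` value would become an interval, propagated along the same paths.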
Kreit, Zakwan. "Contribution à l'étude des méthodes quantitatives d'aide à la décision appliquées aux indices du marché d'actions". Bordeaux 4, 2007. https://tel.archives-ouvertes.fr/tel-00413979.
This thesis is divided into two parts: the first concerns the study of different quantitative methods used for decision-making support; the second is the study and analysis of the stock market index in Egypt. The Egyptian stock market is considered inefficient with respect to international stock markets, so traditional forecasting methods are expected to have difficulty predicting the trend of its index. In order to forecast the index of the Cairo & Alexandria Stock Exchanges (CASE), the Box-Jenkins Auto-Regressive Integrated Moving Average (ARIMA) method and Artificial Neural Networks (ANN) were applied. For this purpose, we used CASE index samples collected from 1992 to 2005 (3311 daily time-series observations). The traditional ARIMA forecasting method was found to be unable to predict the CASE index, whereas the ANN method was able to follow the real trend of the index. This was confirmed by the Mean Absolute Percentage Error (MAPE) and Mean Square Error (MSE). Hence, neural networks are efficient for weekly prediction of financial stock markets, and individual investors can make the most of this forecasting method for their decisions, especially in the stock market
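The comparison between ARIMA and ANN forecasts reported above rests on two standard error measures; a minimal sketch, with hypothetical index values rather than the CASE data:

```python
# Sketch: the two error measures used to compare forecasts (hypothetical values).

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean Square Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

index = [100.0, 102.0, 101.0, 105.0]    # hypothetical index levels
pred  = [ 99.0, 103.0, 102.0, 104.0]    # hypothetical one-step forecasts
print(mape(index, pred), mse(index, pred))
```

The method with the lower MAPE and MSE on held-out data is judged the better forecaster.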
Le, Cun Yann. "Modèles connexionnistes de l'apprentissage". Paris 6, 1987. http://www.theses.fr/1987PA066180.
Picardat, Jean-François. "Controle d'execution, comprehension et apprentissage de plans d'actions : developpement de la methode de la table triangulaire". Toulouse 3, 1987. http://www.theses.fr/1987TOU30122.
Picardat, Jean-François. "Contrôle d'exécution, compréhension et apprentissage de plans d'actions développement de la méthode de la "Table triangulaire /". Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37608908j.
Escobar-Zuniga, María-José. "Modèles bio-inspirés pour l'estimation et l'analyse de mouvement : reconnaissance d'actions et intégration du mouvement". Nice, 2009. http://www.theses.fr/2009NICE4050.
This thesis addresses the study of motion perception in mammals and how bio-inspired systems can be applied to real applications. The first part of the thesis describes how visual information is processed in the mammalian brain and how motion estimation is usually modeled. Based on this analysis of the state of the art, we propose a feedforward V1-MT core architecture, which serves as a basis for studying two different kinds of applications. The first is human action recognition, still a challenging problem in the computer vision community. We show how our bio-inspired method can be successfully applied to this real application. Interestingly, several computational properties inspired by motion processing in mammals allow us to reach high-quality results, which are compared to the latest reference results. The second application of the proposed bio-inspired architecture is the problem of motion integration for the solution of the aperture problem. We investigate the role of delayed V1 surround suppression, and how the 2D information extracted through this mechanism can be integrated to propose a solution to the aperture problem. Finally, we highlight a variety of important issues in motion estimation and present many potential avenues for future research efforts
Zeng, Tieyong. "Études de Modèles Variationnels et Apprentissage de Dictionnaires". Phd thesis, Université Paris-Nord - Paris XIII, 2007. http://tel.archives-ouvertes.fr/tel-00178024.
Binsztok, Henri. "Apprentissage de modèles Markoviens pour l'analyse de séquences". Paris 6, 2007. http://www.theses.fr/2007PA066568.
Initially, Machine Learning made it possible to learn models from labeled data. But for numerous tasks, notably user modeling, while the available quantity of data is potentially unlimited, the quantity of labeled data is almost nonexistent. Within the framework of this thesis, we are interested in the unsupervised learning of sequence models. Sequence information constitutes the first level of structured data, where the data are no longer simple feature vectors. We propose approaches that we apply to the automatic learning of Hidden Markov Models (HMMs) and Hierarchical HMMs (HHMMs). Our purpose is to learn simultaneously the structure and the parameters of these Markovian models, so as to minimize the quantity of prior information necessary to learn them
Do, Quoc khanh. "Apprentissage discriminant des modèles continus en traduction automatique". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS071/document.
Over the past few years, neural network (NN) architectures have been successfully applied to many Natural Language Processing (NLP) applications, such as Automatic Speech Recognition (ASR) and Statistical Machine Translation (SMT). For the language modeling task, these models consider linguistic units (i.e. words and phrases) through their projections into a continuous (multi-dimensional) space, and the estimated distribution is a function of these projections. Also called continuous-space models (CSMs), their peculiarity hence lies in this exploitation of a continuous representation, which can be seen as an attempt to address the sparsity issue of conventional discrete models. In the context of SMT, these techniques have been applied to neural network-based language models (NNLMs) included in SMT systems, and to continuous-space translation models (CSTMs). These models have led to significant and consistent gains in SMT performance, but are also very expensive in training and inference, especially for systems involving large vocabularies. To overcome this issue, Structured Output Layer (SOUL) and Noise Contrastive Estimation (NCE) have been proposed; the former modifies the standard structure on vocabulary words, while the latter approximates maximum-likelihood estimation (MLE) by a sampling method. All these approaches share the same estimation criterion, MLE; however, this procedure results in an inconsistency between the objective function defined for parameter estimation and the way models are used in the SMT application. The work presented in this dissertation aims to design new performance-oriented and global training procedures for CSMs to overcome these issues. The main contributions lie in the investigation and evaluation of efficient training methods for (large-vocabulary) CSMs which aim: (a) to reduce the total training cost, and (b) to improve the efficiency of these models when used within the SMT application.
On the one hand, the training and inference cost can be reduced either by using the SOUL structure or the NCE algorithm, or by reducing the number of iterations via faster convergence. This thesis provides an empirical analysis of these solutions on different large-scale SMT tasks. On the other hand, we propose a discriminative training framework which optimizes the performance of the whole system containing the CSM as a component model. The experimental results show that this framework is efficient for both training and adapting CSMs within SMT systems, opening promising research perspectives
Zeng, TieYong. "Etude de modèles variationnels et apprentissage de dictionnaires". Paris 13, 2007. http://www.theses.fr/2007PA132009.
Slama, Rim. "Geometric approaches for 3D human motion analysis : application to action recognition and retrieval". Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10078/document.
In this thesis, we focus on the development of adequate geometric frameworks to model and accurately compare human motion acquired from 3D sensors. In the first framework, we address the problem of pose/motion retrieval in fully 3D reconstructed sequences. The human shape representation is formulated using the Extremal Human Curve (EHC) descriptor extracted from the body surface. It allows efficient shape-to-shape comparison, taking benefit of Riemannian geometry in the open-curve shape space. As each human pose represented by this descriptor is viewed as a point in the shape space, we propose to model the motion sequence by a trajectory in this space. Dynamic Time Warping in the feature vector space is then used to compare different motions. In the second framework, we propose a solution for action and gesture recognition from both skeleton and depth data acquired by low-cost cameras such as the Microsoft Kinect. The action sequence is represented by a dynamical system whose observability matrix is characterized as an element of a Grassmann manifold, so the recognition problem is reformulated as point classification on this manifold. Here, a new learning algorithm based on the notion of tangent spaces is proposed to improve the recognition task. The performance of our approach on several benchmarks shows high recognition accuracy with low latency
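The trajectory comparison described above uses Dynamic Time Warping; a minimal plain-Python sketch of the standard DTW recursion (the sequences and distance below are illustrative, not the thesis's shape-space data):

```python
# Sketch: Dynamic Time Warping between two sequences, as used to compare
# motion trajectories sampled at different rates (illustrative 1-D example).

def dtw(seq_a, seq_b, dist):
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_a
                                 cost[i][j - 1],      # stretch seq_b
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

euclid = lambda x, y: abs(x - y)
a = [0.0, 1.0, 2.0, 1.0]
b = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0]  # a time-warped copy of a
print(dtw(a, b, euclid))  # 0.0: warping absorbs the rate difference
```

In the thesis's setting, `dist` would be the geodesic distance between poses in the shape space rather than a scalar difference.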
BASTIE, CHRISTINE. "Integration de la planification et du suivi d'execution d'actions paralleles : le systeme speedy". Toulouse 3, 1997. http://www.theses.fr/1997TOU30200.
Gaudel, Romaric. "Paramètres d'ordre et sélection de modèles en apprentissage : caractérisation des modèles et sélection d'attributs". Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00549090.
Letard, Vincent. "Apprentissage incrémental de modèles de domaines par interaction dialogique". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS100/document.
Artificial Intelligence is the field of research aiming at mimicking or replacing human cognitive abilities. As such, one of its subfields focuses on the progressive automation of the programming process. In other words, the goal is to transfer cognitive load from the human to the system, whether it be autonomous or guided by the user. In this thesis, we investigate the conditions for making a user-guided system autonomous using another subfield of Artificial Intelligence: Machine Learning. As an implementation framework, we chose the design of an incremental operational assistant, that is, a system able to react to natural language requests from the user with relevant actions. The system must also be able to learn the correct reactions incrementally. In our work, the requests are in written French, and the associated actions are represented by corresponding instructions in a programming language (here R and bash). The learning is performed using a set of examples composed by the users themselves while interacting; they thus progressively define the most relevant actions for each request, making the system more autonomous. We collected several example sets for the evaluation of the learning methods, analyzing and reducing the inherent collection biases. The proposed protocol is based on incremental bootstrapping of the system, starting from an empty or limited knowledge base. As a result of this choice, the obtained knowledge base reflects the user's needs, the downside being that the overall number of examples is limited. To address this problem, after assessing a baseline method, we apply a case-based reasoning approach to the request-to-command transfer problem: formal analogical reasoning. We show that this method yields answers with very high precision, but also relatively low coverage. We explore the analogical extension of the example base in order to increase the coverage of the provided answers.
We also assess the relaxation of analogical constraints for an increased tolerance of analogical reasoning to noise in the examples. The running delay of the simple analogical approach is already around 1 second, and is badly affected by both the automatic extension of the base and the relaxation of the constraints. We explored several segmentation strategies on the input examples in order to reduce this time. The delay however remains the main obstacle to using analogical reasoning for natural language processing with usual volumes of data. Finally, the incremental operational assistant based on analogical reasoning was tested in simulated incremental conditions in order to assess the learning behavior over time. The system reaches a stable correct-answer rate after a dozen examples given on average for each command type. Although the effective performance depends on the total number of commands accounted for, this observation opens interesting application avenues for the considered task of transferring from a rich source domain (natural language) to a less rich target domain (programming language)
Ghali, Ali. "Transactions intérimaires : impact sur l'évaluation de la performance des fonds mutuels d'actions américaines". Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26352.
Ghorbel, Enjie. "Reconnaissance rapide et précise d'actions humaines à partir de caméras RGB-D". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR027/document.
The recent availability of RGB-D cameras has renewed the interest of researchers in the topic of human action recognition. More precisely, several action recognition methods have been proposed based on the novel modalities provided by these cameras, namely depth maps and skeleton sequences. These approaches have mainly been evaluated in terms of recognition accuracy. This thesis studies the issue of fast action recognition from RGB-D cameras, focusing on an action recognition method that realizes a trade-off between accuracy and latency for the purpose of applying it in real-time scenarios. As a first step, we propose a comparative study of recent RGB-D based action recognition methods using the two cited criteria: accuracy of recognition and rapidity of execution. Then, guided by the conclusions of this comparative study, we introduce a novel, fast and accurate human action descriptor called Kinematic Spline Curves (KSC). The latter is based on the cubic spline interpolation of kinematic values. Moreover, fast spatial and temporal normalizations are proposed in order to overcome anthropometric variability, orientation variation and rate variability. The experiments carried out on four different benchmarks show the effectiveness of this approach in terms of execution time and accuracy. As a second step, another descriptor is introduced, called Hierarchical Kinematic Covariance (HKC), proposed in order to solve the issue of fast online action recognition. Since this descriptor does not belong to a Euclidean space, but is an element of the space of Symmetric Positive semi-Definite (SPsD) matrices, we adapt kernel classification methods by introducing a novel distance called Modified Log-Euclidean, inspired from the Log-Euclidean distance. This extension allows us to use classifiers suited to the feature space of SPsD matrices.
The experiments prove the efficiency of our method, not only in terms of rapidity of calculation and accuracy, but also in terms of observational latency. These conclusions show that this approach, combined with an action segmentation method, could be appropriate for online recognition and consequently opens up new prospects for future work
Châtel, Célia. "Modèles de classification en classes empiétantes : cas des modèles arborés". Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0538.
Traditionally, classification models (such as partitions and hierarchies) aim at separating without ambiguity and produce non-overlapping clusters (i.e. two clusters are either disjoint or one is included in the other). However, this non-ambiguity may mask information, as in the case of hybrid plants in biology or of texts belonging to two (or more) different genres in textual analysis, for instance. General models like hypergraphs or lattices make it possible to take overlapping clusters into account. More precisely, "totally balanced" models allow class overlap while retaining some useful constraints for classification. In machine learning, decision trees are a widely used model, as they are simple to use and understand; they are also based on the idea of partitioning sets. We show in this work different links between traditional classification and supervised machine learning, and what each world can bring to the other. We propose two classification methods which link the two universes. We then extend the notion of binarity, widely used for trees, to hypergraphs and lattices. We show the equivalence between binarizable systems and totally balanced systems, which makes totally balanced structures a great candidate for classification models with overlapping classes. We also propose methods for approximating any system (lattice, hypergraph, dissimilarity) by a totally balanced one
Bétourné, Nathalie. "De l'existence d'une mémoire pour les rendements d'actions : le cas des titres du CAC 40". Littoral, 2001. http://www.theses.fr/2001DUNK0066.
The volatility of securities on financial markets is analyzed using fractal theory, introduced by Mandelbrot in the 1950s. This theory makes it possible to determine the existence of long memory in volatility through the R/S statistic (or Hurst exponent) defined by Lo (1991) and developed by Jacobsen (1996). The positive test results obtained for volatility do not carry over to asset returns. We show indeed that introducing short-term effects (autoregressive models) into the statistic reduces the exponent value regardless of the sample size: long memory does not exist, because of the short-term/long-term coupling. The larger the transaction size, the more the statistic value decreases: the short-term effect prevails over the long-term effect. The price of private and public information depends on investors' strategic behavior during the session (investor mimetism), as a function of liquidity, spread, volume and lead-lag criteria. An investor reacting immediately for speculative or liquidity motives induces a short-term dependence in asset returns: such strategies depend on the evolution of past prices. We show that short memory exists using a modified autoregressive GARCH model introduced by Zumbach (1999). A second approach to the short-term effect uses the negative correlation between the duration between transactions and volatility. We extend Zumbach's modified GARCH(1,1) process to a mixed GARCH(1,1)-ACD process to take the duration factor into account. The results show that short memory in asset returns exists and that the standard deviations of the estimation errors are lower with the mixed GARCH(1,1)-ACD process
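The R/S (rescaled-range) statistic behind the Hurst exponent can be sketched as follows; this is the classical single-window version (Lo's modified statistic adds a short-term correction to the denominator), and the return series below is hypothetical:

```python
# Sketch: classical rescaled-range (R/S) statistic on a single window.
# The Hurst exponent is estimated from how R/S scales with the window length n.

def rescaled_range(x):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    # cumulative deviations from the mean
    z, cum = [], 0.0
    for d in dev:
        cum += d
        z.append(cum)
    r = max(z) - min(z)                       # range of cumulative deviations
    s = (sum(d * d for d in dev) / n) ** 0.5  # standard deviation
    return r / s

returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, -0.005, -0.015]  # hypothetical
print(rescaled_range(returns))
```

For long-memory series R/S grows roughly like n**H with H > 0.5; Lo's correction in the denominator is what removes the spurious contribution of short-term autocorrelation discussed above.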
Galand, Gabriel. "Monnaie et échanges décentralisés : des modèles de prospection aux modèles comportementaux". Châtenay-Malabry, Ecole centrale de Paris, 2006. http://www.theses.fr/2006ECAP1069.
Châtel, Célia. "Modèles de classification en classes empiétantes : cas des modèles arborés". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0538/document.
Traditionally, classification models (such as partitions and hierarchies) aim at separating without ambiguity and produce non-overlapping clusters (i.e. two clusters are either disjoint or one is included in the other). However, this non-ambiguity may mask information, as in the case of hybrid plants in biology or of texts belonging to two (or more) different genres in textual analysis, for instance. General models like hypergraphs or lattices make it possible to take overlapping clusters into account. More precisely, "totally balanced" models allow class overlap while retaining some useful constraints for classification. In machine learning, decision trees are a widely used model, as they are simple to use and understand; they are also based on the idea of partitioning sets. We show in this work different links between traditional classification and supervised machine learning, and what each world can bring to the other. We propose two classification methods which link the two universes. We then extend the notion of binarity, widely used for trees, to hypergraphs and lattices. We show the equivalence between binarizable systems and totally balanced systems, which makes totally balanced structures a great candidate for classification models with overlapping classes. We also propose methods for approximating any system (lattice, hypergraph, dissimilarity) by a totally balanced one
Chan-Hon-Tong, Adrien. "Segmentation supervisée d'actions à partir de primitives haut niveau dans des flux vidéos". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066226/document.
This thesis focuses on the supervised segmentation of video streams within the application context of daily action recognition. A segmentation algorithm is obtained from the Implicit Shape Model by optimising the votes cast in this polling method. We prove that this optimisation can be linked to the sliding-window-plus-SVM framework, and more precisely is equivalent to standard training with an added temporal constraint, or to encoding the data through a dense pyramidal decomposition. This algorithm is evaluated on a public segmentation database, where it outperforms other Implicit-Shape-Model-like methods and the standard linear SVM. The algorithm is then integrated into an action segmentation system. Specific features are extracted from skeletons obtained from the video by standard software; these features are then clustered and given to the polling method. This system, combining our features and our algorithm, obtains the best published performance on a human daily action segmentation dataset
Barnachon, Mathieu. "Reconnaissance d'actions en temps réel à partir d'exemples". Phd thesis, Université Claude Bernard - Lyon I, 2013. http://tel.archives-ouvertes.fr/tel-00820113.
Mensink, Thomas. "Apprentissage de Modèles pour la Classification et la Recherche d'Images". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00752022.
Keriven, Nicolas. "Apprentissage de modèles de mélange à large échelle par Sketching". Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S055/document.
Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement for learning methods to be amenable to streaming, parallel and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all the mentioned requirements, then learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data, using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and Random Features expansions. It is seen to correspond to linear measurements of the underlying probability distribution of the data, and the estimation problem is thus analyzed under the lens of Compressive Sensing (CS), in which a (traditionally finite-dimensional) signal is randomly measured and recovered. We extend CS results to our infinite-dimensional framework, give generic conditions for successful estimation and apply this analysis to many problems, with a focus on mixture model estimation. We base our method on the construction of random sketching operators such that a Restricted Isometry Property (RIP) condition holds in the Banach space of finite signed measures with high probability. In a second part, we introduce a flexible heuristic greedy algorithm to estimate mixture models from a sketch.
We apply it to synthetic and real data on three problems: the estimation of centroids from a sketch, for which it is significantly faster than k-means; Gaussian Mixture Model estimation, for which it is more efficient than Expectation-Maximization; and the estimation of mixtures of multivariate stable distributions, for which, to our knowledge, it is the only algorithm capable of performing such a task
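The sketching idea described above, compressing a whole dataset into an average of random features, can be illustrated with a minimal one-dimensional sketch; the random Fourier features, sketch size and datasets below are illustrative assumptions, not the thesis's operators:

```python
import math, random

# Sketch (illustrative): compress a 1-D dataset into a small linear sketch by
# averaging random Fourier features (cos(wx), sin(wx)) over the data points,
# a simplified version of the kernel-mean-embedding sketch described above.

def make_sketch_op(m, scale, seed=0):
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, scale) for _ in range(m)]  # random frequencies
    def sketch(data):
        n = len(data)
        # empirical averages of cos and sin features: 2*m numbers total,
        # regardless of how large the dataset is
        return [sum(math.cos(w * x) for x in data) / n for w in freqs] + \
               [sum(math.sin(w * x) for x in data) / n for w in freqs]
    return sketch

sketch = make_sketch_op(m=20, scale=1.0)
data_a = [0.1 * k for k in range(100)]           # hypothetical dataset
data_b = [0.1 * k + 0.001 for k in range(100)]   # nearly identical dataset
sa, sb = sketch(data_a), sketch(data_b)
gap = max(abs(u - v) for u, v in zip(sa, sb))
print(len(sa), gap)  # close datasets yield close sketches
```

Because the sketch is an average, it can be updated one data point at a time (streaming) or computed on shards and merged (distributed), which is exactly the appeal noted in the abstract.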
Nakoula, Yassar. "Apprentissage des modèles linguistiques flous, par jeu de règles pondérées". Chambéry, 1997. http://www.theses.fr/1997CHAMS018.
Kermorvant, Christopher. "Apprentissage de modèles à états finis stochastiques pour les séquences". Saint-Etienne, 2003. http://www.theses.fr/2003STET4002.
This thesis deals with learning stochastic finite-state automata for sequence modelling. We aimed at developing both their structural and probabilistic aspects, through the extension of the models and the design of new learning algorithms. On the one hand, we developed statistical aspects of stochastic finite-state automaton learning algorithms in order to deal with practical cases. We designed a new learning algorithm based on statistical tests for sample comparison; this framework makes it possible to take the size of the learning set into account in the inference process. On the other hand, we developed syntactic aspects of finite-state automata and their ability to model the underlying structure of sequences. We defined typed automata, an extension of classical finite-state automata which permits the introduction of a priori knowledge into the models. From a theoretical point of view, we studied the search space of typed automata and proposed a modified version of classical automata learning algorithms in the typed-automata framework. Finally, we applied these models and algorithms to a language modelling task. The obtained automata were competitive with state-of-the-art models on a classical corpus
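A classic instance of a size-aware statistical test for sample comparison in stochastic automaton learning is the Hoeffding-bound compatibility test used by ALERGIA-style state-merging algorithms; the sketch below is an assumption about the kind of test meant, not the thesis's own algorithm:

```python
import math

# Sketch: Hoeffding-style compatibility test for state merging (ALERGIA-style,
# an assumed illustration). Two states are merge candidates only if their
# empirical transition frequencies are statistically compatible, with confidence
# radii that shrink as the sample sizes n1, n2 grow.

def compatible(k1, n1, k2, n2, alpha=0.05):
    """Are empirical frequencies k1/n1 and k2/n2 statistically compatible?
    The observed gap must stay below the sum of the Hoeffding radii."""
    radius = lambda n: math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    return abs(k1 / n1 - k2 / n2) < radius(n1) + radius(n2)

print(compatible(50, 100, 55, 100))   # close frequencies, enough data -> True
print(compatible(10, 100, 90, 100))   # very different frequencies -> False
```

The dependence of `radius` on `n` is what lets the inference process account for the size of the learning set, as the abstract emphasizes.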
Zaidenberg, Sofia. "Apprentissage par renforcement de modèles de contexte pour l'informatique ambiante". Grenoble INPG, 2009. http://www.theses.fr/2009INPG0088.
This thesis studies the automatic acquisition, by machine learning, of a context model for a user in a ubiquitous environment. In such an environment, devices can communicate and cooperate in order to create a consistent computerized space. Some devices possess perceptual capabilities, which the environment uses to detect the user's situation, i.e. his context. Other devices are able to execute actions. Our problem consists in determining the optimal associations, for a given user, between situations and actions. Machine learning seems to be a sound approach since it results in a customized environment without requiring an explicit specification from the user. Life-long learning lets the environment adapt itself continuously to changes in the world and in user preferences. Reinforcement learning can be a solution to this problem, as long as it is adapted to the particular constraints of our application setting
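The situation-to-action learning described above can be grounded by a minimal tabular Q-learning sketch; the situations, actions and rewards below are hypothetical, and this is only a generic instance of the reinforcement learning setting, not the thesis's system:

```python
# Sketch: tabular Q-learning over hypothetical (situation, action) pairs,
# where reward encodes user satisfaction with the environment's reaction.

def q_update(q, situation, action, reward, next_situation, actions,
             alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward reward + gamma * max_a' Q(s',a')."""
    best_next = max(q.get((next_situation, a), 0.0) for a in actions)
    old = q.get((situation, action), 0.0)
    q[(situation, action)] = old + alpha * (reward + gamma * best_next - old)

actions = ["turn_on_lights", "do_nothing"]   # hypothetical action set
q = {}
# Repeated experience: at night in the office, turning the lights on pleases
# the user (+1), doing nothing annoys them (-1).
for _ in range(10):
    q_update(q, "office_at_night", "turn_on_lights", 1.0, "office_lit", actions)
    q_update(q, "office_at_night", "do_nothing", -1.0, "office_dark", actions)
print(q[("office_at_night", "turn_on_lights")] > q[("office_at_night", "do_nothing")])
```

Because the table is updated after every interaction, the association adapts continuously when the user's preferences change, which is the life-long learning behavior the abstract calls for.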
Montalbano, Pierre. "Contraintes linéaires et apprentissage sans conflit pour les modèles graphiques". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30340.
Graphical models define a family of formalisms and algorithms used in particular for logical and probabilistic reasoning, in fields as varied as image analysis and natural language processing. They can be learned from data, giving probabilistic information that can then be combined with logical information. The goal of this thesis is to improve the efficiency of reasoning algorithms on these models, which mix probabilities and logic, by generalizing to this hybrid case a fundamental mechanism of the most efficient purely logical reasoning tools (SAT solvers): conflict-based learning. The work is based on the concept of duality in linear programming, and our learning mechanism is conflict-free, producing linear constraints that are efficiently solved using a knapsack formulation