Contents
Selected scholarly literature on the topic "Apprentissage de representation d'etats"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Apprentissage de representation d'etats".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read the abstract online, when it is available in the metadata.
Journal articles on the topic "Apprentissage de representation d'etats"
Kieren, Thomas E. "Review: A Conceptual Collage". Journal for Research in Mathematics Education 19, no. 1 (January 1988): 86–89. http://dx.doi.org/10.5951/jresematheduc.19.1.0086.
BENAMMAR-GUENDOUZ, Naima, and Fatna CHERIF HOSNI. "Repenser l’interculturel dans l’enseignement/apprentissage du FLE". ALTRALANG Journal 1, no. 02 (December 31, 2019): 52–64. http://dx.doi.org/10.52919/altralang.v1i02.23.
Cormanski, Alex. "Oser le corps". Voix Plurielles 12, no. 1 (May 6, 2015): 318–26. http://dx.doi.org/10.26522/vp.v12i1.1194.
Vallélian, Joëlle. "Projet Insula : Création de marionnettes et d'outils de médiation symboliques pour aider l'enfant qui a un Trouble du Spectre de l'Autisme à exercer les habiletés sociales". Cortica 1, no. 1 (March 21, 2022): 161–75. http://dx.doi.org/10.26034/cortica.2022.1943.
Potter, Michael K., and Brad Wuetherick. "Who is Represented in the Teaching Commons?: SoTL Through the Lenses of the Arts and Humanities". Canadian Journal for the Scholarship of Teaching and Learning 6, no. 2 (June 11, 2015). http://dx.doi.org/10.5206/cjsotl-rcacea.2015.2.2.
Theses on the topic "Apprentissage de representation d'etats"
Hautot, Julien. "Représentation à base radiale pour l'apprentissage par renforcement visuel". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0093.
This thesis work falls within the context of Reinforcement Learning (RL) from image data. Unlike supervised learning, which enables tasks such as classification, regression, or segmentation from an annotated database, RL allows learning without a database, through interactions with an environment. In these methods, an agent, such as a robot, performs different actions to explore its environment and gather training data. Training such an agent proceeds by trial and error: the agent is penalized when it fails at its task and rewarded when it succeeds. The goal for the agent is to improve its behavior so as to obtain the most long-term reward. We focus on visual extraction in RL scenarios using first-person-view images. The use of visual data often involves deep convolutional networks that work directly on images. However, these networks have significant computational complexity, lack interpretability, and sometimes suffer from instability. To overcome these difficulties, we investigated the development of a network based on radial basis functions, which enable sparse and localized activations in the input space. Radial basis function networks (RBFNs) peaked in the 1990s but were later supplanted by convolutional networks because of their high computational cost on images. In this thesis, we developed a visual feature extractor inspired by RBFNs that reduces this computational cost on images. We used our network to solve first-person visual tasks and compared its results with various state-of-the-art methods, including end-to-end learning methods, state representation learning methods, and extreme learning machine methods. Different scenarios were tested in the VizDoom simulator and the PyBullet robotics physics simulator.
In addition to comparing the rewards obtained after learning, we conducted various tests on noise robustness, parameter generation for our network, and task transfer to reality. The proposed network achieves the best performance in reinforcement learning on the tested scenarios while being easier to use and interpret. Additionally, our network is robust to various noise types, paving the way for the effective transfer of knowledge acquired in simulation to reality.
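The radial-basis feature extraction described in this abstract can be sketched in a few lines: each feature is a Gaussian bump centered on a prototype, so activations are sparse and localized in the input space. The Gaussian kernel, the number of centers, and the bandwidth `gamma` below are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def rbf_features(x, centers, gamma=1.0):
    """Map an input vector to one Gaussian radial-basis activation per
    prototype center, giving sparse, localized features."""
    d2 = np.sum((centers - np.asarray(x)) ** 2, axis=1)  # squared distances
    return np.exp(-gamma * d2)

# Toy example: 4 prototype centers in a 3-D input space
rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 3))
phi = rbf_features(np.zeros(3), centers, gamma=0.5)
```

An input lying exactly on a center activates that feature maximally (value 1), while distant centers contribute near-zero activations.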
Dos Santos, Ludovic. "Representation learning for relational data". Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066480.
The increasing use of social and sensor networks generates large quantities of data that can be represented as complex graphs. Many tasks can be imagined on such data, from information analysis to prediction and retrieval, in which the relations between graph nodes are informative. In this thesis, we propose different models for three tasks: graph node classification, relational time series forecasting, and collaborative filtering. All the proposed models use the representation learning framework in its deterministic or Gaussian variant. First, we propose two algorithms for the heterogeneous graph labeling task, one using deterministic representations and the other Gaussian representations. Contrary to other state-of-the-art models, our solution is able to learn edge weights while simultaneously learning the representations and the classifiers. Second, we propose an algorithm for relational time series forecasting in which the observations are correlated not only within each series but also across the different series; we use Gaussian representations in this contribution. This was an opportunity to see in what way using Gaussian representations instead of deterministic ones is profitable. Finally, we apply the Gaussian representation learning approach to the collaborative filtering task. This is preliminary work to see whether the properties of Gaussian representations found in the two previous tasks also hold for the ranking one. The goal of this work was then to generalize the approach to more relational data, not only bipartite graphs between users and items.
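A common way to compare the Gaussian embeddings mentioned in this abstract is the KL divergence between the two distributions, which, unlike a distance between deterministic points, is asymmetric and accounts for each embedding's uncertainty. The diagonal-covariance form below is a generic sketch, not the thesis's exact energy function.

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """KL(N(mu1, diag(var1)) || N(mu2, diag(var2))): an asymmetric
    dissimilarity often used between Gaussian node embeddings."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return 0.5 * np.sum(var1 / var2 + (mu1 - mu2) ** 2 / var2
                        - 1.0 + np.log(var2 / var1))
```

The divergence is zero only when the two Gaussians coincide, and it grows both with the gap between means and with mismatched variances.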
Zaiem, Mohamed Salah. "Informed Speech Self-supervised Representation Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT009.
Feature learning has been driving machine learning advances, with recently proposed methods progressively getting rid of handcrafted parts in the transformations from inputs to desired labels. Self-supervised learning has emerged within this context, allowing the processing of unlabeled data towards better performance on low-labeled tasks. The first part of my doctoral work aims to motivate the choices made in the speech self-supervised pipelines that learn the unsupervised representations. In this thesis, I first show how conditional-independence-based scoring can be used to efficiently and optimally select pretraining tasks tailored for the best performance on a target task. The second part of my doctoral work studies the evaluation and usage of pretrained self-supervised representations. I first explore the robustness of current speech self-supervision benchmarks to changes in the downstream modeling choices. I then propose fine-tuning approaches for better efficiency and generalization.
Carvalho, Micael. "Deep representation spaces". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.
In recent years, Deep Learning techniques have swept the state of the art in many applications of Machine Learning, becoming the new standard approach. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to train them fully from scratch. This thesis studies the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in the dimensionality, redundancy, and precision of their features. Our findings reveal a strong degree of robustness, pointing the way to simple and powerful compression schemes. Then, we focus on refining these representations. We adopt a cross-modal multi-task problem and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. In order to balance these losses correctly, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement.
For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
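The positive-loss sampling scheme described in this abstract can be sketched generically: only examples whose loss is still strictly positive (e.g. unsatisfied margin-based losses) are kept for the update. Representing per-example losses as a NumPy array is an assumption for illustration.

```python
import numpy as np

def select_contributing(losses):
    """Keep only the examples that still contribute to learning,
    i.e. those with strictly positive loss (for margin-based losses,
    the constraints that are not yet satisfied)."""
    losses = np.asarray(losses, dtype=float)
    mask = losses > 0
    return mask, losses[mask]

# Two of the four examples still have a positive loss
mask, active = select_contributing([0.0, 1.2, 0.0, 0.3])
```

Averaging the loss over `active` rather than the full batch keeps the gradient signal from being diluted by already-satisfied examples.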
Le, Naour Étienne. "Learning neural representation for time series". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS211.
Time series analysis has become increasingly important in various fields, including industry, finance, and climate science. The proliferation of sensors and the heterogeneity of data necessitate effective time series modeling techniques. While complex supervised machine learning models have been developed for specific tasks, representation learning offers a different approach: learning data representations in a new space without explicitly focusing on solving a supervised task. The learned representation is then reused to improve the performance of supervised tasks applied on top of it. Recently, deep learning has transformed time series modeling, with advanced models like convolutional and attention-based neural networks achieving state-of-the-art performance in classification, imputation, and forecasting. The fusion of representation learning and deep learning has given rise to the field of neural representation learning. Neural representations have a greater ability to extract intricate features and patterns than non-neural representations, making them more powerful and effective in handling complex time series data. Recent advances in the field have significantly improved the quality of time series representations, enhancing their usefulness for various downstream tasks. This thesis focuses on advancing the field of neural representation learning for time series, targeting both industrial and academic needs. It addresses open problems in the domain, such as creating interpretable neural representations, developing continuous time series representations that handle irregular and unaligned series, and creating models that adapt to distribution shifts.
This manuscript offers multiple contributions to tackle the previously mentioned challenges in neural representation learning for time series:
- First, we propose an interpretable discrete neural representation model for time series based on a vector quantization encoder-decoder architecture, which facilitates interpretable classification.
- Second, we design a continuous implicit neural representation model, called TimeFlow, for time series imputation and forecasting that can handle unaligned and irregular samples. This model leverages time series data representation, enabling it to adapt to new samples and unseen contexts by adjusting the representations.
- Lastly, we demonstrate that TimeFlow learns relevant features, making the representation space effective for downstream tasks such as data generation.
These contributions aim to advance the field of neural representation learning for time series and provide practical solutions to real-world industrial challenges.
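The discretization step at the heart of a vector-quantization encoder-decoder, as mentioned in the first contribution, is a nearest-codebook assignment: each continuous latent is replaced by its closest codeword, and the integer indices form the discrete, interpretable code. The codebook size and toy latents below are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each encoder output by its nearest codebook vector.
    Returns the integer code indices and the quantized latents."""
    # Pairwise squared distances between latents and codebook entries
    d = np.sum((z[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    idx = np.argmin(d, axis=1)
    return idx, codebook[idx]

# 3 codewords in a 2-D latent space, 2 latents to quantize
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
idx, zq = vector_quantize(np.array([[0.9, 1.1], [0.1, -0.2]]), codebook)
```

In a full VQ model the decoder reconstructs the series from `zq`, and the codebook is trained jointly with the encoder and decoder.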
Trottier, Ludovic. "Sparse, hierarchical and shared-factors priors for representation learning". Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35777.
Feature representation is a central concern of today's machine learning systems. A proper representation can facilitate a complex learning task; this is the case, for instance, when the representation has low dimensionality and consists of high-level characteristics. But how can we determine whether a representation is adequate for a learning task? Recent work suggests that it is better to see the choice of representation as a learning problem in itself, called representation learning. This thesis presents a series of contributions aimed at improving the quality of the learned representations. The first contribution elaborates a comparative study of Sparse Dictionary Learning (SDL) approaches on the problem of grasp detection (for robotic grasping) and provides an empirical analysis of their advantages and disadvantages. The second contribution proposes a Convolutional Neural Network (CNN) architecture for grasp detection and compares it to SDL. Then, the third contribution elaborates a new parametric activation function and validates it experimentally. Finally, the fourth contribution details a new soft parameter sharing mechanism for multi-task learning.
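A parametric activation function of the kind studied in the third contribution can be sketched as an ELU with learnable shape parameters. The exact form below (a PELU-style variant, with positive parameters `a` and `b` controlling the slope and saturation) is an assumption for illustration, not necessarily the thesis's definition.

```python
import numpy as np

def parametric_elu(x, a=1.0, b=1.0):
    """ELU-like activation with learnable positive parameters:
    slope a/b on the positive branch, saturation level -a on the
    negative branch."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, (a / b) * x, a * np.expm1(x / b))
```

With `a = b = 1` this reduces to the standard ELU; learning `a` and `b` by gradient descent lets each layer adapt its own nonlinearity.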
Gerald, Thomas. "Representation Learning for Large Scale Classification". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS316.
The past decades have seen the rise of new technologies that simplify information sharing. Today, a huge part of the data is accessible to most users. In this thesis, we propose to study the problem of document annotation, to ease access to information through the retrieved annotations. We are interested in extreme classification, which characterizes automatic annotation tasks when the number of labels is large. Many difficulties arise from the size and complexity of this data; prediction time, storage, and the relevance of the annotations are the most representative. Recent research dealing with this issue is based on three classification schemes: "one against all" approaches, learning as many classifiers as there are labels; "hierarchical" methods, organizing the classifiers in a simple structure; and representation approaches, embedding documents into small spaces. In this thesis, we study the representation classification scheme. Through our contributions, we study different approaches either to speed up prediction or to better structure the representations. First, we study discrete representations such as "ECOC" methods to speed up the annotation process. Second, we consider hyperbolic embeddings to take advantage of the qualities of this space for the representation of structured data.
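The ECOC idea mentioned above can be sketched as follows: each label is assigned a binary codeword, a small set of binary classifiers predicts the bits, and decoding picks the label whose codeword is nearest in Hamming distance, so a few misfired bits can be corrected. The tiny codebook and scores below are illustrative assumptions.

```python
import numpy as np

def ecoc_decode(bit_scores, codebook):
    """Error-Correcting Output Codes decoding: threshold the per-bit
    scores, then return the label whose codeword is nearest in
    Hamming distance."""
    bits = (np.asarray(bit_scores) > 0.5).astype(int)
    dists = np.sum(codebook != bits, axis=1)  # Hamming distances
    return int(np.argmin(dists))

# 3 labels encoded over 3 binary classifiers
codebook = np.array([[0, 0, 1], [1, 1, 0], [1, 0, 1]])
label = ecoc_decode([0.9, 0.2, 0.8], codebook)
```

For extreme classification the appeal is speed: the number of binary classifiers grows with the code length (roughly logarithmic in the number of labels), not with the label count itself.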
Coria, Juan Manuel. "Continual Representation Learning in Written and Spoken Language". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG025.
Although machine learning has recently witnessed major breakthroughs, today's models are mostly trained once on a target task and then deployed, rarely (if ever) revisiting their parameters. This affects performance after deployment, as task specifications and data may evolve with user needs and distribution shifts. To address this, continual learning proposes to train models over time as new data becomes available. However, models trained in this way suffer from significant performance loss on previously seen examples, a phenomenon called catastrophic forgetting. Although many studies have proposed different strategies to prevent forgetting, they often rely on labeled data, which is rarely available in practice. In this thesis, we study continual learning for written and spoken language. Our main goal is to design autonomous and self-learning systems able to leverage scarce on-the-job data to adapt to the new environments they are deployed in. Contrary to recent work on learning general-purpose representations (or embeddings), we propose to leverage representations that are tailored to a downstream task. We believe the latter may be easier to interpret and exploit by unsupervised training algorithms like clustering, which are less prone to forgetting. Throughout our work, we improve our understanding of continual learning in a variety of settings, such as the adaptation of a language model to new languages for sequence labeling tasks, or the adaptation to a live conversation in the context of speaker diarization. We show that task-specific representations allow for effective low-resource continual learning, and that a model's own predictions can be exploited for full self-learning.
Venkataramanan, Shashanka. "Metric learning for instance and category-level visual representation". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS022.
The primary goal in computer vision is to enable machines to extract meaningful information from visual data, such as images and videos, and leverage this information to perform a wide range of tasks. To this end, substantial research has focused on developing deep learning models capable of encoding comprehensive and robust visual representations. A prominent strategy in this context involves pretraining models on large-scale datasets, such as ImageNet, to learn representations that exhibit cross-task applicability and facilitate the successful handling of diverse downstream tasks with minimal effort. To facilitate learning on these large-scale datasets and encode good representations, complex data augmentation strategies have been used. However, these augmentations can be limited in their scope, either being hand-crafted and lacking diversity, or generating images that appear unnatural. Moreover, the focus of these augmentation techniques has primarily been on the ImageNet dataset and its downstream tasks, limiting their applicability to a broader range of computer vision problems. In this thesis, we aim to tackle these limitations by exploring different approaches to enhance the efficiency and effectiveness of representation learning. The common thread across the works presented is the use of interpolation-based techniques, such as mixup, to generate diverse and informative training examples beyond the original dataset. In the first work, we are motivated by the idea of deformation as a natural way of interpolating images, rather than using a convex combination. We show that geometrically aligning the two images in the feature space allows for a more natural interpolation that retains the geometry of one image and the texture of the other, connecting it to style transfer. Drawing from these observations, we explore the combination of mixup and deep metric learning.
We develop a generalized formulation that accommodates mixup in metric learning, leading to improved representations that explore areas of the embedding space beyond the training classes. Building on these insights, we revisit the original motivation of mixup and generate a larger number of interpolated examples beyond the mini-batch size by interpolating in the embedding space. This approach allows us to sample on the entire convex hull of the mini-batch, rather than just along linear segments between pairs of examples. Finally, we investigate the potential of using natural augmentations of objects from videos. We introduce a "Walking Tours" dataset of first-person egocentric videos, which capture a diverse range of objects and actions in natural scene transitions. We then propose a novel self-supervised pretraining method called DoRA, which detects and tracks objects in video frames, deriving multiple views from the tracks and using them in a self-supervised manner.
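Sampling on the entire convex hull of a mini-batch, rather than on pairwise segments as in standard mixup, can be sketched by drawing Dirichlet weights over all batch members. The concentration parameter `alpha` and the toy embeddings below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def convex_hull_mix(embeddings, n_samples, alpha=1.0, seed=0):
    """Generate synthetic embeddings inside the convex hull of a
    mini-batch: Dirichlet weights over all members generalize the
    pairwise Beta weights of standard mixup."""
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(alpha * np.ones(len(embeddings)), size=n_samples)
    return w @ embeddings  # convex combinations of the batch

# Mini-batch of 3 embeddings in 2-D; draw 5 interpolated examples
batch = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mixed = convex_hull_mix(batch, n_samples=5)
```

Because the weights are nonnegative and sum to one, every synthetic example lies inside the hull of the batch, so far more interpolants than batch pairs can be generated per step.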
Books on the topic "Apprentissage de representation d'etats"
Bareiss, Ray. Exemplar-based knowledge acquisition: A unified approach to concept representation, classification, and learning. Boston: Academic Press, 1989.
McBride, Kecia Driver, ed. Visual media and the humanities: A pedagogy of representation. Knoxville: University of Tennessee Press, 2004.
Cocking, Rodney R., and K. Ann Renninger, eds. The development and meaning of psychological distance. Hillsdale, N.J.: L. Erlbaum Associates, 1993.
Cocking, Rodney R., and K. Ann Renninger, eds. The development and meaning of psychological distance. Hillsdale, N.J.: L. Erlbaum, 1993.
Bratko, Ivan. Prolog: Algoritmy iskusstvennogo intellekta na yazyke. 3rd ed. Moskva: Vil'iams, 2004.
Bratko, Ivan. Prolog programming for artificial intelligence. 2nd ed. Wokingham, England: Addison-Wesley Pub. Co., 1991.
Bratko, Ivan. Prolog programming for artificial intelligence. 2nd ed. Wokingham, England: Addison-Wesley Pub. Co., 1990.
Bareiss, Ray, and B. Chandrasekaran. Exemplar-Based Knowledge Acquisition: A Unified Approach to Concept Representation, Classification, and Learning. Elsevier Science & Technology Books, 2014.
Representation and recognition in vision. Cambridge, Mass.: MIT Press, 1999.
Edelman, Shimon. Representation and Recognition in Vision. MIT Press, 1999.