Dissertations on the topic „Apprentissage en continu“
Consult the top 50 dissertations for research on the topic „Apprentissage en continu“.
Browse the dissertations across different specialist fields to compile your bibliography correctly.
Munos, Rémi. „Apprentissage par renforcement, étude du cas continu“. Paris, EHESS, 1997. http://www.theses.fr/1997EHESA021.
Sors, Arnaud. „Apprentissage profond pour l'analyse de l'EEG continu“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS006/document.
The objective of this research is to explore and develop machine learning methods for the analysis of continuous electroencephalogram (EEG). Continuous EEG is an interesting modality for functional evaluation of cerebral state in the intensive care unit and beyond. Today its clinical use remains more limited than it could be because interpretation is still mostly performed visually by trained experts. In this work we develop automated analysis tools based on deep neural models. The subparts of this work hinge around post-anoxic coma prognostication, chosen as the pilot application. A small number of long-duration recordings were performed, and available existing data were gathered from CHU Grenoble. Different components of a semi-supervised architecture that addresses the application are imagined, developed, and validated on surrogate tasks. First, we validate the effectiveness of deep neural networks for EEG analysis from raw samples. For this we choose the supervised task of sleep stage classification from single-channel EEG. We use a convolutional neural network adapted for EEG, and we train and evaluate the system on the SHHS (Sleep Heart Health Study) dataset. This constitutes the first neural sleep scoring system at this scale (5,000 patients). Classification performance reaches or surpasses the state of the art. In real use for most clinical applications, the main challenge is the lack of (and difficulty of establishing) suitable annotations on patterns or short EEG segments. Available annotations are high-level (for example, clinical outcome) and therefore few. We investigate how to learn compact EEG representations in an unsupervised/semi-supervised manner. The field of unsupervised learning with deep neural networks is still young. To compare with existing work, we start with image data and investigate the use of generative adversarial networks (GANs) for unsupervised adversarial representation learning. The quality and stability of different variants are evaluated. We then apply gradient-penalized Wasserstein GANs to EEG sequence generation. The system is trained on single-channel sequences from post-anoxic coma patients and is able to generate realistic synthetic sequences. We also explore and discuss original ideas for learning representations by matching distributions in the output space of representative networks. Finally, multichannel EEG signals have specificities that should be accounted for in characterization architectures. Each EEG sample is an instantaneous mixture of the activities of a number of sources. Based on this observation, we propose an analysis system made of a spatial analysis subsystem followed by a temporal analysis subsystem. The spatial analysis subsystem is an extension of source separation methods, built as a neural architecture with adaptive recombination weights, i.e. weights that are not learned but depend on features of the input. We show that this architecture learns to perform Independent Component Analysis when trained on a measure of non-gaussianity. For temporal analysis, standard (shared) convolutional neural networks applied on separately recomposed channels can be used.
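The non-gaussianity criterion mentioned in this abstract can be made concrete with a toy contrast function. A minimal sketch using excess kurtosis, a classic ICA contrast (the function name is illustrative, not from the thesis):

```python
import statistics

def excess_kurtosis(xs):
    """Excess kurtosis: a simple non-gaussianity measure. It is zero for a
    Gaussian, negative for flatter (sub-Gaussian) data and positive for
    heavier-tailed data -- the kind of objective a spatial-analysis
    subsystem could be trained on."""
    mu = statistics.fmean(xs)
    var = statistics.pvariance(xs, mu)
    m4 = sum((x - mu) ** 4 for x in xs) / len(xs)
    return m4 / var ** 2 - 3.0

# e.g. excess_kurtosis([-1, 1, -1, 1]) → -2.0 (maximally sub-Gaussian)
```

Maximizing such a contrast over recombination weights is one classical route to Independent Component Analysis.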
Zimmer, Matthieu. „Apprentissage par renforcement développemental“. Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0008/document.
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or effects of its actions. This thesis tackles how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, in order to solve problems closer to reality. Indeed, neural networks scale well and have good representational power: they make it possible to approximate functions on continuous spaces and allow a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to achieve acceptable behavior. To do so, we propose the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by proceeding with a gradual increase in the dimensionality of its sensorimotor space.
Lefort, Mathieu. „Apprentissage spatial de corrélations multimodales par des mécanismes d'inspiration corticale“. PhD thesis, Université Nancy II, 2012. http://tel.archives-ouvertes.fr/tel-00756687.
Mainsant, Marion. „Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond“. Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.
The human brain continuously receives information from external stimuli. It has the ability to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the knowledge contained in the neural network overwrites the old, and the majority of past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but because they focused on achieving the best performance, these studies have moved away from real-life applications, where algorithms need to adapt to changing environments and perform regardless of how the data arrive. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we explore the data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interaction. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared to existing state-of-the-art methods on standard computer vision databases (MNIST, CIFAR-10, CIFAR-100 and ImageNet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then embed the algorithm on an Nvidia Jetson Nano board, creating a demonstrator able to learn and predict emotions in real time. Finally, we discuss the relevance of our approach for bias mitigation in artificial intelligence, opening up perspectives towards a more ethical AI.
Lesort, Timothée. „Continual Learning : Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes“. Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAE003.
Humans learn all their lives. They accumulate knowledge from a sequence of learning experiences and remember the essential concepts without forgetting what they have learned previously. Artificial neural networks struggle to learn similarly. They often rely on rigorously preprocessed data to learn solutions to specific problems such as classification or regression. In particular, they forget their past learning experiences if trained on new ones. Therefore, artificial neural networks are often inept at dealing with real-life settings, such as an autonomous robot that has to learn on-line to adapt to new situations and overcome new problems without forgetting its past learning experiences. Continual learning (CL) is the branch of machine learning addressing this type of problem. Continual algorithms are designed to accumulate and improve knowledge in a curriculum of learning experiences without forgetting. In this thesis, we explore continual algorithms with replay processes. Replay processes gather together rehearsal methods and generative replay methods. Generative replay consists of regenerating past learning experiences with a generative model in order to remember them; rehearsal consists of saving a core-set of samples from past learning experiences to rehearse them later. Replay processes make possible a compromise between optimizing the current learning objective and the past ones, enabling learning without forgetting in sequences of tasks. We show that they are very promising methods for continual learning. Notably, they enable the re-evaluation of past data with new knowledge and the confrontation of data from different learning experiences. We demonstrate their ability to learn continually through unsupervised, supervised and reinforcement learning tasks.
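The rehearsal side of the replay processes described above can be sketched as a small core-set buffer whose contents are mixed into each new training batch. A minimal illustration using reservoir sampling (all names are hypothetical, not the thesis code):

```python
import random

class RehearsalBuffer:
    """Minimal core-set rehearsal sketch: keep a bounded reservoir of past
    samples and mix a few of them into every new batch."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.samples = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        # Reservoir sampling keeps a uniform subset of everything seen so far.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

    def augment_batch(self, batch, k):
        # Rehearse: append up to k stored past samples to the current batch.
        k = min(k, len(self.samples))
        return batch + self.rng.sample(self.samples, k)
```

Training on such augmented batches is what lets the optimizer balance the current objective against past ones; generative replay replaces the stored samples with samples drawn from a generative model.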
Lefort, Mathieu. „Apprentissage spatial de corrélations multimodales par des mécanismes d'inspiration corticale“. Electronic Thesis or Diss., Université de Lorraine, 2012. http://www.theses.fr/2012LORR0106.
This thesis focuses on unifying the multiple modal data flows that may be provided by the sensors of an agent. This unification, inspired by psychological experiments such as the ventriloquist effect, is based on detecting correlations, defined as temporally recurrent spatial patterns that appear in the input flows. Learning the space of input-flow correlations consists of sampling this space and generalizing the learned samples. This thesis proposes functional paradigms for multimodal data processing, leading to the connectionist, generic, modular and cortically inspired architecture SOMMA (Self-Organizing Maps for Multimodal Association). In this model, each modal stimulus is processed in a cortical map, and the interconnection of these maps provides unified multimodal data processing. Sampling and generalization of correlations are based on the constrained self-organization of each map. The model is characterized by a gradual emergence of these functional properties: monomodal properties lead to the emergence of multimodal ones, and the learning of correlations in each map precedes the self-organization of these maps. Furthermore, the use of a connectionist architecture and of on-line, unsupervised learning gives the data processing in SOMMA plasticity and robustness properties that classical artificial intelligence models usually lack.
Oulhadj, Hamouche. „Des primitives aux lettres : une méthode structurelle de reconnaissance en ligne de mots d'écriture cursive manuscrite avec un apprentissage continu“. Paris 12, 1990. http://www.theses.fr/1990PA120045.
Douillard, Arthur. „Continual Learning for Computer Vision“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS165.
I first review the existing regularization-based methods for continual learning. While regularizing a model's probabilities is very efficient at reducing forgetting on large-scale datasets, few works consider constraints on intermediate features. I cover in this chapter two contributions that aim to regularize the latent space of a ConvNet directly. The first one, PODNet, reduces the drift of spatial statistics between the old and new model, which drastically reduces forgetting of old classes while enabling efficient learning of new classes. In a second part I show a complementary method in which we avoid pre-emptive forgetting by allocating locations in the latent space for as-yet-unseen future classes. Then, I describe a recent application of class-incremental learning (CIL) to semantic segmentation. I show that the very nature of continual semantic segmentation (CSS) offers new specific challenges, namely forgetting on large images and a background shift. We tackle the first problem by extending the distillation loss introduced in the previous chapter to multiple scales; the second is solved by an efficient pseudo-labeling strategy. We also consider common rehearsal learning, applied this time to CSS: I show that it cannot be used naively because of memory complexity, and design a lightweight rehearsal that is even more efficient. Finally, I consider a completely different approach to continual learning: dynamic networks, where the parameters are extended during training to adapt to new tasks. Previous works in this domain are hard to train and often suffer from parameter-count explosion. For the first time in continual computer vision, we propose to use the Transformer architecture: the model dimensions are mostly fixed and shared across tasks, except for an expansion of learned task tokens. With an encoder/decoder strategy in which the decoder forward pass is specialized by a task token, we show state-of-the-art robustness to forgetting while our memory and computational complexities barely grow.
Coria, Juan Manuel. „Continual Representation Learning in Written and Spoken Language“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG025.
Although machine learning has recently witnessed major breakthroughs, today's models are mostly trained once on a target task and then deployed, rarely (if ever) revisiting their parameters. This hurts performance after deployment, as task specifications and data may evolve with user needs and distribution shifts. To solve this, continual learning proposes to train models over time as new data become available. However, models trained in this way suffer significant performance loss on previously seen examples, a phenomenon called catastrophic forgetting. Although many studies have proposed different strategies to prevent forgetting, they often rely on labeled data, which is rarely available in practice. In this thesis, we study continual learning for written and spoken language. Our main goal is to design autonomous, self-learning systems able to leverage scarce on-the-job data to adapt to the new environments they are deployed in. Contrary to recent work on learning general-purpose representations (or embeddings), we propose to leverage representations that are tailored to a downstream task. We believe the latter may be easier to interpret and exploit by unsupervised training algorithms like clustering, which are less prone to forgetting. Throughout our work, we improve our understanding of continual learning in a variety of settings, such as the adaptation of a language model to new languages for sequence labeling tasks, or the adaptation to a live conversation in the context of speaker diarization. We show that task-specific representations allow for effective low-resource continual learning, and that a model's own predictions can be exploited for full self-learning.
Gaya, Jean-Baptiste. „Subspaces of Policies for Deep Reinforcement Learning“. Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS075.
This work explores subspaces of policies for deep reinforcement learning, introducing an innovative approach to the adaptability and generalization challenges in deep reinforcement learning (RL). Situated within the broader context of the AI revolution, this research emphasizes the shift toward scalable and generalizable models in RL, inspired by advances in deep learning architectures and methodologies. It identifies the limitations of current RL applications, particularly in achieving generalization across varied tasks and domains, and proposes a paradigm shift towards adaptive methods. The research initially tackles zero-shot generalization, assessing deep RL's maturity in generalizing across unseen tasks without additional training. Through investigations into morphological generalization and multi-objective reinforcement learning (MORL), critical limitations of current methods are identified and novel approaches to improve generalization capabilities are introduced. Notably, work on weight averaging in MORL presents a straightforward method for optimizing multiple objectives, showing promise for future exploration. The core contribution lies in developing a "subspace of policies" framework. This novel approach advocates maintaining a dynamic landscape of solutions within a smaller parametric space, taking advantage of neural network weight averaging: functional diversity is achieved with minimal computational overhead through weight interpolation between neural network parameters. This methodology is explored through various experiments and settings, including few-shot adaptation and continual reinforcement learning, demonstrating its efficacy and its potential for scalability and adaptability in complex RL tasks. The conclusion reflects on the research journey, emphasizing the implications of the subspace-of-policies framework for future AI research. Several future directions are outlined, including enhancing the scalability of subspace methods, exploring their potential in decentralized settings, and addressing challenges in efficiency and interpretability. This foundational contribution to the field of RL paves the way for innovative solutions to long-standing challenges in adaptability and generalization, marking a significant step forward in the development of autonomous agents capable of navigating a wide array of tasks seamlessly.
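The weight-interpolation idea behind a subspace of policies can be sketched in a few lines: two sets of trained parameters define a line segment, and every point on it is a candidate policy. A toy illustration on flat parameter lists (names are illustrative, not the thesis code):

```python
def interpolate_weights(theta_a, theta_b, alpha):
    """Convex combination of two flat parameter lists. Sweeping alpha over
    [0, 1] traces the simplest possible 'subspace' of policies: the line
    segment between two trained solutions."""
    assert len(theta_a) == len(theta_b)
    return [(1 - alpha) * a + alpha * b for a, b in zip(theta_a, theta_b)]
```

In practice one would evaluate a few values of alpha on the new task (few-shot adaptation) and keep the best-performing point, gaining functional diversity at essentially zero extra parameter cost.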
Besedin, Andrey. „Continual forgetting-free deep learning from high-dimensional data streams“. Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1263.
In this thesis, we propose a new deep-learning-based approach to online classification on streams of high-dimensional data. In recent years, neural networks (NNs) have become the primary building block of state-of-the-art methods in various machine learning problems. Most of these methods, however, are designed to solve the static learning problem, in which all data are available at once at training time. Performing online deep learning is exceptionally challenging. The main difficulty is that NN-based classifiers usually rely on the assumption that the sequence of data batches used during training is stationary, or in other words, that the distribution of data classes is the same for all batches (the i.i.d. assumption). When this assumption does not hold, neural networks tend to forget the concepts that are temporarily not available in the stream; in the literature, this phenomenon is known as catastrophic forgetting. The approaches we propose in this thesis aim to guarantee the i.i.d. nature of each batch that comes from the stream and to compensate for the lack of historical data. To do this, we train generative and pseudo-generative models capable of producing synthetic samples from classes that are absent or misrepresented in the stream, and complete the stream's batches with these samples. We test our approaches in an incremental learning scenario and in a specific type of continual learning. Our approaches perform classification on dynamic data streams with accuracy close to the results obtained in the static classification setting, where all data are available for the duration of learning. In addition, we demonstrate the ability of our methods to adapt to unseen data classes and to new instances of already known data categories, while avoiding forgetting the previously acquired knowledge.
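The batch-completion idea described above can be sketched as follows: count how each class is represented in the current stream batch, then top up the underrepresented ones with synthetic samples. Here `generate` is a stand-in for the trained (pseudo-)generative model; all names are hypothetical:

```python
from collections import Counter

def rebalance_batch(samples, labels, classes, generate):
    """Complete a stream batch so that every known class is equally
    represented, drawing the missing samples from `generate(cls, n)`,
    a stand-in for a trained generative model."""
    counts = Counter(labels)
    target = max(counts.values())  # bring every class up to the majority count
    out_samples, out_labels = list(samples), list(labels)
    for cls in classes:
        for s in generate(cls, target - counts[cls]):
            out_samples.append(s)
            out_labels.append(cls)
    return out_samples, out_labels
```

Training on the rebalanced batches approximates the i.i.d. setting even when whole classes are temporarily absent from the stream.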
Mazac, Sébastien. „Approche décentralisée de l'apprentissage constructiviste et modélisation multi-agent du problème d'amorçage de l'apprentissage sensorimoteur en environnement continu : application à l'intelligence ambiante“. Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10147/document.
The theory of cognitive development of Jean Piaget (1923) is a constructivist perspective on learning that has substantially influenced cognitive science. Within AI, many works have tried to take inspiration from this paradigm since the beginning of the discipline. Indeed, constructivism seems a possible path to overcome the limitations of classical techniques stemming from cognitivism or connectionism, and to create autonomous agents endowed with strong adaptation abilities within their environment, modelled on biological organisms. Potential applications concern intelligent agents interacting with a complex environment, with objectives that cannot be predefined. Like robotics, ambient intelligence (AmI) is a rich and ambitious paradigm that represents a high-complexity challenge for AI. In particular, as part of constructivist theory, the agent has to build a representation of the world that relies on the learning of sensorimotor patterns starting only from its own experience. This step is difficult to set up for systems in continuous environments that use raw sensor data without a priori modelling. Using multi-agent systems, we investigate the development of new techniques to adapt the constructivist approach to learning to real cases, and we use ambient intelligence as the reference domain for the application of our approach.
Aissani, Nassima. „Pilotage adaptatif et réactif pour un système de production à flux continu : application à un système de production pétrochimique“. PhD thesis, Valenciennes, 2010. http://tel.archives-ouvertes.fr/tel-00553512.
Der volle Inhalt der QuelleHocquet, Guillaume. „Class Incremental Continual Learning in Deep Neural Networks“. Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.
We are interested in the problem of continual learning in artificial neural networks when data are available for only one class at a time. To address the catastrophic forgetting that restricts learning performance under these conditions, we propose an approach based on representing the data of a class by a normal distribution. The transformations associated with these representations are performed by invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features; predicting the class of a sample then corresponds to identifying the network that best fits the sample. The advantage of such an approach is that once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments on various datasets and show that our approach compares favorably with the state of the art. Subsequently, we optimize our approach by reducing its memory footprint through factorization of the network parameters; it is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies for producing efficient feature-extractor models for continual learning and show their relevance compared to the networks traditionally used for continual learning.
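The classify-by-best-fitting-model idea above can be sketched with a 1-D Gaussian standing in for each per-class invertible network: prediction picks the class whose density assigns the sample the highest log-likelihood. A toy illustration (all names hypothetical, not the thesis code):

```python
import math
import statistics

class PerClassDensityClassifier:
    """One independent density model per class (a 1-D Gaussian here, as a
    stand-in for an invertible network). Each model is trained alone and
    never updated afterwards, so adding a class cannot cause forgetting."""

    def __init__(self):
        self.models = {}

    def fit_class(self, label, values):
        # Train this class's model on its own data only.
        mu = statistics.fmean(values)
        sigma = statistics.pstdev(values) or 1.0  # guard against zero spread
        self.models[label] = (mu, sigma)

    def log_likelihood(self, label, x):
        mu, sigma = self.models[label]
        return -math.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

    def predict(self, x):
        # The predicted class is the one whose model fits the sample best.
        return max(self.models, key=lambda c: self.log_likelihood(c, x))
```

Because the models never share parameters, classes can be added in any order without revisiting earlier ones, which is the property the abstract highlights.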
Izard, Amélie. „La lecture du CP au CM2 avec le test de l'alouette : que peut-on dire du niveau des élèves à quarante ans de distance ? : comment se déroule cet apprentissage (continu/discontinu) ?“ PhD thesis, Université Toulouse le Mirail - Toulouse II, 2013. http://tel.archives-ouvertes.fr/tel-00961020.
Der volle Inhalt der QuelleMorette, Nathalie. „Mesure et analyse par apprentissage artificiel des décharges partielles sous haute tension continue pour la reconnaissance de l'état de dégradation des isolants électriques“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS006.
Partial discharges (PD) are one of the key drivers of degradation and ageing of the insulating materials used in high-voltage switchgear. Consequently, partial discharge measurement has become an essential assessment tool for the monitoring of insulation systems. Given the continuing growth of renewable energy, power transmission under direct current (DC) is economically advantageous. However, the relationship between partial discharge characteristics and the degradation of cable insulation under high-voltage direct current (HVDC) remains unclear. In this work, a methodology is proposed for ageing-state recognition of electrical insulation systems based on PD measurements under DC. For this purpose, original measuring devices were developed and PD measurements were performed on different cable types under HVDC. To ensure reliable monitoring and diagnosis of the insulation, noise signals must be eliminated. This thesis tackles the problem of discriminating between partial discharge and noise signals acquired in different environments by applying machine learning methods. The techniques developed are a promising tool to improve the diagnosis of HV equipment under HVDC, where the ability to discard noise signals automatically and with high accuracy is of great importance. Once disturbances were eliminated from the databases, ageing-state recognition was performed on different cable types. The feature extraction, ranking and selection methods, combined with classification techniques, made it possible to obtain recognition rates of up to 100%.
Cottier, Jean-Bernard. „Soigner son travail pour prendre soin des autres : l’expérience d’un espace de parole entre soignants : une occasion de professionnalisation du rôle relationnel ?“ Thesis, Nantes, 2019. http://www.theses.fr/2019NANT2037.
This research is based on a five-year experience (2003-2008) in a gastroenterology department. The author of this thesis felt the need to interview caregivers who had voluntarily taken part in a discussion group that gathered regularly in the department; these moments allowed them to express their hardships, their questioning, their doubts, even their suffering. Many years later, the nurse and PhD student who had actively participated wanted to know why this experience had motivated some of the caregivers. It was important to meet them ten years on to identify, with hindsight, the benefits they may have gained from these informal educational times. From the collected narratives, a hypothesis became obvious: to take care of others, caregivers have no recourse but to talk, surrounded by their peers. It is the position of the learner that emerges through these learning narratives produced in group practice. By making this choice, these learners gain access to the four dimensions that characterize the subject: capable, sensitive, socially situated, and able to reflect. Through the emergence of this enigmatic learning subject within the discussion group, these caregivers critique their own knowledge, question themselves, and so take part in a process of professional and personal growth, both for themselves and for others.
Cámara, Chávez Guillermo. „Analyse du contenu vidéo par apprentissage actif“. Cergy-Pontoise, 2007. http://www.theses.fr/2007CERG0380.
This thesis presents work towards a unified framework for semi-automated video indexing and interactive retrieval. To create an efficient index, a set of representative key frames is selected from the entire video content. We developed an automatic shot-boundary detection algorithm that does away with hand-tuned parameters and thresholds. We adopted an SVM classifier for its ability to use very high-dimensional feature spaces while keeping strong generalization guarantees from few training examples. We thoroughly evaluated combinations of features and kernels and present the results obtained for the TRECVID 2006 shot extraction task. We then propose an interactive video retrieval system, RETINVID, which significantly reduces the number of key frames to be annotated by the user. The key frames are selected based on their ability to increase the knowledge of the data. We perform an experiment against the 2005 TRECVID benchmark for the high-level task.
Yang, Rui. „Online continual learning for 3D detection of road participants in autonomous driving“. Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCA021.
Der volle Inhalt der QuelleAutonomous driving has witnessed remarkable progress over the past decades, and machine perception stands as a critical foundational issue, encompassing the detection and tracking of road participants such as vehicles, pedestrians, and cyclists. While vision-based object detection has achieved significant progress thanks to deep learning techniques, challenges still exist in 3D detection.Firstly, non-visual sensors, such as 3D LiDAR, demonstrate unparalleled advantages in achieving precise detection and adaptability to varying lighting conditions. However, the complexity of handling points cloud data, which can be challenging to interpret, coupled with the high cost of manual annotation, pose primary challenges in the use of 3D LiDAR.Secondly, concerns arise from the lack of interpretability in deep learning models, coupled with their heavy reliance on extensive training data, which often necessitates costly retraining for acceptable generalization performance when adapting to new scenes or environments.This dissertation addresses these challenges from three main perspectives: Generation of Samples, Preservation of Knowledge, and Avoidance of Catastrophic Forgetting. We introduce the concept of Online Continual Learning (OCL) and propose a general framework that encompasses detection, tracking, learning, and control. This framework enables models to update in real-time, preserving knowledge rather than raw data, and effectively mitigating the performance degradation caused by catastrophic forgetting.The main work of this dissertation includes: 1) Generation of Samples: To address sparse point clouds generated by 3D LiDAR and the labor-intensive manual annotation, we leverage the advantages of multi-sensor data and employ an efficient online transfer learning framework. This framework effectively transfers mature image-based detection capabilities to 3D LiDAR-based detectors. 
An innovative aspect is the "learn-by-use" process, achieved through closed-loop detection, facilitating continuous self-supervised learning. A novel information fusion strategy is proposed to combine spatio-temporal correlations, enhancing the effectiveness of knowledge transfer. 2) Preservation of Knowledge: Online Learning (OL) is introduced to address knowledge preservation without retaining training data. An improved Online Random Forest (ORF) model is incorporated, enabling rapid model training with limited computational resources and immediate deployment. The ORF model's parameters are dynamically shared throughout the training process to address the unknown data distribution. The exploration of ORF tree structures ensures independence in training processes, enhancing the model's ability to capture complex patterns and variations. Implementing octrees improves storage efficiency and model access. 3) Avoidance of Catastrophic Forgetting: To tackle the inevitable forgetting problem in online learning frameworks during long-term deployment, we propose the Long Short-Term Online Learning (LSTOL) framework. LSTOL combines multiple short-term learners based on ensemble learning with a long-term controller featuring a probabilistic decision mechanism. This framework ensures effective knowledge maintenance and adapts to changes during long-term deployment, without making assumptions about model types and data continuity. Cross-dataset evaluations on tasks such as 3D detection of road participants demonstrate the effectiveness of LSTOL in avoiding forgetting.
Risser-Maroix, Olivier. „Similarité visuelle et apprentissage de représentations“. Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7327.
The objective of this CIFRE thesis is to develop an image search engine, based on computer vision, to assist customs officers. Indeed, we observe, paradoxically, an increase in security threats (terrorism, trafficking, etc.) coupled with a decrease in the number of customs officers. The images of cargoes acquired by X-ray scanners already allow the inspection of a load without requiring it to be opened and fully searched. By automatically proposing similar images, such a search engine would support the customs officer's decision making when faced with infrequent or suspicious visual signatures of products. Thanks to the development of modern artificial intelligence (AI) techniques, our era is undergoing great changes: AI is transforming all sectors of the economy. Some see this advent of "robotization" as the dehumanization of the workforce, or even its replacement. However, reducing the use of AI to the simple search for productivity gains would be reductive. In reality, AI could increase the work capacity of humans rather than compete with them in order to replace them. It is in this context, the birth of Augmented Intelligence, that this thesis takes place. This manuscript, devoted to the question of visual similarity, is divided into two parts, presenting two practical cases where the collaboration between humans and AI is beneficial. In the first part, the problem of learning representations for the retrieval of similar images is investigated. After implementing a first system similar to those proposed by the state of the art, one of its main limitations is pointed out: the semantic bias. Indeed, the main contemporary methods use image datasets coupled only with semantic labels, and the literature considers that two images are similar if they share the same label. This vision of the notion of similarity, however fundamental in AI, is reductive.
It will therefore be questioned in the light of work in cognitive psychology in order to propose an improvement: taking visual similarity into account. This new definition allows a better synergy between the customs officer and the machine. This work is the subject of scientific publications and a patent. In the second part, after identifying the key components that improve the performance of the previously proposed system, an approach mixing empirical and theoretical research is proposed. This second case, augmented intelligence, is inspired by recent developments in mathematics and physics. First applied to the understanding of an important hyperparameter (temperature), then to a larger task (classification), the proposed method provides an intuition on the importance and role of factors correlated with the studied variable (e.g. hyperparameter, score, etc.). The processing chain thus set up has demonstrated its efficiency by providing a highly explainable solution in line with decades of research in machine learning. These findings will allow the improvement of previously developed solutions.
Cámara Chávez, Guillermo, und Sylvie Philipp-Foliguet. „Analyse du contenu vidéo par apprentissage actif“. [s.l.] : [s.n.], 2009. http://biblioweb.u-cergy.fr/theses/07CERG0380.pdf.
Thesis defended under joint supervision (cotutelle). Title from the title screen. Bibliography pp. 157-174.
Legha, Daniel. „Predictive maintenance and remote diagnosis for electro-mechanical drives of Very High Speed Trains“. Electronic Thesis or Diss., La Rochelle, 2023. http://www.theses.fr/2023LAROS015.
The main objective of this research is to implement predictive maintenance and remote diagnosis solutions for the train's accessibility systems, which are driven by direct-current motors. These systems are the Internal Doors, the Gap Filler, the Passengers' Access Door, and the Lift. The research tackles multiple predictive maintenance and remote diagnosis questions, such as: testing the belt tension, for all types of Internal Doors; checking the good condition of the door-open stopper, for all types of Internal Doors; establishing a signature of proper operation of the Internal Doors, using big-data recorded signals such as the motor current, motor voltage, door position, speed, position sensors, cycle timings, and other contextual information recorded on the subsystem; and establishing a signature of proper operation of the Gap Filler, with the same objectives as for the Internal Doors... On the academic side, the research aims to identify a set of selected failure modes based on the following signals: motor current, motor voltage, motor position, motor speed, position sensors, and contextual data such as the temperature and the cant/tilt... The research studies the signals in transient and non-transient regimes, with and without position sensors in some cases, with feature engineering based on the time domain, the frequency domain, and the time-frequency domain. Furthermore, the research tackles machine learning techniques for data/failure classification. The main objective is to work on signal-based techniques and, if possible, additional investigation will be done using model-based techniques.
Do, Quoc khanh. „Apprentissage discriminant des modèles continus en traduction automatique“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS071/document.
Over the past few years, neural network (NN) architectures have been successfully applied to many Natural Language Processing (NLP) applications, such as Automatic Speech Recognition (ASR) and Statistical Machine Translation (SMT). For the language modeling task, these models consider linguistic units (i.e. words and phrases) through their projections into a continuous (multi-dimensional) space, and the estimated distribution is a function of these projections. Also referred to as continuous-space models (CSMs), their peculiarity hence lies in this exploitation of a continuous representation, which can be seen as an attempt to address the sparsity issue of conventional discrete models. In the context of SMT, these techniques have been applied to neural network-based language models (NNLMs) included in SMT systems, and to continuous-space translation models (CSTMs). These models have led to significant and consistent gains in SMT performance, but are also considered very expensive to train and use for inference, especially for systems involving large vocabularies. To overcome this issue, the Structured Output Layer (SOUL) and Noise Contrastive Estimation (NCE) have been proposed; the former modifies the standard output structure over vocabulary words, while the latter approximates maximum-likelihood estimation (MLE) by a sampling method. All these approaches share the same estimation criterion, the MLE; however, using this procedure results in an inconsistency between the objective function defined for parameter estimation and the way models are used in the SMT application. The work presented in this dissertation aims to design new performance-oriented and global training procedures for CSMs to overcome these issues.
The main contributions lie in the investigation and evaluation of efficient training methods for (large-vocabulary) CSMs which aim (a) to reduce the total training cost, and (b) to improve the efficiency of these models when used within the SMT application. On the one hand, the training and inference cost can be reduced using the SOUL structure or the NCE algorithm, or by reducing the number of iterations via a faster convergence. This thesis provides an empirical analysis of these solutions on different large-scale SMT tasks. On the other hand, we propose a discriminative training framework which optimizes the performance of the whole system containing the CSM as a component model. The experimental results show that this framework is efficient both to train and to adapt CSMs within SMT systems, opening promising research perspectives.
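The NCE idea mentioned in this abstract, replacing maximum-likelihood estimation with a logistic classification of data samples against k noise samples, can be sketched in a few lines. This is a generic illustrative formulation with invented names, not code from the thesis; it assumes unnormalised model log-scores and a known noise distribution.

```python
import numpy as np

def nce_loss(score_data, score_noise, log_pn_data, log_pn_noise, k):
    """Noise contrastive estimation: density estimation recast as logistic
    classification of data samples vs. k noise samples per data point.
    `score_*` are the model's unnormalised log-scores; `log_pn_*` are the
    log-probabilities of the same samples under the noise distribution."""
    # Posterior logit that a sample came from the data rather than the noise:
    # log p_model(x) - log(k * p_noise(x))
    logit_data = score_data - (np.log(k) + log_pn_data)
    logit_noise = score_noise - (np.log(k) + log_pn_noise)
    loss_data = np.log1p(np.exp(-logit_data))   # -log sigmoid(logit)
    loss_noise = np.log1p(np.exp(logit_noise))  # -log (1 - sigmoid(logit))
    return loss_data.mean() + k * loss_noise.mean()
```

Raising the model's score on true data samples lowers this loss, which is why it can replace the expensive normalised MLE objective for large vocabularies.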
Foulon, Lucas. „Détection d'anomalies dans les flux de données par structure d'indexation et approximation : Application à l'analyse en continu des flux de messages du système d'information de la SNCF“. Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI082.
In this thesis, we propose methods to approximate an anomaly score in order to detect abnormal parts in data streams. Two main problems are considered in this context: firstly, the handling of the high dimensionality of the objects describing the time series extracted from the raw streams, and secondly, the low computation cost required to perform the analysis on the fly. To tackle the curse of dimensionality, we have selected the CFOF anomaly score, which was proposed recently and has proven to be robust to the increase of dimensionality. Our main contribution is then the proposition of two methods to quickly approximate the CFOF score of new objects in a stream. The first one is based on safe pruning and approximation during the exploration of object neighbourhoods. The second one is an approximation obtained by the aggregation of scores computed in several subspaces. Both contributions complement each other and can be combined. We show on a reference benchmark that our proposals result in an important reduction of execution times, while providing approximations that preserve the quality of anomaly detection. Then, we present the application of these approaches within the SNCF information system. In this context, we have extended the existing monitoring modules with a new tool to help detect abnormal behaviours in the real stream of messages within the SNCF communication system.
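The second approximation strategy described above, aggregating anomaly scores computed in several random subspaces, can be illustrated generically. The sketch below substitutes a simple k-nearest-neighbour distance for the actual CFOF score, whose definition is more involved, so it conveys only the aggregation idea; all names and parameters are our own assumptions, not the thesis's implementation.

```python
import numpy as np

def knn_outlier_score(data, x, k=5):
    """Distance to the k-th nearest neighbour of x in `data`: a simple
    stand-in outlier score (the real CFOF score is more elaborate)."""
    d = np.linalg.norm(data - x, axis=1)
    return np.sort(d)[k]  # index k skips x itself when x is a row of `data`

def subspace_aggregated_score(data, x, n_subspaces=10, dim=3, k=5, seed=0):
    """Approximate a high-dimensional outlier score by averaging the same
    score computed over several random low-dimensional feature subsets."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    scores = []
    for _ in range(n_subspaces):
        idx = rng.choice(n_features, size=dim, replace=False)
        scores.append(knn_outlier_score(data[:, idx], x[idx], k=k))
    return float(np.mean(scores))
```

Each subspace score is cheap to compute, which is what makes this kind of aggregation attractive for on-the-fly analysis of high-dimensional stream objects.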
Gosselin, Philippe-Henri. „Apprentissage interactif pour la recherche par le contenu dans les bases multimédias“. Habilitation à diriger des recherches, Université de Cergy Pontoise, 2011. http://tel.archives-ouvertes.fr/tel-00660316.
Philippeau, Jérémy. „Apprentissage de similarités pour l'aide à l'organisation de contenus audiovisuels“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/564/.
With the new usages emerging in the field of access to audiovisual archives, we have created a semi-automatic system that helps a user to organize audiovisual contents while performing tasks of classification, characterization, identification and ranking. To do so, we propose to use a new vocabulary, different from the one already available in INA documentary notices, to answer needs which cannot be easily defined with words. We have conceived a graphical interface based on a graph formalism designed to express an organisational task. Numerical similarity is a suitable tool with respect to the handled elements, which are informational objects shown on the computer screen and automatically extracted audio and video low-level features. We have chosen to estimate the similarity between those elements with a predictive process through a statistical model. Among the numerous existing models, statistical prediction based on univariate regression and on support vectors has been chosen.
Vielzeuf, Valentin. „Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels“. Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.
Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning. For this, it seeks to answer a particular question: how to merge the different modalities within a deep neural network? We first propose to study a concrete application problem: the automatic recognition of emotion in audio-visual contents. This leads us to different considerations concerning the modeling of emotions and more particularly of facial expressions. We thus propose an analysis of the representations of facial expression learned by a deep neural network. In addition, we observe that each multimodal problem appears to require the use of a different fusion strategy. This is why we propose and validate two methods to automatically obtain an efficient fusion neural architecture for a given multimodal problem: the first is based on a central fusion network and aims at preserving an easy interpretation of the adopted fusion strategy, while the second adapts a neural architecture search method to the case of multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we are interested in a multimodal view of knowledge transfer. Indeed, we detail a non-traditional method to transfer knowledge from several sources, i.e. from several pre-trained models. For that, a more general neural representation is obtained from a single model, which brings together the knowledge contained in the pre-trained models and leads to state-of-the-art performances on a variety of facial analysis tasks.
Carrière, Véronique. „Apprentissage médié par les TICE : le cas des étudiants déficients visuels“. Phd thesis, Université Paul Valéry - Montpellier III, 2012. http://tel.archives-ouvertes.fr/tel-00718602.
Dutt, Anuvabh. „Continual learning for image classification“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM063.
This thesis deals with deep learning applied to image classification tasks. The primary motivation for the work is to make current deep learning techniques more efficient and to deal with changes in the data distribution. We work in the broad framework of continual learning, with the aim of having, in the future, machine learning models that can continuously improve. We first look at changes in the label space of a dataset, with the data samples themselves remaining the same. We consider a semantic label hierarchy to which the labels belong, and investigate how we can utilise this hierarchy to obtain improvements in models which were trained on different levels of this hierarchy. The second and third contributions involve continual learning using a generative model. We analyse the usability of samples from a generative model for training good discriminative classifiers, and propose techniques to improve the selection and generation of samples from a generative model. Following this, we observe that continual learning algorithms do undergo some loss in performance when trained on several tasks sequentially. We analyse the training dynamics in this scenario and compare with training on several tasks simultaneously, making observations that point to potential difficulties in the learning of models in a continual learning scenario. Finally, we propose a new design template for convolutional networks. This architecture leads to the training of smaller models without compromising performance. In addition, the design lends itself to easy parallelisation, leading to efficient distributed training. In conclusion, we look at two different types of continual learning scenarios and propose methods that lead to improvements. Our analysis also points to greater issues, to overcome which we might need changes in our current neural network training procedure.
Le, Goff Matthieu. „Techniques d'analyse de contenu appliquées à l'imagerie spatiale“. Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19243/1/LE_GOFF_Matthieu.pdf.
Dupont, Pierre. „Utilisation et apprentissage de modèles de langage pour la reconnaissance de la parole continue“. Paris, ENST, 1996. http://www.theses.fr/1996ENST0011.
Dupont, Pierre. „Utilisation et apprentissage de modèles de langage pour la reconnaissance de la parole continue /“. Paris : École nationale supérieure des télécommunications, 1996. http://catalogue.bnf.fr/ark:/12148/cb35827695q.
Der volle Inhalt der QuelleBélanger, France. „Développement et expérimentation d'un cours en formation continue auprès de conseillers agricoles selon l'approche par problèmes“. Sherbrooke : Université de Sherbrooke, 1997.
Sun, Rémy. „Content combination strategies for Image Classification“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS272.
In this thesis, we tackle the question of deep image classification, a fundamental issue for computer vision and visual understanding in general. We look into the common practice of engineering new examples to augment the dataset, and take this as an opportunity to teach neural algorithms to reconcile information mixed from different samples with Mixing Sample Data Augmentation, so as to better understand the problem. To this end, we study both how to edit the content in a mixed image and what the model should predict for mixed images. We first propose a new type of data augmentation that helps models generalize by embedding the semantic content of samples into the non-semantic context of other samples to generate in-class mixed samples. We design new neural architectures capable of generating such mixed samples, and then show the resulting mixed inputs help train stronger classifiers in a semi-supervised setting where few labeled samples are available. In a second part, we show input mixing can be used as an input compression method to train multiple subnetworks in a base network from compressed inputs. Indeed, by formalizing the seminal multi-input multi-output (MIMO) framework as a mixing data augmentation and changing the underlying mixing mechanisms, we obtain strong improvements over standard models and MIMO models. Finally, we adapt this MIMO technique to the emerging Vision Transformer (ViT) models. Our work shows that ViTs present unique challenges for MIMO training, but also that they are uniquely suited for it.
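The sample-mixing practice this abstract builds on can be illustrated by the classic mixup augmentation, a standard technique that blends two inputs and their one-hot labels with a single Beta-distributed coefficient. This is a textbook baseline, not the thesis's own generators or MIMO mixing mechanism.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Classic mixed-sample data augmentation: convexly combine two samples
    and their one-hot labels with a coefficient drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2  # blended input
    y = lam * y1 + (1 - lam) * y2  # soft target reflecting the blend
    return x, y, lam
```

The model is then trained on (x, y) pairs like these; the soft target is what the abstract refers to as deciding "what the model should predict for mixed images".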
Hanna, Dima. „Usages numériques informels des enseignants du primaire et contribution à leur développement professionnel“. Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20092.
Today's teachers are required by the national education authorities to get involved in an individual and collective process of professional development throughout their career. This concept is currently very present in professional and scientific debates. Teachers' professional development is a process that can extend into the private sphere through personal work, and for about fifteen years this work has benefited from the emergence of digital tools. These days, digital technology circulates in all professional spheres, and the teaching profession cannot escape it. New opportunities of access to digital resources open up new « spaces of knowledge » and new learning methods for teachers. This thesis work attempts to describe and understand how informal digital uses can bring about learning and collaborating situations, and therefore how they can take part in the professional development of primary school teachers, more precisely those in the Haute-Garonne department. A mixed method, both quantitative and qualitative, has been used in response to this problem. The obtained results show an interrelation between the digital uses in teachers' private sphere and the indicators related to the process of professional development. Outside school, i.e. during the part of their working time that is most difficult to identify, teachers keep working as professionals: they do research and preparation work, they think, create, and exchange about the act of learning and teaching. The digital practices we describe are effective means of becoming professional and of structuring one's abilities.
Côté, Louis. „Analyse de contenu de manuels scolaires en lien avec l'enseignement-apprentissage de la notation exponentielle“. Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6734.
Marcelli, Agnès. „Temps et apprentissage d'une langue étrangère : vers un modèle bicontextuel d'enseignement - apprentissage, initié en présentiel et continué à distance à l'étranger (approche théorique et mise en oeuvre)“. Besançon, 2004. http://www.theses.fr/2004BESA1009.
This research lies within the framework of a project entitled FR 2000 (French in the Year 2000). Its goal is the effective modeling of an experimental course in the subject area of French as a Foreign Language. The experimental course is initiated in a face-to-face classroom at the Centre de Linguistique Appliquée de Besançon (France) and continues in a distance mode at the University of Technology of Brisbane (Australia) via the WebCT 3.2 online courseware. Our thesis begins by describing the theoretical and pedagogical principles, following a pedagogy-in-context approach, that govern our methods and didactic choices. The description of this hybrid experimental course, as well as its set of collected data, allowed us to put together a profile of learners and to observe, from a point of view focusing on time, the pertinence of such a context and the influence it can have on the teaching and learning of a foreign language.
Contemori, Giulio. „Amélioration de la fonctionnalité visuelle par l'utilisation concomitante de l'apprentissage perceptif et de la stimulation cérébrale : le cas de la dégénérescence maculaire“. Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30054.
Macular degeneration (MD) is a common visual disorder in the aging population characterized by a loss of central vision, reduced visual acuity and contrast sensitivity, and increased crowding. This impairment strongly affects the quality of life and personal autonomy. There is currently no cure for MD: the available treatment options are only able to slow down the disease, and even palliative treatments are rare. After the emergence of the central scotoma, patients with MD develop one or more eccentric fixation areas, the preferred retinal loci (PRLs), that are used for fixation, reading, tracking, and other visual tasks that require finer ocular abilities. The final goal of the project was to investigate and to improve the residual visual abilities in the PRL. Four studies were conducted in total. Study 1 was conducted in MD patients to investigate whether, after the emergence of the scotoma, the PRL acquires enhanced abilities in the processing of visual information through spontaneous or use-dependent adaptive plasticity. Study 2 aimed to assess the effects of a single administration of transcranial random noise stimulation (tRNS), a subtype of non-invasive transcranial electrical stimulation, on spatial integration in the healthy visual cortex. Study 3 aimed to assess the between-session effect of daily repeated tRNS coupled with perceptual training. The objective of Study 4 was to translate the previous findings into a clinically applicable treatment approach by combining tRNS and perceptual training in adult patients with MD. Contrary to previous results, we found evidence of neither spontaneous nor use-dependent cortical plasticity in the PRL before the training. We also found that tRNS was able to modulate visuospatial integration in early visual processing, promoting plastic changes in the stimulated network.
Its effects were not limited to short-term modulation but also boosted learning in a crowding task. The final experiment showed that a combination of tRNS and perceptual training could result in greater improvements and a larger transfer to untrained visual tasks in adults with MD than training alone. Overall, our results indicate that tRNS of the visual cortex has potential application as an additional therapy to improve vision in adults with bilateral central blindness.
Margeta, Ján. „Apprentissage automatique pour simplifier l’utilisation de banques d’images cardiaques“. Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0055/document.
The recent growth of data in cardiac databases has been phenomenal. Clever use of these databases could help find supporting evidence for better diagnosis and treatment planning. In addition to the challenges inherent to the large quantity of data, the databases are difficult to use in their current state. Data coming from multiple sources are often unstructured, the image content is variable and the metadata are not standardised. The objective of this thesis is therefore to simplify the use of large databases for cardiology specialists with automated image processing, analysis and interpretation tools. The proposed tools are largely based on supervised machine learning techniques, i.e. algorithms which can learn from large quantities of cardiac images with ground-truth annotations and which automatically find the best representations. First, the inconsistent metadata are cleaned, and interpretation and visualisation of images are improved by automatically recognising commonly used cardiac magnetic resonance imaging views from image content. The method is based on decision forests and convolutional neural networks trained on a large image dataset. Second, the thesis explores ways to use machine learning for the extraction of relevant clinical measures (e.g. volumes and masses) from 3D and 3D+t cardiac images. New spatio-temporal image features are designed and classification forests are trained to learn how to automatically segment the main cardiac structures (left ventricle and left atrium) from voxel-wise label maps. Third, a web interface is designed to collect pairwise image comparisons and to learn how to describe the hearts with semantic attributes (e.g. dilation, kineticity). In the last part of the thesis, a forest-based machine learning technique is used to map cardiac images to establish distances and neighborhoods between images. One application is the retrieval of the most similar images.
Guélorget, Paul. „Active learning for the detection of objects of operational interest in open-source multimedia content“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS018.
A profusion of openly accessible content, actors and interactions is targeted by analysts for intelligence, marketing or political purposes. Analysing the immensity of open-source data requires automated assistance. Although recent propositions in neural network architectures have demonstrated strong capacities for image and text modalities, their training harnesses massive training datasets, non-existent for the majority of operational classes of interest. To address this issue, active learning takes advantage of the great amounts of unlabelled documents by soliciting from a human oracle the ground-truth labels of the presumed most informative documents, in order to improve accuracy. Yet the model's decision-making rationales are opaque and might be unrelated to those of the oracle. Furthermore, with its time-consuming iterative steps, the active learning workflow is detrimental to real-time performance. Our contributions in this thesis aim to analyse and address these issues at four levels. Firstly, we observe the rationales behind a neural network's decisions. Secondly, we put these rationales into perspective with human rationales. Thirdly, we try to make the neural network align its decision-making rationales with those of a teacher model, to simulate the rationales of a human oracle and improve accuracy, in what is called active learning with rationales. Finally, we design and exploit an active learning framework to overcome its usual limitations. These studies were conducted with uni-modal text and image data, and multi-modal text and image associations, principally press articles in English and French. Throughout this work's chapters, we address several use cases, among which fake news classification, vagueness classification, the detection of lack of contradiction in articles, and the detection of arbitrary topics such as demonstrations and violence.
Carriere, Véronique. „Apprentissage médié par les TICE : le cas des étudiants déficients visuels“. Phd thesis, Université Paul Valéry - Montpellier III, 2012. http://tel.archives-ouvertes.fr/tel-00718447.
Peterson, Esperança. „Apprentissage du français langue étrangère et valeurs socioculturelles : le contenu socioculturel des manuels en usage en Angola“. Pau, 2008. http://www.theses.fr/2008PAUU1002.
This work concerns the teaching and learning of the sociocultural content of the French textbooks used in Angola at the beginner level, the level at which learners first come into contact with the unknown or the foreign. It starts from a diagnosis and an analysis of the difficulties encountered by Angolan teachers when confronted with French culture, and presents reflections on questions such as: which sociocultural contents are constructed at the start of teaching with Angolan beginner learners of French, and according to which method? What becomes of the sociocultural contents present in the textbooks in the absence of methodological guidance and teacher training? It draws on the results of the analysis of the textbooks and on surveys carried out in public and private schools in three provinces. It concludes that teachers often neglect the cultural part of the textbooks for lack of training, methodological information and resources. Consequently, it ends with proposals for a training module and an intercultural guide to improve their classroom practices.
Mondou, Damien. „Gestion adaptative des contenus numériques : proposition d’un framework générique par apprentissage et re-scénarisation dynamique“. Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS029.
This thesis aims to propose an architecture that addresses the design, supervision, management and adaptation of an interactive experience. We therefore propose a complete framework to facilitate the modeling phase of an interactive system and guarantee sufficient flexibility to achieve the objectives of complexity, scalability, adaptability and improvement through automatic learning. For this purpose, a formal model, CIT, based on two layers of description is introduced. The dynamic supervision process consists in controlling the interactive experience with regard to the formal model, based on networks of timed input/output automata. Two software tools, CELTIC (Common Editor for Location Time Interaction and Content) and EDAIN (Execution Driver based on Artificial INtelligence), implementing the CIT model and the activity supervision engine respectively, were developed during this thesis.
Jollivet, Chantal, und Eric Blanchard. „L'apprentissage coopératif dans la formation continue des enseignants du premier degré“. Université Louis Pasteur (Strasbourg) (1971-2008), 2002. http://www.theses.fr/2002STR1PS04.
Der volle Inhalt der QuelleThe objectives of our joint research are, firstly, to demonstrate how the concept of educational reciprocity encourages the ongoing personal and professional development of teachers, and secondly, from a sociological point of view, to explore how teachers in a group situation begin to cooperate in terms of attitudes and knowledge. Moreover, our thesis, which is based on the notion of reciprocity, is doubly centered on constructive cooperation, which is in effect both the aim and the means of our interactive research. The aim of our joint research consists of putting into practice and analysing a teacher training mechanism built on cooperative work. Cooperation is also the means of our research. This research is innovative in the sense that it places two researchers in a personal and sometimes cooperative thought process, where they need to construct the various stages of the undertaking using parallel approaches. We explore issues concerned with the professional development of teachers: how to help professionals become reflective practitioners and how to help them appreciate what can be gained from cooperative teaching. Our proposition is to put in place a training facility that not only analyses current practice, but also takes into account the parallel layers of knowledge and understanding. First-hand experience has shown us that personal enrichment is developed through the participation of all partners and is based on a collegial approach. It has also shown that if cooperation is necessary to surmount complexity, there are also a certain number of challenges to overcome, due to the variety of different approaches. The challenge of such a cooperative training program is that teachers, in their professional setting, can, with the cooperation of teacher trainers and university researchers, construct a training program that unites classroom practice and theory.
Ettehadi, Seyedrohollah. „Model-based and machine learning techniques for nonlinear image reconstruction in diffuse optical tomography“. Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11895.
Der volle Inhalt der QuelleAbstract: Diffuse optical tomography (DOT) is a low-cost and noninvasive 3D biomedical imaging technique to reconstruct the optical properties of biological tissues. Image reconstruction in DOT is inherently a difficult problem, because the inversion process is nonlinear and ill-posed. During DOT image reconstruction, the optical properties of the medium are recovered from boundary measurements at the surface of the medium. In this work, two approaches are proposed for nonlinear DOT image reconstruction. The first approach relies on iterative model-based image reconstruction, an approach still under development for DOT in the literature. A 3D forward model is developed based on the diffusion equation, an approximation of the radiative transfer equation. The forward model can simulate light propagation in complex geometries. Additionally, it is developed to deal with different types of optical data, such as continuous-wave (CW) and time-domain (TD) data, for both intrinsic and fluorescence signals. First, a multispectral image reconstruction algorithm is developed to reconstruct the concentrations of different tissue chromophores simultaneously from a set of CW measurements at different wavelengths. A second image reconstruction algorithm is developed to reconstruct the fluorescence lifetime (FLT) of different fluorescent markers from time-domain fluorescence measurements. In this algorithm, all the information contained in the full temporal curves is used, along with an acceleration technique to make the algorithm practical to use. Moreover, the proposed algorithm has the potential to distinguish more than 3 FLTs, which is a first in fluorescence imaging. The second approach is based on machine learning techniques, in particular deep learning models.
A deep generative model is proposed to reconstruct the fluorescence distribution map from CW fluorescence measurements. It is the first time that such a model has been applied to fluorescence DOT image reconstruction. The performance of the proposed algorithm is validated with an optical phantom and a fluorescent marker. The proposed algorithm recovers the fluorescence distribution even from the very noisy and sparse measurements that are a major limitation in fluorescence DOT imaging.
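In spirit, the iterative model-based reconstruction described above amounts to a regularized Gauss-Newton loop that repeatedly linearizes a nonlinear forward model and solves for a parameter update; the toy forward model, starting point, and regularization weight below are illustrative stand-ins, not the thesis's diffusion-equation solver:

```python
import numpy as np

def forward(x):
    # Toy nonlinear "forward model": 3 boundary measurements from 2 optical parameters.
    return np.array([np.exp(-x[0]) + x[1]**2, x[0] * x[1], np.sin(x[1])])

def jacobian(x):
    # Analytic Jacobian of the toy forward model.
    return np.array([
        [-np.exp(-x[0]), 2 * x[1]],
        [x[1], x[0]],
        [0.0, np.cos(x[1])],
    ])

def reconstruct(y, x0, lam=1e-3, iters=50):
    # Regularized Gauss-Newton: at each step, linearize and solve
    # (J^T J + lam I) dx = -J^T r  for the update dx.
    x = x0.copy()
    for _ in range(iters):
        r = forward(x) - y
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        x += dx
    return x

x_true = np.array([0.5, 1.2])          # "true" optical parameters
y = forward(x_true)                    # simulated boundary measurements
x_hat = reconstruct(y, x0=np.array([1.0, 1.0]))
print(np.allclose(x_hat, x_true, atol=1e-3))
```

In the actual DOT setting the unknown is a voxel-wise parameter map and the forward model is a PDE solve, so the Jacobian is large and the regularization term becomes essential to the ill-posed inversion.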
Blanchart, Pierre. „Apprentissage rapide adapté aux spécificités de l'utilisateur : application à l'extraction d'informations d'images de télédétection“. Phd thesis, Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00662747.
Der volle Inhalt der QuelleAn important emerging topic in satellite image content extraction and classification is building retrieval systems that automatically learn high-level semantic interpretations from images, possibly under the direct supervision of the user. In this thesis, we consider in turn the two very broad categories of auto-annotation systems and interactive image search engines, proposing our own solutions to the recurring problem of learning from small, non-exhaustive training datasets and of generalizing over a very high volume of unlabeled data. In our first contribution, we look into the problem of exploiting the huge volume of unlabeled data to discover "unknown" semantic structures, that is, semantic classes which are not represented in the training dataset. We propose a semi-supervised algorithm able to build an auto-annotation model over non-exhaustive training datasets and to point out interesting new semantic structures to the user, so as to guide them in exploring the database. In our second contribution, we address the problem of speeding up learning in interactive image search engines. We derive a semi-supervised active learning algorithm which exploits the intrinsic data distribution to achieve faster identification of the target category. In our last contribution, we describe a cascaded active learning strategy to retrieve objects in large satellite image scenes. Accordingly, we propose an active learning method which exploits a coarse-to-fine scheme to avoid the computational overload inherent in multiple evaluations of the decision function of the complex classifiers needed to retrieve complex object classes.
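The uncertainty-sampling idea behind such active learning (spend the labeling budget on the unlabeled points the current model is least sure about) can be sketched with a toy nearest-centroid learner; the synthetic data, query budget, and classifier below are illustrative, not the semi-supervised algorithm of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters standing in for two image categories.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)   # oracle labels, hidden from the learner

labeled = [0, 100]                    # one seed example per class
unlabeled = [i for i in range(200) if i not in labeled]

def centroids(idx):
    # Class centroids computed from the currently labeled points only.
    return {c: X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)}

for _ in range(10):                   # query budget of 10 labels
    cs = centroids(labeled)
    d0 = np.linalg.norm(X[unlabeled] - cs[0], axis=1)
    d1 = np.linalg.norm(X[unlabeled] - cs[1], axis=1)
    # Uncertainty sampling: query the point closest to the decision boundary.
    q = unlabeled[int(np.argmin(np.abs(d0 - d1)))]
    labeled.append(q)                 # the oracle reveals y[q]
    unlabeled.remove(q)

cs = centroids(labeled)
pred = (np.linalg.norm(X - cs[1], axis=1) < np.linalg.norm(X - cs[0], axis=1)).astype(int)
print((pred == y).mean())             # accuracy after 10 queries
```

The semi-supervised twist described in the abstract would additionally use the unlabeled distribution itself (e.g. cluster structure) to steer the queries, rather than model uncertainty alone.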
Veniat, Tom. „Neural Architecture Search under Budget Constraints“. Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS443.
Der volle Inhalt der QuelleThe recent increase in computing power and the ever-growing amount of available data ignited the rise in popularity of deep learning. However, the expertise, the amount of data, and the computing power necessary to build such algorithms, as well as the memory footprint and inference latency of the resulting systems, are all obstacles preventing the widespread use of these methods. In this thesis, we propose several methods that take a step towards a more efficient and automated procedure for building deep learning models. First, we focus on learning an efficient architecture for image processing problems. We propose a new model in which the architecture learning procedure can be guided by specifying a fixed budget and cost function. Then, we consider the problem of sequence classification, where a model can be even more efficient by dynamically adapting its size to the complexity of the incoming signal. We show that both approaches result in significant budget savings. Finally, we tackle the efficiency problem through the lens of transfer learning, arguing that a learning procedure can be made even more efficient if, instead of starting tabula rasa, it builds on knowledge acquired during previous experiences. We explore modular architectures in the continual learning scenario and present a new benchmark allowing a fine-grained evaluation of different kinds of transfer.
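The budget-constrained part of this search can be sketched as rejection-filtered random search over a hypothetical (depth, width) space; the cost model and score function below are toy stand-ins, not the differentiable budget-aware objective of the thesis:

```python
import random

random.seed(0)

# Hypothetical search space: architectures described by (layers, width).
def cost(arch):
    layers, width = arch
    return layers * width * width          # stand-in for FLOPs / memory footprint

def score(arch):
    layers, width = arch
    # Toy "accuracy" proxy that grows with model capacity.
    return 1 - 1 / (1 + 0.1 * layers * width)

BUDGET = 50_000
best, best_score = None, -1.0
for _ in range(1000):
    arch = (random.randint(1, 16), random.choice([16, 32, 64, 128, 256]))
    if cost(arch) > BUDGET:                # hard budget constraint: reject
        continue
    s = score(arch)
    if s > best_score:
        best, best_score = arch, s

print(best, cost(best) <= BUDGET)
```

Under this cost model the search trades depth against width: wide architectures pay a quadratic cost in width, so the best feasible candidates are moderately wide and deep rather than maximally wide.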