Academic literature on the topic 'Distillation de connaissances'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distillation de connaissances.'
Dissertations / Theses on the topic "Distillation de connaissances"
Sourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation [Knowledge graph representation learning and enrichment of pre-trained language models with knowledge graphs: approaches based on distillation models]." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze natural language data has become increasingly important in recent years as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow. One of the main challenges in NLP is representing and processing knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them; they are a powerful tool for representing knowledge in a structured and formalized way, and they provide a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data.

The work conducted in this thesis explores the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary representation spaces. Our first contribution is a new framework, called KD-MKB, for learning entities and relations over multiple knowledge bases. The key objective of multi-graph representation learning is to endow the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge. Grounded on this finding, we propose and formalize a cooperative distillation framework in which a set of KB models are jointly learned using hard labels from their own context and soft labels provided by their peers.

Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework that aligns the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach provides a new direction for improving the ability of PLM-based slot-filling systems to handle entities.
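As a rough illustration of the cooperative distillation objective described in this abstract, in which each model learns from hard labels in its own context and soft labels provided by a peer, the following Python/PyTorch sketch combines a cross-entropy term on hard labels with a temperature-scaled KL-divergence term on a peer's soft predictions. The names (temperature, alpha, peer_logits) and the exact formulation are illustrative assumptions, not the thesis's KD-MKB implementation.

import torch
import torch.nn.functional as F

def mutual_distillation_loss(student_logits, peer_logits, hard_labels,
                             temperature=2.0, alpha=0.5):
    """Hard-label term plus soft-label term from a peer model (illustrative sketch)."""
    # Hard labels taken from the model's own knowledge-base context.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    # Soft labels: the peer's posterior distribution, treated as a fixed target.
    soft_targets = F.softmax(peer_logits.detach() / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    # Weighted combination of own-context supervision and peer distillation.
    return alpha * hard_loss + (1.0 - alpha) * soft_loss

In a mutual-learning setup of this kind, each KB model would in turn play the student role while its peers supply the soft targets.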
Kara, Sandra. "Unsupervised object discovery in images and video data." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG019.
This thesis explores self-supervised learning methods for object localization, commonly known as Object Discovery. Object localization in images and videos is an essential component of computer vision tasks such as detection, re-identification, and tracking. Current supervised algorithms can localize (and classify) objects accurately but are costly due to the need for annotated data. The labeling process is typically repeated for each new dataset or category of interest, limiting scalability. Additionally, semantically specialized approaches require prior knowledge of the target classes, restricting their use to known objects. Object Discovery aims to address these limitations by being more generic.

The first contribution of this thesis focuses on the image modality, investigating how features from self-supervised vision transformers can serve as cues for multi-object discovery. To localize objects in their broadest definition, we extended our focus to video data, leveraging motion cues and targeting the localization of objects that can move. We introduced background modeling and knowledge distillation into object discovery to tackle the background over-segmentation issue of existing object discovery methods and to reintegrate static objects, significantly improving the signal-to-noise ratio of the predictions. Recognizing the limitations of single-modality data, we incorporated 3D data through a cross-modal distillation framework. The knowledge exchange between the 2D and 3D domains improves alignment on object regions between the two modalities, enabling the use of multi-modal consistency as a confidence criterion.
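As a loose sketch of the cross-modal distillation idea mentioned above (knowledge exchange between a 2D image branch and a 3D branch on matched object regions, with multi-modal consistency as a confidence criterion), the snippet below aligns pooled region features from both modalities and scores their agreement. The function and variable names (feat_2d, feat_3d) are assumptions for illustration only, not the thesis's implementation.

import torch
import torch.nn.functional as F

def cross_modal_distillation(feat_2d, feat_3d):
    """Symmetric alignment of matched 2D and 3D region features, each of shape (N, D)."""
    feat_2d = F.normalize(feat_2d, dim=-1)
    feat_3d = F.normalize(feat_3d, dim=-1)
    # Each modality serves as a frozen teacher for the other (cosine distance).
    loss_2d_to_3d = (1.0 - (feat_3d * feat_2d.detach()).sum(dim=-1)).mean()
    loss_3d_to_2d = (1.0 - (feat_2d * feat_3d.detach()).sum(dim=-1)).mean()
    return 0.5 * (loss_2d_to_3d + loss_3d_to_2d)

def multimodal_confidence(feat_2d, feat_3d):
    """Use 2D/3D agreement (cosine similarity) as a per-region confidence score."""
    return F.cosine_similarity(feat_2d, feat_3d, dim=-1)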
Bayle, Jean-Claude. "Contribution à la connaissance de la sauge sclarée : étude de l'huile essentielle et du sclaréol [Contribution to the knowledge of clary sage: a study of the essential oil and of sclareol]." Aix-Marseille 3, 1989. http://www.theses.fr/1989AIX30020.
Bernet, Cyril. "Contribution à la connaissance des composés d'arôme clés des vins du cépage gewurztraminer cultivé en Alsace [Contribution to the knowledge of the key aroma compounds of wines from the Gewurztraminer grape variety grown in Alsace]." Dijon, 2000. http://www.theses.fr/2000DIJOS049.
Full textDaou, Andrea. "Real-time Indoor Localization with Embedded Computer Vision and Deep Learning." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR002.
The need to determine the location of individuals or objects in indoor environments has become an essential requirement. The Global Navigation Satellite System, the predominant outdoor localization solution, encounters limitations indoors due to signal reflections and attenuation caused by obstacles. To address this, various indoor localization solutions have been explored. Wireless-based indoor localization methods exploit wireless signals to determine a device's indoor location. However, signal interference, often caused by physical obstructions, reflections, and competing devices, can lead to inaccuracies in location estimation; these methods also require the deployment of access points, with the associated costs and maintenance effort. An alternative approach is dead reckoning, which estimates a user's movement from a device's inertial sensors, but it faces challenges related to sensor accuracy, user characteristics, and temporal drift. Other indoor localization techniques exploit magnetic fields generated by the Earth and by metal structures; these techniques depend on the devices and sensors used as well as on the user's surroundings.

The goal of this thesis is to provide an indoor localization system designed for professionals, such as firefighters, police officers, and lone workers, who require precise and robust positioning in challenging indoor environments. We propose a vision-based indoor localization system that leverages recent advances in computer vision to determine a person's location within indoor spaces. We develop a room-level indoor localization system based on Deep Learning (DL) and built-in smartphone sensors, combining visual information with the smartphone's magnetic heading. To achieve localization, the user captures an image of the indoor surroundings using a smartphone equipped with a camera, an accelerometer, and a magnetometer. The captured image is then processed by our proposed multiple direction-driven Convolutional Neural Networks to accurately predict the specific indoor room. The proposed system requires minimal infrastructure and provides accurate localization.

In addition, we highlight the importance of ongoing maintenance of the vision-based indoor localization system. The system requires regular maintenance to adapt to changing indoor environments, particularly when new rooms must be integrated into the existing localization framework. Class-Incremental Learning (Class-IL) is a computer vision approach that allows deep neural networks to incorporate new classes over time without forgetting previously learned knowledge. In the context of vision-based indoor localization, this concept must be applied to accommodate new rooms. The selection of representative samples is essential to control memory limits, avoid forgetting, and retain knowledge from previous classes. We develop a coherence-based sample selection method for Class-IL, bringing the advantages of the coherence measure to a DL framework. The relevance of the methodology and the algorithmic contributions of this thesis is rigorously tested and validated through comprehensive experimentation and evaluation on real datasets.
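To make the class-incremental maintenance step more concrete, the sketch below shows one common way to select a bounded set of exemplars when a new room (class) is added. The thesis proposes a coherence-based selection criterion; as a stand-in, this example ranks samples by the distance of their features to the class mean (a herding-style heuristic), so it only illustrates the general mechanism, not the thesis's coherence measure.

import torch

def select_exemplars(features, memory_per_class):
    """Pick indices of the most representative samples of one new class.

    features: (N, D) feature vectors of the new class's samples.
    memory_per_class: exemplar budget allotted to this class.
    """
    class_mean = features.mean(dim=0, keepdim=True)           # (1, D)
    distances = torch.cdist(features, class_mean).squeeze(1)  # (N,)
    # Keep the samples whose features lie closest to the class mean,
    # within the memory budget, so old rooms stay represented after updates.
    k = min(memory_per_class, features.shape[0])
    return torch.topk(distances, k=k, largest=False).indices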