Selected scientific literature on the topic "Modèles de langue pré-entraînés"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Modèles de langue pré-entraînés".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever one is available in the metadata.
Journal articles on the topic "Modèles de langue pré-entraînés"
Piok, Maria, and Nicolas Vicquery. "Les rôles de Nestroy et leurs modèles français". Austriaca 75, no. 1 (2012): 95–107. http://dx.doi.org/10.3406/austr.2012.4964.
Missire, Régis. "Perception sémantique et perception sémiotique: propositions pour un modèle perceptif du signe linguistique". Acta Semiótica et Lingvistica 24, no. 2 (27 December 2019): 80–100. http://dx.doi.org/10.22478/ufpb.2446-7006.2019v24n2.50083.
Ziegler, Nicole, Kara Moranski, George Smith, and Huy Phung. "Metacognitive Instruction and Interactional Feedback in a Computer-Mediated Environment". TESL Canada Journal 37, no. 2 (2 December 2020): 210–33. http://dx.doi.org/10.18806/tesl.v37i2.1337.
Bamgbose, Ayo. "Issues for a Model of Language Planning". Language Problems and Language Planning 13, no. 1 (1 January 1989): 24–34. http://dx.doi.org/10.1075/lplp.13.1.03bam.
Chira, Rodica-Gabriela. "Sophie Hébert-Loizelet and Élise Ouvrard. (Eds.) Les carnets aujourd’hui. Outils d’apprentissage et objets de recherche. Presses universitaires de Caen, 2019. Pp. 212. ISBN 979-2-84133-935-8". Journal of Linguistic and Intercultural Education 13 (1 December 2020): 195–200. http://dx.doi.org/10.29302/jolie.2020.13.12.
Raynal-Astier, Corinne, and Mireille Jullien. "À la recherche d’une francophonie didactique india-océanique. Premières recensions dans les manuels scolaires utilisés aux Comores". La F/francophonie dans l’aire indiaocéanique : singularités, héritages et pratiques, no. 11 (17 July 2023). http://dx.doi.org/10.35562/rif.1495.
Bénéi, Veronique. "Nationalisme". Anthropen, 2016. http://dx.doi.org/10.17184/eac.anthropen.021.
Kohler, Gun-Britt, and Pavel Navumenka. "Literaturgeschichte, Feldformation und transnationale Möglichkeitsräume: Literatur im Raum Belarus in den 1920er Jahren". Slovo How to think of literary... (25 February 2020). http://dx.doi.org/10.46298/slovo.2020.6143.
Bromberger, Christian. "Méditerranée". Anthropen, 2019. http://dx.doi.org/10.17184/eac.anthropen.106.
Testo completoTesi sul tema "Modèles de langue pré-entraînés"
Lovon Melgarejo, Jesus Enrique. "Évaluation et intégration des connaissances structurelles pour modèles de langue pré-entraînés". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://thesesups.ups-tlse.fr/6065/.
The field of knowledge representation is constantly evolving. Thanks to recent advances in deep neural networks, particularly the Transformer architecture, natural language processing (NLP) has gained groundbreaking tools that have improved performance across many NLP tasks. Pre-trained language models (PLMs) such as BERT and GPT, Transformer-based models trained on extensive amounts of textual data, have played a significant role in this progress. PLMs produce contextualized representations that embed rich syntactic and semantic patterns of language. However, they do not provide structured and factual knowledge representations, which are essential for a deeper understanding of language. To alleviate these issues, researchers have explored combining classical PLMs with external knowledge resources such as knowledge bases (KBs). This approach aims to complement PLMs with the structural and factual components inherently present in KBs, and it has given rise to a new family of knowledge-enhanced PLMs (KEPLMs). In this thesis, we focus on integrating KBs into PLMs, with a particular interest in their structure or hierarchy. We explore different research directions for enhancing these PLMs: (i) exploring the limitations of, and methods for, implicitly integrating KBs, and their impact on reasoning-based tasks; and (ii) defining evaluation methodologies for explicit hierarchical signals in PLMs and their transferability to other NLP tasks. In a first contribution, we revisit the training methods of PLMs for reasoning-based tasks. Current methods are limited in their ability to generalize this task across difficulty levels, since they treat each level as a separate task. Instead, we propose an incremental learning approach in which reasoning is learned gradually, from simple to complex difficulty levels. This approach exploits previously overlooked components that do not participate in the main reasoning chain, and we evaluate whether it improves the generalization of the task. We use an implicit methodology that transforms structured information into unstructured text with rich hierarchical content, and we conduct experiments on reasoning-related tasks such as reading comprehension and question answering to assess the pertinence of our proposal. In our second contribution, we aim to improve the performance of PLMs by incorporating explicit hierarchical signals. While various evaluation and integration approaches have been developed for static word embeddings, these methods remain underexplored for contextualized word embeddings. Current evaluation methods for PLMs inherit the limitations of static-embedding evaluations, such as dataset biases and superficial hierarchical signals. We therefore propose a new evaluation methodology for PLMs that considers multiple hierarchy signals. Our work characterizes hierarchical representation by decomposing it into basic hierarchical distributions that we call hierarchy properties. We use these properties to evaluate the hierarchical knowledge present in state-of-the-art PLMs, and we analyze whether learning them improves the models' internal hierarchical representations and their applicability to related NLP tasks.
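The easy-to-hard incremental learning idea described in this abstract can be pictured with a short sketch. This is a minimal illustration under assumed names: the `difficulty` tag on examples, the `train_epoch` callback, and the growing-pool strategy are stand-ins for exposition, not the thesis's actual training code.

```python
# Minimal sketch of incremental (easy-to-hard) learning for a reasoning task.
# Assumptions: each example carries a "difficulty" tag, and the caller supplies
# a `train_epoch` callback; neither comes from the thesis itself.
from typing import Callable, Dict, List

def incremental_training(
    model,
    examples: List[Dict],
    train_epoch: Callable,        # one optimisation pass over a pool of examples
    max_level: int,
    epochs_per_level: int = 1,
):
    """Learn reasoning gradually, from simple to complex difficulty levels.

    Rather than treating each difficulty level as a separate task, the
    training pool grows: at level k the model sees every example whose
    difficulty is <= k, so simple reasoning steps keep being rehearsed
    while longer chains are introduced.
    """
    for level in range(1, max_level + 1):
        pool = [ex for ex in examples if ex["difficulty"] <= level]
        for _ in range(epochs_per_level):
            train_epoch(model, pool)
    return model
```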
Sourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to process and analyze such data effectively has become increasingly important in recent years, as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow significantly. One of the main challenges in NLP is representing and processing knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them; they are a powerful tool for representing knowledge in a structured and formalized way, providing a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis explores knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework for learning entities and relations over multiple knowledge bases, called KD-MKB. The key objective of multi-graph representation learning is to endow the entity and relation models with different graph contexts that can potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning; it allows efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge. On this basis, we propose and formalize a cooperative distillation framework in which a set of KB models are jointly learned, each using hard labels from its own context and soft labels provided by its peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework that aligns the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach offers a new direction for improving the ability of PLM-based slot-filling systems to handle entities.
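The mix of hard labels from a model's own graph and soft labels from peers has the general shape of a standard mutual-distillation objective. The sketch below is a minimal PyTorch rendering under assumed tensor shapes, temperature, and weighting; the actual KD-MKB loss is defined over posterior distributions on knowledge aligned between KBs and is richer than this.

```python
# Minimal sketch of a cooperative distillation loss: a knowledge-graph model
# learns from hard labels on its own graph plus soft labels (posterior
# distributions over candidate entities) provided by a peer model.
# The temperature, the weighting `alpha`, and the tensor shapes are
# illustrative assumptions, not the exact KD-MKB formulation.
import torch
import torch.nn.functional as F

def cooperative_distillation_loss(
    student_scores: torch.Tensor,   # (batch, n_candidates), this model's scores
    peer_scores: torch.Tensor,      # (batch, n_candidates), a peer's scores
    hard_labels: torch.Tensor,      # (batch,), index of the true entity
    temperature: float = 2.0,
    alpha: float = 0.5,
) -> torch.Tensor:
    # Hard-label term: ordinary cross-entropy on the model's own graph.
    ce = F.cross_entropy(student_scores, hard_labels)
    # Soft-label term: KL divergence towards the peer's tempered posterior.
    log_p_student = F.log_softmax(student_scores / temperature, dim=-1)
    p_peer = F.softmax(peer_scores / temperature, dim=-1)
    kl = F.kl_div(log_p_student, p_peer, reduction="batchmean") * temperature**2
    return alpha * ce + (1.0 - alpha) * kl
```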
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés". Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the context of telephone dialogues with defined goals (hotel reservations, for example). Historically, SLU was performed by a cascade of systems: a first system transcribed the speech into words, and a natural language understanding system then linked those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Trained in a generic way on very large datasets, they can then be adapted to other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline or end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to combine the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
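The contrast between the two architectures discussed in this abstract can be summarized in a few lines. In this minimal sketch, `asr_model`, `nlu_model`, and `slu_model` (and their methods) are hypothetical stand-ins for pre-trained components; the sketch only illustrates the data flow, not the thesis's systems.

```python
# Minimal sketch of the two SLU architectures: a cascade (pipeline) and an
# end-to-end system. All model objects and their methods are hypothetical.
from typing import List

def cascade_slu(audio, asr_model, nlu_model) -> List[str]:
    """Pipeline: speech -> words -> semantic annotation.

    Transcription errors made by the ASR stage propagate to the NLU stage,
    which is the classical weakness of cascade systems.
    """
    transcript = asr_model.transcribe(audio)   # intermediate word sequence
    return nlu_model.tag(transcript)           # semantic concepts

def end_to_end_slu(audio, slu_model) -> List[str]:
    """A single model maps the speech signal directly to the annotation.

    No intermediate transcript is produced, so acoustic cues that a
    transcript discards remain available to the model.
    """
    return slu_model.annotate(audio)           # semantic concepts
```

The hybrid strategies studied in the thesis sit between these two extremes, combining elements of both data flows.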
Hamza, Ghazoi. "Contribution aux développements des modèles analytiques compacts pour l’analyse vibratoire des systèmes mécatroniques". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC018/document.
This thesis focuses on the development of a method for the preliminary design of mechatronic systems that takes the vibratory aspect into account without resorting to costly design techniques such as 3D CAD and the finite element method. In the early stages of the design process of mechatronic systems, simple analytical models are necessary for the mechatronics architect engineer to make important conceptual decisions related to multi-physics coupling and vibration. For this purpose, a library of flexible elements based on analytical models was developed in this thesis, using the Modelica modeling language. To demonstrate the possibilities of this approach, we studied the vibration response of several mechatronic systems. The pre-sizing approach was applied first to a simple mechatronic system consisting of a rectangular plate supporting electrical components such as electric motors and electronic cards, and then to a wind turbine, considered as a complete mechatronic system. Simulation results were compared with the finite element method and with other studies from the scientific literature. These results show that the compact models developed help the mechatronic architect obtain simulation results with good accuracy at a low computational cost.
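As an example of the kind of compact analytical model the thesis advocates, the natural frequencies of a simply supported rectangular plate have a textbook closed-form solution that needs no 3D CAD or finite-element mesh. The thesis library is written in Modelica; the sketch below uses plain Python purely for illustration, and the example dimensions and material values are assumptions.

```python
# Closed-form natural frequencies of a simply supported rectangular plate
# (classical thin-plate theory) -- a compact analytical model usable for
# pre-sizing, in contrast with a full finite-element simulation.
import math

def plate_natural_frequency(m: int, n: int, a: float, b: float, h: float,
                            E: float, nu: float, rho: float) -> float:
    """Return the (m, n)-mode natural frequency in Hz.

    m, n: mode numbers along the two plate edges
    a, b: plate dimensions (m);  h: thickness (m)
    E: Young's modulus (Pa);  nu: Poisson's ratio;  rho: density (kg/m^3)
    """
    D = E * h**3 / (12.0 * (1.0 - nu**2))    # flexural rigidity
    omega = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
    return omega / (2.0 * math.pi)

# Illustrative values: fundamental mode of a 0.4 m x 0.3 m, 2 mm aluminium plate.
f11 = plate_natural_frequency(1, 1, a=0.4, b=0.3, h=0.002, E=70e9, nu=0.33, rho=2700)
print(f"f(1,1) = {f11:.1f} Hz")   # roughly 85 Hz
```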
Ortiz, Suarez Pedro. "A Data-driven Approach to Natural Language Processing for Contemporary and Historical French". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS155.
In recent years, neural methods for Natural Language Processing (NLP) have consistently and repeatedly improved the state of the art in a wide variety of NLP tasks. One of the main reasons for this steady improvement is the increased use of transfer learning techniques. These methods consist in taking a pre-trained model and reusing it, with little to no further training, to solve other tasks. Even though these models have clear advantages, their main drawback is the amount of data needed to pre-train them. The lack of large-scale data previously hindered the development of such models for contemporary French, and even more so for its historical states. In this thesis, we focus on developing corpora for pre-training these transfer learning architectures. This approach proves extremely effective, as we are able to establish a new state of the art for a wide range of NLP tasks for contemporary, medieval, and early modern French, as well as for six other contemporary languages. Furthermore, we determine not only that these models are extremely sensitive to the quality, heterogeneity, and balance of the pre-training data, but also that these three features are better predictors of the pre-trained models' performance on downstream tasks than the size of the pre-training data itself. In fact, we find that the importance of pre-training dataset size has been largely overestimated, as we repeatedly show that such models can be pre-trained with corpora of modest size.
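To make the data-quality point concrete, here is a minimal sketch of the kind of cheap heuristic filtering commonly used when building pre-training corpora. The thresholds and rules are illustrative assumptions only, not the filters actually applied in the thesis.

```python
# Minimal sketch of heuristic quality filtering for a pre-training corpus.
# The thresholds below are illustrative assumptions, not the thesis's filters.
def keep_document(text: str, min_words: int = 10, max_non_alpha: float = 0.3) -> bool:
    """Discard very short or very noisy documents."""
    words = text.split()
    if len(words) < min_words:
        return False
    non_alpha = sum(1 for ch in text if not (ch.isalpha() or ch.isspace()))
    if non_alpha / max(len(text), 1) > max_non_alpha:
        return False          # likely markup, code, or extraction debris
    return True

raw_documents = [
    "Un court exemple de document propre, conservé pour le pré-entraînement du modèle.",
    "@@## 404 ::: --- ### page not found ///",   # noisy; filtered out
]
corpus = [doc for doc in raw_documents if keep_document(doc)]
```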
Conference papers on the topic "Modèles de langue pré-entraînés"
Guillaume, Severine, Guillaume Wisniewski, Cécile Macaire, Guillaume Jacques, Alexis Michaud, Benjamin Galliot, Maximin Coavoux, Solange Rossato, Minh-Châu Nguyên, and Maxime Fily. "Les modèles pré-entraînés à l'épreuve des langues rares : expériences de reconnaissance de mots sur la langue japhug (sino-tibétain)". In XXXIVe Journées d'Études sur la Parole -- JEP 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/jep.2022-52.