Selection of scientific literature on the topic "Espaces latents de phrases"
Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Espaces latents de phrases".
Journal articles on the topic "Espaces latents de phrases"
Sène, Abdourahmane Mbade. "Conflit Autour d'Un Espace Protégé : Cas du Parc National de Basse Casamance". European Scientific Journal, ESJ 19, no. 5 (February 28, 2023): 36. http://dx.doi.org/10.19044/esj.2023.v19n5p36.
Sène, Abdourahmane Mbade. "Conflit Autour d'Un Espace Protégé: Cas du Parc National de Basse Casamance". European Scientific Journal, ESJ 12 (December 19, 2022). http://dx.doi.org/10.19044/esipreprint.12.2022.p278.
Fiedler, Sabine. "'Mit dem Topping bin ich auch fein' – Anglicisms in a German TV cooking show". Anglicismes : variétés diatopiques et genres textuels, no. 4 (December 5, 2022). http://dx.doi.org/10.25965/espaces-linguistiques.488.
Dobson, James, and Scott Sanders. "Distant Approaches to the Printed Page". Digital Studies/le champ numérique (DSCN), Open Issue 2022, 12, no. 1 (May 5, 2022). http://dx.doi.org/10.16995/dscn.8107.
Dissertations on the topic "Espaces latents de phrases"
Duquenne, Paul-Ambroise. "Sentence Embeddings for Massively Multilingual Speech and Text Processing". Electronic thesis or dissertation, Sorbonne Université, 2024. http://www.theses.fr/2024SORUS039.
Representation learning of sentences has been widely studied in NLP. While many works have explored different pre-training objectives to create contextual representations from sentences, several others have focused on learning sentence embeddings for multiple languages, with the aim of encoding paraphrases and translations close together in the sentence embedding space.

In this thesis, we first study how to extend text sentence embedding spaces to the speech modality in order to build a multilingual speech/text sentence embedding space. Next, we explore how to use this multilingual and multimodal sentence embedding space for large-scale speech mining, which allows us to automatically create alignments between written and spoken sentences in different languages. For high similarity thresholds in the latent space, aligned sentences can be considered translations: if the alignments pair written sentences with spoken sentences, they are potential speech-to-text translations; if they pair spoken sentences on both sides, they are potential speech-to-speech translations. To validate the quality of the mined data, we train speech-to-text and speech-to-speech translation models. We show that adding the automatically mined data significantly improves the quality of the learned translation models, demonstrating both the quality of the alignments and the usefulness of the mined data.

We then study how to decode these sentence embeddings into text or speech in different languages. We explore several methods for training decoders and analyze their robustness to modalities and languages not seen during training, in order to evaluate cross-lingual and cross-modal transfer. We demonstrate that zero-shot cross-modal translation is possible in this framework, achieving translation results close to those of systems trained in a supervised manner with a cross-attention mechanism. The compatibility between speech/text representations from different languages enables this strong performance, despite the intermediate fixed-size representation.

Finally, we develop a new state-of-the-art massively multilingual speech/text sentence embedding space, named SONAR, based on conclusions drawn from the first two projects. We study different objective functions for learning such a space and analyze their impact on the organization of the space as well as on the ability to decode its representations. We show that this sentence embedding space outperforms previous state-of-the-art methods in both cross-lingual and cross-modal similarity search as well as in decoding. The new space covers 200 written languages and 37 spoken languages. It also offers text translation results close to those of the NLLB system on which it is based, and speech translation results competitive with the supervised Whisper system. We also present SONAR EXPRESSIVE, which introduces an additional representation encoding non-semantic speech properties, such as vocal style or expressivity.
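The mining step described in this abstract amounts to a nearest-neighbour search over a shared embedding space: candidate pairs whose cosine similarity exceeds a threshold are kept as potential translations. A minimal NumPy sketch of this idea, assuming pre-computed embeddings; the function name, threshold value, and toy vectors are illustrative, not the thesis's actual pipeline (which uses margin-based criteria at much larger scale):

```python
import numpy as np

def mine_pairs(src_emb, tgt_emb, threshold=0.8):
    """Return (i, j, score) triples where the cosine similarity between
    src_emb[i] and its best-matching tgt_emb[j] exceeds `threshold`.

    src_emb, tgt_emb: 2-D arrays of sentence embeddings, e.g. text
    embeddings on one side and speech embeddings on the other.
    """
    # L2-normalise so that the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T  # (n_src, n_tgt) similarity matrix
    pairs = []
    for i in range(sims.shape[0]):
        j = int(np.argmax(sims[i]))  # best target for source i
        if sims[i, j] >= threshold:
            pairs.append((i, j, float(sims[i, j])))
    return pairs

# Toy example: one well-aligned pair and one unrelated vector.
src = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.9, 0.1], [-1.0, 0.2]])
print(mine_pairs(src, tgt, threshold=0.8))
```

Only the first source sentence finds a target above the threshold, so only that pair is mined; raising the threshold trades recall for alignment precision, which is exactly the quality/quantity trade-off the mined-data experiments evaluate.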
Mirault, Jonathan. "L'identification des mots lors de la lecture de phrases : de l'intégration des informations orthographiques aux représentations syntaxiques". Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0255.
Howcanwereadsentenceswithoutsapces or wehn the lerttes in the wrods are srcamlbed? In this thesis, we present seven research articles that focus on the role of efficient orthographic processing, word identification, and syntactic processing in enabling reading under noisy conditions. The results of these studies reveal the spatial integration of orthographic information, followed by parallel, cascaded, and interactive processing of word identities, their syntactic function, and their contribution to sentence-level processing. A key element in answering the questions raised in this thesis is the idea that words are to sentences what letters are to words: just as noisy letter-position coding enables the identification of words with transposed letters, noisy spatiotopic coding of word positions generates transposed-word effects. The results also demonstrate the rapidity of access to syntactic structures (from 300 ms onward), taken as further evidence that parallel processing of multiple words is a key aspect of efficient skilled reading.
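One family of models of noisy letter-position coding represents a word by its ordered letter pairs ("open bigrams"): transposing two adjacent letters leaves most bigrams intact, so a string like "srcamlbed" still strongly overlaps "scrambled". A toy sketch of this idea; the unconstrained bigram scheme and Jaccard measure are illustrative simplifications, not the specific model tested in the thesis:

```python
from itertools import combinations

def open_bigrams(word):
    """Set of ordered letter pairs (positions i < j) in the word."""
    return {a + b for a, b in combinations(word, 2)}

def similarity(w1, w2):
    """Jaccard overlap of the two words' open-bigram sets."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return len(b1 & b2) / len(b1 | b2)

# A transposed-letter string stays close to its base word...
print(similarity("scrambled", "srcamlbed"))
# ...while an unrelated word of similar length does not.
print(similarity("scrambled", "telephone"))
```

Because only the pairs spanning the transposed letters change order, the transposed-letter string keeps most of its bigrams, which is why such codes tolerate the scrambled words in the abstract's opening sentence.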