Academic literature on the topic 'Modèles linguistiques neuronaux'
Journal articles on the topic "Modèles linguistiques neuronaux"
Alvarez-Pereyre, Frank. "Linguistique, anthropologie, ethnomusicologie." Anthropologie et Sociétés 38, no. 1 (July 10, 2014): 47–61. http://dx.doi.org/10.7202/1025808ar.
Bacquelaine, Françoise. "DeepL et Google Translate face à l'ambiguïté phraséologique." Journal of Data Mining & Digital Humanities, Towards robotic translation?, III. Biotranslation vs.... (December 11, 2022). http://dx.doi.org/10.46298/jdmdh.9118.
Dissertations / Theses on the topic "Modèles linguistiques neuronaux"
García Martínez, Mercedes. "Factored neural machine translation." Thesis, Le Mans, 2018. http://www.theses.fr/2018LEMA1002/document.
Communication between humans across the world is difficult due to the diversity of languages. Machine translation is a quick and cheap way to make translation accessible to everyone. Recently, Neural Machine Translation (NMT) has achieved impressive results. This thesis focuses on the Factored Neural Machine Translation (FNMT) approach, which is founded on the idea of using the morphological and grammatical decomposition of words (lemmas and linguistic factors) in the target language. This architecture addresses two well-known challenges in NMT. The first is the limitation on target vocabulary size, a consequence of the computationally expensive softmax function at the output layer of the network, which leads to a high rate of unknown words. The second is data sparsity, which arises when facing a specific domain or a morphologically rich language. With FNMT, all inflections of a word are supported and a larger vocabulary is modelled at similar computational cost. Moreover, new words not included in the training dataset can be generated. In this work, I developed different FNMT architectures using various dependencies between lemmas and factors, and also enhanced the source-language side with factors. The FNMT model is evaluated on various languages, including morphologically rich ones. State-of-the-art models, some using Byte Pair Encoding (BPE), are compared to the FNMT model using small and large training datasets. We found that factored models are more robust in low-resource conditions. FNMT combined with BPE units performs better than the pure FNMT model when trained on large data. We experimented with different domains, obtaining improvements with the FNMT models. Furthermore, the morphology of the translations is measured using a special test suite, showing the importance of explicitly modelling the target morphology. Our work shows the benefits of applying linguistic factors in NMT.
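The softmax cost mentioned in this abstract can be illustrated with a toy factored output layer: instead of one softmax over every surface form, two small softmaxes predict a lemma and a morphological factor, whose combination yields the word. This is only a hedged sketch of the general idea; the vocabularies, weights, and recomposition rule below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy vocabularies: a full-word output layer needs one softmax unit per
# surface form (lemma x inflection), while a factored output needs only
# |lemmas| + |factors| units.
lemmas = ["aller", "manger", "parler"]
factors = ["1sg.pres", "3pl.past", "inf"]

full_vocab_size = len(lemmas) * len(factors)       # 9 surface forms
factored_output_size = len(lemmas) + len(factors)  # 6 output units

rng = np.random.default_rng(0)
hidden = rng.standard_normal(4)                # decoder hidden state
W_lemma = rng.standard_normal((len(lemmas), 4))
W_factor = rng.standard_normal((len(factors), 4))

# Two small softmaxes instead of one large one.
p_lemma = softmax(W_lemma @ hidden)
p_factor = softmax(W_factor @ hidden)

# The surface word is recomposed from the two independent predictions.
best = (lemmas[int(p_lemma.argmax())], factors[int(p_factor.argmax())])
```

The saving grows with the number of inflections per lemma: for a realistic morphologically rich language, |lemmas| + |factors| is far smaller than |lemmas| × |factors|.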
Swaileh, Wassim. "Des modèles de langage pour la reconnaissance de l'écriture manuscrite." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR024/document.
This thesis is about the design of a complete processing chain dedicated to unconstrained handwriting recognition. Three main difficulties are addressed: pre-processing, optical modeling and language modeling. The pre-processing stage consists of properly extracting the text lines to be recognized from the document image; an iterative text-line segmentation method using oriented steerable filters was developed for this purpose. The difficulty in the optical modeling stage lies in the style diversity of handwritten scripts. Statistical optical models are traditionally used to tackle this problem, such as hidden Markov models (HMM-GMM) and, more recently, recurrent neural networks (BLSTM-CTC). Using BLSTM, we achieve state-of-the-art performance on the RIMES (French) and IAM (English) datasets. The language modeling stage integrates a lexicon and a statistical language model into the recognition chain in order to constrain the recognition hypotheses to the most probable sequence of words from the language point of view. The difficulty at this stage lies in finding the optimal vocabulary with a minimal Out-Of-Vocabulary (OOV) word rate. Enhanced language modeling approaches have been introduced by using sub-lexical units made of syllables or multigrams, which cover an important portion of the OOV words. Language coverage then depends on the domain of the language-model training corpus, hence the need to train the language model on in-domain data. With high OOV rates, the recognition system using sub-lexical units outperforms traditional systems that use word or character language models; otherwise, equivalent performance is obtained with a more compact sub-lexical language model. Thanks to the compact lexicon of sub-lexical units, a unified multilingual recognition system has been designed.
The unified system's performance has been evaluated on the RIMES and IAM datasets. The unified multilingual system shows enhanced recognition performance over the specialized systems, especially when a unified optical model is used.
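The OOV argument in this abstract can be made concrete with a toy lexicon: a handful of syllable-like sub-lexical units covers surface words that a whole-word lexicon misses. The words, units, and greedy segmenter below are invented for illustration; the thesis derives its units from syllabification and multigram models.

```python
# Toy illustration: a small word lexicon leaves every test word
# out-of-vocabulary (OOV), while a lexicon of syllable-like sub-lexical
# units covers them all.
word_lexicon = {"bon", "jour"}
sublexical_units = {"bon", "jour", "ne", "ment", "soir"}

def segment(word, units, max_len=4):
    """Greedy longest-match segmentation into sub-lexical units;
    returns None if the word cannot be fully covered."""
    out, i = [], 0
    while i < len(word):
        for j in range(min(len(word), i + max_len), i, -1):
            if word[i:j] in units:
                out.append(word[i:j])
                i = j
                break
        else:
            return None  # no unit matches at position i
    return out

test_words = ["bonjour", "bonnement", "bonsoir"]
oov_with_words = [w for w in test_words if w not in word_lexicon]
oov_with_units = [w for w in test_words if segment(w, sublexical_units) is None]
```

With whole words, all three test words are OOV; with the five sub-lexical units, none are, which is the coverage effect the thesis exploits for high-OOV conditions.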
Imadache, Abdelmalek. "Reconnaissance de l'écriture manuscrite : extension à de grands lexiques de l'analyse de la forme globale des mots." Paris 6, 1990. http://www.theses.fr/1990PA066551.
Zaki, Ahmed. "Modélisation de la prosodie pour la synthèse de la parole arabe standard à partir du texte." Bordeaux 1, 2004. http://www.theses.fr/2004BOR12913.
Full textStrub, Florian. "Développement de modèles multimodaux interactifs pour l'apprentissage du langage dans des environnements visuels." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I030.
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy, and deep reinforcement learning is often limited to constrained environments. Yet, we ideally aim to develop large-scale multimodal and interactive models to correctly apprehend the complexity of the world. As a first milestone, this thesis focuses on visually grounded language learning for three reasons: (i) vision and language are both well-studied modalities across different scientific fields; (ii) it builds upon deep learning breakthroughs in natural language processing and computer vision; (iii) the interplay between language and vision has been acknowledged in cognitive science. More precisely, we first designed the GuessWhat?! game for assessing the visually grounded language understanding of models: two players collaborate to locate a hidden object in an image by asking a sequence of questions. We then introduce modulation as a novel deep multimodal mechanism, and we show that it successfully fuses visual and linguistic representations by taking advantage of the hierarchical structure of neural networks. Finally, we investigate how reinforcement learning can support visually grounded language learning and cement the underlying multimodal representation. We show that such interactive learning leads to consistent language strategies, but gives rise to new research issues.
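The modulation mechanism this abstract describes is in the spirit of feature-wise conditioning: a linguistic embedding predicts a per-feature scale and shift that is applied to the visual feature maps. The sketch below only illustrates that general idea; the dimensions, weights, and linear predictors are made up and are not the thesis's exact architecture.

```python
import numpy as np

def modulate(visual_feats, gamma, beta):
    """Feature-wise modulation: scale (gamma) and shift (beta), one pair
    per feature map, broadcast over the spatial dimensions."""
    return gamma[:, None, None] * visual_feats + beta[:, None, None]

rng = np.random.default_rng(1)
visual = rng.standard_normal((8, 5, 5))  # 8 visual feature maps, 5x5 each
lang_emb = rng.standard_normal(16)       # sentence embedding

# Hypothetical linear predictors mapping the language embedding to the
# modulation parameters of each feature map.
W_g = rng.standard_normal((8, 16))
W_b = rng.standard_normal((8, 16))
gamma, beta = W_g @ lang_emb, W_b @ lang_emb

modulated = modulate(visual, gamma, beta)
```

Because the language side controls how each visual channel is rescaled, the fusion happens inside the visual hierarchy rather than by late concatenation of the two representations.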
Bonnasse-Gahot, Laurent. "Modélisation du codage neuronal de catégories et étude des conséquences perceptives." Paris, EHESS, 2009. http://www.theses.fr/2009EHES0102.
At the crossroads of theoretical neuroscience and psycholinguistics, this dissertation deals with the neural coding of categories and studies the perceptual consequences resulting from an optimized representation. The focus is on situations where categorization is difficult due to the overlapping of categories in stimulus space (a system of vowels, for example). Taking advantage of a neurobiological interpretation of the so-called 'exemplar models' originally introduced in psychology, and using mathematical tools from information theory, this work proposes an analytic study of the coding efficiency of a neuronal population with respect to a discrete set of categories. Introducing a perceptual distance based on the Kullback-Leibler divergence between patterns of neural activity evoked by two different stimuli, it is shown not only that categorical perception naturally emerges from category learning, but also that several prototypical effects (the magnet effect, for instance) result from an optimized representation. A plausible model of information decoding is finally proposed, and reaction times during an identification task are characterized analytically. The obtained formula gives a relationship between discrimination accuracy and response time. All the analytical results derived in this work are supported by numerical studies as well as by qualitative and quantitative comparisons with experimental data available in both the neuroscience and psycholinguistics literature.
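The perceptual distance this abstract introduces can be sketched for discrete activity distributions: the Kullback-Leibler divergence between the population responses evoked by two stimuli. The toy response profiles below are invented; they merely illustrate the categorical-perception signature, i.e. across-category distances coming out larger than within-category ones.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions over neural activity patterns (0 log 0 := 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy normalized population responses evoked by three stimuli: two from
# the same vowel category, one across the category boundary.
within_a = [0.70, 0.20, 0.10]
within_b = [0.65, 0.25, 0.10]
across   = [0.10, 0.20, 0.70]

d_within = kl(within_a, within_b)  # small: same category
d_across = kl(within_a, across)    # large: boundary crossed
```

Under an optimized category code, the representation stretches distances near the boundary and compresses them near prototypes, which is what the within/across asymmetry above caricatures.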
Al, Saied Hazem. "Analyse automatique par transitions pour l'identification des expressions polylexicales." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0206.
This thesis focuses on the identification of multi-word expressions, addressed through a transition-based system. A multi-word expression (MWE) is a linguistic construct composed of several elements whose combination shows irregularity at one or more linguistic levels. Identifying MWEs in context amounts to annotating the occurrences of MWEs in texts, i.e. detecting the sets of tokens forming such occurrences. For example, in the sentence This has nothing to do with the book, the tokens has, to, do and with would be marked as forming an occurrence of the MWE have to do with. Transition-based analysis is a well-known NLP technique for building a structured output from a sequence of elements: a sequence of actions (called 'transitions'), chosen from a predefined set, is applied to incrementally build the output structure. In this thesis, we propose a transition system dedicated to MWE identification within sentences represented as token sequences, and we study various architectures for the classifier which selects the transitions to apply. The first variant of our system uses a linear support vector machine (SVM) classifier. The following variants use neural models: a simple multilayer perceptron (MLP), followed by variants integrating one or more recurrent layers. The preferred scenario is the identification of MWEs without the use of syntactic information, even though the two tasks are known to be related. We further study a multitask approach, which jointly performs, and takes mutual advantage of, morphosyntactic tagging, transition-based MWE identification and dependency parsing. The thesis comprises an important experimental part. Firstly, we studied which resampling techniques allow good learning stability despite random initializations. Secondly, we proposed a method for tuning the hyperparameters of our models by trend analysis within a random search for a hyperparameter combination.
We produce systems under the constraint of using the same hyperparameter combination for different languages, using data from the two PARSEME international shared tasks on verbal MWEs. Our variants produce very good results, including state-of-the-art scores for many languages in the PARSEME 1.0 and 1.1 datasets; one of the variants ranked first for most languages in the PARSEME 1.0 shared task. However, our models perform poorly on MWEs that were not seen at training time.
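The transition-based mechanics described in this abstract can be sketched with a minimal system whose state is a stack and a token buffer: SHIFT moves a token onto the stack, MERGE joins the top two stack items, and COMPLETE marks the top item as an identified MWE. This transition set is an illustrative assumption, not the thesis's actual one, and the oracle sequence below only identifies the contiguous part "to do with" (handling the gap introduced by "nothing" requires a richer system).

```python
# Minimal transition system for MWE identification (illustrative sketch).
def run(tokens, transitions):
    """Apply a transition sequence; return the list of identified MWEs."""
    stack, buffer, mwes = [], list(tokens), []
    for t in transitions:
        if t == "SHIFT":          # move next token onto the stack
            stack.append([buffer.pop(0)])
        elif t == "MERGE":        # join the top two stack items
            right = stack.pop()
            stack[-1].extend(right)
        elif t == "COMPLETE":     # top item is a finished MWE
            mwes.append(tuple(stack.pop()))
    return mwes

tokens = ["has", "nothing", "to", "do", "with", "the", "book"]
seq = ["SHIFT", "SHIFT", "SHIFT", "SHIFT", "MERGE",
       "SHIFT", "MERGE", "COMPLETE"]
identified = run(tokens, seq)
```

A classifier (SVM, MLP, or recurrent, as in the thesis's variants) would choose each transition from the current stack/buffer state instead of following a fixed oracle sequence.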