Doctoral dissertations on the topic "Traitement du language"
Consult the top 50 doctoral dissertations on the topic "Traitement du language".
Coria, Juan Manuel. "Continual Representation Learning in Written and Spoken Language". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG025.
Although machine learning has recently witnessed major breakthroughs, today's models are mostly trained once on a target task and then deployed, rarely (if ever) revisiting their parameters. This problem affects performance after deployment, as task specifications and data may evolve with user needs and distribution shifts. To solve this, continual learning proposes to train models over time as new data becomes available. However, models trained in this way suffer from significant performance loss on previously seen examples, a phenomenon called catastrophic forgetting. Although many studies have proposed different strategies to prevent forgetting, they often rely on labeled data, which is rarely available in practice. In this thesis, we study continual learning for written and spoken language. Our main goal is to design autonomous and self-learning systems able to leverage scarce on-the-job data to adapt to the new environments they are deployed in. Contrary to recent work on learning general-purpose representations (or embeddings), we propose to leverage representations that are tailored to a downstream task. We believe the latter may be easier to interpret and exploit by unsupervised training algorithms like clustering, which are less prone to forgetting. Throughout our work, we improve our understanding of continual learning in a variety of settings, such as the adaptation of a language model to new languages for sequence labeling tasks, or even the adaptation to a live conversation in the context of speaker diarization. We show that task-specific representations allow for effective low-resource continual learning, and that a model's own predictions can be exploited for full self-learning.
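As a side note (not taken from the thesis), catastrophic forgetting is easy to reproduce with standard tools: the sketch below trains a single classifier sequentially on two synthetic tasks and watches accuracy on the first task degrade; the data and parameter choices are purely illustrative.

```python
# Minimal illustration of catastrophic forgetting: a model trained
# sequentially on task B loses accuracy on the previously learned task A.
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

# Two synthetic "tasks" with different input distributions (illustrative only).
Xa, ya = make_classification(n_samples=2000, n_features=20, random_state=0)
Xb, yb = make_classification(n_samples=2000, n_features=20, shift=3.0, random_state=1)

clf = SGDClassifier(random_state=0)

# Train on task A only.
clf.partial_fit(Xa, ya, classes=[0, 1])
acc_a_before = clf.score(Xa, ya)

# Continue training on task B without revisiting task A data.
for _ in range(20):
    clf.partial_fit(Xb, yb)

acc_a_after = clf.score(Xa, ya)
print(f"Task A accuracy before/after training on B: {acc_a_before:.2f} / {acc_a_after:.2f}")
```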
Moncecchi, Guillermo. "Recognizing speculative language in research texts". Paris 10, 2013. http://www.theses.fr/2013PA100039.
This thesis presents a methodology to solve certain classification problems, particularly those involving sequential classification for Natural Language Processing tasks. It proposes the use of an iterative, error-based approach to improve classification performance, suggesting the incorporation of expert knowledge into the learning process through the use of knowledge rules. We applied and evaluated the methodology on two tasks related to the detection of hedging in scientific articles: hedge cue identification and hedge cue scope detection. Results are promising: for the first task, we improved baseline results by 2.5 points in terms of F-score by incorporating cue co-occurrence information, while for scope detection, the incorporation of syntax information and rules for syntax scope pruning allowed us to improve classification performance from an F-score of 0.712 to a final figure of 0.835. Compared with state-of-the-art methods, results are competitive, suggesting that the approach of improving classifiers based only on errors committed on a held-out corpus could be successfully used in other, similar tasks. Additionally, this thesis proposes a class schema for representing sentence analysis in a single structure, including the results of different linguistic analyses. This allows us to better manage the iterative process of classifier improvement, where different attribute sets for learning are used in each iteration. We also propose to store attributes in a relational model, instead of the traditional text-based structures, to facilitate the analysis and manipulation of learning data.
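For readers unfamiliar with the metric quoted above, the F-score is the harmonic mean of precision and recall over the predicted hedge cues; a minimal sketch (with invented token labels, not the thesis' data) might look like this:

```python
# Token-level precision, recall and F1 for hedge cue identification.
# Labels: 1 = token is (part of) a hedge cue, 0 = otherwise.
from sklearn.metrics import precision_recall_fscore_support

gold = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]  # invented gold annotations
pred = [0, 1, 0, 1, 1, 0, 0, 0, 1, 0]  # invented system output

p, r, f1, _ = precision_recall_fscore_support(gold, pred, average="binary")
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```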
Caucheteux, Charlotte. "Language representations in deep learning algorithms and the brain". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG031.
Recent deep language models -- like GPT-3 and ChatGPT -- are capable of producing text that closely resembles that of humans. Such similarity raises questions about how the brain and deep models process language, the mechanisms they use, and the internal representations they construct. In this thesis, I compare the internal representations of the brain and deep language models, with the goal of identifying their similarities and differences. To this aim, I analyze functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings of participants listening to and reading sentences, and compare them to the activations of thousands of language algorithms corresponding to these same sentences. Our results first highlight high-level similarities between the internal representations of the brain and deep language models. We find that deep nets' activations significantly predict brain activity across subjects for different cohorts (>500 participants), recording modalities (MEG and fMRI), stimulus types (isolated words, sentences, and natural stories), stimulus modalities (auditory and visual presentation), languages (Dutch, English and French), and deep language models. This alignment is maximal in brain regions repeatedly associated with language, for the best-performing algorithms and for participants who best understand the stories. Critically, we evidence a similar processing hierarchy between the two systems. The first layers of the algorithms align with low-level processing regions in the brain, such as auditory areas and the temporal lobe, while the deep layers align with regions associated with higher-level processing, such as fronto-parietal areas. We then show how such similarities can be leveraged to build better predictive models of brain activity and better decompose several linguistic processes in the brain, such as syntax and semantics. Finally, we explore the differences between deep language models and the brain's activations. We find that the brain predicts distant and hierarchical representations, unlike current language models that are mostly trained to make short-term and word-level predictions. Overall, modern algorithms are still far from processing language in the same way that humans do. However, the direct links between their inner workings and those of the brain provide a promising platform for better understanding both systems, and pave the way for building better algorithms inspired by the human brain.
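The "fit" between network activations and brain recordings described here is typically computed with a linear encoding model; the sketch below is only an illustration of that idea, with random arrays standing in for real language-model activations and fMRI data and an arbitrary ridge penalty:

```python
# Encoding model: linearly map language-model activations to fMRI voxel
# activity, then score the fit ("brain score") as the correlation between
# predicted and observed activity on held-out data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 500, 768, 100   # illustrative sizes
X = rng.normal(size=(n_samples, n_features))      # stand-in for LM activations
Y = rng.normal(size=(n_samples, n_voxels))        # stand-in for fMRI signal

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

def pearson(a, b):
    # Column-wise Pearson correlation (one value per voxel).
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

print("mean brain score:", pearson(Y_te, Y_hat).mean())
```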
Ayotte, Nathalie. "Le traitement lexicographique du vocabulaire politique Trois études de cas: Nationalisme, nationaliste et nation". Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27328.
Muller, Benjamin. "How Can We Make Language Models Better at Handling the Diversity and Variability of Natural Languages ?" Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS399.
Deep Learning for NLP has led to impressive empirical progress in recent years. In essence, this progress is based on better contextualized representations that can be easily used for a wide variety of tasks. However, these models usually require substantial computing power and large amounts of raw textual data. This makes language's inherent diversity and variability a vivid challenge in NLP. We focus on the following question: How can we make language models better at handling the variability and diversity of natural languages? First, we explore the generalizability of language models by building and analyzing one of the first large-scale replications of a BERT model for a non-English language. Our results raise the question of using these language models on highly variable domains such as those found online. Focusing on lexical normalization, we show that this task can be approached with BERT-like models. However, we show that it only partially helps downstream performance. In consequence, we focus on adaptation techniques using what we refer to as representation transfer and explore challenging settings such as the zero-shot setting and low-resource languages. We show that multilingual language models can be adapted and used efficiently with low-resource languages, even with ones unseen during pretraining, and that the script is a critical component in this adaptation.
Millour, Alice. "Myriadisation de ressources linguistiques pour le traitement automatique de langues non standardisées". Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUL126.
Citizen science, in particular voluntary crowdsourcing, represents a little-tested solution to produce language resources for languages which are still poorly resourced despite the presence of sufficient speakers online. We present in this work the experiments we have conducted to enable the crowdsourcing of linguistic resources for the development of automatic part-of-speech annotation tools. We have applied the methodology to three non-standardised languages, namely Alsatian, Guadeloupean Creole and Mauritian Creole. For different historical reasons, multiple (ortho)graphic practices coexist for these three languages. The difficulties raised by the presence of this variation phenomenon led us to propose various crowdsourcing tasks that allow the collection of raw corpora, part-of-speech annotations, and graphic variants. The intrinsic and extrinsic analysis of these resources, used for the development of automatic annotation tools, shows the interest of using crowdsourcing in a non-standardized linguistic framework: the participants are not seen in this context as a uniform set of contributors whose cumulative efforts allow the completion of a particular task, but rather as a set of holders of complementary knowledge. The resources they collectively produce make possible the development of tools that embrace the variation. The platforms developed, the language resources, as well as the trained tagger models, are freely available.
Cadène, Rémi. "Deep Multimodal Learning for Vision and Language Processing". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS277.
Digital technologies have become instrumental in transforming our society. Recent statistical methods have been successfully deployed to automate the processing of the growing amount of images, videos, and texts we produce daily. In particular, deep neural networks have been adopted by the computer vision and natural language processing communities for their ability to perform accurate image recognition and text understanding once trained on big sets of data. Advances in both communities built the groundwork for new research problems at the intersection of vision and language. Integrating language into visual recognition could have an important impact on human life through the creation of real-world applications such as next-generation search engines or AI assistants. In the first part of this thesis, we focus on systems for cross-modal text-image retrieval. We propose a learning strategy to efficiently align both modalities while structuring the retrieval space with semantic information. In the second part, we focus on systems able to answer questions about an image. We propose a multimodal architecture that iteratively fuses the visual and textual modalities using a factorized bilinear model while modeling pairwise relationships between each region of the image. In the last part, we address the issues related to biases in the modeling. We propose a learning strategy to reduce the language biases which are commonly present in visual question answering systems.
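The "factorized bilinear model" mentioned above approximates a full bilinear interaction between an image-region vector and a question vector with low-rank factors; a rough numpy sketch of the fusion step follows (dimensions, rank and initialization are invented, and this is not the thesis' exact architecture):

```python
# Factorized bilinear fusion of a visual vector v and a textual vector q:
# z_k = sum_r (U[k,r] . v) * (V[k,r] . q), a low-rank approximation of a full
# bilinear interaction v^T W_k q for each output dimension k.
import numpy as np

rng = np.random.default_rng(0)
d_v, d_q, d_out, rank = 512, 300, 128, 8     # illustrative dimensions

v = rng.normal(size=d_v)                     # image-region features
q = rng.normal(size=d_q)                     # question embedding
U = rng.normal(size=(d_out, rank, d_v)) * 0.01
V = rng.normal(size=(d_out, rank, d_q)) * 0.01

# Project both modalities into the rank-r space, multiply, then sum over ranks.
z = np.einsum('krv,v->kr', U, v) * np.einsum('krq,q->kr', V, q)
z = z.sum(axis=1)                            # fused multimodal vector, shape (d_out,)
print(z.shape)
```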
Leybaert, Jacqueline. "Le traitement du mot écrit chez l'enfant sourd". Doctoral thesis, Universite Libre de Bruxelles, 1987. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/213416.
Saadane, Houda. "Le traitement automatique de l’arabe dialectalisé : aspects méthodologiques et algorithmiques". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAL022/document.
Gonthier, Isabelle. "L'influence des connaissances phonologiques et semantiques dans le traitement lexical: Le role de la valeur d'imagerie des mots". Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/29017.
Boulanger, Hugo. "Data augmentation and generation for natural language processing". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG019.
More and more fields are looking to automate part of their processes. Automatic language processing provides methods for extracting information from texts. These methods can use machine learning. Machine learning requires annotated data to perform information extraction. Applying these methods to new domains requires obtaining annotated data related to the task. In this thesis, our goal is to study generation methods to improve the performance of learned models with low amounts of data. Different generation methods, with and without machine learning, are explored and used to generate the data needed to learn sequence labeling models. The first method explored is pattern filling. This data generation method generates annotated data by combining sentences with slots, or patterns, with mentions. We have shown that this method improves the performance of labeling models with tiny amounts of data. The amount of data needed to use this method is also studied. The second approach tested is the use of language models for text generation alongside a semi-supervised learning method for tagging. The semi-supervised learning method used is tri-training and is used to add labels to the generated data. Tri-training is tested on several generation methods using different pre-trained language models. We proposed a version of tri-training called generative tri-training, where the generation is not done in advance but during the tri-training process and takes advantage of it. The performance of the models trained during the semi-supervision process and of the models trained on the data generated by it is tested. In most cases, the data produced match the performance of the models trained with the semi-supervision. This method improves performance at all tested data levels with respect to the models without augmentation. The third avenue of study combines some aspects of the previous approaches. For this purpose, different approaches are tested. The use of language models for sentence replacement in the manner of the pattern-filling generation method is unsuccessful. Using a set of data coming from the different generation methods is tested, which does not outperform the best method. Finally, applying the pattern-filling method to the data generated with the tri-training is tested and does not improve the results obtained with the tri-training. While much remains to be studied, we have highlighted simple methods, such as pattern filling, and more complex ones, such as the use of supervised learning with sentences generated by a language model, to improve the performance of labeling models through the generation of annotated data.
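The pattern-filling method described above can be pictured in a few lines: delexicalised patterns with slots are recombined with known mentions to produce new annotated examples; the toy patterns, mentions and BIO scheme below are invented for illustration:

```python
# Toy pattern-filling data augmentation for sequence labeling:
# fill slots in delexicalised patterns with known mentions and emit BIO tags.
import itertools

patterns = [
    "book a table at <RESTAURANT> for <NUMBER> people",
    "I would like <NUMBER> seats at <RESTAURANT>",
]
mentions = {
    "<RESTAURANT>": ["Chez Marie", "La Petite Cour"],
    "<NUMBER>": ["two", "five"],
}

def fill(pattern, resto, number):
    tokens, tags = [], []
    for tok in pattern.split():
        if tok == "<RESTAURANT>":
            words = resto.split()
            tokens += words
            tags += ["B-RESTAURANT"] + ["I-RESTAURANT"] * (len(words) - 1)
        elif tok == "<NUMBER>":
            tokens.append(number)
            tags.append("B-NUMBER")
        else:
            tokens.append(tok)
            tags.append("O")
    return tokens, tags

augmented = [
    fill(p, r, n)
    for p in patterns
    for r, n in itertools.product(mentions["<RESTAURANT>"], mentions["<NUMBER>"])
]
print(len(augmented), "generated examples")
print(augmented[0])
```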
Bull, Hannah. "Learning sign language from subtitles". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG013.
Sign languages are an essential means of communication for deaf communities. Sign languages are visuo-gestural languages using the modalities of hand gestures, facial expressions, gaze and body movements. They possess rich grammar structures and lexicons that differ considerably from those found among spoken languages. The uniqueness of transmission medium, structure and grammar of sign languages requires distinct methodologies. The performance of automatic translation systems between high-resource written languages or spoken languages is currently sufficient for many daily use cases, such as translating videos, websites, emails and documents. On the other hand, automatic translation systems for sign languages do not exist outside of very specific use cases with limited vocabulary. Automatic sign language translation is challenging for two main reasons. Firstly, sign languages are low-resource languages with little available training data. Secondly, sign languages are visual-spatial languages with no written form, naturally represented as video rather than audio or text. To tackle the first challenge, we contribute large datasets for training and evaluating automatic sign language translation systems with both interpreted and original sign language video content, as well as written text subtitles. Whilst interpreted data allows us to collect large numbers of hours of videos, original sign language video is more representative of sign language usage within deaf communities. Written subtitles can be used as weak supervision for various sign language understanding tasks. To address the second challenge, we develop methods to better understand visual cues from sign language video. Whilst sentence segmentation is mostly trivial for written languages, segmenting sign language video into sentence-like units relies on detecting subtle semantic and prosodic cues from sign language video. We use prosodic cues to learn to automatically segment sign language video into sentence-like units, determined by subtitle boundaries. Expanding upon this segmentation method, we then learn to align text subtitles to sign language video segments using both semantic and prosodic cues, in order to create sentence-level pairs between sign language video and text. This task is particularly important for interpreted TV data, where subtitles are generally aligned to the audio and not to the signing. Using these automatically aligned video-text pairs, we develop and improve multiple different methods to densely annotate lexical signs by querying words in the subtitle text and searching for visual cues in the sign language video for the corresponding signs.
Kla, Régis. "Osmose : a natural language based object oriented approach with its CASE tool". Paris 1, 2004. http://www.theses.fr/2004PA010020.
Curiel, Diaz Arturo Tlacaélel. "Using formal logic to represent sign language phonetics in semi-automatic annotation tasks". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30308/document.
This thesis presents a formal framework for the representation of Signed Languages (SLs), the languages of Deaf communities, in semi-automatic recognition tasks. SLs are complex visio-gestural communication systems; by using corporal gestures, signers achieve the same level of expressivity held by sound-based languages like English or French. However, unlike these, SL morphemes correspond to complex sequences of highly specific body postures, interleaved with postural changes: during signing, signers use several parts of their body simultaneously in order to combinatorially build phonemes. This situation, paired with an extensive use of the three-dimensional space, makes them difficult to represent with tools already existent in Natural Language Processing (NLP) of vocal languages. For this reason, the current work presents the development of a formal representation framework, intended to transform SL video repositories (corpus) into an intermediate representation layer, where automatic recognition algorithms can work under better conditions. The main idea is that corpora can be described with a specialized Labeled Transition System (LTS), which can then be annotated with logic formulae for its study. A multi-modal logic was chosen as the basis of the formal language: Propositional Dynamic Logic (PDL). This logic was originally created to specify and prove properties of computer programs. In particular, PDL uses the modal operators [a] and ⟨a⟩ to denote necessity and possibility, respectively. For SLs, a particular variant based on the original formalism was developed: the PDL for Sign Language (PDLSL). With the PDLSL, body articulators (like the hands or head) are interpreted as independent agents; each articulator has its own set of valid actions and propositions, and executes them without influence from the others. The simultaneous execution of different actions by several articulators yields distinct situations, which can be searched over an LTS with formulae, by using the semantic rules of the logic. Together, the use of PDLSL and the proposed specialized data structures could help curb some of the current problems in SL study, notably the heterogeneity of corpora and the lack of automatic annotation aids. In the same vein, this may not only increase the size of the available datasets, but even extend previous results to new corpora; the framework inserts an intermediate representation layer which can serve to model any corpus, regardless of its technical limitations. With this, annotation is possible by defining with formulae the characteristics to annotate. Afterwards, a formal verification algorithm may be able to find those features in corpora, as long as they are represented as consistent LTSs. Finally, the development of the formal framework led to the creation of a semi-automatic annotator based on the presented theoretical principles. Broadly, the system receives an untreated corpus video, converts it automatically into a valid LTS (by way of some predefined rules), and then verifies human-created PDLSL formulae over the LTS. The final product is an automatically generated sub-lexical annotation, which can later be corrected by human annotators for use in other areas such as linguistics.
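To make the modal operators concrete, a very small sketch of checking a possibility formula ⟨a⟩p over a hand-written labeled transition system is given below; the states, actions and propositions are invented, and the real PDLSL machinery is far richer:

```python
# Tiny model-checking sketch: does <action> prop hold at a state, i.e. "there
# exists an `action`-transition leading to a state where `prop` holds"?
# States, transitions and propositions are invented for illustration.
transitions = {            # state -> list of (action, next_state)
    "s0": [("raise_hand", "s1")],
    "s1": [("touch_head", "s2")],
    "s2": [],
}
valuation = {               # state -> set of atomic propositions true there
    "s0": set(),
    "s1": {"hand_up"},
    "s2": {"hand_up", "contact_head"},
}

def diamond(action, prop, state):
    """Check the possibility formula <action> prop at `state`."""
    return any(a == action and prop in valuation[nxt]
               for a, nxt in transitions[state])

print(diamond("raise_hand", "hand_up", "s0"))       # True
print(diamond("touch_head", "contact_head", "s0"))  # False: no such transition from s0
```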
Gainon, de Forsan de Gabriac Clara. "Deep Natural Language Processing for User Representation". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS274.
The last decade has witnessed the impressive expansion of Deep Learning (DL) methods, both in academic research and the private sector. This success can be explained by the ability of DL to model ever more complex entities. In particular, Representation Learning methods focus on building latent representations from heterogeneous data that are versatile and re-usable, namely in Natural Language Processing (NLP). In parallel, the ever-growing number of systems relying on user data brings its own lot of challenges. This work proposes methods to leverage the representation power of NLP in order to learn rich and versatile user representations. Firstly, we detail the works and domains associated with this thesis. We study Recommendation. We then go over recent NLP advances and how they can be applied to leverage user-generated texts, before detailing Generative models. Secondly, we present a Recommender System (RS) that is based on the combination of a traditional Matrix Factorization (MF) representation method and a sentiment analysis model. The association of those modules forms a dual model that is trained on user reviews for rating prediction. Experiments show that, on top of improving performances, the model allows us to better understand what the user is really interested in, in a given item, as well as to provide explanations for the suggestions made. Finally, we introduce a new task centered on user representation: Professional Profile Learning. We thus propose an NLP-based framework to learn and evaluate professional profiles on different tasks, including next job generation.
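The Matrix Factorization component mentioned above learns a low-dimensional vector per user and per item so that their dot product approximates the observed ratings; a bare-bones SGD sketch follows (toy ratings, arbitrary learning rate and regularization, unrelated to the thesis' implementation):

```python
# Bare-bones matrix factorization for rating prediction, trained with SGD.
import numpy as np

rng = np.random.default_rng(0)
# (user, item, rating) triples -- toy data.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, k = 3, 3, 4
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr, reg = 0.05, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        pu = P[u].copy()                       # keep the old user vector
        err = r - pu @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

print("predicted rating for user 0, item 2:", P[0] @ Q[2])
```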
Albert, Sabine. "Analyse diachronique du Trésor de la Langue Française et de l'Oxford English Dictionary : le traitement des emprunts". Thesis, Cergy-Pontoise, 2018. http://www.theses.fr/2018CERG0936/document.
There is no language that does not expand thanks to loan-words: they allow the lexical stock to be enriched and refreshed as relationships between cultures and countries develop. The English and French languages, since they have been spreading over all continents, have acquired many words from other horizons, which, moreover, they often share. Indeed, we cannot but notice that their geographic proximity and the richness of their history have given rise to an important interpenetration over more than ten centuries. That is why we wanted to show, in this study, the impact of loan-words on both languages, and to analyse the way the most extensive dictionaries on either side of the Channel — the Trésor de la Langue Française and the Oxford English Dictionary — deal with them. In the first part of this work, we study how the French and English lexicons were built up over the course of time from foreign contributions, and we define the very notion of loan-word in order to show how complex it is. Afterwards, we present the corpus on which this study rests. The second part is dedicated to an exhaustive presentation of the Trésor de la Langue Française and of the Oxford English Dictionary. After an account of language dictionaries and of the creation of those two dictionaries, their main features are highlighted and their constitution accurately examined, both macrostructurally and microstructurally. We also point out the advantages of their informatisation. In the last part, we observe more precisely how the different types of loan-words are reported and what kind of indications are given about them. Then, we point out the distinctive characteristics of the way loan-words are dealt with and the lexicographical difficulties in describing words from elsewhere.
Pasquiou, Alexandre. "Deciphering the neural bases of language comprehension using latent linguistic representations". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG041.
In the last decades, language models (LMs) have reached human-level performance on several tasks. They can generate rich representations (features) that capture various linguistic properties such as semantics or syntax. Following these improvements, neuroscientists have increasingly used them to explore the neural bases of language comprehension. Specifically, LM features computed from a story are used to fit the brain data of humans listening to the same story, allowing the examination of multiple levels of language processing in the brain. If LM features closely align with a specific brain region, then it suggests that both the model and the region are encoding the same information. LM-brain comparisons can then teach us about language processing in the brain. Using the fMRI brain data of fifty US participants listening to "The Little Prince" story, this thesis 1) investigates the reasons why LM features fit brain activity and 2) examines the limitations of such comparisons. The comparison of several pre-trained and custom-trained LMs (GloVe, LSTM, GPT-2 and BERT) revealed that Transformers better fit fMRI brain data than LSTM and GloVe. Yet, none are able to explain all the fMRI signal, suggesting either limitations related to the encoding paradigm or to the LMs. Focusing specifically on Transformers, we found that no brain region is better fitted by a specific attention head or layer. Our results caution that the nature and the amount of training data greatly affect the outcome, indicating that using off-the-shelf models trained on small datasets is not effective in capturing brain activations. We showed that LM training influences their ability to fit fMRI brain data, and that perplexity was not a good predictor of brain score. Still, training LMs particularly improves their fitting performance in core semantic regions, irrespective of the architecture and training data. Moreover, we showed a partial convergence between the brain's and LMs' representations: specifically, they first converge during model training before diverging from one another. This thesis further investigates the neural bases of syntax, semantics and context-sensitivity by developing a method that can probe specific linguistic dimensions. This method makes use of "information-restricted LMs", that is, customized LM architectures trained on feature spaces containing a specific type of information, in order to fit brain data. First, training LMs on semantic and syntactic features revealed a good fitting performance in a widespread network, albeit with varying relative degrees. The quantification of this relative sensitivity to syntax and semantics showed that brain regions most attuned to syntax tend to be more localized, while semantic processing remains widely distributed over the cortex. One notable finding from this analysis was that the extent of semantically and syntactically sensitive brain regions was similar across hemispheres. However, the left hemisphere had a greater tendency to distinguish between syntactic and semantic processing compared to the right hemisphere. In a last set of experiments we designed "masked-attention generation", a method that controls the attention mechanisms in transformers in order to generate latent representations that leverage fixed-size contexts. This approach provides evidence of context-sensitivity across most of the cortex. Moreover, this analysis found that the left and right hemispheres tend to process shorter and longer contextual information, respectively.
Asadullah, Munshi. "Identification of Function Points in Software Specifications Using Natural Language Processing". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112228/document.
The need to estimate the size of a piece of software, and thus its probable cost and effort, is a direct outcome of the increasing need for complex and large software in almost every conceivable situation. Furthermore, due to the competitive nature of the software development industry, reliance on accurate size estimation at early stages of software development is becoming commonplace. Traditionally, the estimation of software was performed a posteriori from the resulting source code, and several metrics were in use for the task. However, along with the growing understanding of the importance of code size estimation in the software engineering community, early-stage software size estimation became a mainstream concern. Once the code has been written, size and cost estimation primarily provides a contrastive study and possibly productivity monitoring. On the other hand, if size estimation can be performed at an early development stage (the earlier the better), the benefits are virtually endless. The most important goals of the financial and management aspects of software development, namely development cost and effort estimation, can be pursued even before the first line of code is conceived. Furthermore, if size estimation can be performed periodically as the design and development progress, it can provide valuable information to project managers in terms of progress, resource allocation and expectation management. This research focuses on a functional size estimation metric commonly known as Function Point Analysis (FPA), which estimates the size of a software system in terms of the functionalities it is expected to deliver from a user's point of view. One significant problem with FPA is the requirement of human counters, who need to follow a set of standard counting rules, making the process labour- and cost-intensive (the process is called Function Point Counting and the professionals are called analysts or counters). Moreover, these rules are on many occasions open to interpretation, and thus often produce inconsistent counts. Furthermore, the process is entirely manual and requires Function Point (FP) counters to read large specification documents, making it a rather slow process. Some level of automation in the process could make a significant difference in current counting practice. Automating the accurate identification of FPs in a document will at least reduce the reading load on the counters, making the process faster and thus significantly reducing its cost. Moreover, consistent identification of FPs will allow the production of consistent raw function point counts. To the best of our knowledge, the work presented in this thesis is a unique attempt to analyse specification documents from early stages of software development, using a generic approach adapted from well-established Natural Language Processing (NLP) practices.
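For context, an unadjusted Function Point count is a weighted sum of the functional elements identified in a specification (inputs, outputs, inquiries, internal and external files); the toy sketch below uses commonly cited average IFPUG weights, and the element counts are invented:

```python
# Toy unadjusted Function Point (FP) count: a weighted sum of identified elements.
# Weights below are the commonly cited *average* IFPUG weights; in real counting,
# each element is rated low/average/high complexity before weighting.
AVERAGE_WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

identified = {"EI": 12, "EO": 7, "EQ": 5, "ILF": 4, "EIF": 2}  # invented counts

unadjusted_fp = sum(AVERAGE_WEIGHTS[t] * n for t, n in identified.items())
print("Unadjusted FP count:", unadjusted_fp)
```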
Martin, Alexander. "Les biais dans le traitement et l'apprentissage phonologiques". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE071/document.
During speech perception, listeners are biased by a great number of factors, including cognitive limitations such as memory and attention and linguistic limitations such as their native language. This thesis focuses on two of these factors: processing bias during word recognition, and learning bias during the transmission process. These factors are combinatorial and can, over time, affect the way languages evolve. In the first part of this thesis, we focus on the process of word recognition. Previous research has established the importance of phonological features (e.g., voicing or place of articulation) during speech processing, but little is known about their weight relative to one another, and how this influences listeners' ability to recognize words. We tested French participants on their ability to recognize mispronounced words and found that the manner and place features were more important than the voicing feature. We then explored two sources of this asymmetry and found that listeners were biased both by bottom-up acoustic perception (manner contrasts are easier to perceive because of their acoustic distance compared to the other features) and top-down lexical knowledge (the place feature is used more in the French lexicon than the other two features). We suggest that these two sources of bias coalesce during the word recognition process to influence listeners. In the second part of this thesis, we turn to the question of bias during the learning process. It has been suggested that language learners may be biased towards the learning of certain phonological patterns because of phonetic knowledge they have. This in turn can explain why certain patterns are recurrent in the typology while others remain rare or unattested. Specifically, we explored the role of learning bias on the acquisition of the typologically common rule of vowel harmony compared to the unattested (but logically equivalent) rule of vowel disharmony. We found that in both perception and production, there was evidence of a learning bias, and using a simulated iterated learning model, showed how even a small bias favoring one pattern over the other could influence the linguistic typology over time, thus explaining (in part) the prevalence of harmonic systems. We additionally explored the role of sleep on memory consolidation and showed evidence that the common pattern benefits from consolidation that the unattested pattern does not, a factor that may also contribute to the typological asymmetry. Overall, this thesis considers a few of the wide-ranging sources of bias in the individual and discusses how these influences can over time shape linguistic systems. We demonstrate the dynamic and complicated nature of speech processing (both in perception and learning) and open the door for future research to explore in finer detail just how these different sources of bias are weighted relative to one another.
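The "simulated iterated learning model" referred to above can be pictured with a short simulation: each generation learns from the previous generation's productions with a small bias toward harmonic forms, and the bias compounds over generations; the bias value and setup below are invented and are not the thesis' model:

```python
# Toy iterated-learning simulation: a small per-generation learning bias toward
# "harmonic" forms (H) gradually shifts the proportion of harmony in the language.
import random

random.seed(0)
bias = 0.05          # invented: slight advantage for learning harmonic items
p_harmonic = 0.5     # initial proportion of harmonic forms in the data
n_items = 1000

for generation in range(20):
    data = ["H" if random.random() < p_harmonic else "D" for _ in range(n_items)]
    # Learners acquire harmonic items slightly more reliably than disharmonic ones.
    learned = [x for x in data
               if random.random() < (0.9 + bias if x == "H" else 0.9 - bias)]
    p_harmonic = learned.count("H") / len(learned)
    print(f"generation {generation:2d}: proportion harmonic = {p_harmonic:.2f}")
```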
Jalalzai, Hamid. "Learning from multivariate extremes : theory and application to natural language processing". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT043.
Extremes surround us and appear in a large variety of data. Natural data like the ones related to environmental sciences contain extreme measurements; in hydrology, for instance, extremes may correspond to floods and heavy rainfalls or, on the contrary, droughts. Data related to human activity can also lead to extreme situations; in the case of bank transactions, the money allocated to a sale may be considerable and exceed common transactions. The analysis of this phenomenon is one of the bases of fraud detection. Another example related to humans is the frequency of encountered words. Some words are ubiquitous while others are rare. No matter the context, extremes, which are rare by definition, correspond to uncanny data. These events are of particular concern because of the disastrous impact they may have. Extreme data, however, are less considered in modern statistics and applied machine learning, mainly because they are substantially scarce: these events are outnumbered, in an era of so-called "big data", by the large amount of classical and non-extreme data that corresponds to the bulk of a distribution. Thus, the wide majority of machine learning tools and literature may not be well-suited or even performant on the distributional tails where extreme observations occur. Throughout this dissertation, the particular challenges of working with extremes are detailed and methods dedicated to them are proposed. The first part of the thesis is devoted to statistical learning in extreme regions. In Chapter 4, non-asymptotic bounds for the empirical angular measure are studied. Here, a pre-established anomaly detection scheme via minimum volume set on the sphere is further improved. Chapter 5 addresses empirical risk minimization for binary classification of extreme samples. The resulting non-parametric analysis and guarantees are detailed. The approach is particularly well suited to treat new samples falling outside the convex envelope of encountered data. This extrapolation property is key to designing new embeddings achieving label-preserving data augmentation. Chapter 6 focuses on the challenge of learning the latter heavy-tailed (and, to be precise, regularly varying) representation from a given input distribution. Empirical results show that the designed representation allows better classification performance on extremes and leads to the generation of coherent sentences. Lastly, Chapter 7 analyses the dependence structure of multivariate extremes. By noticing that extremes tend to concentrate on particular clusters where features tend to be recurrently large simultaneously, we define an optimization problem that identifies the aforementioned subgroups through weighted means of features.
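The angular measure studied in Chapter 4 describes the directions of the largest observations: samples whose norm exceeds a high threshold are projected onto the unit sphere and analyzed there; below is a small numpy sketch of that selection step, on heavy-tailed toy data with an arbitrary quantile:

```python
# Select "extreme" samples (norm above a high empirical quantile) and project
# them onto the unit sphere -- the empirical support of the angular measure.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_t(df=2.0, size=(10_000, 3))     # heavy-tailed toy data

norms = np.linalg.norm(X, axis=1)
threshold = np.quantile(norms, 0.98)             # arbitrary high quantile
extremes = X[norms > threshold]
angles = extremes / np.linalg.norm(extremes, axis=1, keepdims=True)

print(f"{len(extremes)} extreme points; angular samples shape: {angles.shape}")
```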
Ben, Nasr Sana. "Mining and modeling variability from natural language documents : two case studies". Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S013/document.
Domain analysis is the process of analyzing a family of products to identify their common and variable features. This process is generally carried out by experts on the basis of existing informal documentation. When performed manually, this activity is both time-consuming and error-prone. In this thesis, our general contribution is to address mining and modeling variability from informal documentation. We adopt Natural Language Processing (NLP) and data mining techniques to identify features, commonalities, differences and feature dependencies among related products. We investigate the applicability of this idea by instantiating it in two different contexts: (1) reverse engineering Feature Models (FMs) from regulatory requirements in the nuclear domain and (2) synthesizing Product Comparison Matrices (PCMs) from informal product descriptions. In the first case study, we adopt NLP and data mining techniques based on semantic analysis, requirements clustering and association rules to assist experts when constructing feature models from these regulations. The evaluation shows that our approach is able to retrieve 69% of correct clusters without any user intervention. Moreover, feature dependencies show a high predictive capacity: 95% of the mandatory relationships and 60% of optional relationships are found, and all of the requires and exclude relationships are extracted. In the second case study, our proposed approach relies on contrastive analysis technology to mine domain-specific terms from text, information extraction, term clustering and information clustering. Overall, our empirical study shows that the resulting PCMs are compact and exhibit numerous quantitative and comparable pieces of information. The user study shows that our automatic approach retrieves 43% of correct features and 68% of correct values in one step and without any user intervention. We show that there is a potential to complement or even refine the technical information of products. The main lesson learnt from the two case studies is that the exploitability and the extraction of variability knowledge depend on the context, the nature of variability and the nature of the text.
Dinkar, Tanvi. "Computational models of disfluencies : fillers and discourse markers in spoken language understanding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT001.
People rarely speak in the same manner that they write – they are generally disfluent. Disfluencies can be defined as interruptions in the regular flow of speech, such as pausing silently, repeating words, or interrupting oneself to correct something said previously. Despite being a natural characteristic of spontaneous speech, and despite the rich linguistic literature that discusses their informativeness, they are often removed as noise in post-processing from the output transcripts of speech recognisers. So far, their consideration in a Spoken Language Understanding (SLU) context has rarely been explored. The aim of this thesis is to develop computational models of disfluencies in SLU. To do so, we take inspiration from psycholinguistic models of disfluencies, which focus on the role that disfluencies play in the production (by the speaker) and comprehension (by the listener) of speech. Specifically, when we use the term "computational models of disfluencies", we mean to develop methodologies that automatically process disfluencies to empirically observe 1) their impact on the production and comprehension of speech, and 2) how they interact with the primary signal (the lexical content, or what was said in essence). To do so, we focus on two discourse contexts: monologues and task-oriented dialogues. Our results contribute to broader tasks in SLU, and also to research relevant to Spoken Dialogue Systems. When studying monologues, we use a combination of traditional and neural models to study the representations and impact of disfluencies on SLU performance. Additionally, we develop methodologies to study disfluencies as a cue for incoming information in the flow of the discourse. In studying task-oriented dialogues, we focus on developing computational models to study the roles of disfluencies in the listener-speaker dynamic. We specifically study disfluencies in the context of verbal alignment, i.e. the alignment of the interlocutors' lexical expressions, and the role of disfluencies in behavioural alignment, a new alignment context that we propose, referring to cases where instructions given by one interlocutor are followed by an action from another interlocutor. We also consider how these disfluencies in local alignment contexts can be associated with discourse-level phenomena, such as success in the task. We consider this thesis one of many first steps that could be undertaken to integrate disfluencies in SLU contexts.
Colin, Émilie. "Traitement automatique des langues et génération automatique d'exercices de grammaire". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0059.
Our perspective is educational: to create grammar exercises for French. Paraphrasing is an operation of reformulation. Our work tends to attest that sequence-to-sequence models are not simple repeaters but can learn syntax. First, by combining various models, we have shown that the representation of information in multiple forms (using formal data (RDF), coupled with text to extend or reduce it, or only text) allows us to exploit a corpus from different angles, increasing the diversity of outputs and exploiting the syntactic levers put in place. We also addressed a recurrent problem, that of data quality, and obtained paraphrases with high syntactic adequacy (up to 98% coverage of the demand) and a very good linguistic level. We obtain up to 83.97 points of BLEU-4*, 78.41 more than our baseline average, without syntax leverage. This rate indicates better control of the outputs, which are varied and of good quality in the absence of syntax leverage. Our idea was to be able to work from raw text: to produce a representation of its meaning. The transition to French text was also an imperative for us. Working from plain text, by automating the procedures, allowed us to create a corpus of more than 450,000 sentence/representation pairs, thanks to which we learned to generate massively correct texts (92% on qualitative validation). Anonymizing everything that is not functional contributed significantly to the quality of the results (68.31 BLEU, i.e. +3.96 compared to the baseline, which was the generation of text from non-anonymized data). This second work can be applied to the integration of a syntax lever guiding the outputs. What was our baseline at time 1 (generation without constraint) would then be combined with a constrained model. By applying an error search, this would allow the constitution of a silver base associating representations with texts. This base could then be multiplied by a reapplication of generation under constraint, and thus achieve the applied objective of the thesis. The formal representation of information in a language-specific framework is a challenging task. This thesis offers some ideas on how to automate this operation. Moreover, we were only able to process relatively short sentences. The use of more recent neural models would likely improve the results. The use of appropriate output strokes would allow for extensive checks. *BLEU: quality of a text (scale from 0 (worst) to 100 (best), Papineni et al. (2002))
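Since BLEU-4 is central to the figures quoted above, a compact self-contained reimplementation of the sentence-level score is sketched below (simplified: single reference, standard brevity penalty, no smoothing; the example sentences are invented):

```python
# Compact sentence-level BLEU-4: geometric mean of modified 1..4-gram
# precisions, multiplied by a brevity penalty (single reference, no smoothing).
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ngrams & ref_ngrams).values())   # clipped counts
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(bleu4("the cat is sitting on a mat", "the cat is sitting on the mat"))
```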
Petitjean, Simon. "Génération modulaire de grammaires formelles". Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2048/document.
The work presented in this thesis aims at facilitating the development of resources for natural language processing. Resources of this type take different forms, because of the existence of several levels of linguistic description (syntax, morphology, semantics, ...) and of several formalisms proposed for the description of natural languages at each one of these levels. Since the formalisms feature different types of structures, a single description language is not enough: it is necessary to create a domain-specific language (or DSL) for every formalism, and to implement a new tool which uses this language, which is a long and complex task. For this reason, we propose in this thesis a method to assemble, in a modular way, development frameworks specific to tasks of linguistic resource generation. The frameworks assembled thanks to our method are based on the fundamental concepts of the XMG (eXtensible MetaGrammar) approach, allowing the generation of tree-based grammars. The method is based on assembling a description language from reusable bricks, according to a single specification file. The whole processing chain for the DSL is automatically assembled thanks to the same specification. As a first step, we validated this approach by recreating the XMG tool from elementary bricks. Collaborations with linguists also led us to assemble compilers allowing the description of morphology and semantics.
Khelifi, Hadria. "Didactique du discours : le français langue d’écrit universitaire en Algérie. Étude contrastive entre filières scientifiques et sciences humaines". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0282.
This thesis, entitled "French as an academic writing language in Algeria: a contrastive study between scientific fields and humanities", addresses the issue of teaching and/or learning foreign languages in Algeria through the characteristics of the scientific genre. The aim of this research is to discover whether genre retains its stability in academic writing when a foreign language, such as French, is used. The French language does not exist for itself: it is the language of study at the Algerian university and poses, among other factors, an obstacle to success. Based on a heterogeneous corpus made up of twelve dissertations presented in Algeria, we contrast, among these writings, the writing of the human and social sciences with the writing of the hard and natural sciences. Adopting automatic language processing through the Hyperbase software is necessary because the corpus is very large; this tool continues to develop new techniques for scientific research. Also, in order to acquaint ourselves with the context of use of the French language in Algeria, we conducted a questionnaire survey with Algerian students and teachers at the university. The main result obtained from this research shows that genre remains dominant even in a specific context such as the use of a foreign language.
Ortiz, Suarez Pedro. "A Data-driven Approach to Natural Language Processing for Contemporary and Historical French". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS155.
In recent years, neural methods for Natural Language Processing (NLP) have consistently and repeatedly improved the state of the art in a wide variety of NLP tasks. One of the main contributing reasons for this steady improvement is the increased use of transfer learning techniques. These methods consist in taking a pre-trained model and reusing it, with little to no further training, to solve other tasks. Even though these models have clear advantages, their main drawback is the amount of data that is needed to pre-train them. The lack of availability of large-scale data previously hindered the development of such models for contemporary French, and even more so for its historical states. In this thesis, we focus on developing corpora for the pre-training of these transfer learning architectures. This approach proves to be extremely effective, as we are able to establish a new state of the art for a wide range of tasks in NLP for contemporary, medieval and early modern French, as well as for six other contemporary languages. Furthermore, we are able to determine not only that these models are extremely sensitive to pre-training data quality, heterogeneity and balance, but also that these three features are better predictors of the pre-trained models' performance on downstream tasks than the pre-training data size itself. In fact, we determine that the importance of the pre-training dataset size was largely overestimated, as we are able to repeatedly show that such models can be pre-trained with corpora of a modest size.
Carrasco-Ortiz, Haydee. "Morphosyntactic learning of french as a second language". Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3039.
This thesis investigates morphosyntactic learning in adult second language (L2) learners of French. It examines the assumption posited by linguistic and neurocognitive models according to which L2 learners' difficulty in fully mastering morphosyntactic knowledge is due to a failure to mentally represent and process morphosyntactic information in a native-like manner. The series of experiments presented in this thesis uses ERPs to investigate whether the difficulties that late L2 learners encounter in processing morphosyntactic agreement can be explained by (a) the phonological realization of inflectional morphology in the target language and (b) interference from the learners' native language (L1). The findings demonstrate that late L2 learners can achieve native-like processing of morphosyntactic knowledge at high levels of proficiency, regardless of the status of the morphosyntactic system in their L1. In addition, we provide evidence that phonological information contained in inflectional morphology plays an important role in the acquisition and processing of morphosyntactic agreement in L2. It is thus argued that L2 learners' processing of morphosyntactic agreement is less influenced by the L1 at high levels of proficiency, while still being potentially affected by the specific morphosyntactic properties of the target language. These findings give further support to linguistic and neurocognitive models positing that morphosyntactic processing in adult L2 learners involves mental representations and cognitive mechanisms similar to those used by native speakers.
Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have quickly evolved in the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as a token classifier for information extraction, transformers are able to learn the task exceptionally efficiently compared to recurrent networks. Transformers only need a small proportion of the training data to reach close to maximum performance. This highlights the importance of self-supervised pre-training for future fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that those new tasks improve the model's downstream performance, even with small models. Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost when used on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, due to how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This incentivizes the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
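The computational cost on long sequences mentioned above comes from full self-attention being quadratic in sequence length, since every token attends to every other token; a minimal numpy sketch that makes the n-by-n term visible (random vectors, single head, no masking):

```python
# Single-head scaled dot-product attention; the n x n score matrix is the
# source of the quadratic cost on long sequences that efficient variants avoid.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # shape (n, n): quadratic in n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 512, 64                              # a 2048-token document would need a
Q = rng.normal(size=(n, d))                 # 16x larger score matrix
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
print(attention(Q, K, V).shape)
```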
Samson, Juan Sarah Flora. "Exploiting resources from closely-related languages for automatic speech recognition in low-resource languages from Malaysia". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM061/document.
Languages in Malaysia are dying at an alarming rate. As of today, 15 languages are in danger while two languages are extinct. One of the methods to save languages is to document them, but this is a tedious task when performed manually. An Automatic Speech Recognition (ASR) system could be a tool to help speed up the process of documenting speech from native speakers. However, building ASR systems for a target language requires a large amount of training data, as current state-of-the-art techniques are based on empirical approaches. Hence, there are many challenges in building ASR for languages that have limited data available. The main aim of this thesis is to investigate the effects of using data from closely related languages to build ASR for low-resource languages in Malaysia. Past studies have shown that cross-lingual and multilingual methods could improve the performance of low-resource ASR. In this thesis, we try to answer several questions concerning these approaches: How do we know which language is beneficial for our low-resource language? How does the relationship between source and target languages influence speech recognition performance? Is pooling language data an optimal approach for a multilingual strategy? Our case study is Iban, an under-resourced language spoken on the island of Borneo. We study the effects of using data from Malay, a locally dominant language which is close to Iban, for developing Iban ASR under different resource constraints. We have proposed several approaches to adapt Malay data to obtain pronunciation and acoustic models for Iban speech. Building a pronunciation dictionary from scratch is time-consuming, as one needs to properly define the sound units of each word in a vocabulary. We developed a semi-supervised approach to quickly build a pronunciation dictionary for Iban. It was based on bootstrapping techniques for improving Malay data to match Iban pronunciations. To increase the performance of low-resource acoustic models we explored two acoustic modelling techniques, Subspace Gaussian Mixture Models (SGMM) and Deep Neural Networks (DNN). We applied cross-lingual strategies using both frameworks for adapting out-of-language data to Iban speech. Results show that using Malay data is beneficial for increasing the performance of Iban ASR. We also tested SGMM and DNN to improve low-resource non-native ASR. We proposed a fine merging strategy for obtaining an optimal multi-accent SGMM. In addition, we developed an accent-specific DNN using native speech data. After applying both methods, we obtained significant improvements in ASR accuracy. From our study, we observe that using SGMM and DNN for a cross-lingual strategy is effective when training data is very limited.
Nilsson, Anna. "Lire et comprendre en français langue étrangère : Les pratiques de lecture et le traitement des similitudes intra- et interlexicales". Doctoral thesis, Stockholms universitet, Institutionen för franska, italienska och klassiska språk, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7048.
Pełny tekst źródłaKessler, Rémy. "Traitement automatique d’informations appliqué aux ressources humaines". Thesis, Avignon, 2009. http://www.theses.fr/2009AVIG0167/document.
Pełny tekst źródłaSince the 1990s, the Internet has been at the heart of the labor market. First adopted for specialized profiles, its use has spread as the number of Internet users in the population has grown. Searching for a job through online job boards has become commonplace, and e-recruitment is now standard practice. This information explosion poses processing problems, as the large amount of information is difficult for companies to manage quickly and effectively. In this PhD thesis, we present the work we have developed under the E-Gen project, which aims to create tools to automate the flow of information during a recruitment process. We first addressed the problems posed by the routing of emails. A company's ability to manage this information flow efficiently and at low cost has become a major issue for customer satisfaction. We propose the application of machine learning methods to perform automatic classification of emails for routing, combining probabilistic techniques and support vector machines. We then present work conducted on the analysis and integration of job advertisements published on the Internet. We present a solution capable of integrating a job advertisement either automatically or in an assisted manner, in order to broadcast it quickly. Based on a combination of classifiers driven by a Markov automaton, the system obtains very good results. Thereafter, we present several strategies based on vector-space and probabilistic models to solve the problem of profiling candidates according to a specific job offer, in order to assist recruiters. We evaluated a range of similarity measures to rank candidacies, using ROC curves. A relevance feedback approach allowed us to surpass our previous results on this difficult, diverse and highly subjective task
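A minimal sketch of the email-routing step seen as supervised text classification, assuming scikit-learn; the messages and routing categories are toy examples rather than E-Gen data:

    # TF-IDF features + linear SVM for routing incoming e-mails to services.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    emails = [
        "Please find attached my resume for the developer position",
        "I would like to update my banking details for the payroll",
        "The printer on the second floor is out of service",
        "Could you send me the job description for the analyst opening",
    ]
    routes = ["recruitment", "payroll", "it_support", "recruitment"]

    router = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    router.fit(emails, routes)
    print(router.predict(["My salary was not paid this month"]))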
Delyfer, Annie. "Le rôle de l'hémisphère droit dans le traitement des mots connotant une émotion et des mots dénotant une émotion". Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23389.
Pełny tekst źródłaLinhares, Pontes Elvys. "Compressive Cross-Language Text Summarization". Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0232/document.
Pełny tekst źródłaThe popularization of social networks and digital documents has quickly increased the information available on the Internet. However, this huge amount of data cannot be analyzed manually. Natural Language Processing (NLP) analyzes the interactions between computers and human languages in order to process and analyze natural language data. NLP techniques incorporate a variety of methods, including linguistics, semantics and statistics, to extract entities and relationships and to understand a document. Among several NLP applications, we are interested, in this thesis, in cross-language text summarization, which produces a summary in a language different from the language of the source documents. We also analyzed other NLP tasks (word encoding representation, semantic similarity, sentence and multi-sentence compression) to generate more stable and informative cross-lingual summaries. Most NLP applications (including all types of text summarization) use some kind of similarity measure to analyze and compare the meaning of words, chunks, sentences and texts. A way to analyze this similarity is to generate a representation of these sentences that captures their meaning. The meaning of sentences is defined by several elements, such as the context of words and expressions, the order of words and prior information. Simple metrics, such as the cosine measure and the Euclidean distance, provide a measure of similarity between two sentences; however, they do not analyze the order of words or multi-word expressions. To address these problems, we propose a neural network model that combines recurrent and convolutional neural networks to estimate the semantic similarity of a pair of sentences (or texts) based on the local and general contexts of words. Our model predicted better similarity scores than baselines by better analyzing the local and general meanings of words and multi-word expressions. In order to remove redundancies and non-relevant information from similar sentences, we propose a multi-sentence compression method that compresses similar sentences by fusing them into correct and short compressions that contain the main information of these similar sentences. We model clusters of similar sentences as word graphs. Then, we apply an integer linear programming model that guides the compression of these clusters based on a list of keywords: we look for a path in the word graph that has good cohesion and contains the maximum number of keywords. Our approach outperformed baselines by generating more informative and correct compressions for the French, Portuguese and Spanish languages. Finally, we combine these methods to build a cross-language text summarization system. Our system is an {English, French, Portuguese, Spanish}-to-{English, French} cross-language text summarization framework that analyzes the information in both languages to identify the most relevant sentences. Inspired by compressive text summarization methods in monolingual analysis, we adapt our multi-sentence compression method to this problem in order to keep only the main information. Our system proves to be a good alternative for compressing redundant information while preserving relevant information, and improves informativeness scores without losing grammatical quality for French-to-English cross-lingual summaries. Analyzing {English, French, Portuguese, Spanish}-to-{English, French} cross-lingual summaries, our system significantly outperforms extractive baselines in the state of the art for all these languages. In addition, we analyze the cross-language text summarization of transcript documents; our approach achieved better and more stable scores even for these documents, which contain grammatical errors and missing information
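A simplified sketch of the word-graph idea behind the multi-sentence compression step, assuming the networkx package; a keyword-biased shortest path stands in for the integer linear programming model used in the thesis:

    # Overlay the sentences of a cluster on a shared word graph, then extract a short
    # path that favours frequent transitions and keyword-bearing words.
    import networkx as nx

    cluster = [
        "the prime minister announced a new climate plan on monday",
        "a new climate plan was announced by the prime minister",
        "the minister announced the plan on monday",
    ]
    keywords = {"minister", "climate", "plan"}

    graph = nx.DiGraph()
    for sentence in cluster:
        words = ["<start>"] + sentence.split() + ["<end>"]
        for left, right in zip(words, words[1:]):
            if graph.has_edge(left, right):
                graph[left][right]["count"] += 1
            else:
                graph.add_edge(left, right, count=1)

    # Frequent transitions and keywords get cheaper edges, so the shortest path
    # tends to be a fluent compression that covers the keywords.
    for left, right, data in graph.edges(data=True):
        bonus = 0.2 if right in keywords else 1.0
        data["weight"] = bonus / data["count"]

    path = nx.shortest_path(graph, "<start>", "<end>", weight="weight")
    print(" ".join(path[1:-1]))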
Labeau, Matthieu. "Neural language models : Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
Pełny tekst źródłaThis work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their high computational cost, which grows linearly with the size of the vocabulary. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and show experimentally that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives of which noise contrastive estimation is a particular case. Finally, we aim at improving performance on full-vocabulary language models by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even further
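A minimal sketch of the noise contrastive estimation objective for a single target word, assuming PyTorch; the scores and noise probabilities are toy values rather than the output of a trained language model:

    # NCE turns language modelling into binary classification: discriminate the observed
    # word from k samples drawn from a noise distribution q, so the full softmax over
    # the vocabulary (the partition function) never has to be computed.
    import torch
    import torch.nn.functional as F

    def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
        """Scores are unnormalized model logits s(w); probabilities come from q."""
        pos = F.logsigmoid(target_score - torch.log(k * target_noise_prob))
        neg = F.logsigmoid(-(noise_scores - torch.log(k * noise_probs))).sum()
        return -(pos + neg)

    loss = nce_loss(
        target_score=torch.tensor(2.1),
        noise_scores=torch.tensor([0.3, -1.2, 0.8, -0.5]),
        target_noise_prob=torch.tensor(0.01),
        noise_probs=torch.tensor([0.05, 0.002, 0.03, 0.01]),
        k=4.0,
    )
    print(loss.item())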
Piat, Guilhem Xavier. "Incorporating expert knowledge in deep neural networks for domain adaptation in natural language processing". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG087.
Pełny tekst źródłaCurrent state-of-the-art Language Models (LMs) are able to converse, summarize, translate, solve novel problems, reason, and use abstract concepts at a near-human level. However, to achieve such abilities, and in particular to acquire ``common sense'' and domain-specific knowledge, they require vast amounts of text, which are not available in all languages or domains. Additionally, their computational requirements are out of reach for most organizations, limiting their potential for specificity and their applicability in the context of sensitive data.Knowledge Graphs (KGs) are sources of structured knowledge which associate linguistic concepts through semantic relations. These graphs are sources of high quality knowledge which pre-exist in a variety of otherwise low-resource domains, and are denser in information than typical text. By allowing LMs to leverage these information structures, we could remove the burden of memorizing facts from LMs, reducing the amount of text and computation required to train them and allowing us to update their knowledge with little to no additional training by updating the KGs, therefore broadening their scope of applicability and making them more democratizable.Various approaches have succeeded in improving Transformer-based LMs using KGs. However, most of them unrealistically assume the problem of Entity Linking (EL), i.e. determining which KG concepts are present in the text, is solved upstream. This thesis covers the limitations of handling EL as an upstream task. It goes on to examine the possibility of learning EL jointly with language modeling, and finds that while this is a viable strategy, it does little to decrease the LM's reliance on in-domain text. Lastly, this thesis covers the strategy of using KGs to generate text in order to leverage LMs' linguistic abilities and finds that even naïve implementations of this approach can result in measurable improvements on in-domain language processing
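A minimal sketch of the naïve KG-to-text strategy mentioned at the end of this abstract: triples are verbalized with hand-written templates and prepended to the model input. The triples, templates and example question are invented for illustration:

    # Turn knowledge-graph triples into sentences and prepend them to the query, so the
    # language model receives the relevant facts instead of having to memorize them.
    TEMPLATES = {
        "treats": "{h} is a treatment for {t}.",
        "subclass_of": "{h} is a kind of {t}.",
    }

    triples = [
        ("metformin", "treats", "type 2 diabetes"),
        ("metformin", "subclass_of", "biguanide"),
    ]

    def verbalize(triples):
        return " ".join(TEMPLATES[r].format(h=h, t=t) for h, r, t in triples)

    question = "What class of drug is commonly prescribed first for type 2 diabetes?"
    augmented_input = verbalize(triples) + " " + question
    print(augmented_input)  # this string, not the bare question, is fed to the LM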
Planchou, Clément. "Traitement auditifs non verbaux et troubles du développement du langage oral : perception et production musicales". Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30034.
Pełny tekst źródłaThe aim of this thesis is to determine whether the auditory deficit of children with Specific Language Impairment (SLI) is specific to verbal stimuli, and to examine the relation between language and musical abilities in these children. We tested 18 children with SLI and groups of children with Typical Language Development (TLD) aged from 7 to 12 years. In the first study, we examined syllable detection in sung and spoken sentences. Results confirmed the syllable detection deficit in children with SLI. However, we did not observe a facilitation effect of sung over spoken stimuli. In the second study, we explored musical perception abilities in the same children with the MBEMA (Peretz et al., 2013). Our results showed that a large proportion of the children with SLI present deficits in melodic and rhythmic perception. A positive correlation was found between scores on the rhythm and phonological awareness tasks, documenting a link between language and temporal processing in children with SLI. In the third study, we assessed singing abilities in children with SLI: we created a singing reproduction task with a pitch matching condition and a melodic reproduction condition. The children with SLI showed deficits in both conditions. These results suggest deficits in music perception and production for most children with SLI, and indicate that the development of phonological awareness abilities seems related to auditory temporal processing in music. The findings seem to support the existence of a more general auditory dysfunction in a majority of children with SLI, emphasizing the relevance of systematically assessing nonverbal abilities for the diagnosis and rehabilitation of SLI
Knyazeva, Elena. "Apprendre par imitation : applications à quelques problèmes d'apprentissage structuré en traitement des langues". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS134/document.
Pełny tekst źródłaStructured learning has become ubiquitous in Natural Language Processing; a multitude of applications, such as personal assistants, machine translation and speech recognition, to name just a few, rely on such techniques. The structured learning problems that must now be solved are becoming increasingly more complex and require an increasing amount of information at different linguistic levels (morphological, syntactic, etc.). It is therefore crucial to find the best trade-off between the degree of modelling detail and the exactitude of the inference algorithm. Imitation learning aims to perform approximate learning and inference in order to better exploit richer dependency structures. In this thesis, we explore the use of this specific learning setting, in particular using the SEARN algorithm, both from a theoretical perspective and in terms of the practical applications to Natural Language Processing tasks, especially to complex tasks such as machine translation. Concerning the theoretical aspects, we introduce a unified framework for different imitation learning algorithm families, allowing us to review and simplify the convergence properties of the algorithms. With regards to the more practical application of our work, we use imitation learning first to experiment with free order sequence labelling and secondly to explore two-step decoding strategies for machine translation
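A simplified imitation-learning loop for sequence labelling, in the spirit of SEARN (with DAgger-style data aggregation), assuming scikit-learn; the task, features and mixing schedule are toy choices rather than the configuration used in the thesis:

    # Roll in with a mixture of the expert (gold labels) and the learned policy, label
    # every visited state with the expert action, retrain, and trust the learned policy
    # a little more at each iteration.
    import random
    from sklearn.tree import DecisionTreeClassifier

    sentences = [["the", "cat", "sleeps"], ["a", "dog", "barks"], ["the", "dogs", "bark"]]
    gold_tags = [["DET", "NOUN", "VERB"]] * 3
    VOCAB = sorted({w for s in sentences for w in s} | {"<pad>"})
    TAGS = ["DET", "NOUN", "VERB"]

    def features(sentence, i, prev_tag):
        prev_word = sentence[i - 1] if i > 0 else "<pad>"
        prev_tag_id = TAGS.index(prev_tag) if prev_tag is not None else -1
        return [VOCAB.index(sentence[i]), VOCAB.index(prev_word), prev_tag_id]

    data_x, data_y, policy, beta = [], [], None, 1.0
    for iteration in range(5):
        for sentence, tags in zip(sentences, gold_tags):
            prev_tag = None
            for i in range(len(sentence)):
                x = features(sentence, i, prev_tag)
                data_x.append(x)
                data_y.append(tags[i])                   # expert action for this state
                if policy is None or random.random() < beta:
                    prev_tag = tags[i]                   # follow the expert
                else:
                    prev_tag = policy.predict([x])[0]    # follow the learned policy
        policy = DecisionTreeClassifier().fit(data_x, data_y)
        beta *= 0.5

    test, prev_tag, predicted = ["a", "cat", "barks"], None, []
    for i in range(len(test)):
        prev_tag = policy.predict([features(test, i, prev_tag)])[0]
        predicted.append(prev_tag)
    print(list(zip(test, predicted)))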
Gouvard, Paul. "Explaining the Variability of Audiences’ Valuations : An Approach Based on Market Categories and Natural Language Processing". Thesis, Jouy-en Josas, HEC, 2020. http://www.theses.fr/2020EHEC0007.
Pełny tekst źródłaThis dissertation examines whether the different categorization processes shaping audiences’ valuations in markets bring stability or variability to audiences’ valuations. While seminal research on categorization emphasized the stabilizing role of market categories, recent research suggests that audiences’ valuations can vary substantially even in markets which are well-structured by pre-existing categories. This variability notably results from audiences’ heterogeneous preferences for typical offerings, from shifts in categories’ meanings or from audiences’ reliance on multiple models of valuation. Taking stock of these new results, this dissertation asks why audiences’ valuations are so variable and explores in more details the role that market categories play in this phenomenon.This dissertation proposes that i) ambiguous categories, ii) the influence of temporary attractions among audiences alongside more stable categories and iii) the co-existence of different types of evaluators all contribute to produce variability in audiences’ valuations. The first two empirical essays use data from publicly listed firms in the U.S. In these essays, firms’ similarity to existing category prototypes or audiences’ temporary attractions toward certain features are measured using semantics extracted from large corpora of annual reports and IPO prospectuses. The third essay is a theoretical model. This dissertation contributes to the literature on market categories, to the burgeoning research on optimal distinctiveness and to computational approaches to the study of organizations
Gonzalez, Preciado Matilde. "Computer vision methods for unconstrained gesture recognition in the context of sign language annotation". Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1798/.
Pełny tekst źródłaThis PhD thesis concerns the study of computer vision methods for the automatic recognition of unconstrained gestures in the context of sign language annotation. Sign Language (SL) is a visual-gestural language developed by deaf communities. Continuous SL consists of a sequence of signs performed one after another, involving manual and non-manual features that convey simultaneous information. Even though standard signs are defined in dictionaries, we find huge variability caused by the context-dependency of signs. In addition, signs are often linked by movement epenthesis, the meaningless gesture between signs. This huge variability and the co-articulation effect represent a challenging problem for automatic SL processing. Numerous annotated video corpora are necessary in order to train statistical machine translators and study this language. Generally, the annotation of SL video corpora is performed manually by linguists or computer scientists experienced in SL. However, manual annotation is error-prone, unreproducible and time consuming, and the quality of the results depends on the annotators' knowledge of SL. Combining annotator knowledge with image processing techniques facilitates the annotation task, increasing robustness and reducing the time required. The goal of this research is the study and development of image processing techniques to assist the annotation of SL video corpora: body tracking, hand segmentation, temporal segmentation and gloss recognition. Throughout this PhD thesis we address the problem of gloss annotation of SL video corpora. First of all, we intend to detect the boundaries corresponding to the beginning and end of a sign. This annotation method requires several low-level approaches for performing temporal segmentation and for extracting motion and hand shape features. First, we propose a particle-filter-based approach for tracking the hands and face that is robust to occlusions. Then, a segmentation method for extracting the hand when it is in front of the face has been developed. Motion is used for segmenting signs, and hand shape is later used to improve the results: indeed, hand shape allows us to discard boundaries detected in the middle of a sign. Once signs have been segmented, we proceed to gloss recognition using a lexical description of signs. We have evaluated our algorithms on international corpora in order to show their advantages and limitations. The evaluation has shown the robustness of the proposed methods with respect to high dynamics and numerous occlusions between body parts. The resulting annotation is independent of the annotator and represents a gain in annotation consistency
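A compact sketch of a generic particle filter tracking a 2D point such as a hand centroid, assuming NumPy; the random-walk motion model and Gaussian observation likelihood are illustrative stand-ins for the appearance model actually used in the thesis:

    # Predict particles with a motion model, weight them by how well they explain the
    # observation, resample, and report the cloud's mean as the track estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, motion_noise, obs_noise = 500, 5.0, 8.0
    particles = rng.uniform(0, 200, size=(n_particles, 2))   # initial guesses in the frame

    def step(observation):
        global particles
        particles = particles + rng.normal(0, motion_noise, particles.shape)  # predict
        dist2 = ((particles - observation) ** 2).sum(axis=1)
        weights = np.exp(-dist2 / (2 * obs_noise ** 2))                       # update
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)            # resample
        particles = particles[idx]
        return particles.mean(axis=0)                                         # estimate

    true_track = [(50 + 3 * t, 80 + 2 * t) for t in range(20)]
    for x, y in true_track:
        noisy_detection = np.array([x, y]) + rng.normal(0, obs_noise, 2)
        estimate = step(noisy_detection)
    print("final estimate:", estimate, "true position:", true_track[-1])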
Franco, Ana. "Impact de l'expertise linguistique sur le traitement statistique de la parole". Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209565.
Pełny tekst źródłaThe aim of this thesis was to determine whether linguistic expertise can modulate learning abilities, and more specifically statistical learning abilities. The regular use of two languages by bilingual individuals has been shown to have a broad impact on language and cognitive functioning. However, little is known about the effect of bilingualism on learning abilities. Language acquisition is a complex process that depends substantially on the processing of statistical regularities contained in speech. Because statistical information is language-specific, this information must be learned from scratch when one learns a new language. Unlike monolinguals, individuals who know more than one language, such as bilinguals or multilinguals, therefore face the challenge of having to master more than one set of statistical contingencies. Do bilingualism and increased experience with the statistical processing of speech confer an advantage in terms of learning abilities? In this thesis, we address these questions at three different levels. We compared monolinguals and bilinguals in terms of (1) the nature of the representations formed during learning, (2) the time course of statistical processing, and (3) the availability of statistical knowledge to consciousness. Exploring how linguistic expertise modulates statistical learning will contribute to a better understanding of the cognitive consequences of bilingualism, but could also provide clues regarding the link between statistical learning and language.
First, the present work aimed to determine whether knowledge acquired based on statistical regularities is amenable to conscious control (Study 1 and 2). Study 1 presents an adaptation of the Process Dissociation Procedure (PDP, Jacoby, 1991), a widely used method in the field of implicit learning to account for the conscious nature of knowledge acquired during a learning situation. We adapted this method to a statistical learning paradigm in which participants had to extract artificial words from a continuous speech stream. In Study 2, we used the PDP to explore the extent to which conscious access to the acquired knowledge is modulated by linguistic expertise. Our results suggest that although monolinguals and bilinguals learned the words similarly, knowledge seems to be less available to consciousness for bilingual participants.
Second, in Studies 3 & 4, we investigated the time course of statistical learning. Study 3 introduces a novel online measure of transitional probabilities processing during speech segmentation, — an adaptation of the Click Localizaton Task (Fodor & Bever, 1965) as. In Study 4, explored whether processing of statistical regularities of speech could be modulated by linguistic expertise. The results suggest that the two groups did not differ in terms of time course of statistical processing.
Third, we aimed at exploring what is learned in a statistical learning situation. Two different kinds of mechanisms may account for performance. Participants may either parse the material into smaller chunks that correspond to the words of the artificial language, or they may become progressively sensitive to the actual values of the transitional probabilities between syllables. Study 5 proposes a method to determine the nature of the representations formed during learning. The purpose of this study was to compare two models of statistical learning (PARSER vs. SRN) in order to determine which better reflects the representations formed as a result of statistical learning. In Study 6, we investigated the influence of linguistic expertise on the nature of the representations formed. The results suggest that bilinguals tend to form representations of the learned sequences that are more faithful to the reality of the material, compared to monolinguals.
Finally, Study 7 investigates how linguistic expertise influences a more complex statistical learning situation, namely artificial grammar learning. Comparison between monolingual and bilingual subjects suggests that subjects did not differ in terms of the time course of learning. However, bilinguals outperformed monolinguals in learning the grammar and seem to possess both conscious and unconscious knowledge, whereas monolinguals’ performance was only based on conscious knowledge.
To sum up, the studies presented in this work suggest that linguistic expertise does not modulate the speed of processing of statistical information. However, bilinguals seem to make better use of the learned regularities and outperformed monolinguals in some specific situations. Moreover, linguistic expertise also seems to have an impact on the availability of knowledge to consciousness.
Doctorat en Sciences Psychologiques et de l'éducation
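A minimal sketch of the transitional-probability computation at the heart of these statistical learning studies, using a toy syllable stream in the style of artificial-language experiments:

    # TP(x -> y) = count(xy) / count(x); word boundaries are posited where the forward
    # transitional probability between adjacent syllables drops below a threshold.
    from collections import Counter

    # Stream obtained by concatenating the toy "words" tupiro, golabu and bidaku.
    stream = "tu pi ro go la bu bi da ku go la bu tu pi ro bi da ku tu pi ro".split()

    unigrams = Counter(stream)
    bigrams = Counter(zip(stream, stream[1:]))

    def tp(x, y):
        return bigrams[(x, y)] / unigrams[x]

    threshold = 0.75
    words, current = [], [stream[0]]
    for x, y in zip(stream, stream[1:]):
        if tp(x, y) < threshold:          # low TP: likely a word boundary
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    print(words)                          # recovers tupiro / golabu / bidaku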
Lepage, Yves. "Un système de grammaires correspondancielles d'identification". Grenoble 1, 1989. http://www.theses.fr/1989GRE10059.
Pełny tekst źródłaLe, Hai Son. "Continuous space models with neural networks in natural language processing". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.
Pełny tekst źródłaSasa, Yuko. "Intelligence Socio-Affective pour un Robot : primitives langagières pour une interaction évolutive d'un robot de l’habitat intelligent". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM041/document.
Pełny tekst źródłaNatural Language Processing (NLP) has improved technically in terms of human speech vocabulary coverage, morphosyntactic scope, style and aesthetics. Affective Computing likewise tends to integrate an "emotional" dimension, sharing with NLP the goal of disambiguating natural language and making human-machine interaction more natural. Within social robotics, interaction is modelled in dialogue systems that try to reach an attachment dimension whose effects require ethical and collective control. However, the dynamics of situated natural language undermine the efficiency of automated systems that try to respond with useful and suitable feedback. The first hypothesis of this thesis supposes the existence of a "socio-affective glue" in every interaction, set up between two individuals, each with a social role depending on the communication context. This glue is the consequence of dynamics generated by a process whose mechanisms rely on an altruistic dimension, but are independent of the dominance dimension described in studies of emotion. The glue would allow the exchange of language events between interlocutors, regularly modifying their relation and their roles, which in turn modify the glue itself, thereby ensuring the continuity of communication. The second hypothesis proposes that the glue is built by "socio-affective pure prosody" forms that enable this relational construction. These cues are supposed to be carried by audible and visible micro-expressions. The effect of interaction events would also be gradual, following the degree of intentional control over communication. The gradation is continuous across language primitives such as 1) mouth noises (neither phonetic nor phonological sounds), 2) pre-lexicalised sounds, 3) interjections and onomatopoeias, and 4) controlled command-based imitations, all carrying the same socio-affective prosody supposed to create and modify the glue. Within the Domus platform, we developed an almost living-lab methodology, operating in agile and iterative loops co-constructed with industrial and societal partners. A wizard-of-Oz approach – EmOz – is used to control the vocal primitives, which are the only language tools of a Smart Home butler robot interacting with relationally isolated elderly people. Relational isolation exposes the dimensions of the socio-affective glue in a contrastive situation where it is damaged, so we could observe the primitives' effects through multimodal language cues. One social motivation from gerontechnology is that isolation amplifies frailty, which supports the emergence of assistive robotics. A vicious circle driven by the communicational characteristics of the elderly makes it difficult for them to maintain their relational fabric, even though these bonds are beneficial for their health and well-being. If the proposed primitives have a real effect on the glue, the automated system will be able to train people to regain some of the impaired mechanisms underlying their relational construction, and thus possibly increase their desire to communicate with their human social surroundings. The results from the collected EEE corpus show how the relation changes through various, temporally organised interactional cues. These parameters point towards an incremental dialogue system – SASI.
The first steps towards this system rest on a speech recognition prototype whose robustness is based not on the accuracy of the recognised language content but on the ability to identify the degree of glue (i.e. the relational state) between the interlocutors. Recognition errors are thus less likely to cause the system to be rejected by the user, as they tend to be compensated by the system's adaptive socio-affective intelligence
Moritz-Gasser, Sylvie. "Les bases neurales du traitement sémantique : un nouvel éclairage : études en électrostimulations cérébrales directes". Thesis, Montpellier 1, 2012. http://www.theses.fr/2012MON1T007/document.
Pełny tekst źródłaSemantic processing is the mental process by which we access meaning. Therefore, it plays a central role in language comprehension and production, but also in human functioning as a whole, since it allows us to conceptualize and give meaning to the world by consciously confronting it with the knowledge we accumulate through experience. While the neural bases of semantic processing are well known at the cortical level, thanks to numerous studies based particularly on functional neuroimaging data, the subcortical connectivity underlying this processing has so far received less attention. Nevertheless, the authors agree on the existence of a semantic ventral stream, parallel to a phonological dorsal stream. The present work aims to shed new light on the neural bases of semantic processing at the single-word level, in connection with the wider setting of non-verbal semantic processing, by studying semantic skills in patients presenting with WHO grade 2 glioma who undergo awake surgery with cortico-subcortical intraoperative mapping. This work highlights the crucial role of the inferior fronto-occipital fasciculus in this ventral semantic route, within a functional brain organization based on parallel, distributed networks of cortical areas interconnected by white matter association fibers. It also underlines the interactive nature of cognitive functioning, the significance of control mechanisms in language processing, and the value of measuring mental chronometry when assessing it. These considerations lead us to propose a general hodotopical model of the anatomo-functional organization of language. The results presented in this work may thus have important clinical and scientific implications for understanding the functional organization of language in the brain, its dysfunction, the mechanisms of functional reorganization after a brain lesion, and the design of rehabilitation programs
Chan, Shih-Han. "COLLADA Audio : A Formal Representation of Sound in Virtual Cities by a Scene Description Language". Electronic Thesis or Diss., Paris, CNAM, 2012. http://www.theses.fr/2012CNAM0872.
Pełny tekst źródłaStandardized file formats have been designed over many years to write, read, and exchange 3D scene descriptions. These descriptions mainly target visual content, whereas the options offered for the audio composition of virtual scenes are either lacking or poor. We therefore propose to include a rich sound description in COLLADA, a standard format for exchanging digital assets. Most scene description languages that include a sound description factorize the common elements needed by the graphical and auditory information; both aspects are, for example, described with the same coordinate system. However, as soon as a dynamic description or external data are required, all the glue must be provided programmatically. In this thesis, we address this problem and propose to put more creative power in the hands of sound designers, even when the scene is dynamic or based on procedural synthesizers. Our solution is based on the COLLADA schema, to which we add sound support, scripting capabilities and external extensions. The use of the augmented COLLADA language is illustrated through the creation of dynamic urban soundscapes
Shang, Guokan. "Spoken Language Understanding for Abstractive Meeting Summarization Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization. Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding Speaker-change Aware CRF for Dialogue Act Classification". Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAX011.
Pełny tekst źródłaWith the impressive progress that has been made in transcribing spoken language, it is becoming increasingly possible to exploit transcribed data for tasks that require comprehension of what is said in a conversation. The work in this dissertation, carried out in the context of a project devoted to the development of a meeting assistant, contributes to ongoing efforts to teach machines to understand multi-party meeting speech. We have focused on the challenge of automatically generating abstractive meeting summaries.We first present our results on Abstractive Meeting Summarization (AMS), which aims to take a meeting transcription as input and produce an abstractive summary as output. We introduce a fully unsupervised framework for this task based on multi-sentence compression and budgeted submodular maximization. We also leverage recent advances in word embeddings and graph degeneracy applied to NLP, to take exterior semantic knowledge into account and to design custom diversity and informativeness measures.Next, we discuss our work on Dialogue Act Classification (DAC), whose goal is to assign each utterance in a discourse a label that represents its communicative intention. DAC yields annotations that are useful for a wide variety of tasks, including AMS. We propose a modified neural Conditional Random Field (CRF) layer that takes into account not only the sequence of utterances in a discourse, but also speaker information and in particular, whether there has been a change of speaker from one utterance to the next.The third part of the dissertation focuses on Abstractive Community Detection (ACD), a sub-task of AMS, in which utterances in a conversation are grouped according to whether they can be jointly summarized by a common abstractive sentence. We provide a novel approach to ACD in which we first introduce a neural contextual utterance encoder featuring three types of self-attention mechanisms and then train it using the siamese and triplet energy-based meta-architectures. We further propose a general sampling scheme that enables the triplet architecture to capture subtle patterns (e.g., overlapping and nested clusters)
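A minimal sketch of budgeted submodular maximization for selecting summary sentences, one ingredient of the framework described above; plain word overlap stands in for the richer informativeness and diversity measures used in the thesis, and the utterances are invented:

    # Greedy selection: repeatedly take the sentence with the best coverage gain per
    # word, among those that still fit in the word budget.
    def summarize(sentences, budget):
        covered, summary, length = set(), [], 0
        candidates = list(sentences)
        while True:
            feasible = [s for s in candidates if length + len(s.split()) <= budget]
            if not feasible:
                break
            best = max(feasible,
                       key=lambda s: len(set(s.split()) - covered) / len(s.split()))
            if not set(best.split()) - covered:
                break                                     # no remaining coverage gain
            summary.append(best)
            covered |= set(best.split())
            length += len(best.split())
            candidates.remove(best)
        return summary

    meeting_utterances = [
        "we agreed to move the release date to the end of march",
        "the release date moves to march as discussed",
        "marketing still needs the final feature list before the release",
    ]
    print(summarize(meeting_utterances, budget=20))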
Wang, Ilaine. "Syntactic Similarity Measures in Annotated Corpora for Language Learning : application to Korean Grammar". Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100092/document.
Pełny tekst źródłaUsing queries to explore corpora is today part of the routine not only of researchers from various fields with an empirical approach to discourse, but also of non-specialists who use search engines or concordancers for language learning purposes. While keyword-based queries are quite common, non-specialists still seem less likely to explore syntactic constructions. Indeed, syntax-based queries usually require the use of regular expressions combining grammatical words with morphosyntactic tags, which implies that users master both the query language of the tool and the tagset of the annotated corpus. However, non-specialists like language learners might want to focus on the output rather than spend time and effort on mastering a query language. To address this shortcoming, we propose a methodology including a syntactic parser and using common similarity measures to compare sequences of morphosyntactic tags automatically provided
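A minimal sketch of comparing syntactic constructions as sequences of morphosyntactic tags with an edit-distance-based similarity; the tag sequences are toy examples loosely inspired by Korean tagsets, not output from the methodology's parser:

    # Normalized edit-distance similarity between two tag sequences.
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, start=1):
            curr = [i]
            for j, y in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,              # deletion
                                curr[j - 1] + 1,          # insertion
                                prev[j - 1] + (x != y)))  # substitution
            prev = curr
        return prev[-1]

    def similarity(seq_a, seq_b):
        return 1 - edit_distance(seq_a, seq_b) / max(len(seq_a), len(seq_b))

    query = ["NOUN", "JKS", "VERB", "EF"]       # e.g. noun + subject marker + verb + ending
    corpus_sequences = [
        ["NOUN", "JKS", "ADV", "VERB", "EF"],
        ["NOUN", "JKO", "VERB", "EC", "VERB", "EF"],
    ]
    for tags in corpus_sequences:
        print(round(similarity(query, tags), 2), tags)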
Tafforeau, Jérémie. "Modèle joint pour le traitement automatique de la langue : perspectives au travers des réseaux de neurones". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0430/document.
Pełny tekst źródłaNLP researchers have identified different levels of linguistic analysis. This has led to a hierarchical division of the various tasks performed in order to analyze a text statement. The traditional approach considers task-specific models which are subsequently arranged in cascade within processing chains (pipelines). This approach has a number of limitations: the empirical selection of model features, the accumulation of errors along the pipeline and the lack of robustness to domain changes. These limitations lead to particularly high performance losses in the case of non-canonical language with limited available data, such as transcriptions of telephone conversations. Disfluencies and speech-specific syntactic schemes, as well as transcription errors in automatic speech recognition systems, lead to a significant drop in performance. It is therefore necessary to develop robust and flexible systems. We intend to perform syntactic and semantic analysis using a multitask deep neural network model, while taking into account variations of domain and/or language register within the data
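A minimal sketch of a joint multitask architecture, assuming PyTorch: one shared recurrent encoder feeds separate syntactic and semantic heads, so both levels are predicted together instead of in a cascade. Dimensions, label-set sizes and the random toy batch are arbitrary:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointTagger(nn.Module):
        def __init__(self, vocab_size, n_pos, n_sem, emb=64, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.pos_head = nn.Linear(2 * hidden, n_pos)   # syntactic level
            self.sem_head = nn.Linear(2 * hidden, n_sem)   # semantic level

        def forward(self, token_ids):
            states, _ = self.encoder(self.embed(token_ids))
            return self.pos_head(states), self.sem_head(states)

    model = JointTagger(vocab_size=1000, n_pos=17, n_sem=30)
    tokens = torch.randint(0, 1000, (2, 12))               # a batch of 2 sentences
    pos_logits, sem_logits = model(tokens)

    # The task losses are summed, so errors no longer accumulate along a pipeline.
    pos_gold = torch.randint(0, 17, (2, 12))
    sem_gold = torch.randint(0, 30, (2, 12))
    loss = F.cross_entropy(pos_logits.flatten(0, 1), pos_gold.flatten()) \
         + F.cross_entropy(sem_logits.flatten(0, 1), sem_gold.flatten())
    loss.backward()
    print(pos_logits.shape, sem_logits.shape, loss.item())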
Zhang, Zheng. "Explorations in Word Embeddings : graph-based word embedding learning and cross-lingual contextual word embedding learning". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS369/document.
Pełny tekst źródłaWord embeddings are a standard component of modern natural language processing architectures. Every time there is a breakthrough in word embedding learning, the vast majority of natural language processing tasks, such as POS tagging, named entity recognition (NER), question answering and natural language inference, can benefit from it. This work addresses the question of how to improve the quality of monolingual word embeddings learned by prediction-based models, and how to map contextual word embeddings generated by pre-trained language representation models like ELMo or BERT across different languages. For monolingual word embedding learning, I take into account global, corpus-level information and generate a different noise distribution for negative sampling in word2vec. For this purpose I pre-compute word co-occurrence statistics with corpus2graph, an open-source NLP-application-oriented Python package that I developed: it efficiently generates a word co-occurrence network from a large corpus, and applies to it network algorithms such as random walks. For cross-lingual contextual word embedding mapping, I link contextual word embeddings to word sense embeddings. The improved anchor generation algorithm that I propose also expands the scope of word embedding mapping algorithms from context-independent to contextual word embeddings
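A minimal sketch of the corpus-level idea, with networkx standing in for corpus2graph: a word co-occurrence graph is built from the corpus and a graph score (PageRank here) is used as the noise distribution for word2vec-style negative sampling instead of the usual smoothed unigram distribution:

    import random
    import networkx as nx

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the log".split(),
        "dogs and cats are pets".split(),
    ]

    graph, window = nx.Graph(), 2
    for sentence in corpus:
        for i, w in enumerate(sentence):
            for v in sentence[i + 1:i + 1 + window]:      # co-occurrence within a window
                weight = graph.get_edge_data(w, v, {"weight": 0})["weight"]
                graph.add_edge(w, v, weight=weight + 1)

    scores = nx.pagerank(graph, weight="weight")          # corpus-level word importance
    words, probs = zip(*scores.items())

    negative_samples = random.choices(words, weights=probs, k=5)
    print(negative_samples)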
Falco, Mathieu-Henri. "Répondre à des questions à réponses multiples sur le Web". Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01015869.
Pełny tekst źródła