Contents
A selection of scholarly literature on the topic "Réseaux neuronaux (informatique) – Traitement automatique du langage naturel"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Réseaux neuronaux (informatique) – Traitement automatique du langage naturel".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are present in the metadata.
Dissertations on the topic "Réseaux neuronaux (informatique) – Traitement automatique du langage naturel"
Jodouin, Jean-François. "Réseaux de neurones et traitement du langage naturel : étude des réseaux de neurones récurrents et de leurs représentations". Paris 11, 1993. http://www.theses.fr/1993PA112079.
Bardet, Adrien. "Architectures neuronales multilingues pour le traitement automatique des langues naturelles". Thesis, Le Mans, 2021. http://www.theses.fr/2021LEMA1002.
The translation of languages has become an essential need for communication between humans in a world where the possibilities of communication are expanding. Machine translation is a response to this evolving need. More recently, neural machine translation has come to the fore with the strong performance of neural systems, opening up a new area of machine learning. Neural systems use large amounts of data to learn how to perform a task automatically. In the context of machine translation, the sometimes large amounts of data needed to learn efficient systems are not always available for all languages. The use of multilingual systems is one solution to this problem. Multilingual machine translation systems make it possible to translate several languages within the same system. They allow languages with little data to be learned alongside languages with more data, thus improving the performance of the translation system. This thesis focuses on multilingual machine translation approaches that improve performance for languages with limited data. I have worked on several multilingual translation approaches based on different transfer techniques between languages. The approaches proposed, together with additional analyses, have revealed the impact of the criteria relevant for transfer. They also show the importance, sometimes neglected, of the balance of languages within multilingual approaches.
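The abstract does not detail the transfer techniques used. One widely used way to share a single translation system across several languages (an illustrative sketch, not necessarily the thesis's own method) is to mark each source sentence with a token naming its target language, so that one network learns all translation directions from mixed batches:

```python
# Illustrative sketch: prepend a target-language tag to each source
# sentence so a single model can be trained on several language pairs.
# The tag format "<2xx>" and the toy corpus are assumptions.
def tag_source(src_tokens, target_lang):
    """Prepend a target-language token, e.g. '<2fr>', to the source."""
    return [f"<2{target_lang}>"] + src_tokens

corpus = [
    (["hello", "world"], "fr"),
    (["guten", "tag"], "en"),
]
tagged = [tag_source(src, tgt) for src, tgt in corpus]
# Low-resource pairs are then mixed into the same training batches as
# high-resource pairs, which is exactly where the balance issue the
# abstract mentions arises.
```

With this setup, the proportion of each language in the batches becomes a tunable quantity, which makes the "balance of languages" a concrete training hyperparameter.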
Kodelja, Bonan Dorian. "Prise en compte du contexte inter-phrastique pour l'extraction d'événements supervisée". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS005.
The extraction of structured information from a document is one of the main tasks of natural language processing (NLP). This extraction usually consists of three steps: named entity recognition, relation extraction, and event extraction. The last step is considered the most challenging. The notion of event covers a broad range of phenomena, each characterized by a varying number of roles. Event extraction therefore consists in detecting the occurrence of an event and then determining its arguments, that is, the entities filling its specific roles. These two steps are usually performed one after the other; the first revolves around detecting triggers indicating the occurrence of events. The current best approaches, based on neural networks, focus on the direct neighborhood of the target word in the sentence, and information in the rest of the document is usually ignored. This thesis presents different approaches aiming at exploiting this document-level context. We begin by reproducing a state-of-the-art convolutional neural network and analyzing some of its parameters. We then present an experiment showing that, despite its good performance, our model exploits only a narrow context at the intra-sentential level. Subsequently, we present two methods to generate a representation of the inter-sentential context and integrate it into a neural network operating on an intra-sentential context. The first contribution consists in producing a task-specific representation of the inter-sentential context through the aggregation of the predictions of a first intra-sentential model. This representation is then integrated into a second model, allowing it to use the document-level distribution of events to improve its performance.
We also show that this task-specific representation is better than an existing generic representation of the inter-sentential context. Our second contribution, in response to the limitations of the first, allows for the dynamic generation of a specific context for each target word. This method yields the best performance for a single model on multiple datasets. Finally, we take a different tack on the exploitation of the inter-sentential context: we model the dependencies between multiple event instances inside a document more directly in order to produce a joint prediction. To do so, we use the Probabilistic Soft Logic (PSL) framework, which makes it possible to model such dependencies through logic formulas.
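The first contribution described above, aggregating a first model's sentence-level predictions into a document-level context vector, can be sketched as follows (the event-type inventory and the choice of a normalized count vector are illustrative assumptions, not the thesis's exact design):

```python
import numpy as np

# Hypothetical event-type inventory; the idea follows the abstract's
# first contribution: re-use the predictions of a first intra-sentential
# model as a document-level context vector for a second model.
EVENT_TYPES = ["Attack", "Transport", "Meet", "None"]

def document_context(sentence_predictions):
    """Aggregate per-sentence predicted event types into a normalized
    document-level distribution over event types."""
    counts = np.zeros(len(EVENT_TYPES))
    for label in sentence_predictions:
        counts[EVENT_TYPES.index(label)] += 1
    return counts / max(counts.sum(), 1)

preds = ["Attack", "None", "Attack", "Meet"]   # first model's outputs
ctx = document_context(preds)                  # concatenated to the second
                                               # model's word features
```

The second model then sees, for every target word, both its local sentence window and this document-level distribution of events.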
Ramachandra, Rao Sanjay Kamath. "Question Answering with Hybrid Data and Models". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS024.
Question answering is a discipline that lies between the natural language processing and information retrieval domains. The emergence of deep learning approaches in several fields of research, such as computer vision, natural language processing, and speech recognition, has led to the rise of end-to-end models. In the context of the GoASQ project, we investigate, compare, and combine different approaches for answering questions formulated in natural language over textual data, on open-domain and biomedical-domain data. The thesis mainly focuses on 1) building models for small-scale and large-scale datasets, and 2) leveraging structured and semantic information in question answering models. Hybrid data, in our research context, is the fusion of knowledge from free text, ontologies, entity information, etc., applied to free-text question answering. The current state-of-the-art models for question answering are deep learning models. In order to facilitate using them on small-scale, closed-domain datasets, we propose to use domain adaptation. We model the BIOASQ biomedical question answering task dataset as two different QA tasks and show, by comparing experimental results, that the open-domain question answering formulation suits it better than the reading comprehension one. We pre-train the reading comprehension model on different datasets to show the variability in performance when these models are adapted to the biomedical domain. We find that one particular dataset (SQuAD v2.0) performs best for single-dataset pre-training, and that a combination of four reading comprehension datasets performs best for biomedical domain adaptation. We perform some of the above experiments using large-scale pre-trained language models such as BERT, fine-tuned for the question answering task. The performance varies with the type of data used to pre-train BERT.
For BERT pre-training on the language modelling task, we find the biomedical-data-trained BIOBERT to be the best choice for biomedical QA. Since deep learning models tend to function in an end-to-end fashion, semantic and structured information coming from expert-annotated sources is not explicitly used. We highlight the necessity of using lexical and expected answer types in open-domain and biomedical-domain question answering by performing several verification experiments. These types are used to highlight entities in two QA tasks, which yields improvements when using entity embeddings based on the answer-type annotations. We manually annotated an answer-variant dataset for BIOASQ and show the importance of training a QA model with the answer variants present in the paragraphs. Our hypothesis is that the results obtained from deep learning models can be further improved using semantic features and collective features from the different paragraphs retrieved for a question. We propose to use ranking models based on binary classification to better choose the Top-1 prediction among the Top-K predictions using these features, leading to a hybrid model that outperforms state-of-the-art results on several datasets. We experiment with several overall open-domain question answering models on QA sub-task datasets built for reading comprehension and answer sentence selection. We show the difference in performance when these are modelled as an overall QA task and highlight the wide gap that remains in building end-to-end models for the overall question answering task.
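The Top-K reranking idea can be sketched minimally: each candidate answer gets a feature vector, a binary classifier scores "is this the correct answer?", and the highest-scoring candidate becomes the new Top-1. The features, weights, and toy candidates below are assumptions for illustration; the thesis's actual features include semantic and cross-paragraph signals:

```python
import numpy as np

# Hedged sketch of reranking via binary classification: logistic scoring
# of Top-K candidates, keeping the most probable one as Top-1.
# Feature names and weight values are hypothetical.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rerank(candidates, weights, bias=0.0):
    """candidates: list of (answer, feature_vector); returns best answer."""
    scores = [sigmoid(np.dot(feats, weights) + bias)
              for _, feats in candidates]
    return candidates[int(np.argmax(scores))][0]

# Toy example with hypothetical features [reader_score, type_match].
cands = [("Paris", np.array([0.9, 1.0])),
         ("Lyon",  np.array([0.4, 0.0]))]
best = rerank(cands, weights=np.array([1.0, 2.0]))
```

In practice the weights would be learned on held-out (candidate, is-correct) pairs, making the reranker a thin hybrid layer on top of the end-to-end reader.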
Janod, Killian. "La représentation des documents par réseaux de neurones pour la compréhension de documents parlés". Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0222/document.
Applications of spoken language understanding aim to extract relevant items of meaning from the spoken signal. There are two distinct types of spoken language understanding: understanding of human/human dialogues and understanding of human/machine dialogues. Depending on the type of conversation, the structure of the dialogues and the goal of the understanding process vary. In both cases, however, automatic systems usually include a speech recognition step to generate a textual transcript of the spoken signal. Speech recognition systems in adverse conditions, even the most advanced ones, produce erroneous or partly erroneous transcripts. These errors can be explained by the presence of information of various natures and functions, such as speaker and ambience specificities, and they can have a significant adverse impact on the performance of the understanding process. The first contribution of this thesis shows that using deep autoencoders produces a more abstract latent representation of the transcript. This latent representation allows the spoken language understanding system to be more robust to automatic transcription mistakes. In the second part, we propose two different approaches to generate more robust representations by combining multiple views of a given dialogue in order to improve the results of the spoken language understanding system. The first approach combines multiple thematic spaces to produce a better representation. The second introduces new autoencoder architectures that add supervision to denoising autoencoders. These contributions show that such architectures reduce the difference in performance between a spoken language understanding system using automatic transcripts and one using manual transcripts.
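The denoising-autoencoder idea underlying these contributions can be sketched in a few lines: reconstruct a clean document vector from a corrupted one, so the latent layer learns a representation that abstracts away the corruption, which mirrors abstracting away ASR errors. Dimensions, the Gaussian corruption model, and the training setup below are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal denoising-autoencoder sketch (illustrative, not the thesis code):
# one tanh hidden layer, trained to reconstruct the clean input from a
# noisy version of it.
n_in, n_hid = 20, 8
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(x):
    h = np.tanh(x @ W1 + b1)          # latent representation
    return h, h @ W2 + b2             # linear reconstruction

X = rng.normal(0, 1, (64, n_in))      # stand-in for document vectors
loss0 = ((forward(X)[1] - X) ** 2).mean()

lr = 0.05
for _ in range(200):
    noisy = X + rng.normal(0, 0.3, X.shape)    # corrupted "transcripts"
    h, out = forward(noisy)
    err = out - X                              # target is the clean input
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)           # backprop through tanh
    gW1 = noisy.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

loss1 = ((forward(X)[1] - X) ** 2).mean()      # drops below loss0
```

The thesis's supervised variant additionally injects label information into the denoising objective, so the latent space is shaped by the understanding task and not by reconstruction alone.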
Petit, Alban. "Structured prediction methods for semantic parsing". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG002.
Semantic parsing is the task of mapping a natural language utterance to a formal representation that can be manipulated by a computer program. It is a major task in natural language processing, with several applications including the development of question answering systems and code generation, among others. In recent years, neural approaches, and particularly sequence-to-sequence architectures, have demonstrated strong performance on this task. However, several works have put forward the limitations of neural parsers on out-of-distribution examples. In particular, they fail when compositional generalization is required. It is thus essential to develop parsers that exhibit better compositional abilities. The representation of the semantic content is another concern when tackling semantic parsing. As different syntactic structures can be used to represent the same semantic content, one should focus on structures that can both accurately represent the semantic content and align well with natural language. In that regard, this thesis relies on graph-based representations for semantic parsing and focuses on two tasks. The first deals with the training of graph-based semantic parsers, which need to learn a correspondence between the parts of the semantic graph and the natural language utterance. As this information is usually absent from the training data, we propose training algorithms that treat this correspondence as a latent variable. The second task focuses on improving the compositional abilities of graph-based semantic parsers in two different settings. Note that in graph prediction, the traditional pipeline is to first predict the nodes and then the arcs of the graph. In the first setting, we assume that the graphs to be predicted are trees and propose an optimization algorithm, based on constraint smoothing and conditional gradient, that predicts the entire graph jointly.
In the second setting, we make no assumption about the nature of the semantic graphs. In that case, we propose to introduce an intermediate supertagging step in the inference pipeline that constrains the arc-prediction step. In both settings, our contributions can be viewed as introducing additional local constraints to ensure the well-formedness of the overall prediction. Experimentally, our contributions significantly improve the compositional abilities of graph-based semantic parsers and outperform comparable baselines on several datasets designed to evaluate compositional generalization.
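The way a tagging step can constrain arc prediction can be sketched very simply: arcs incompatible with the predicted tags are masked out of the score matrix before decoding. The nodes, scores, and the compatibility mask below are toy assumptions; the thesis's supertags encode much richer local constraints:

```python
import numpy as np

# Illustrative sketch of constrained arc prediction: a mask derived from
# per-node tags rules out ill-formed arcs before the argmax decoding.
# Node names, scores, and the "allowed" mask are hypothetical.
nodes = ["give", "book", "john"]
arc_scores = np.array([          # arc_scores[i, j]: score of arc i -> j
    [-np.inf, 2.0, 1.5],
    [0.3, -np.inf, 0.2],
    [0.1, 0.4, -np.inf],
])
# allowed[i, j]: can node j take an incoming arc from node i?
allowed = np.array([[False, True,  True],
                    [False, False, False],
                    [False, False, False]])

masked = np.where(allowed, arc_scores, -np.inf)
# Pick the best head for each dependent (columns 1 and 2 here).
heads = {nodes[j]: nodes[int(masked[:, j].argmax())] for j in (1, 2)}
```

Because the mask is purely local (one constraint per arc), decoding stays as cheap as the unconstrained pipeline while ruling out ill-formed outputs.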
Ngo, Ho Anh Khoa. "Generative Probabilistic Alignment Models for Words and Subwords : a Systematic Exploration of the Limits and Potentials of Neural Parametrizations". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG014.
Alignment consists of establishing a mapping between units in a bitext, which combines a text in a source language and its translation in a target language. Alignments can be computed at several levels: between documents, between sentences, between phrases, between words, and even between smaller units when one of the languages is morphologically complex, which requires aligning fragments of words (morphemes). Alignments can also be considered between more complex linguistic structures such as trees or graphs. This is a complex, under-specified task that humans accomplish with difficulty. Its automation is a notoriously difficult problem in natural language processing, historically associated with the first probabilistic word-based translation models. The design of new models for natural language processing, based on distributed representations computed by neural networks, allows us to question and revisit the computation of these alignments. This research project therefore aims to comprehensively understand the limitations of existing statistical alignment models and to design neural models that can be learned without supervision, in order to overcome these drawbacks and improve the state of the art in alignment accuracy.
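The "first probabilistic word-based translation models" mentioned above are the IBM models; the simplest, IBM Model 1, can be trained with expectation-maximization in a few lines. The toy bitext and iteration count below are illustrative; `t[f][e]` estimates the translation probability P(f | e):

```python
from collections import defaultdict

# Sketch of IBM Model 1 trained with EM on a toy bitext.
bitext = [("the house".split(), "la maison".split()),
          ("the book".split(), "le livre".split()),
          ("a house".split(), "une maison".split())]

t = defaultdict(lambda: defaultdict(lambda: 1.0))  # flat initialization

for _ in range(10):                                # EM iterations
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for es, fs in bitext:                          # E-step: expected counts
        for f in fs:
            norm = sum(t[f][e] for e in es)
            for e in es:
                c = t[f][e] / norm
                count[f][e] += c
                total[e] += c
    for f in count:                                # M-step: re-estimate t
        for e in count[f]:
            t[f][e] = count[f][e] / total[e]

# After EM, "maison" aligns to "house" rather than to "the" or "a".
```

The thesis replaces such count-based parametrizations with neural ones, but the latent-alignment structure being learned is the same.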
Parcollet, Titouan. "Quaternion neural networks A survey of quaternion neural networks - Chapter 2 Real to H-space Autoencoders for Theme Identification in Telephone Conversations - Chapter 7". Thesis, Avignon, 2019. http://www.theses.fr/2019AVIG0233.
In recent years, deep learning has become the leading approach to modern artificial intelligence (AI). The sharp reduction in the processing time required to train AI models, along with the growing amount of available data, has made deep neural networks (DNNs) the strongest solution for solving complex real-world problems. However, a major challenge for artificial neural architectures lies in better accounting for the high dimensionality of the data. To alleviate this issue, neural networks (NNs) based on complex and hypercomplex algebras have been developed. The natural multidimensionality of the data is elegantly embedded within the complex and hypercomplex neurons composing the model. In particular, quaternion neural networks (QNNs) have been proposed to deal with up to four-dimensional features, based on the quaternion representation of rotations and orientations. Unfortunately, and conversely to complex-valued neural networks, which are nowadays known as a strong alternative to real-valued neural networks, QNNs suffer from numerous limitations that are carefully addressed in the different parts of this thesis. The thesis consists of three parts that gradually introduce the missing concepts of QNNs, to make them a strong alternative to real-valued NNs. The first part reviews previous findings on quaternion numbers and quaternion neural networks, to define the context and a solid foundation for building elaborate QNNs. The second part introduces state-of-the-art quaternion neural networks for a fair comparison with real-valued neural architectures. More precisely, earlier QNNs were limited by their simple architectures, which were mostly composed of a single, shallow hidden layer. In this part, we propose to bridge the gap between quaternion and real-valued models by presenting different quaternion architectures. First, basic paradigms such as autoencoders and deep fully-connected neural networks are introduced.
Then, more elaborate convolutional and recurrent neural networks are extended to the quaternion domain. Experiments comparing QNNs with equivalent real-valued NNs have been conducted on real-world tasks across various domains, including computer vision, spoken language understanding, and speech recognition. QNNs improve performance while reducing the number of neural parameters needed compared to real-valued neural networks. QNNs are then extended to unconventional settings. In a conventional QNN scenario, input features are manually segmented into three or four components, enabling further quaternion processing. Unfortunately, there is no evidence that such a manual segmentation is the representation best suited to solving the considered task. Moreover, manual segmentation drastically reduces the field of application of QNNs to four-dimensional use cases. The third part therefore introduces a supervised and an unsupervised model to extract meaningful and disentangled quaternion input features from any real-valued input signal, enabling the use of QNNs regardless of the dimensionality of the considered task. Experiments conducted on speech recognition and document classification show that the proposed approaches outperform traditional quaternion features.
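The operation at the core of every QNN layer is the Hamilton product, which multiplies a quaternion input by a quaternion weight so that the four components of a feature are processed jointly rather than as four independent real values. A minimal sketch (the layer structure around it is simplified; real QNN layers batch this over many neurons):

```python
import numpy as np

# Hamilton product of two quaternions q = a + bi + cj + dk,
# represented as arrays (a, b, c, d).
def hamilton(q, p):
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k
    ])

# A quaternion weight is reused across all four input components, which
# is why QNNs need fewer real parameters than equivalent real-valued nets.
x = np.array([1.0, 0.0, 0.0, 0.0])   # identity quaternion input
w = np.array([0.5, 0.5, 0.5, 0.5])
out = hamilton(x, w)
```

Because the product is non-commutative (i·j = k but j·i = −k), the layer captures interactions between the components that a real-valued dense layer of the same size cannot.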
Tafforeau, Jérémie. "Modèle joint pour le traitement automatique de la langue : perspectives au travers des réseaux de neurones". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0430/document.
NLP researchers have identified different levels of linguistic analysis. This has led to a hierarchical division of the various tasks performed in order to analyze a text. The traditional approach considers task-specific models that are subsequently arranged in cascade within processing chains (pipelines). This approach has a number of limitations: the empirical selection of model features, the accumulation of errors along the pipeline, and the lack of robustness to domain changes. These limitations lead to particularly high performance losses in the case of non-canonical language with limited available data, such as transcriptions of telephone conversations. Disfluencies and speech-specific syntactic constructions, as well as transcription errors from automatic speech recognition systems, cause a significant drop in performance. It is therefore necessary to develop robust and flexible systems. We propose to perform syntactic and semantic analysis with a multitask deep neural network model that takes into account variations of domain and/or language register within the data.
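The joint-model alternative to a pipeline can be sketched as one shared encoder feeding several task-specific output layers, so no task consumes another task's possibly erroneous output. The dimensions and the two hypothetical tag sets below are illustrative assumptions, not the thesis's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multitask sketch: a shared representation feeds two task heads
# (hypothetical syntactic and semantic tag sets) instead of cascading
# one model's output into the next.
n_in, n_shared = 50, 16
W_shared = rng.normal(0, 0.1, (n_in, n_shared))
W_syntax = rng.normal(0, 0.1, (n_shared, 12))   # e.g. 12 POS tags
W_semant = rng.normal(0, 0.1, (n_shared, 30))   # e.g. 30 semantic labels

def softmax(z):
    z = z - z.max()                              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(word_vec):
    h = np.tanh(word_vec @ W_shared)             # shared encoder
    return softmax(h @ W_syntax), softmax(h @ W_semant)

pos_probs, sem_probs = predict(rng.normal(0, 1, n_in))
```

Training sums the losses of both heads, so gradients from each task shape the shared encoder, which is what gives joint models their robustness to the error accumulation the abstract criticizes in pipelines.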
Piat, Guilhem Xavier. "Incorporating expert knowledge in deep neural networks for domain adaptation in natural language processing". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG087.
Current state-of-the-art language models (LMs) are able to converse, summarize, translate, solve novel problems, reason, and use abstract concepts at a near-human level. However, to achieve such abilities, and in particular to acquire "common sense" and domain-specific knowledge, they require vast amounts of text, which are not available in all languages or domains. Additionally, their computational requirements are out of reach for most organizations, limiting their potential for specificity and their applicability in the context of sensitive data. Knowledge graphs (KGs) are sources of structured knowledge which associate linguistic concepts through semantic relations. These graphs are sources of high-quality knowledge which pre-exist in a variety of otherwise low-resource domains, and they are denser in information than typical text. By allowing LMs to leverage these information structures, we could remove the burden of memorizing facts from LMs, reducing the amount of text and computation required to train them, and we could update their knowledge with little to no additional training by updating the KGs, thereby broadening their scope of applicability and making them easier to democratize. Various approaches have succeeded in improving Transformer-based LMs using KGs. However, most of them unrealistically assume that the problem of entity linking (EL), i.e. determining which KG concepts are present in the text, is solved upstream. This thesis covers the limitations of handling EL as an upstream task. It goes on to examine the possibility of learning EL jointly with language modeling, and finds that while this is a viable strategy, it does little to decrease the LM's reliance on in-domain text. Lastly, this thesis covers the strategy of using KGs to generate text in order to leverage LMs' linguistic abilities, and finds that even naïve implementations of this approach can yield measurable improvements on in-domain language processing.
Books on the topic "Réseaux neuronaux (informatique) – Traitement automatique du langage naturel"
Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge, Mass.: MIT Press, 1993.
Artificial Vision and Language Processing for Robotics: Create End-To-end Systems That Can Power Robots with Artificial Vision and Deep Learning Techniques. Packt Publishing, Limited, 2019.