Theses on the topic "Compréhension automatique"
Consult the top 50 dissertations for your research on the topic "Compréhension automatique".
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Jamoussi, Selma. "Méthodes statistiques pour la compréhension automatique de la parole". Nancy 1, 2004. http://www.theses.fr/2004NAN10170.
The work presented in this manuscript aims to build an understanding system for spontaneous speech. We are interested in domain-specific systems for the oral querying of databases. Our work is based on a statistical approach that treats the understanding problem as a translation process between words and semantic concepts. The idea we defend in this thesis is that significant semantic concepts can be obtained using clustering methods. We start by defining semantic measures to quantify the semantic relations between words. Then, we use triggers to build up concepts automatically. To improve these concepts, we test two well-known methods: the K-means algorithm and Kohonen maps. We also propose the use of Oja and Sanger neural networks; the latter proved ineffective in our case. Lastly, we use a Bayesian network designed for clustering, called AutoClass, which provides clear and significant concepts.
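By way of illustration, the following minimal Python sketch (toy corpus and parameters invented here, not the thesis's actual implementation) shows the general idea of clustering trigger-based word vectors into concepts with K-means, using scikit-learn:

```python
# Words are represented by vectors of trigger scores (PMI with co-occurring
# words) and grouped with K-means; all data below is a toy example.
import numpy as np
from collections import Counter
from itertools import combinations
from sklearn.cluster import KMeans

corpus = [
    "show me flights from paris to london",
    "list flights from lyon to nice",
    "what hotels are near the airport",
    "book a hotel room in paris",
]
sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})

uni = Counter(w for s in sentences for w in s)
pair = Counter(p for s in sentences for p in combinations(sorted(set(s)), 2))
n = sum(uni.values())

def pmi(a, b):
    # trigger score ~ pointwise mutual information of co-occurring words
    c = pair.get(tuple(sorted((a, b))), 0)
    return 0.0 if c == 0 else np.log((c * n) / (uni[a] * uni[b]))

vectors = np.array([[pmi(w, c) for c in vocab] for w in vocab])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for k in range(3):
    print(f"concept {k}:", [w for w, l in zip(vocab, labels) if l == k])
```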
Brison, Eric. "Stratégies de compréhension dans l'interaction multimodale". Toulouse 3, 1997. http://www.theses.fr/1997TOU30075.
Pham, Thi Nhung. "Résolution des anaphores nominales pour la compréhension automatique des textes". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD049/document.
In order to facilitate the interpretation of texts, this thesis is devoted to the development of a system that identifies and resolves indirect nominal anaphora and associative anaphora. Resolution of indirect nominal anaphora is based on calculating salience weights of candidate antecedents in order to associate these antecedents with the identified anaphoric expressions. It is handled by two different methods based on a linguistic approach: the first uses lexical and morphological parameters; the second uses morphological and syntactical parameters. The resolution of associative anaphora is based on syntactical and semantic parameters. The results obtained are encouraging: 90.6% for indirect anaphora resolution with the first method, 75.7% with the second method, and 68.7% for associative anaphora resolution. These results show the contribution of each parameter used and the utility of this system in the automatic interpretation of texts.
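As a purely illustrative sketch of the salience idea (the cue names and weights below are invented, not taken from the thesis), candidate antecedents can be scored and ranked like this:

```python
# Each candidate antecedent accumulates weights for simple lexical and
# morphological cues; the highest-scoring candidate is selected.
def salience(candidate, anaphor):
    score = 0.0
    if candidate["head"] == anaphor["head"]:
        score += 2.0                       # lexical identity of heads
    if candidate["gender"] == anaphor["gender"]:
        score += 1.0                       # morphological agreement
    if candidate["number"] == anaphor["number"]:
        score += 1.0
    score += max(0.0, 1.0 - 0.2 * candidate["distance"])  # recency bonus
    return score

candidates = [
    {"head": "voiture", "gender": "f", "number": "sg", "distance": 1},
    {"head": "moteur",  "gender": "m", "number": "sg", "distance": 3},
]
anaphor = {"head": "moteur", "gender": "m", "number": "sg"}
best = max(candidates, key=lambda c: salience(c, anaphor))
print(best["head"])  # -> moteur
```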
Goulian, Jerome. "Stratégie d'analyse détaillée pour la compréhension automatique robuste de la parole". Lorient, 2002. http://www.theses.fr/2002LORIS021.
This PhD thesis focuses on speech understanding in man-machine communication. We discuss how a speech understanding system can be made robust to spontaneous speech phenomena while achieving a detailed analysis of spoken French. We argue that a detailed linguistic analysis (covering both syntax and semantics) is essential for correctly processing spoken utterances, and is also a necessary condition for developing applications that are not entirely dedicated to one very specific task but offer sufficient genericity. The system presented (ROMUS) implements speech understanding as a two-stage process. The first stage performs finite-state shallow parsing, segmenting the utterance into basic units (chunks adapted to speech); this stage is generic and is motivated by the regularities observed in spoken French. The second stage, a Link Grammar parser, looks for inter-chunk dependencies in order to build a rich representation of the semantic structure of the utterance.
Sileo, Damien. "Représentations sémantiques et discursives pour la compréhension automatique du langage naturel". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30201.
Computational models for automatic text understanding have attracted a lot of interest due to unusual performance gains over the last few years, some of them leading to super-human scores. This success has reignited grand claims about artificial intelligence, such as universal sentence representation. In this thesis, we question these claims from two complementary angles. First, are neural networks and vector representations expressive enough to process text and perform a wide array of complex tasks? We present currently used computational neural models and their training techniques. We propose a criterion for expressive compositions and show that a popular evaluation suite and sentence encoders (SentEval/InferSent) have an expressivity bottleneck; minor changes can yield new compositions that are expressive and insightful, but might not be sufficient, which may justify the paradigm shift towards newer Transformer-based models. Second, we discuss the question of universality in sentence representation: what actually lies behind these universality claims? We delineate a few theories of meaning, and in a subsequent part of the thesis, we argue that semantics (unsituated, literal content), as opposed to pragmatics (meaning as use), is preponderant in the current training and evaluation data of natural language understanding models. To alleviate that problem, we show that discourse marker prediction (classification of hidden discourse markers between sentences) can be seen as a pragmatics-centered training signal for text understanding. We build a new discourse marker prediction dataset that yields significantly better results than previous work. In addition, we propose a new discourse-based evaluation suite that could encourage researchers to take pragmatic considerations into account when evaluating text understanding models.
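The discourse marker prediction signal can be illustrated with a small, hypothetical dataset builder; the marker list here is a toy subset, not the inventory used in the thesis:

```python
# Adjacent sentence pairs joined by a connective become training examples
# in which the (hidden) marker is the classification label.
import re

MARKERS = ["however", "therefore", "because", "for example", "meanwhile"]

def make_examples(sentences):
    examples = []
    for s1, s2 in zip(sentences[:-1], sentences[1:]):
        for m in MARKERS:
            # marker at the start of the second sentence, e.g. "However, ..."
            match = re.match(rf"{m},?\s+(.*)", s2, flags=re.IGNORECASE)
            if match:
                examples.append({"s1": s1, "s2": match.group(1), "label": m})
    return examples

doc = [
    "The model performs well on short inputs.",
    "However, accuracy drops sharply on long documents.",
    "Therefore, we truncate inputs during training.",
]
for ex in make_examples(doc):
    print(ex["label"], "|", ex["s1"], "->", ex["s2"])
```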
Bueno, Steve. "L'activation automatique de la mémoire sémantique". Aix-Marseille 1, 2002. http://www.theses.fr/2002AIX10068.
Mouret, Pascal. "Etude et intégration de contraintes contextuelles dans la compréhension automatique du français". Aix-Marseille 2, 2000. http://www.theses.fr/2000AIX22044.
Delecraz, Sébastien. "Approches jointes texte/image pour la compréhension multimodale de documents". Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0634.
The human faculties of understanding are essentially multimodal. To understand the world around them, human beings fuse the information coming from all of their sensory receptors. Most documents used in automatic information processing contain multimodal information, for example text and image in textual documents or image and sound in video documents; however, the processing methods used are most often monomodal. The aim of this thesis is to propose joint processes applying mainly to text and image for the processing of multimodal documents, through two studies: one on multimodal fusion for speaker role recognition in television broadcasts, the other on the complementarity of modalities for a linguistic analysis task on corpora of captioned images. In the first part, we are interested in the analysis of audiovisual documents from news television channels and propose an approach that uses deep neural networks for the representation and fusion of modalities. In the second part, we are interested in approaches that use several sources of multimodal information for a monomodal natural language processing task in order to study their complementarity, and we propose a complete system for correcting prepositional attachments using visual information, trained on a multimodal corpus of captioned images.
Camelin, Nathalie. "Stratégies robustes de compréhension de la parole basées sur des méthodes de classification automatique". Avignon, 2007. http://www.theses.fr/2007AVIG0149.
The work presented in this PhD thesis deals with automatic Spoken Language Understanding (SLU) in multi-speaker applications that accept spontaneous speech. The study consists in integrating automatic classification methods into the speech decoding and understanding processes. My work consists in adapting methods, which have already shown good performance on text, to the particularities of the outputs of an Automatic Speech Recognition (ASR) system. The main difficulty with this type of data is the uncertainty in the input parameters of the classifiers. Among existing automatic classification methods, we chose three. The first is based on Semantic Classification Trees; the two others, considered among the best-performing in the machine learning community, are large-margin methods based on boosting and support vector machines. A sequence labelling method, Conditional Random Fields (CRF), is also studied and used. Two application frameworks are investigated:
- PlanResto, a tourism human-computer dialogue application. It enables users to ask for information about a restaurant in Paris in natural language. The real-time speech understanding process consists in building a database query. Within this framework, the consensus among the different classifiers, considered as semantic experts, is used as a confidence measure.
- SCOrange, a spoken telephone survey corpus. The purpose is to collect messages from mobile users expressing their opinion about the customer service. The off-line speech understanding process consists in estimating the proportions of opinions about a topic and a polarity. The classifiers extract users' opinions in a strategy that can reliably evaluate the distribution of opinions and their temporal evolution.
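The consensus idea can be sketched as follows; labels and the acceptance threshold are placeholders, not the thesis's actual SCT/boosting/SVM setup:

```python
# Several classifiers act as "semantic experts"; the agreement rate among
# their predictions is used as a confidence measure for one utterance.
from collections import Counter

def consensus(predictions):
    """Return (majority label, agreement ratio) for one utterance."""
    votes = Counter(predictions)
    label, count = votes.most_common(1)[0]
    return label, count / len(predictions)

# predictions of three experts for two utterances
utt1 = ["restaurant_info", "restaurant_info", "restaurant_info"]
utt2 = ["restaurant_info", "opening_hours", "restaurant_info"]

for preds in (utt1, utt2):
    label, conf = consensus(preds)
    accept = conf >= 2 / 3           # arbitrary acceptance threshold
    print(label, round(conf, 2), "accepted" if accept else "to clarify")
```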
Veloz, Guerrero Arturo. "Un système de compréhension de parole continue sur microprocesseur". Paris 11, 1985. http://www.theses.fr/1985PA112240.
This thesis describes the implementation of a speech understanding system on a microprocessor. The system is designed to accept continuous speech from one speaker and to work within the context of a limited task and small vocabularies. It uses phonetic recognition at the phonetic level and an optimal one-pass dynamic programming algorithm at the lexical and syntactic levels. The system includes an interactive program for defining grammars for a given task language, and an orthographic-to-phonetic translation program that takes into account some phonological variations of words.
Aguilar, Louche Nathalie. "La production automatique et délibérée des inférences de la conséquence des événements et des actions : expérimentations et simulations". Université de Provence. Faculté des lettres et sciences humaines (1969-2011), 1999. http://www.theses.fr/1999AIX10047.
Doré, Laurent. "Intégration de récit : Application à la compréhension de textes médicaux". Paris 13, 1992. http://www.theses.fr/1992PA132022.
Servan, Christophe. "Apprentissage automatique et compréhension dans le cadre d'un dialogue homme-machine téléphonique à initiative mixte". Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00591997.
Nazarenko, Adeline. "Compréhension du langage naturel : le problème de la causalité". Paris 13, 1994. http://www.theses.fr/1994PA132007.
Demko, Christophe. "Contribution à la gestion du contexte pour un système de compréhension automatique de la langue". Compiègne, 1992. http://www.theses.fr/1992COMPD542.
Kobus, Catherine. "Exploitation d'informations d'ordre sémantique pour la reconnaissance vocale dans le contexte d'une application de compréhension de la parole". Avignon, 2006. http://www.theses.fr/2006AVIG0141.
Diakité, Mohamed Lamine. "Relations entre les phrases : contribution à la représentation sémantique des textes pour la compréhension automatique du langage naturel". Dijon, 2005. http://www.theses.fr/2005DIJOS025.
The work described in this thesis presents an approach to the semantic representation of texts as a contribution to the automatic comprehension of natural language. The proposed approach is based on the observation that knowledge about the analyzed texts is needed in order to discover their meaning. We therefore propose a semi-automatic approach to knowledge acquisition from texts, guided by a hierarchy of entity classes organized in an ontology. Based on the principle of compositional semantics, we propose to identify relations between the different entities of the text. In particular, we address the problem of pronominal anaphora, for which we propose a resolution method.
Laurent, Antoine. "Auto-adaptation et reconnaissance automatique de la parole". Le Mans, 2010. http://cyberdoc.univ-lemans.fr/theses/2010/2010LEMA1009.pdf.
The first part of this thesis presents a computer-assisted speech transcription method. Every time the user corrects a word in the automatic transcription, the correction is immediately taken into account to re-evaluate the transcription of the following words. The latter is obtained by reordering the confusion network hypotheses generated by the ASR system. The reordering method yields an absolute gain of 3.4 points (from 19.2% to 15.8%) in word stroke ratio (WSR) on the ESTER 2 corpus. In order to decrease the error rate on proper nouns, an acoustic-based phonetic transcription method is also proposed. Its use together with SMT [Laurent 2009] allows a significant reduction in word error rate (WER) and proper noun error rate (PNER).
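A toy sketch of this interactive correction loop might look like the following; the confusion network and bigram scores are fabricated for the example, whereas the real system works with full ASR posteriors:

```python
# The transcription is read off a confusion network (one ranked word slot
# per position); when the user fixes a word, later slots are re-scored
# with a bigram model before being re-read.
confusion_net = [
    [("the", 0.9), ("a", 0.1)],
    [("male", 0.6), ("mail", 0.4)],
    [("arrived", 0.7), ("arrives", 0.3)],
]
bigram = {("mail", "arrived"): 0.8, ("male", "arrived"): 0.1}

def best_path(net, fixed=None):
    """fixed: {position: word} supplied by the user's corrections."""
    out, prev = [], None
    for i, slot in enumerate(net):
        if fixed and i in fixed:
            word = fixed[i]
        else:
            # posterior score times bigram score with the previous word
            word = max(slot, key=lambda wp: wp[1] * bigram.get((prev, wp[0]), 0.5))[0]
        out.append(word)
        prev = word
    return out

print(best_path(confusion_net))                     # initial hypothesis
print(best_path(confusion_net, fixed={1: "mail"}))  # after a user correction
```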
Raymond, Christian. "Décodage conceptuel : co-articulation des processus de transcription et compréhension dans les systèmes de dialogue". Avignon, 2005. http://www.theses.fr/2005AVIG0140.
We propose in this document a Spoken Language Understanding (SLU) module. First we introduce a conceptual language model for the detection and extraction of basic semantic concepts from a speech signal. The decoding process, described with a simple example, extracts from a word lattice generated by an Automatic Speech Recognition (ASR) module a structured n-best list of interpretations (sets of concepts). This list contains all the interpretations that can be found in the word lattice, with their posterior probabilities, and the n best values for each interpretation. We then introduce confidence measures used to estimate the quality of the decoding result. Finally, we describe the integration of the proposed SLU module into a dialogue application, with a decision strategy based on the confidence measures introduced before.
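A schematic version of the lattice-to-interpretations step, on a toy lattice and an invented concept lexicon (not the decoding machinery of the thesis), could read:

```python
# All lattice paths are enumerated, each word sequence is mapped to its
# concept/value sequence, and the posteriors of paths sharing the same
# interpretation are summed to rank the n-best interpretations.
from collections import defaultdict

# word lattice: node -> list of (next_node, word, arc probability)
lattice = {
    0: [(1, "two", 0.6), (1, "ten", 0.4)],
    1: [(2, "tickets", 1.0)],
    2: [(3, "to", 1.0)],
    3: [(4, "paris", 0.7), (4, "london", 0.3)],
}
concept_of = {"two": "NUMBER", "ten": "NUMBER",
              "paris": "CITY", "london": "CITY"}

def paths(node, prob=1.0, words=()):
    """Enumerate all (word sequence, probability) paths of the lattice."""
    if node not in lattice:          # final node reached
        yield words, prob
        return
    for nxt, word, p in lattice[node]:
        yield from paths(nxt, prob * p, words + (word,))

interpretations = defaultdict(float)
for words, p in paths(0):
    interp = tuple(f"{concept_of[w]}={w}" for w in words if w in concept_of)
    interpretations[interp] += p     # posterior of the interpretation

for interp, p in sorted(interpretations.items(), key=lambda kv: -kv[1]):
    print(interp, round(p, 3))
```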
Gotab, Pierre. "Classification automatique pour la compréhension de la parole : vers des systèmes semi-supervisés et auto-évolutifs". Phd thesis, Université d'Avignon, 2012. http://tel.archives-ouvertes.fr/tel-00858980.
Krit, Hatem. "Locadelane : un langage objet d'aide à la compréhension automatique du discours exprimé en langage naturel et écrit". Toulouse 3, 1990. http://www.theses.fr/1990TOU30008.
Desot, Thierry. "Apport des modèles neuronaux de bout-en-bout pour la compréhension automatique de la parole dans l'habitat intelligent". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM069.
Smart speakers offer the possibility of interacting with smart home systems and make it possible to issue a range of requests about various subjects. They are the first ambient voice interfaces commonly available in home environments, but they are often only capable of handling voice commands with a simple syntax, in short utterances. In smart homes that promote home care for senior adults, such interfaces support inhabitants in everyday situations, improving their quality of life and providing assistance in situations of distress. The design of these smart homes mainly focuses on the safety and comfort of their inhabitants; as a result, research projects frequently concentrate on human activity detection and pay less attention to the communicative aspects of smart home design. Consequently, speech corpora specific to the home automation field are scarce, in particular for languages other than English, even though the availability of such corpora is crucial for developing interactive communication systems between the smart home and its inhabitants and could also contribute to a generation of smart speakers capable of extracting more complex voice commands. Part of our work therefore consisted in developing a corpus generator producing home-automation-specific voice commands, automatically annotated with intent and concept labels. The extraction of intents and concepts from these commands by a Spoken Language Understanding (SLU) system is necessary to provide the decision-making module with the information needed for their execution. In order to react to speech, the natural language understanding (NLU) module is typically preceded by an automatic speech recognition (ASR) module that converts speech into transcriptions. As several studies have shown, the interaction between ASR and NLU in a sequential SLU approach accumulates errors. One of the main motivations of our work is therefore the development of an end-to-end SLU module that extracts concepts and intents directly from speech. To achieve this goal, we first develop a sequential SLU baseline, in which a classic ASR method generates transcriptions that are passed to the NLU module, before developing an end-to-end SLU module. Both SLU systems were evaluated on a corpus recorded in the home automation domain. We investigate whether the prosodic information available to the end-to-end SLU system contributes to SLU performance, and we also compare the robustness of the two approaches on speech with more semantic and syntactic variation. This thesis was carried out in the context of the ANR VocADom project.
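A rough sketch of such a template-based generator (templates, slot values and label names invented here, not those of the VocADom corpus) might be:

```python
# Voice commands are produced from templates and automatically annotated
# with intent and concept labels.
import itertools
import random
import string

templates = [
    ("turn {state} the {device} in the {room}", "SET_DEVICE"),
    ("what is the {sensor} in the {room}", "GET_MEASURE"),
]
slots = {
    "state": ["on", "off"],
    "device": ["light", "radio"],
    "room": ["kitchen", "bedroom"],
    "sensor": ["temperature", "humidity"],
}

def slot_names(template):
    # field names appearing in the template, in order
    return [name for _, name, _, _ in string.Formatter().parse(template) if name]

def generate(template, intent):
    names = slot_names(template)
    for values in itertools.product(*(slots[n] for n in names)):
        concepts = dict(zip(names, values))
        yield {"text": template.format(**concepts),
               "intent": intent,
               "concepts": concepts}

random.seed(0)
corpus = [ex for tpl, intent in templates for ex in generate(tpl, intent)]
for ex in random.sample(corpus, 3):
    print(ex)
```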
Veilex, Florence. "Approche expérimentale des processus humains de compréhension en vue d'une indexation automatique des résumés scientifiques : application à un corpus de géologie". Grenoble 2, 1985. http://www.theses.fr/1985GRE2A005.
Le Tallec, Marc. "Compréhension de parole et détection des émotions pour robot compagnon". Thesis, Tours, 2012. http://www.theses.fr/2012TOUR4044.
Arias Aguilar, José Anibal. "Méthodes spectrales pour le traitement automatique de documents audio". Toulouse 3, 2008. http://thesesups.ups-tlse.fr/436/.
Disfluencies are a frequently occurring phenomenon in any spontaneous speech production; they consist of interruptions of the normal flow of speech. They have given rise to numerous studies in Natural Language Processing: indeed, their study and precise identification are essential, from both a theoretical and an applicative perspective. However, most research on the subject concerns everyday uses of language: small-talk dialogues, timetable requests, speeches, etc. But what about spontaneous speech produced in a constrained setting? To our knowledge, no study has been carried out in this context, yet it is known that using a specialty language for a given task leads to specific behaviours. Our thesis work is devoted to the linguistic and computational study of disfluencies in such a setting: air traffic control dialogues, which involve both pragmatic and linguistic constraints. We carry out an exhaustive study of disfluency phenomena in this context. We first conduct a fine-grained analysis of these phenomena; we then model them at an abstract level of representation, which yields the patterns corresponding to the different configurations observed. Finally, we propose an automatic processing methodology consisting of several algorithms that identify the different phenomena, even in the absence of explicit markers. It is integrated into an automatic speech processing system, and the methodology is validated on a corpus of 400 utterances.
Lemaçon, Audrey. "Développement d'outils bioinformatiques et de méthodologies d'apprentissage machine pour une meilleure compréhension des éléments génétiques sous-jacents à la susceptibilité au cancer du sein". Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/35418.
Breast cancer is one of the leading causes of death from cancer among Canadian women (about 1 in 8 Canadian women will develop breast cancer during their lifetime and 1 in 31 will die from the disease). Evidence suggests that most breast cancer cases develop in a small proportion of women with a genetic susceptibility to the disease. Since the personalized assessment of this risk is based on the premise that women can be divided into several groups according to their inherent genetic risk, it is essential to identify the actors responsible for this genetic susceptibility in order to offer at-risk women personalized preventive measures. Thus, since the discovery of the associated genes BRCA1 in 1994 and BRCA2 in 1995, tremendous efforts have been made to identify the genetic components underlying breast cancer risk, and many other deleterious mutations have been uncovered in susceptibility genes such as PTEN, PALB2 or CHEK2. Unfortunately, despite these efforts, the susceptibility genes/loci known to date only explain about half of the genetic risk associated with this disease. Acknowledging the challenges, many international groups have partnered in consortia such as the Breast Cancer Association Consortium (BCAC) or the Consortium of Investigators of Modifiers of BRCA1/2 (CIMBA) to pool their resources for the identification of what has been called the "missing heritability" of breast cancer. Several hypotheses have been formulated as to the sources of this missing heritability, and among them we explored two. First, we tested the hypothesis of many common low-penetrance genetic variants still to be discovered, through a large genome-wide association study conducted within the OncoArray Network. Second, we tested the hypothesis that rarer variants of higher penetrance could be discovered in the coding regions of the genome, by evaluating the predictive power of these variants with an innovative approach to exome data analysis. We were able to confirm the first hypothesis through the discovery of 65 new loci associated with overall breast cancer susceptibility. In addition, as these studies highlighted the need for tools to assist prioritization analysis, we developed two software packages to help prioritize human genetic variants. Finally, we developed a new multi-step methodology combining the analysis of genotypes and haplotypes in order to assess the predictive power of coding variants. This approach, taking advantage of the power of machine learning, enabled the identification of new credible coding markers (variants alone or combined into haplotypes) significantly associated with the phenotype. For the susceptibility loci as well as for the candidate genes identified during the exome data analysis, it will be essential to confirm their involvement and effect size on large external sample sets and then perform their functional characterization. If they are validated, their integration into current risk prediction tools could help promote early management and well-calibrated therapeutic interventions for at-risk women.
Laurent, Mario. "Recherche et développement du Logiciel Intelligent de Cartographie Inversée, pour l’aide à la compréhension de texte par un public dyslexique". Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAL016/document.
Children with language impairments, such as dyslexia, often face significant difficulties when learning to read and in any subsequent reading task. These difficulties tend to compromise the understanding of the texts they must read during their time at school, which entails learning difficulties and may lead to academic failure. Over the past fifteen years, general tools developed in the field of Natural Language Processing have been turned into specific tools that help with and compensate for the difficulties of language-impaired students. At the same time, the use of concept maps or heuristic maps to encourage dyslexic children to express their thoughts, or to retain certain knowledge, has become popular. This thesis surveys what is known about the dyslexic public, how society takes care of them and what difficulties they face; the pedagogical possibilities opened up by the use of maps; and the opportunities created by the fields of automatic summarization and Information Retrieval. The aim of this doctoral project was to create an innovative piece of software that automatically transforms a given text into a map, facilitating reading comprehension while including functionalities adapted to dyslexic teenagers. The project involved an exploratory experiment on aiding reading comprehension with heuristic maps, which made it possible to identify new research topics, and the implementation of an automatic mapping software prototype, presented at the end of the thesis.
Meurs, Marie-Jean. "Approche stochastique bayésienne de la composition sémantique pour les modules de compréhension automatique de la parole dans les systèmes de dialogue homme-machine". Phd thesis, Université d'Avignon, 2009. http://tel.archives-ouvertes.fr/tel-00634269.
Antoine, Jean-Yves. "Coopération syntaxe-sémantique pour la compréhension de la parole spontanée". Grenoble INPG, 1994. http://www.theses.fr/1994INPG0154.
Gayral, Françoise. "Sémantique du langage naturel et profondeur variable : Une première approche". Paris 13, 1992. http://www.theses.fr/1992PA132004.
Simonnet, Edwin. "Réseaux de neurones profonds appliqués à la compréhension de la parole". Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1006/document.
This thesis fits within the rise of deep learning and focuses on spoken language understanding, understood as the automatic extraction and representation of the meaning carried by the words in a spoken utterance. We study a semantic concept tagging task used in a spoken dialogue system and evaluated on the French MEDIA corpus. Over the past decade, neural models have emerged in many natural language processing tasks through algorithmic advances and powerful computing tools such as graphics processors. Many obstacles make the understanding task complex, such as the difficult interpretation of automatic speech transcriptions, since many errors are introduced by the automatic recognition process upstream of the comprehension module. We present a state of the art of spoken language understanding and of the supervised machine learning methods used to address it, starting with classical systems and finishing with deep learning techniques. The contributions are then presented along three axes. First, we develop an efficient neural architecture consisting of a bidirectional recurrent encoder-decoder with an attention mechanism. Then we study the handling of automatic recognition errors and solutions to limit their impact on our performance. Finally, we propose a disambiguation of the comprehension task that makes the systems more efficient.
Janod, Killian. "La représentation des documents par réseaux de neurones pour la compréhension de documents parlés". Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0222/document.
Applications of spoken language understanding aim to extract relevant elements of meaning from the spoken signal. There are two distinct types of spoken language understanding: understanding of human/human dialogue and understanding in human/machine dialogue. Depending on the type of conversation, the structure of the dialogue and the goal of the understanding process vary. However, in both cases, automatic systems usually rely on a speech recognition step to generate a textual transcript of the spoken signal. In adverse conditions, even the most advanced speech recognition systems produce erroneous or partly erroneous transcripts. Those errors can be explained by the presence of information of various natures and functions, such as speaker and ambience specificities, and they can have a significant adverse impact on the performance of the understanding process. The first part of the contribution of this thesis shows that using deep autoencoders produces a more abstract latent representation of the transcript; this latent representation allows a spoken language understanding system to be more robust to automatic transcription errors. In the second part, we propose two different approaches for generating more robust representations by combining multiple views of a given dialogue in order to improve the results of the spoken language understanding system. The first approach combines multiple thematic spaces to produce a better representation. The second introduces new autoencoder architectures that use supervision in denoising autoencoders. These contributions show that such architectures reduce the performance gap between a spoken language understanding system using automatic transcripts and one using manual transcripts.
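A minimal denoising-autoencoder sketch in PyTorch conveys the idea; dimensions, corruption rate and the data are arbitrary toy values, not the architectures of the thesis:

```python
# The model is trained to reconstruct a clean bag-of-words vector (manual
# transcript) from a corrupted one (simulating ASR errors), so its hidden
# layer yields a representation more robust to those errors.
import torch
import torch.nn as nn

vocab_size, hidden = 1000, 64
model = nn.Sequential(
    nn.Linear(vocab_size, hidden), nn.ReLU(),     # encoder
    nn.Linear(hidden, vocab_size), nn.Sigmoid(),  # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# fake "clean" bag-of-words vectors standing in for manual transcripts
clean = (torch.rand(32, vocab_size) < 0.02).float()
for step in range(100):
    mask = (torch.rand_like(clean) > 0.3).float()  # drop 30% of the inputs
    noisy = clean * mask                           # simulated ASR degradation
    loss = loss_fn(model(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

encoder = model[:2]        # first two layers: the robust latent representation
features = encoder(noisy)  # 32 x 64 features for a downstream SLU classifier
```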
Reboud, Alison. "Towards automatic understanding of narrative audiovisual content". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS398.pdf.
Modern storytelling is digital and video-based. Understanding the stories contained in videos remains a challenge for automatic systems. With multimodality as a transversal theme, this thesis breaks the "understanding" task down into the following challenges: predicting memorability, summarising, and modelling stories from audiovisual content.
Godin, Christophe. "Proposition d'un cadre algorithmique unifié pour la compréhension de la parole continue". Compiègne, 1990. http://www.theses.fr/1990COMPD260.
Chanier, Thierry. "Compréhension de textes dans un domaine technique : le système Actes ; application des grammaires d'unification et de la théorie du discours". Paris 13, 1989. http://www.theses.fr/1989PA132015.
Bouraoui, Jean-Léon Mehdi. "Analyse, modélisation et détection automatique des disfluences dans le dialogue oral spontané contraint : le cas du contrôle aérien". Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00354772.
Texto completoCependant, la majorité des travaux de recherche sur le sujet portent sur des usages de langage quotidien : dialogues « à bâtons rompus », demandes d'horaire, discours, etc. Mais qu'en est-il des productions orales spontanées produites dans un cadre contraint ? Aucune étude n'a à notre connaissance été menée dans ce contexte. Or, on sait que l'utilisation d'une « langue de spécialité » dans le cadre d'une tâche donnée entraîne des comportements spécifiques.
Notre travail de thèse est consacré à l'étude linguistique et informatique des disfluences dans un tel cadre. Il s'agit de dialogues de contrôle de trafic aérien, aux contraintes pragmatiques et linguistiques. Nous effectuons une étude exhaustive des phénomènes de disfluences dans ce contexte. Dans un premier temps nous procédons à l'analyse fine de ces phénomènes. Ensuite, nous les modélisons à un niveau de représentation abstrait, ce qui nous permet d'obtenir les patrons correspondant aux différentes configurations observées. Enfin nous proposons une méthodologie de traitement automatique. Celle-ci consiste en plusieurs algorithmes pour identifier les différents phénomènes, même en l'absence de marqueurs explicites. Elle est intégrée dans un système de traitement automatique de la parole. Enfin, la méthodologie est validée sur un corpus de 400 énoncés.
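By way of illustration only, a rule-based spotter for a few such phenomena could be written as below; the markers and patterns are simplified stand-ins for the far richer patterns described in the thesis:

```python
# Spot filled pauses, immediate word repetitions and self-corrections
# introduced by an editing term; French markers chosen for the example.
import re

FILLED_PAUSES = r"\b(euh|ben|hum)\b"
REPETITION = r"\b(\w+)\s+\1\b"          # "le le", "au au"
EDIT_TERM = r"\b(enfin|non|pardon)\b"   # signals a self-correction

def disfluencies(utterance):
    found = []
    for label, pattern in [("filled_pause", FILLED_PAUSES),
                           ("repetition", REPETITION),
                           ("edit_term", EDIT_TERM)]:
        for m in re.finditer(pattern, utterance, flags=re.IGNORECASE):
            found.append((label, m.group(0)))
    return found

print(disfluencies("montez euh montez au au niveau cent dix non cent vingt"))
# -> [('filled_pause', 'euh'), ('repetition', 'au au'), ('edit_term', 'non')]
```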
Minescu, Bogdan. "Construction et stratégie d'exploitation des réseaux de confusion en lien avec le contexte applicatif de la compréhension de la parole". Phd thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00629195.
Lermuzeaux, Jean-Marc. "Contribution à l'intégration des niveaux de traitement automatique de la langue écrite : ANAEL : un environnement de compréhension basé sur les objets, les actions et les grammaires d'événements". Caen, 1988. http://www.theses.fr/1988CAEN2029.
Jabaian, Bassam. "Systèmes de compréhension et de traduction de la parole : vers une approche unifiée dans le cadre de la portabilité multilingue des systèmes de dialogue". Phd thesis, Université d'Avignon, 2012. http://tel.archives-ouvertes.fr/tel-00818970.
Poulain d'Andecy, Vincent. "Système à connaissance incrémentale pour la compréhension de document et la détection de fraude". Thesis, La Rochelle, 2021. http://www.theses.fr/2021LAROS025.
Document Understanding is the Artificial Intelligence capability of machines to read documents. In a global vision, it aims at understanding the document's function and class; in a more local vision, it aims at understanding specific details such as entities. The scientific challenge is to recognize more than 90% of the data, while the industrial challenge is to reach this performance with the least human effort to train the machine. This thesis argues that Incremental Learning methods can cope with both challenges. The proposals enable efficient iterative training with very few document samples. For the classification task, we demonstrate (1) the continual learning of textual descriptors, (2) the benefit of the discourse sequence, and (3) the benefit of integrating a Souvenir of a few samples in the knowledge model. For the data extraction task, we demonstrate an iterative structural model, based on a star-graph representation, which is enhanced by embedding a few a priori knowledge elements. Aware of the economic and societal impact of document fraud, this thesis also deals with that issue; our modest contribution is to study the different fraud categories in order to open further research. This work has been done in a non-classic framework, in conjunction with industrial activities for Yooz and collaborative research projects such as the FEDER Securdoc project supported by the Région Nouvelle-Aquitaine, and the Labcom IDEAS supported by the ANR.
Durand, Marie. "La découverte et la compréhension des profils d’apprenants : classification semi-supervisée et acquisition d’une langue seconde". Thesis, Paris 8, 2019. http://www.theses.fr/2019PA080029.
This thesis aims to develop an effective methodology for the discovery and description of L2 learner profiles based on acquisition data (perception, understanding and production). We want to detect patterns in the acquisition behaviours of subgroups of learners, taking into account the multidimensional aspect of the L2 learning process. The proposed methodology belongs to the field of artificial intelligence, more specifically to semi-supervised clustering techniques. Our algorithm has been applied to the database of the VILLA project, which includes the performance of learners from 5 different source languages (French, Italian, Dutch, German and English) with Polish as the target language. 156 adult learners were each tested with a variety of tasks in Polish during 14 hours of teaching sessions, starting from initial exposure. These tests made it possible to evaluate their performance at the levels of linguistic analysis of phonology, morphology, morphosyntax and lexicon. The database also includes their sensitivity to input characteristics, such as the frequency and transparency of the lexical elements used in the linguistic tasks. The similarity measure used in traditional clustering techniques is revisited in this work in order to evaluate the distance between two learners from an acquisitionist point of view. It is based on the identification of the learner's response strategy to a specific language test structure. We show that this measure makes it possible to detect the presence or absence, in the learner's responses, of a strategy similar to the inflectional system of the target language, and thus enables our algorithm to provide a classification consistent with second language acquisition research. As a result, we argue that our algorithm can be relevant to the empirical establishment of learner profiles and the discovery of new opportunities for reflection or analysis.
Trujillo, Morales Noël. "Stratégie de perception pour la compréhension de scènes par une approche focalisante, application à la reconnaissance d'objets". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2007. http://tel.archives-ouvertes.fr/tel-00926395.
Ieva, Carlo. "Révéler le contenu latent du code source : à la découverte des topoi de programme". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS024/document.
During the development of long-lifespan software systems, specification documents can become outdated or even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is source code, but understanding the source code of large projects is a time- and effort-consuming activity. This problem can be addressed by extracting the high-level (observable) capabilities of software systems: by automatically mining the source code and the available source-level documentation, it becomes possible to provide significant help to software developers in their program understanding task. This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis using a three-step process. First, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments. Second, it creates groups of closely related functions through hierarchical agglomerative clustering. Third, within the context of every cluster, functions are ranked and selected according to structural properties in order to form program topoi. The contribution of the thesis is threefold: (1) the notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding; (2) at the core of the clustering method used in FEAT, we propose a new hybrid distance combining semantic and structural elements automatically extracted from source code and comments; this distance is parametrized, and the impact of the parameter is thoroughly assessed through an experimental evaluation; (3) our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve and share all publicly available source code on earth. We performed a large experimental evaluation of FEAT on 600 open-source projects from SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of up to 4,000 functions and several hundred files, which opens the door to its large-scale adoption for program understanding.
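The hybrid-distance clustering step can be sketched schematically as follows; the vectors, call sets and the alpha weight are toy values, not FEAT's actual code:

```python
# Functions are compared with a hybrid distance mixing a semantic part
# (cosine on identifier/comment vectors) and a structural part (call-set
# overlap), then grouped by hierarchical agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cosine

functions = {
    "parse_header":  (np.array([1.0, 0.0, 0.2]), {"read_bytes", "check_magic"}),
    "parse_body":    (np.array([0.9, 0.1, 0.3]), {"read_bytes", "decode"}),
    "send_response": (np.array([0.0, 1.0, 0.1]), {"socket_write"}),
}
names = list(functions)

def hybrid(a, b, alpha=0.5):
    sem = cosine(functions[a][0], functions[b][0])
    calls_a, calls_b = functions[a][1], functions[b][1]
    struct = 1 - len(calls_a & calls_b) / len(calls_a | calls_b)  # Jaccard dist.
    return alpha * sem + (1 - alpha) * struct

# condensed distance matrix in the order expected by scipy's linkage
dists = [hybrid(names[i], names[j])
         for i in range(len(names)) for j in range(i + 1, len(names))]
clusters = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print(dict(zip(names, clusters)))
```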
Bouzekri, Elodie. "Notation et processus outillé pour la description, l'analyse et la compréhension de l'automatisation dans les systèmes de commande et contrôle". Thesis, Toulouse 3, 2021. http://www.theses.fr/2021TOU30003.
Automation enables systems to execute some functions without outside control and to adapt the functions they execute to new contexts and goals. Systems with automation are increasingly used to help humans in everyday tasks (for example, the dishwasher) as well as in their professional life: in aeronautics, for instance, automation has gradually reduced flight crews from four pilots to two. Automation was first considered a way to increase performance and reduce effort by migrating tasks previously allocated to humans to systems, under the hypothesis that systems would be better than humans at performing certain tasks and vice versa. Paul Fitts proposed MABA-MABA (Machines Are Better At - Men Are Better At), a task and function allocation method based on this hypothesis. In line with it, various descriptions of levels of automation have been proposed: the 10 Levels of Automation (LoA) of Parasuraman, Sheridan and Wickens describe different task and function allocations between the human and the system, and the higher the level of automation, the more tasks migrate from human to system. These approaches have been criticized: "MABA-MABA or Abracadabra? Progress on Human-Automation Coordination" by Dekker and Woods highlights that automation leads to new tasks allocated to humans to manage this automation, and recalls that such approaches hide the cooperative aspect of the human-system couple. To characterize human-system cooperation, previous work has demonstrated the importance of considering, at design time, the allocation of authority, responsibility and control, and the initiative to modify these allocations during the activity. However, existing approaches describe automation and cooperation at a high level, early in the design and development process, and do not support reasoning about the allocation of resources, control transitions, responsibility and authority throughout that process. The purpose of this thesis is to demonstrate the possibility of analyzing and describing, at a low level, tasks and functions as well as the cooperation between humans and the system with automation. This analysis and description make it possible to characterize tasks, functions and cooperation in terms of authority, responsibility, resource sharing and control transition initiation. The aim of this work is to provide a framework and a model-based, tool-supported process to analyze and understand automation. To show the feasibility of this approach, the thesis presents the results of applying the proposed process to an industrial case study in the field of aeronautics.
Ferret, Olivier. "ANTHAPSI : un système d'analyse thématique et d'apprentissage de connaissances pragmatiques fondé sur l'amorçage". Phd thesis, Université Paris Sud - Paris XI, 1998. http://tel.archives-ouvertes.fr/tel-00189116.
Shang, Guokan. "Spoken Language Understanding for Abstractive Meeting Summarization Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization. Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding Speaker-change Aware CRF for Dialogue Act Classification". Thesis, Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAX011.
With the impressive progress that has been made in transcribing spoken language, it is becoming increasingly possible to exploit transcribed data for tasks that require comprehension of what is said in a conversation. The work in this dissertation, carried out in the context of a project devoted to the development of a meeting assistant, contributes to ongoing efforts to teach machines to understand multi-party meeting speech. We focus on the challenge of automatically generating abstractive meeting summaries. We first present our results on Abstractive Meeting Summarization (AMS), which aims to take a meeting transcription as input and produce an abstractive summary as output. We introduce a fully unsupervised framework for this task based on multi-sentence compression and budgeted submodular maximization, and we leverage recent advances in word embeddings and graph degeneracy applied to NLP to take exterior semantic knowledge into account and to design custom diversity and informativeness measures. Next, we discuss our work on Dialogue Act Classification (DAC), whose goal is to assign each utterance in a discourse a label that represents its communicative intention. DAC yields annotations that are useful for a wide variety of tasks, including AMS. We propose a modified neural Conditional Random Field (CRF) layer that takes into account not only the sequence of utterances in a discourse, but also speaker information and, in particular, whether there has been a change of speaker from one utterance to the next. The third part of the dissertation focuses on Abstractive Community Detection (ACD), a sub-task of AMS in which utterances in a conversation are grouped according to whether they can be jointly summarized by a common abstractive sentence. We provide a novel approach to ACD in which we first introduce a neural contextual utterance encoder featuring three types of self-attention mechanisms, and then train it using the siamese and triplet energy-based meta-architectures. We further propose a general sampling scheme that enables the triplet architecture to capture subtle patterns (e.g., overlapping and nested clusters).
Dumont, Maxime. "Apports de la modélisation des interactions pour une compréhension fonctionnelle d'un écosystème : application à des bactéries nitrifiantes en chémostat". Montpellier 2, 2008. http://www.theses.fr/2008MON20199.
The characteristics of microbial ecosystems make them appropriate models for studying certain important issues in general ecology using both theoretical and experimental approaches. However, their use remains marginal due to the difficulty of extracting key information. In this thesis, we have worked to overcome such difficulties in the study of the link between biodiversity and ecosystem functioning, using molecular and automatic monitoring tools on nitrifying chemostats. First, we propose a generic method for assigning one of the studied functions performed by an ecosystem to each of the phylotypes (i.e. molecular species) detected by molecular fingerprinting. This method constitutes an essential step in devising a structured mass-balance model taking account of the inter- and intra-specific interactions which may occur between the different assembled species. We then carried out such modeling by considering the dynamics of four phylotypes: two AOB (oxidizing ammonium into nitrite) and two NOB (oxidizing nitrite into nitrate). The comparison of the experimental data for these four phylotypes with the simulations given by the model shows that microbial interactions can lead to coexistence and play an important role in the functions expressed by the ecosystem. Finally, we also studied the influence of different biotic disturbances on the resilience of the nitrifying systems and, as a result, confirm studies which have shown that the macroscopic functioning of an ecosystem can be stable despite significant variations at the population scale.
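A toy mass-balance model in this spirit, with two competing nitrifying populations under Monod growth in a chemostat (illustrative parameter values, not those fitted in the thesis), can be simulated with SciPy:

```python
# Classic chemostat competition model: without interaction terms, the
# population with the lower break-even substrate level D*Ks/(mu_max - D)
# excludes the other; extra interaction terms are needed for coexistence.
from scipy.integrate import solve_ivp

D, S_in = 0.05, 5.0          # dilution rate (1/h) and inflow substrate conc.
mu_max = [0.08, 0.06]        # maximum growth rates of the two phylotypes
Ks = [1.0, 0.3]              # half-saturation constants
Y = [0.5, 0.5]               # growth yields

def chemostat(t, y):
    s, x1, x2 = y
    mu = [mu_max[i] * s / (Ks[i] + s) for i in range(2)]  # Monod kinetics
    ds = D * (S_in - s) - mu[0] * x1 / Y[0] - mu[1] * x2 / Y[1]
    return [ds, (mu[0] - D) * x1, (mu[1] - D) * x2]

sol = solve_ivp(chemostat, (0, 1000), [S_in, 0.1, 0.1])
s, x1, x2 = sol.y[:, -1]
print(f"substrate={s:.3f}  x1={x1:.3f}  x2={x2:.3f}")
```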
Caubriere, Antoine. "Du signal au concept : réseaux de neurones profonds appliqués à la compréhension de la parole". Thesis, Le Mans, 2021. https://tel.archives-ouvertes.fr/tel-03177996.
This thesis falls within the field of deep learning applied to spoken language understanding. Until now, this task was performed through a pipeline of components implementing, for example, a speech recognition system, then various natural language processing modules, before a language understanding system was applied to the enriched automatic transcriptions. Recently, work in the field of speech recognition has shown that it is possible to produce a sequence of words directly from the acoustic signal. Within the framework of this thesis, the aim is to exploit these advances and extend them to design a system composed of a single neural model fully optimized for the spoken language understanding task, from signal to concept. First, we present a state of the art describing the principles of deep learning, speech recognition, and speech understanding. We then describe the contributions made along three main axes. We propose a first system answering the problem posed and apply it to a named entity recognition task. Then, we propose a transfer learning strategy guided by a curriculum learning approach; this strategy leverages generic learned knowledge to improve the performance of a neural system on a semantic concept extraction task. Next, we analyze the errors produced by our approach while studying the behaviour of the proposed neural architecture. Finally, we set up a confidence measure to evaluate the reliability of a hypothesis produced by our system.
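The "signal to concept" target encoding can be illustrated with a small, hypothetical helper that enriches a word sequence with concept delimiters (the tag names and example are invented), so a single speech-to-text model can emit concepts directly:

```python
# Training targets become word sequences enriched with concept delimiters.
def enrich(words, spans):
    """spans: list of (start, end, concept) over word indices."""
    out = []
    for i, w in enumerate(words):
        for s, e, c in spans:
            if i == s:
                out.append(f"<{c}>")
        out.append(w)
        for s, e, c in spans:
            if i == e:
                out.append(f"</{c}>")
    return " ".join(out)

words = "je veux aller de paris a lyon".split()
spans = [(4, 4, "city_departure"), (6, 6, "city_arrival")]
print(enrich(words, spans))
# -> je veux aller de <city_departure> paris </city_departure>
#    a <city_arrival> lyon </city_arrival>
```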
Popesco, Liana. "Analyse et génération de textes à partir d'un seul ensemble de connaissances pour chaque langue naturelle et de meta-règles de structuration". Paris 6, 1986. http://www.theses.fr/1986PA066138.