Theses / dissertations on the topic "Grands modèles de langage"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 50 best theses / dissertations for your research on the topic "Grands modèles de langage".
Browse theses / dissertations on a wide variety of academic disciplines and compile a correct bibliography.
Barbier, Guillaume. "Contribution de l'ingénierie dirigée par les modèles à la conception de modèles grande culture". PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00914318.
Labeau, Matthieu. "Neural language models : Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it grows linearly with the size of the vocabulary. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation, which allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions adapted to the inner workings of the method, and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives of which noise contrastive estimation is a particular case. Finally, we aim at improving performance on full-vocabulary language models by augmenting the output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
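As an illustration of the sampling-based objectives discussed in this abstract, here is a minimal sketch (invented scores and a uniform noise distribution, not the thesis code) of the binary-classification form of noise contrastive estimation, which replaces the full partition function with a discrimination task between observed words and noise samples.

```python
# Toy NCE objective for one target word of a language model.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nce_loss(data_score, noise_scores, noise_logprob, k):
    """NCE loss for one observed word.

    data_score   : unnormalized model score of the observed word
    noise_scores : scores of the k words sampled from the noise distribution q
    noise_logprob: log q(w) for the observed word (first) then the noise words
    k            : number of noise samples
    """
    # Data term: classify the observed word as "true data".
    loss = -math.log(sigmoid(data_score - math.log(k) - noise_logprob[0]))
    # Noise terms: classify each sampled word as "noise".
    for s, lq in zip(noise_scores, noise_logprob[1:]):
        loss -= math.log(1.0 - sigmoid(s - math.log(k) - lq))
    return loss

# A model that scores the observed word far above the noise words
# incurs a much smaller loss than one that does the opposite.
good = nce_loss(5.0, [-5.0, -5.0], [math.log(0.5)] * 3, k=2)
bad = nce_loss(-5.0, [5.0, 5.0], [math.log(0.5)] * 3, k=2)
```

The score gap between the observed word and the noise samples is what training pushes on; no normalization over the full vocabulary is ever computed.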
Constum, Thomas. "Extraction d'information dans des documents historiques à l'aide de grands modèles multimodaux". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMR083.
This thesis focuses on automatic information extraction from historical handwritten documents, within the framework of the POPP and EXO-POPP projects. The POPP project focuses on handwritten census tables from Paris (1921-1946), while EXO-POPP deals with marriage records from the Seine department (1880-1940). The main objective is to develop an end-to-end architecture for information extraction from complete documents, avoiding explicit segmentation steps. Initially, a sequential processing pipeline was developed for the POPP project, enabling the automatic extraction of information for 9 million individuals across 300,000 pages. Then, an end-to-end architecture for information extraction was implemented for EXO-POPP, based on a convolutional encoder and a Transformer decoder, with the insertion of special symbols encoding the information to be extracted. Subsequently, the integration of large language models based on the Transformer architecture led to the creation of the DANIEL model, which achieved a new state of the art on several public datasets (RIMES 2009 and M-POPP for handwriting recognition, IAM NER for information extraction), while offering faster inference than existing approaches. Finally, two public datasets from the POPP and EXO-POPP projects were made available, along with the code and weights of the DANIEL model.
Krzesaj, Michel. "Modélisation et résolution de problèmes d'optimisation non linéaire de grande taille". Lille 1, 1985. http://www.theses.fr/1985LIL10070.
Berthod, Christophe. "Identification paramétrique de grandes structures : réanalyse et méthode évolutionnaire". PhD thesis, Université de Franche-Comté, 1998. http://tel.archives-ouvertes.fr/tel-00011640.
Part One: Study of approximate reanalysis methods for modified mechanical structures
When the design parameters of the model vary, a reanalysis is required to obtain the eigensolutions (modes and frequencies) of the modified system. An approximate Rayleigh-Ritz reanalysis strategy is presented: it is faster and less costly than an exact reanalysis, while offering satisfactory accuracy thanks to the contribution of static residual vectors.
Part Two: Application of an evolutionary optimization method to model updating
This part proposes adapting an evolutionary method to the parametric identification problem. Inspired by the evolutionary principles of genetic algorithms, it relies on the information provided by a cost function representing the distance between an updated model and the real structure. Heuristic operators are introduced to favor the search for solutions that minimize this function.
Part Three: The Proto-Dynamique software
This part presents the working environment used to implement the techniques formulated in the thesis and to carry out the numerical tests. Proto, written in Matlab, is a development platform gathering analysis tools and model updating methods.
Pontes, Miranda James William. "Federation of heterogeneous models with machine learning-assisted model views". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2025. http://www.theses.fr/2025IMTA0454.
Model-driven engineering (MDE) promotes models as a key element in addressing the increasing complexity of the software systems' lifecycle. Engineering systems with MDE involves various models representing different system aspects. This heterogeneity requires model federation capabilities to integrate viewpoints specific to multiple domains. Model view solutions address this challenge but still lack automation support. This thesis explores the integration of Machine Learning (ML), notably Graph Neural Networks (GNNs) and Large Language Models (LLMs), in order to improve the definition and building of such views. The proposed solution introduces a twofold approach within the EMF Views technical solution, which partially automates the definition of model views at design time and dynamically computes inter-model links at runtime. Our results indicate that applying Deep Learning (DL) techniques in this particular MDE context already achieves a first relevant level of automation. More globally, this research effort contributes to the ongoing development of more intelligent MDE solutions.
Federici, Dominique. "Simulation de fautes comportementales de systèmes digitaux décrits à haut niveau d'abstraction en VHDL". Corte, 1999. http://www.theses.fr/1999CORT3039.
Huot, Jean-Claude. "La Dynamique des grands projets". Lyon, INSA, 1990. http://www.theses.fr/1990ISAL0027.
The design of large projects is never completed until construction itself has been achieved, i.e. construction is started and the design is then resumed. Technology is in continuous, if reluctant, evolution during the long life cycle of a major project, and the owner wants to take advantage of the latest innovations. This causes scope changes. Any design change after construction starts will cause undiscovered rework that eventually affects the interrelated parameters of the project (resources, productivity, quality, time and cost). Because major building projects are "fast-tracked", they have behaviour modes similar to other major projects. From the theory of systems, we propose a model and develop a paradigm of organizational management of the realization of a major building that establishes the proper time to start construction and minimizes delays and impacts on costs.
Zervakis, Georgios. "Enriching large language models with semantic lexicons and analogies". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0039.
Recent advances in deep learning and neural networks have made it possible to address complex natural language processing tasks, which find application in a plethora of real-world problems ranging from smart assistants in mobile devices to the prediction of cancer. Nonetheless, modern systems based on these frameworks exhibit various limitations that may compromise their performance and trustworthiness, render them unfair towards minorities, or subject them to privacy leakage. It is our belief that integrating symbolic knowledge and reasoning into the deep learning framework is a necessary step towards addressing the aforementioned limitations. For example, lexical resources can enrich deep neural networks with semantic or syntactic knowledge, and logical rules can provide learning and reasoning mechanisms. Therefore, the scope of this thesis is to develop and evaluate ways of integrating different types of symbolic knowledge and reasoning into a widely used language model, Bidirectional Encoder Representations from Transformers (BERT). In a first stage, we consider retrofitting, a simple and popular technique for refining distributional word embeddings based on relations coming from a semantic lexicon. Inspired by this technique, we present two methods for incorporating this knowledge into BERT contextualized embeddings. We evaluate these methods on three biomedical datasets for relation extraction and one movie review dataset for sentiment analysis, and show that they do not substantially impact the performance for these tasks. Furthermore, we conduct a qualitative analysis to provide further insights on this negative result. In a second stage, we integrate analogical reasoning with BERT as a means to improve its performance on the target sense verification task and make it more robust. To do so, we reformulate target sense verification as an analogy detection task.
We present a hybrid model that combines BERT to encode the input data into quadruples and a convolutional neural classifier to decide whether they constitute valid analogies. We test our system on a benchmark dataset and show that it can outperform existing approaches. Our empirical study shows the importance of the input encoding for BERT, and how this dependence is alleviated by integrating the axiomatic properties of analogies during training, while preserving performance and improving robustness.
Bond, Ioan. "Grands réseaux d'interconnexion". Paris 11, 1987. http://www.theses.fr/1987PA112371.
This thesis deals with problems related to interconnection networks, which can be multiprocessor or telecommunication networks. These networks are modeled by graphs in the case of node-to-node connections and by hypergraphs in the case of connection by buses. An important problem is the construction of large networks having a limited number of links per processor and a short message transmission time. This corresponds, in the associated graph, to bounding the maximum degree and diameter. In part one, the case of networks modeled by graphs is discussed. We construct some new large families of networks with given maximum degree and diameter. The radius and related properties of these networks are given. We also study how one can add vertices to existing networks without changing their properties. Finally, we construct large fault-tolerant (not vulnerable) networks, in the sense that the diameter does not increase too much in case of node or link failures. Part two deals with bus interconnection networks. As a result of the limited capacity of the buses, the number of processors per bus is bounded. We give constructions of such networks, especially in the case where any two nodes belong to a common bus, and the case where a node belongs to only two buses. This study gives rise to some interesting problems in combinatorial design theory. We give new results on decompositions, and on packings and coverings of complete graphs.
Alain, Pierre. "Contributions à l'évaluation des modèles de langage". Rennes 1, 2007. http://www.theses.fr/2007REN1S003.
This work deals with the evaluation of language models independently of any applicative task. A comparative study between several language models is generally related to the role that a model plays in a complete system. Our objective is to be independent of the applicative system and thus to provide a true comparison of language models. Perplexity is a widely used criterion for comparing language models without any task assumptions. However, its main drawback is that perplexity assumes probability distributions and hence cannot compare heterogeneous models. As an evaluation framework, we went back to the definition of Shannon's game, which is based on model prediction performance using rank-based statistics. Our methodology is able to predict joint word sequences independently of the task or model assumptions. Experiments are carried out on French and English modeling with large vocabularies, and compare different kinds of language models.
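A minimal sketch (toy models and corpus, all scores invented) of the rank-based idea behind this Shannon-game evaluation: models are compared by the rank they assign to the word that actually occurs, so no normalized probability distribution is required and heterogeneous models can be compared.

```python
# Compare language models by the rank of the true next word.

def rank_of_truth(scores, truth):
    """Rank (1 = best) of the observed word among the model's scored candidates."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(truth) + 1

def mean_rank(model_scores, corpus):
    """Average rank of the true next word over a list of (context, word) pairs."""
    ranks = [rank_of_truth(model_scores(ctx), w) for ctx, w in corpus]
    return sum(ranks) / len(ranks)

# Two toy "models" returning arbitrary, not necessarily normalized, scores.
model_a = lambda ctx: {"cat": 3.0, "dog": 1.0, "car": 0.5}
model_b = lambda ctx: {"cat": 0.1, "dog": 2.0, "car": 9.9}
corpus = [("the", "cat"), ("the", "cat"), ("a", "dog")]
```

Since only the ordering of candidates matters, a back-off n-gram model and a neural model can be compared on the same footing.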
Delot, Thierry. "Interrogation d'annuaires étendus : modèles, langage et optimisation". Versailles-St Quentin en Yvelines, 2001. http://www.theses.fr/2001VERS0028.
Texto completo da fonteOota, Subba Reddy. "Modèles neurocomputationnels de la compréhension du langage : caractérisation des similarités et des différences entre le traitement cérébral du langage et les modèles de langage". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0080.
This thesis explores the synergy between artificial intelligence (AI) and cognitive neuroscience to advance language processing capabilities. It builds on the insight that breakthroughs in AI, such as convolutional neural networks and mechanisms like experience replay, often draw inspiration from neuroscientific findings. This interconnection is beneficial in language, where a deeper comprehension of uniquely human cognitive abilities, such as processing complex linguistic structures, can pave the way for more sophisticated language processing systems. The emergence of rich naturalistic neuroimaging datasets (e.g., fMRI, MEG) alongside advanced language models opens new pathways for aligning computational language models with human brain activity. However, the challenge lies in discerning which model features best mirror the language comprehension processes in the brain, underscoring the importance of integrating biologically inspired mechanisms into computational models. In response to this challenge, the thesis introduces a data-driven framework bridging the gap between neurolinguistic processing observed in the human brain and the computational mechanisms of natural language processing (NLP) systems. By establishing a direct link between advanced imaging techniques and NLP processes, it conceptualizes brain information processing as a dynamic interplay of three critical components: "what," "where," and "when", offering insights into how the brain interprets language during engagement with naturalistic narratives. This study provides compelling evidence that enhancing the alignment between brain activity and NLP systems offers mutual benefits to the fields of neurolinguistics and NLP. The research showcases how these computational models can emulate the brain's natural language processing capabilities by harnessing cutting-edge neural network technologies across various modalities: language, vision, and speech.
Specifically, the thesis highlights how modern pretrained language models achieve closer brain alignment during narrative comprehension. It investigates the differential processing of language across brain regions, the timing of responses (Hemodynamic Response Function (HRF) delays), and the balance between syntactic and semantic information processing. Further, it explores how different linguistic features align with MEG brain responses over time, and finds that the alignment depends on the amount of past context, indicating that the brain encodes words slightly behind the current one, awaiting more future context. Furthermore, it highlights grounded language acquisition through noisy supervision and offers a biologically plausible architecture for investigating cross-situational learning, providing interpretability, generalizability, and computational efficiency in sequence-based models. Ultimately, this research contributes valuable insights into neurolinguistics, cognitive neuroscience, and NLP.
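Brain-alignment analyses of the kind summarized above are commonly implemented as linear encoding models. The following sketch (synthetic data and hypothetical dimensions, not the thesis code) fits a ridge regression from language-model features to voxel responses and scores alignment by held-out prediction correlation.

```python
# Toy linear encoding model: LM features -> fMRI-like voxel responses.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_feats, n_voxels = 200, 10, 5
X = rng.normal(size=(n_words, n_feats))            # LM-derived word features
W_true = rng.normal(size=(n_feats, n_voxels))      # hidden ground-truth mapping
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, n_voxels))  # noisy targets

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = ridge_fit(X[:150], Y[:150])   # fit on the first 150 "words"
pred = X[150:] @ W                # predict held-out responses
# Per-voxel Pearson correlation between predicted and observed responses.
corr = [np.corrcoef(pred[:, v], Y[150:, v])[0, 1] for v in range(n_voxels)]
```

The held-out correlation per voxel is the usual "brain score": higher correlation means the model features explain more of the measured response.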
Chauveau, Dominique. "Étude d'une extension du langage synchrone SIGNAL aux modèles probabilistes : le langage SIGNalea". Rennes 1, 1996. http://www.theses.fr/1996REN10110.
Texto completo da fonteFleurey, Franck. "Langage et méthode pour une ingénierie des modèles fiable". Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00538288.
Texto completo da fonteAdda, Gilles. "Reconnaissance de grands vocabulaires : une étude syntaxique et lexicale". Paris 11, 1987. http://www.theses.fr/1987PA112386.
Texto completo da fonteRodolakis, Georgios. "Modèles analytiques et évaluation de performances dans les grands réseaux mobiles ad hoc". Phd thesis, Ecole Polytechnique X, 2006. http://pastel.archives-ouvertes.fr/pastel-00002950.
Texto completo da fonteLopes, Marcos. "Modèles inductifs de la sémiotique textuelle". Paris 10, 2002. http://www.theses.fr/2002PA100145.
Texto completo da fonteEyssautier-Bavay, Carole. "Modèles, langage et outils pour la réutilisation de profils d'apprenants". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00327198.
There is currently no technical solution for reusing these heterogeneous profiles. This thesis therefore aims to propose models and tools allowing the various actors to reuse learner profiles created by others.
In our work, we propose the REPro (Reuse of External Profiles) profile management process model. To enable the reuse of heterogeneous profiles, we propose rewriting them in a common formalism, which takes the form of a profile modeling language, PMDL (Profiles MoDeling Language). We then define a set of operators allowing the transformation of the harmonized profiles or of their structure, such as adding elements to a profile or creating a group profile from individual profiles. These proposals were implemented in the EPROFILEA environment of the PERLEA project (Profils d'Élèves Réutilisés pour L'Enseignant et l'Apprenant), before being tested with teachers in the laboratory.
Swaileh, Wassim. "Des modèles de langage pour la reconnaissance de l'écriture manuscrite". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR024/document.
This thesis is about the design of a complete processing chain dedicated to unconstrained handwriting recognition. Three main difficulties are addressed: pre-processing, optical modeling and language modeling. The pre-processing stage consists in properly extracting the text lines to be recognized from the document image. An iterative text line segmentation method using oriented steerable filters was developed for this purpose. The difficulty in the optical modeling stage lies in the style diversity of handwritten scripts. Statistical optical models such as hidden Markov models (HMM-GMM) and, more recently, recurrent neural networks (BLSTM-CTC) are traditionally used to tackle this problem. Using BLSTM, we achieve state-of-the-art performance on the RIMES (French) and IAM (English) datasets. The language modeling stage integrates a lexicon and a statistical language model into the recognition processing chain in order to constrain the recognition hypotheses to the most probable sequence of words from the language point of view. The difficulty at this stage is to find the optimal vocabulary, with a minimum rate of out-of-vocabulary (OOV) words. Enhanced language modeling approaches are introduced, using sub-lexical units made of syllables or multigrams; these sub-lexical units cover an important portion of the OOV words. Since language coverage depends on the domain of the language model training corpus, the language model needs to be trained with in-domain data. Recognition systems using sub-lexical units outperform traditional systems based on word or character language models in case of high OOV rates; otherwise, equivalent performance is obtained with a more compact sub-lexical language model. Thanks to the compact lexicon of sub-lexical units, a unified multilingual recognition system has been designed.
The performance of the unified system has been evaluated on the RIMES and IAM datasets. The unified multilingual system shows better recognition performance than the specialized systems, especially when a unified optical model is used.
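A minimal sketch (toy vocabulary and invented sub-word units) of why the sub-lexical units described above reduce the OOV rate: a word missing from the word lexicon can still be composed from a small inventory of units.

```python
# OOV rate with a word lexicon vs. coverage with sub-lexical units.

def oov_rate(tokens, vocab):
    """Fraction of tokens not present in the vocabulary."""
    return sum(t not in vocab for t in tokens) / len(tokens)

def unit_cover(word, units):
    """Greedy left-to-right check that `word` can be built from known units."""
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest unit first
            if word[i:j] in units:
                i = j
                break
        else:
            return False
    return True

word_vocab = {"over", "look"}               # tiny word lexicon
units = {"o", "ver", "look", "ed"}          # invented sub-lexical inventory
test_words = ["overlook", "overlooked", "look"]
word_oov = oov_rate(test_words, word_vocab)                      # 2 of 3 are OOV
unit_oov = sum(not unit_cover(w, units) for w in test_words) / 3  # all covered
```

The greedy decomposition here is only illustrative; the thesis uses syllables or multigrams learned from data, but the coverage argument is the same.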
Yeo, Ténan. "Modèles stochastiques d'épidémies en espace discret et continu : loi des grands nombres et fluctuations". Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0617.
The aim of this thesis is to study stochastic epidemic models taking into account the spatial structure of the environment. Firstly, we consider a deterministic and a stochastic SIR model on a regular grid of [0,1]^d, d=1, 2 or 3. On the one hand, by first letting the size of the population on each node go to infinity while the mesh size of the grid is kept fixed, we prove that the stochastic model converges to the deterministic model on the spatial grid. This system of ordinary differential equations converges to a system of partial differential equations as the mesh size of the grid goes to zero. On the other hand, we let both the population size go to infinity and the mesh size of the grid go to zero, with a restriction on the relative speed of convergence of the two parameters. In this case, we show that the stochastic model converges to the deterministic model in continuous space. Next, we study, in the case d=1, the fluctuations of the stochastic model around its deterministic law-of-large-numbers limit, by means of a central limit theorem. Finally, we study the dynamics of an infectious disease within a population distributed over a finite number of interconnected patches, in the context of an SIS model. By using the central limit theorem, moderate deviations and large deviations, we give an approximation of the time taken by the random perturbations to drive an endemic situation to extinction. We compute numerically the quasi-potential which appears in the expression of the extinction time, and make comparisons with that of the homogeneous model.
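A minimal sketch (illustrative parameters, not from the thesis) of the kind of stochastic SIR model under study: simulating the embedded jump chain for a finite population of size N, whose trajectories concentrate around the deterministic law-of-large-numbers limit as N grows.

```python
# Embedded jump chain of the stochastic SIR model on a single patch.
import random

def sir_final_size(N, i0, beta, gamma, seed=0):
    """Simulate one SIR trajectory and return the total number ever infected."""
    rng = random.Random(seed)
    s, i = N - i0, i0
    while i > 0:
        inf_rate = beta * s * i / N      # rate of new infections
        rec_rate = gamma * i             # rate of recoveries
        # Pick the next event proportionally to the two rates.
        if rng.random() < inf_rate / (inf_rate + rec_rate):
            s, i = s - 1, i + 1
        else:
            i -= 1
    return N - s                         # initial cases plus all new infections

final = sir_final_size(1000, 10, beta=2.0, gamma=1.0)
```

With beta/gamma = 2 the deterministic limit predicts a large outbreak; individual stochastic runs fluctuate around that prediction, which is exactly the fluctuation regime the central limit theorem quantifies.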
Ameur-Boulifa, Rabéa. "Génération de modèles comportementaux des applications réparties". Nice, 2004. http://www.theses.fr/2004NICE4094.
From the formal semantics of ProActive, a 100% Java library for concurrent, distributed, and mobile computing, we build, in a compositional way, finite models of finite abstract applications. These models are described mathematically and graphically. The building procedure, whose termination we guarantee, is described by semantic rules applied to an intermediate form of programs obtained by static analysis. Afterwards, these rules are extended so as to build parameterized models of infinite applications. Practically, a prototype for analysing a core of Java and of the ProActive library has been constructed. Moreover, some realistic examples are studied.
Zitouni, Imed. "Modélisation du langage pour les systèmes de reconnaissance de la parole destinés aux grands vocabulaires : application à MAUD". Nancy 1, 2000. http://docnum.univ-lorraine.fr/public/SCD_T_2000_0034_ZITOUNI.pdf.
Texto completo da fonteNguyen, Hong Quang. "Reconnaissance automatique de la parole continue : grand vocabulaire en vietnamien". Avignon, 2008. http://www.theses.fr/2008AVIG0155.
The development of Vietnamese speech recognition has only just started. Because of the differences between the Vietnamese language and Western languages, the speech recognition techniques broadly used for the latter (English and French, for example) are not sufficient to directly develop a powerful Vietnamese speech recognition system. Taking into consideration the characteristics of the Vietnamese language in terms of data (lexicon, language model) and model (tone model) representation should allow us to obtain promising results and better performance. The first difference is the segmentation of the sentence into semantic entities. In Vietnamese, a word/concept consists of one or several syllables which are systematically separated by spaces (a syllabic language). The segmentation of the sentence into words/concepts is an important stage for isolating languages such as Mandarin, Cantonese and Thai, but also for Vietnamese. To improve the performance of an automatic recognition system for Vietnamese, we built a polysyllabic word segmentation module for syllabic sentences. Two approaches were used: the first one uses a Vietnamese polysyllabic word dictionary, whereas the second builds this dictionary automatically, using the mutual information of the words as the grouping criterion and a dynamic programming algorithm to simplify the treatments. The second difference is the crucial role of tone in the Vietnamese language. Tone recognition is thus a fundamental aspect of tonal language processing. In this thesis, we studied various methods to represent, in an optimal way, the fundamental frequency and the energy. We were also interested in finding a method to reduce the influence of the co-articulation phenomenon between tones. We furthermore used two approaches: a frame-based approach using hidden Markov models, and a more general method based on multi-layer perceptrons.
By integrating the processing of the linguistic (polysyllabic word lexicon) and acoustic (tone recognition) characteristics, the results were improved by nearly 50% compared to the baseline system. These results show that adding supplementary information specific to the Vietnamese language considerably improves the performance of the speech recognition system.
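A minimal sketch of the dynamic-programming segmentation mentioned above, with a hypothetical association score standing in for the mutual-information criterion (the syllables and scores below are invented for illustration).

```python
# DP segmentation of a syllable sequence into polysyllabic words.

def segment(syllables, score):
    """Return the segmentation (list of words) maximizing the summed score."""
    n = len(syllables)
    best = [(0.0, [])] + [(float("-inf"), None)] * n
    for j in range(1, n + 1):
        for i in range(j):                       # candidate word syllables[i:j]
            word = " ".join(syllables[i:j])
            cand = best[i][0] + score(word)
            if cand > best[j][0]:
                best[j] = (cand, best[i][1] + [word])
    return best[n][1]

# Hypothetical association scores: known polysyllabic words score high,
# lone syllables score low, unknown groupings are penalized.
lexicon = {"hoc sinh": 2.0, "di hoc": 1.5}
score = lambda w: lexicon.get(w, 0.1 if " " not in w else -1.0)
seg = segment(["hoc", "sinh", "di", "hoc"], score)   # -> ["hoc sinh", "di hoc"]
```

In the thesis the score would come from mutual information between syllables; here the dictionary of scores simply plays that role.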
Trojet, Mohamed Wassim. "Approche de vérification formelle des modèles DEVS à base du langage Z". Aix-Marseille 3, 2010. http://www.theses.fr/2010AIX30040.
The general framework of this thesis is the improvement of the verification and validation of simulation models through the integration of formal methods. We propose an approach for the formal verification of DEVS models based on the Z language. DEVS is a formalism that allows the description and analysis of the behavior of discrete event systems, i.e. systems whose state changes depend on the occurrence of events. A DEVS model is essentially validated by simulation, which verifies that it correctly describes the behavior of the system. However, simulation does not detect the presence of possible inconsistencies in the model (conflict, ambiguity or incompleteness). For this reason, we integrated a formal specification language, Z, into the DEVS formalism. This integration consists in: (1) transforming a DEVS model into an equivalent Z specification, and (2) verifying the consistency of the resulting specification using the tools developed by the Z community. Thus, a DEVS model is subjected to automatic formal verification before its simulation.
Janiszek, David. "Adaptation des modèles de langage dans le cadre du dialogue homme-machine". Avignon, 2005. http://www.theses.fr/2005AVIG0144.
Currently, most automatic speech recognition (ASR) systems are based on statistical language models (SLM). These models are estimated from sets of observations, so the implementation of an ASR system requires a corpus matching the target application. Because of the difficulties in collecting these data, the available corpora may be insufficient to estimate an SLM correctly. To overcome this insufficiency, one may wish to use other data and adapt them to the application context, the main objective being to improve the performance of the corresponding dialogue system. Within this framework, we defined and implemented a new paradigm: the matrix representation of linguistic data. This approach is the basis of our work; it allows new linguistic data processing thanks to the use of linear algebra. For example, we defined a semantic and functional similarity between words. Moreover, we studied and developed several adaptation techniques based on the matrix representation. During our study, we investigated several research orientations: filtering the data, using the technique of minimal blocks; linear transformation, which consists in defining an algebraic operator to transform the linguistic data; data augmentation, which consists in re-estimating the occurrences of an observed word according to its functional similarity with other words; the selective combination of histories, a generalization of the linear interpolation of language models; and combinations of these techniques, each one having advantages and drawbacks. The experimental results obtained within our framework show relative improvements in terms of word error rate. In particular, our experiments show that associating data augmentation with the selective combination of histories gives interesting results.
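A minimal sketch (invented toy distributions) of the linear interpolation of language models, the baseline that the selective combination of histories described above generalizes.

```python
# Linear interpolation of two unigram language models.

def interpolate(p_in_domain, p_background, lam):
    """p(w) = lam * p1(w) + (1 - lam) * p2(w), over the union vocabulary."""
    vocab = set(p_in_domain) | set(p_background)
    return {w: lam * p_in_domain.get(w, 0.0) + (1 - lam) * p_background.get(w, 0.0)
            for w in vocab}

# Hypothetical distributions: a sparse in-domain model and a broad background model.
p_task = {"book": 0.6, "flight": 0.4}
p_general = {"book": 0.2, "flight": 0.1, "the": 0.7}
p = interpolate(p_task, p_general, lam=0.8)   # p["book"] = 0.8*0.6 + 0.2*0.2
```

Because both inputs sum to one, the interpolated model is again a proper distribution; the adaptation question is how to choose (or contextualize) the weight lam.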
Oger, Stanislas. "Modèles de langage ad hoc pour la reconnaissance automatique de la parole". PhD thesis, Université d'Avignon, 2011. http://tel.archives-ouvertes.fr/tel-00954220.
Texto completo da fonteFichot, Jean. "Langage et signification : le cas des mathématiques constructives". Paris 1, 2002. http://www.theses.fr/2002PA010653.
Texto completo da fonteLesur, Benoît. "Validations de modèles numériques de grands réseaux pour l'optimisation d'antennes à pointage électronique en bande Ka". Thesis, Limoges, 2017. http://www.theses.fr/2017LIMO0111/document.
The rapid expansion of satellite communications and information and communications technology has led to an increasing demand from end-users. Hence, services offering in-flight connectivity for airline passengers are emerging. This work focuses on the implementation of accurate numerical models of large antenna arrays intended for this purpose. After putting the work into context and recalling issues linked to antenna arrays, numerical and experimental test vehicles are developed, allowing validation of the modelling methodologies. Finally, the modelling of a large, dual circular polarization, wide-angle scanning radiating panel is addressed. This study then allows estimating the performance of the panel as a function of steering requirements and possible dispersions in the active channels.
Nguyen, Thi Viet Ha. "Problèmes de graphes motivés par des modèles basse et haute résolution de grands assemblages de protéines". Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4107.
Texto completo da fonte
To explain the biological function of a molecular assembly (MA), one has to know its structural description. It can be described at two levels of resolution: low resolution (i.e. molecular interactions) and high resolution (i.e. the relative position and orientation of each molecular subunit, called a conformation). Our thesis aims to address both problems from a graph-theoretic perspective. The first part focuses on the low resolution problem. Assuming that the composition (complexes) of an MA is known, we want to determine all interactions of subunits in the MA satisfying some property. This can be modeled as a graph problem by representing a subunit as a vertex; a subunit interaction is then an edge, and a complex is an induced subgraph. In our work, we use the fact that a subunit has a bounded number of interactions, which leads to overlaying graphs with bounded maximum degree. For a graph family F and a fixed integer k, given a hypergraph H = (V(H), E(H)) (whose edges are subsets of vertices) and an integer s, MAX (∆ ≤ k)-F-OVERLAY consists in deciding whether there exists a graph with maximum degree at most k such that there are at least s hyperedges for which the subgraph induced by the hyperedge (complex) contains an element of F. When s = |E(H)|, the problem is called (∆ ≤ k)-F-OVERLAY. We present complexity dichotomy results (P vs. NP-complete) for MAX (∆ ≤ k)-F-OVERLAY and (∆ ≤ k)-F-OVERLAY depending on the pair (F, k). The second part presents our work motivated by the high resolution problem. Assume we are given a graph representing the interactions of subunits, a finite set of conformations for each subunit, and a weight function assessing the quality of the contact between two subunits positioned in the assembly. Discrete Optimization of Multiple INteracting Objects (DOMINO) aims to find conformations of the subunits maximizing a global utility function. We propose a new approach based on this problem in which the weight function is relaxed: CONFLICT COLORING.
We present studies from both theoretical and experimental points of view. On the theoretical side, we provide a complexity dichotomy result as well as algorithmic methods (approximation and fixed-parameter tractability). On the experimental side, we build instances of CONFLICT COLORING associated with Voronoi diagrams in the plane. The statistics obtained provide information on how the existence of a solution depends on the parameters of our experimental setup.
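In spirit, CONFLICT COLORING resembles list coloring: each subunit (vertex) picks one conformation ("color") from its candidate list while avoiding conflicting pairs on edges. A brute-force sketch of the decision problem, under a hypothetical encoding of conflicts as forbidden pairs per edge (the thesis's actual weight-based formulation is not reproduced here):

```python
# Toy sketch of a CONFLICT-COLORING-style decision problem (hypothetical
# encoding): each vertex has a list of candidate conformations, and each
# edge carries a set of forbidden (conflicting) conformation pairs.
from itertools import product

def conflict_coloring(candidates, conflicts):
    """Brute-force search for an assignment with no conflicting edge.
    candidates: {vertex: [conformations]}
    conflicts:  {(u, v): {(cu, cv), ...}} forbidden pairs per edge."""
    vertices = list(candidates)
    for combo in product(*(candidates[v] for v in vertices)):
        assign = dict(zip(vertices, combo))
        if all((assign[u], assign[v]) not in bad
               for (u, v), bad in conflicts.items()):
            return assign  # a valid conflict-free choice of conformations
    return None  # no solution exists

cands = {"A": [0, 1], "B": [0, 1]}
confl = {("A", "B"): {(0, 0), (1, 1)}}
print(conflict_coloring(cands, confl))  # {'A': 0, 'B': 1}
```

Exhaustive search is exponential in the number of subunits, which is precisely why the dichotomy and fixed-parameter results above matter.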
Yun, Mi-Ran. "Echantillonnage des petits et grands déplacements atomiques dans les protéines et complexes moléculaires". Paris 7, 2007. http://www.theses.fr/2007PA077128.
Texto completo da fonte
Knowledge of the protein conformational space is of major importance in biology: in protein binding, beyond the notion of "lock and key", conformational changes play an important role. Several experimental (NMR, X-ray crystallography...) and theoretical approaches (molecular dynamics, Monte Carlo methods...) are used to describe the molecular conformational space. Side-chain flexibility is well characterized; main-chain flexibility, however, remains a problem. We propose an activated method, ARTIST (Activation-Relaxation Technique for Internal coordinate Space Trajectories), fused and adapted from two programs (ART and LIGAND), capable of sampling, in internal coordinates, local or collective displacements in proteins involving the protein backbone. We show the capacity of ARTIST to sample conformational changes from small proteins to complexes using the AMBER and FLEX all-atom force fields. ARTIST was then adapted to the coarse-grained force field OPEP, and first tests were performed on a small protein.
Sourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Texto completo da fonte
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze natural language data has become increasingly important in recent years as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow significantly. One of the main challenges in NLP is the ability to represent and process knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them. They are a powerful tool that allows knowledge to be represented in a structured and formalized way, providing a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis aims to explore the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework for learning entities and relations on multiple knowledge bases called KD-MKB. The key objective of multi-graph representation learning is to empower the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows for efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge.
Building on this finding, we propose and formalize a cooperative distillation framework in which a set of KB models is jointly learned, using hard labels from their own context and soft labels provided by peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework to align the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach provides a new direction for improving the ability of PLM-based slot-filling systems to handle entities.
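The hard-label/soft-label combination described above is commonly written as a weighted sum of a cross-entropy term and a KL divergence toward the peer's softened distribution. A minimal numpy sketch under that standard formulation (the hyperparameters `alpha` and `T`, and the toy logits, are illustrative assumptions, not values from the thesis):

```python
# Minimal sketch of a distillation loss on soft labels, assuming the
# standard formulation: alpha * CE(hard label) + (1 - alpha) * KL(peer || model).
# All logits and hyperparameters below are toy values.
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T))  # temperature-scaled, shifted for stability
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, hard_label, alpha=0.5, T=2.0):
    p_s = softmax(student_logits)
    ce = -np.log(p_s[hard_label])                    # hard-label cross-entropy
    p_t = softmax(teacher_logits, T)                 # peer's softened prediction
    p_s_T = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s_T)))  # KL(teacher || student)
    return alpha * ce + (1 - alpha) * kl

loss = distill_loss(np.array([2.0, 0.5, 0.1]),
                    np.array([1.8, 0.7, 0.2]), hard_label=0)
print(loss >= 0.0)  # True: both terms are non-negative
```

In the cooperative setting each KB model plays both roles in turn, serving as teacher for its peers while learning from their soft labels.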
Nguyen, Thi Thanh Tam. "Codèle : Une Approche de Composition de Modèles pour la Construction de Systèmes à Grande Échelle". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00399655.
Texto completo da fonte
Grange, Sophie. "Le grand dilemme des équidés sauvages : coexister avec les bovidés et éviter les grands prédateurs". Poitiers, 2006. http://www.theses.fr/2006POIT2319.
Texto completo da fonte
The Plains zebra is currently the most widespread wild equid; however, there is still little information on the regulation/limitation of its populations. Comparative studies on the relative abundance and population dynamics of Plains zebras and grazing bovids support the hypothesis that predation has a greater impact on the number of zebras in African ecosystems, and probably also plays an important role in the limitation of some zebra populations. Given these findings, it will be necessary to link population models of zebras and their main predators. A major problem, however, is the lack of accurate data on zebra survival rates. The study of the population dynamics of Plains zebras in Hwange National Park (Zimbabwe) is the first to use a capture-mark-recapture method based on photo-identification. After only a year and a half, this method already proves promising for studying zebra population dynamics. This thesis also shows that the feralization of domestic horses leads to unnatural population dynamics, which means that Camargue horses cannot be used as surrogates for wild equids to restore natural ecosystems. In terms of species conservation, it is therefore important to acquire good knowledge of the regulating/limiting factors acting on current wild equid populations in order to facilitate translocations and reintroductions into their natural ecosystems.
Nini, Robert. "Cartographie de la susceptibilité aux "Grands Glissements de Terrain" au Liban". Châtenay-Malabry, Ecole centrale de Paris, 2004. http://www.theses.fr/2004ECAP0964.
Texto completo da fonte
Many impressive landslides have recently occurred in Lebanon. Prediction based on susceptibility mapping would be of great importance in reducing their damage. This work constitutes a first attempt at mapping landslide susceptibility in Lebanon by a method that is a compromise between the two known approaches: the expert method and the analytical method. Initial data on these landslides and their permanent causes are presented, based on existing documents, investigations, and a soil investigation campaign. These landslides are analysed with the Talren software in order to calculate their factor of safety against sliding. Our study is based on the analysis of the different causal factors of these landslides, such as geomorphology, geology, hydrogeology, tectonics, soil, pluviometry, and vegetation. For each landslide, the ground model and sliding model are described together with the possible failure mechanisms. This study makes it possible to map the critical modalities of the different causal factors. Superposing the maps of the different factors helps localize the zones presenting a high risk of instability. A probabilistic approach is applied to these cases with the Phimeca software. The Phimeca results, such as the reliability index and the probability of failure, allow the safety factor obtained with Talren to be compared with these two values.
Declerck, Philippe. "Analyse structurale et fonctionnelle des grands systèmes : applications à une centrale PWR 900 MW". Lille 1, 1991. http://www.theses.fr/1991LIL10153.
Texto completo da fonte
Boyarm, Aristide. "Contribution à l'élaboration d'un langage de simulation à événements discrets pour modèles continus". Aix-Marseille 3, 1999. http://www.theses.fr/1999AIX30050.
Texto completo da fonte
Nogier, Jean-François. "Un système de production de langage fondé sur le modèles des graphes conceptuels". Paris 7, 1990. http://www.theses.fr/1990PA077157.
Texto completo da fonte
Strub, Florian. "Développement de modèles multimodaux interactifs pour l'apprentissage du langage dans des environnements visuels". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I030.
Texto completo da fonte
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy, and deep reinforcement learning is often limited to constrained environments. Yet we ideally aim to develop large-scale multimodal and interactive models to correctly apprehend the complexity of the world. As a first milestone, this thesis focuses on visually grounded language learning for three reasons: (i) vision and language are both well-studied modalities across different scientific fields; (ii) it builds upon deep learning breakthroughs in natural language processing and computer vision; (iii) the interplay between language and vision has been acknowledged in cognitive science. More precisely, we first designed the GuessWhat?! game for assessing the visually grounded language understanding of models: two players collaborate to locate a hidden object in an image by asking a sequence of questions. We then introduce modulation as a novel deep multimodal mechanism, and we show that it successfully fuses visual and linguistic representations by taking advantage of the hierarchical structure of neural networks. Finally, we investigate how reinforcement learning can support visually grounded language learning and cement the underlying multimodal representation. We show that such interactive learning leads to consistent language strategies but gives rise to new research issues.
Roque, Matthieu. "Contribution à la définition d'un langage générique de modélisation d'entreprise". Bordeaux 1, 2005. http://www.theses.fr/2005BOR13059.
Texto completo da fonte
Benaid, Brahim. "Convergence en loi d'intégrales stochastiques et estimateurs des moindres carrés de certains modèles statistiques instables". Toulouse, INSA, 2001. http://www.theses.fr/2001ISAT0030.
Texto completo da fonte
In many recent applications, statistics take the form of discrete stochastic integrals. In this work, we establish a basic theorem on the convergence in distribution of a sequence of discrete stochastic integrals. This result extends earlier corresponding theorems in Chan & Wei (1988) and Truong-van & Larramendy (1996). Its proof is not based on the classical martingale approximation technique but on a derivation of Kurtz & Protter's theorem (1991) on the convergence in distribution of sequences of Itô stochastic integrals relative to two semi-martingales, together with another approximation technique. Furthermore, various applications to asymptotic statistics are given, mainly concerning least squares estimators for ARMAX(p,r,q) models and purely unstable integrated ARCH models.
Scala, Paolo Maria. "Implémentations d'optimisation-simulation pour l'harmonisation des opérations dans les grands aéroports". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30120.
Texto completo da fonte
The constant growth of air traffic, especially in Europe, is putting pressure on airports, which, in turn, are suffering congestion problems. The airspace surrounding an airport, the terminal manoeuvring area (TMA), is particularly congested, since it accommodates all the traffic converging to and from the airport. Besides airspace, airport ground capacity also faces congestion problems, as the inefficiencies of airspace operations are transferred to the airport ground and vice versa. The main consequence of congestion in airport airspace and on the ground is the amount of delay generated, which is in turn transferred to other airports within the network. Congestion problems also affect the workload of air traffic controllers, who need to handle this large amount of traffic. This thesis deals with the optimization of integrated airport operations, considering the airport from a holistic point of view that includes airspace and ground operations together. Unlike other studies in this field of research, this thesis contributes by supporting the decisions of air traffic controllers regarding aircraft sequencing and by mitigating congestion on the airport ground area. Airport ground operations and airspace operations can be tackled at two different levels of abstraction, macroscopic or microscopic, based on the time frame for decision-making purposes. In this thesis, airport operations are modeled at a macroscopic level. The problem is formulated as an optimization model by identifying an objective function that considers the amount of conflicts in the airspace and capacity overload on the airport ground; constraints given by regulations on minimum separation between consecutive aircraft in the airspace and on the runway; and decision variables related to aircraft entry time and entry speed into the airspace, landing and departure runway choice, and pushback time.
The optimization model is solved by implementing a sliding-window approach and an adapted version of the simulated annealing metaheuristic. Uncertainty is included in the operations by developing a simulation model with stochastic variables that represent the most significant sources of uncertainty at a macroscopic level, such as deviation from the entry time into the airspace, deviation in the average taxi time, and deviation in the pushback time.
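Simulated annealing, the metaheuristic adapted above, accepts worsening moves with a probability that decays as a temperature parameter cools, which lets the search escape local optima. A generic sketch on a toy quadratic objective (the thesis's airport-specific cost function, neighborhood, and sliding-window decomposition are not reproduced here):

```python
# Generic sketch of simulated annealing on a toy objective; the cooling
# rate, step size, and seed below are illustrative choices.
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    random.seed(0)  # fixed seed for reproducibility of this sketch
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling  # geometric cooling schedule
    return best

# Minimize (v - 3)^2 starting from 0, with uniform random perturbations.
best = simulated_annealing(lambda v: (v - 3.0) ** 2,
                           lambda v: v + random.uniform(-0.5, 0.5), 0.0)
print(best)
```

A sliding-window approach would apply such a search repeatedly over successive, overlapping time intervals, freezing decisions from past windows.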
Tron, Cécile. "Modèles quantitatifs de machines parallèles : les réseaux d'interconnexion". Grenoble INPG, 1994. http://www.theses.fr/1994INPG0179.
Texto completo da fonte
Kettani, Omar. "Modèles du calcul sans changement d'état : quelques développements et résultats". Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX24005.
Texto completo da fonte
A Turing machine takes its information from a single marked box among several others. Taking information at once from parts of two contiguous boxes makes state disappear from the algorithm; the corresponding information is instead noted in each contiguous part of the two boxes. It then becomes necessary to employ a large set of symbols. The equivalence of such machines with Turing machines is proved here. The author then imagines parallel machines in which many contiguous boxes can be marked simultaneously, with the travelling marks scanning parts of boxes all at once, and applies them to several classical problems. When marking two contiguous boxes at once, it is possible to take half of the first and half of the second, a third of the first and two thirds of the second, and so on. At the limit, at the utmost shift, two values remain in common, and in this case the author again proves the equivalence of such machines with Turing machines. He thus proves the existence of a universal machine and establishes a relation with cellular automata.
Fauthoux, David. "Des grains aux aspects, proposition pour un modèle de programmation orientée-aspect". Toulouse 3, 2004. http://www.theses.fr/2004TOU30100.
Texto completo da fonte
Current programming technologies are not able to clearly separate crosscutting concerns: the code of a concern is scattered across the program's components. After detailing and analysing four main aspect-oriented systems, this report presents a fine-grained model. These grains, the "lenses", are grouped to create more abstract components. The first step of the report describes a "flow" as a chain of lenses; a program can then be defined as a set of intersecting flows. The second step introduces the "aspect" concept, applied to specified points of the program. These abstract groups (flows and aspects) are shaped exactly like lenses, so the model is consistent from the bottom level (classes) to the more abstract ones (groups, and groups of groups). The main goal of this report is to express the structure of the program as clearly as possible. The model moves toward separating the program-architecture building phase from the component-writing phase: architecture is a job that requires composition skills and tools, to be distinguished from the developer's job of using the programming language to write components.
Le, Gloahec Vincent. "Un langage et une plateforme pour la définition et l’exécution de bonnes pratiques de modélisation". Lorient, 2011. http://www.theses.fr/2011LORIS239.
Texto completo da fonte
The most valuable asset of an IT company lies in the knowledge and know-how acquired over the years by its employees. Unfortunately, lacking means they deem appropriate, most companies do not streamline the management of such knowledge. In the field of software engineering, this knowledge is usually collected in the form of best practices documented informally, which hardly favors their effective and adequate use. In this field, modeling activities have become predominant, favoring the reduction of development effort and costs. The effective implementation of best practices related to modeling activities would help improve developer productivity and the final quality of software. The objective of this thesis, carried out as part of a collaboration between the Alkante company and the VALORIA laboratory, is to provide a framework, both theoretical and practical, favoring the capitalization of best modeling practices. An approach for the management of good modeling practices (GMPs) is proposed. This approach relies on the principles of model-driven engineering (MDE), proposing a division into two levels of abstraction: a PIM level (Platform Independent Model) dedicated to the capitalization of GMPs independently of any specific platform, ensuring the sustainability of knowledge, and a PSM level (Platform Specific Model) dedicated to verifying compliance with GMPs in modeling tools. To support the capitalization of good practices (GPs), a specific language dedicated to the description of GMPs has been developed on the basis of common characteristics identified through a detailed study of two types of GPs: those focusing on process aspects, and others focused on the style or shape of models. This language, called GooMod, is defined by its abstract syntax, represented as a MOF-compliant metamodel (MOF stands for Meta Object Facility), a description of its semantics, and a graphical concrete syntax.
A platform provides the two tools necessary for both the definition of GMPs (conforming to the GooMod language) and their effective application in modeling tools. The GMP Definition Tool is a graphical editor that facilitates the description of GMPs targeting any modeling language (e.g., GMPs for the UML), independently of modeling tools. The GMP Execution Tool is a PSM-level implementation specifically targeting modeling tools based on the Graphical Modeling Framework (GMF) of the Eclipse integrated development environment. During modeling activities performed by designers, this tool automates the verification of compliance with GMPs originally described in GooMod. This work has been validated on two aspects of the proposed approach: an industrial case study illustrates the definition, using the GooMod language, of a GMP specific to the modeling of Web applications developed by the Alkante company, and an experiment evaluating the effectiveness and usability of the GMP Execution Tool was conducted among students.
Sidaoui, Assann. "Contribution à l'optimisation hiérarchisée des grands systèmes complexes : poursuite d'objectifs et prise en compte de l'imprécision des modèles". Grenoble INPG, 1992. http://www.theses.fr/1992INPG0037.
Texto completo da fonte
Guihal, David. "Modélisation en langage VHDL-AMS des systèmes pluridisciplinaires". Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00157570.
Texto completo da fonte
Ramadour, Philippe. "Modèles et langage pour la conception et la manipulation de composants réutilisables de domaine". Aix-Marseille 3, 2001. http://www.theses.fr/2001AIX30092.
Texto completo da fonte
Woehrling, Cécile. "Accents régionaux en français : perception, analyse et modélisation à partir de grands corpus". Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00617248.
Texto completo da fonte