Theses on the topic "Modèles de langage protéique"
Consult the top 50 theses for your research on the topic "Modèles de langage protéique".
Vander Meersche, Yann. "Étude de la flexibilité des protéines : analyse à grande échelle de simulations de dynamique moléculaire et prédiction par apprentissage profond". Electronic Thesis or Diss., Université Paris Cité, 2024. http://www.theses.fr/2024UNIP5147.
Proteins are essential to biological processes. Understanding their dynamics is crucial for elucidating their biological functions and interactions. However, experimentally measuring protein flexibility remains challenging due to technical limitations and associated costs. This thesis aims to deepen the understanding of protein dynamic properties and to propose computational methods for predicting their flexibility directly from their sequence. This work is organised into four main contributions: 1) Protein flexibility prediction in terms of B-factors. We have developed MEDUSA, a flexibility prediction method based on deep learning, which leverages the physicochemical and evolutionary information of amino acids to predict experimental flexibility classes from protein sequences. MEDUSA has outperformed previously available tools but shows limitations due to the variability of experimental data. 2) Large-scale analysis of in silico protein dynamics. We have released ATLAS, a database of standardised all-atom molecular dynamics simulations providing detailed information on protein flexibility for over 1.5k representative protein structures. ATLAS enables interactive analysis of protein dynamics at different levels and offers valuable insights into proteins exhibiting atypical dynamical behaviour, such as dual-personality fragments. 3) An in-depth analysis of AlphaFold 2's pLDDT score and its relation to protein flexibility. We have assessed the correlation of pLDDT with different flexibility descriptors derived from molecular dynamics simulations and from NMR ensembles, and demonstrated that confidence in 3D structure prediction does not necessarily reflect the expected flexibility of a protein region, in particular for protein fragments involved in molecular interactions. 4) Prediction of MD-derived flexibility descriptors using protein language embeddings. We introduce PEGASUS, a novel flexibility prediction tool developed using the ATLAS database. Using protein sequence encodings from protein language models and a simple deep learning model, PEGASUS provides precise predictions of flexibility metrics and effectively captures the impact of mutations on protein dynamics. The perspectives of this work include enriching the simulations with varied environments and integrating membrane proteins to enhance PEGASUS and enable new analyses. We also highlight the emergence of methods capable of predicting conformational ensembles, offering promising advances for better capturing protein dynamics. This thesis offers new perspectives for the prediction and analysis of protein flexibility, paving the way for advances in areas such as biomedical research, mutation studies, and drug design.
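To make the last contribution concrete, here is a minimal illustrative sketch of the general idea of predicting per-residue flexibility from protein language model embeddings. It is not the PEGASUS code from the thesis; the embedding dimension, layer sizes, and the use of RMSF as the target are assumptions.

```python
# Illustrative sketch only, not the PEGASUS implementation from the thesis.
# Assumes per-residue embeddings (e.g., from an ESM/ProtT5-style protein
# language model) are already computed; a small regressor maps them to a
# flexibility value (e.g., RMSF) for each residue.
import torch
import torch.nn as nn

class FlexibilityHead(nn.Module):
    def __init__(self, emb_dim: int = 1024, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # one flexibility value per residue
        )

    def forward(self, residue_embeddings: torch.Tensor) -> torch.Tensor:
        # residue_embeddings: (sequence_length, emb_dim)
        return self.net(residue_embeddings).squeeze(-1)

# Toy usage with random embeddings standing in for a real protein.
embeddings = torch.randn(120, 1024)    # 120 residues, 1024-dim embeddings
model = FlexibilityHead()
predicted_rmsf = model(embeddings)      # shape: (120,)
```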
Hladiš, Matej. "Réseaux de neurones en graphes et modèle de langage des protéines pour révéler le code combinatoire de l'olfaction". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5024.
Mammals identify and interpret a myriad of olfactory stimuli using a complex coding mechanism involving interactions between odorant molecules and hundreds of olfactory receptors (ORs). These interactions generate unique combinations of activated receptors, called the combinatorial code, which the human brain interprets as the sensation we call smell. Until now, the vast number of possible receptor-molecule combinations has prevented a large-scale experimental study of this code and its link to odor perception. Revealing this code is therefore crucial to answering the long-standing question of how we perceive our intricate chemical environment. ORs belong to class A of G protein-coupled receptors (GPCRs) and constitute the largest known multigene family. To systematically study olfactory coding, we develop M2OR, a comprehensive database compiling the last 25 years of OR bioassays. Using this dataset, a tailored deep learning model is designed and trained. It combines the [CLS] token embedding from a protein language model with graph neural networks and multi-head attention. This model predicts the activation of ORs by odorants and reveals the resulting combinatorial code for any odorous molecule. This approach is refined by developing a novel model capable of predicting the activity of an odorant at a specific concentration, subsequently allowing the estimation of the EC50 value for any OR-odorant pair. Finally, the combinatorial codes derived from both models are used to predict the odor perception of molecules. By incorporating inductive biases inspired by olfactory coding theory, a machine learning model based on these codes outperforms the current state of the art in smell prediction. To the best of our knowledge, this is the most comprehensive and successful application of combinatorial coding to odor quality prediction. Overall, this work provides a link between complex molecule-receptor interactions and human perception.
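A rough sketch of the kind of fusion described above: a protein-level [CLS] embedding combined with a simple graph neural network over the odorant molecule to predict receptor activation. It is purely illustrative; the dimensions, the toy message-passing layer, and the classifier are assumptions and not the model of the thesis.

```python
# Illustrative sketch only (not the thesis model): fuse a protein [CLS]
# embedding from a protein language model with a tiny GNN over the odorant
# molecule, then predict the probability that the OR is activated.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of mean-neighbour message passing over atom features."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, atom_feats, adjacency):
        # adjacency: (n_atoms, n_atoms), 1.0 where two atoms are bonded
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbour_mean = adjacency @ atom_feats / deg
        return torch.relu(self.lin(neighbour_mean)).mean(dim=0)  # molecule vector

class ORActivationModel(nn.Module):
    def __init__(self, cls_dim=1024, atom_dim=16, hidden=64):
        super().__init__()
        self.gnn = TinyGNN(atom_dim, hidden)
        self.classifier = nn.Sequential(
            nn.Linear(cls_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, cls_embedding, atom_feats, adjacency):
        mol = self.gnn(atom_feats, adjacency)
        logit = self.classifier(torch.cat([cls_embedding, mol]))
        return torch.sigmoid(logit)  # probability that the OR responds

# Toy usage: 9 atoms with 16 features each, random bonds.
adjacency = (torch.rand(9, 9) > 0.5).float()
prob = ORActivationModel()(torch.randn(1024), torch.randn(9, 16), adjacency)
```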
Alain, Pierre. "Contributions à l'évaluation des modèles de langage". Rennes 1, 2007. http://www.theses.fr/2007REN1S003.
This work deals with the evaluation of language models independently of any applicative task. A comparative study between several language models is generally tied to the role a model plays in a complete system. Our objective is to be independent of the applicative system and thus to provide a true comparison of language models. Perplexity is a widely used criterion for comparing language models without any task assumptions. However, its main drawback is that perplexity assumes probability distributions and hence cannot compare heterogeneous models. As an evaluation framework, we went back to the definition of Shannon's game, which is based on model prediction performance using rank-based statistics. Our methodology is able to predict joint word sequences independently of the task or model assumptions. Experiments are carried out on French and English modeling with large vocabularies, and compare different kinds of language models.
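For reference, the perplexity criterion discussed above can be computed as follows; this is a generic sketch with invented probabilities, only valid for models that output true probability distributions, which is exactly the limitation the thesis points out.

```python
# Sketch: perplexity of a language model over a test sequence, computed from
# the per-word probabilities the model assigns to that sequence.
import math

def perplexity(word_probabilities):
    """word_probabilities: P(w_i | history) for each word of the test text."""
    n = len(word_probabilities)
    log_sum = sum(math.log2(p) for p in word_probabilities)
    return 2 ** (-log_sum / n)

print(perplexity([0.1, 0.05, 0.2, 0.01]))  # higher value = model is more "surprised"
```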
Delot, Thierry. "Interrogation d'annuaires étendus : modèles, langage et optimisation". Versailles-St Quentin en Yvelines, 2001. http://www.theses.fr/2001VERS0028.
Oota, Subba Reddy. "Modèles neurocomputationnels de la compréhension du langage : caractérisation des similarités et des différences entre le traitement cérébral du langage et les modèles de langage". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0080.
This thesis explores the synergy between artificial intelligence (AI) and cognitive neuroscience to advance language processing capabilities. It builds on the insight that breakthroughs in AI, such as convolutional neural networks and mechanisms like experience replay, often draw inspiration from neuroscientific findings. This interconnection is beneficial in language, where a deeper comprehension of uniquely human cognitive abilities, such as processing complex linguistic structures, can pave the way for more sophisticated language processing systems. The emergence of rich naturalistic neuroimaging datasets (e.g., fMRI, MEG) alongside advanced language models opens new pathways for aligning computational language models with human brain activity. However, the challenge lies in discerning which model features best mirror the language comprehension processes in the brain, underscoring the importance of integrating biologically inspired mechanisms into computational models. In response to this challenge, the thesis introduces a data-driven framework bridging the gap between neurolinguistic processing observed in the human brain and the computational mechanisms of natural language processing (NLP) systems. By establishing a direct link between advanced imaging techniques and NLP processes, it conceptualizes brain information processing as a dynamic interplay of three critical components: "what," "where," and "when", offering insights into how the brain interprets language during engagement with naturalistic narratives. This study provides compelling evidence that enhancing the alignment between brain activity and NLP systems offers mutual benefits to the fields of neurolinguistics and NLP. The research showcases how these computational models can emulate the brain's natural language processing capabilities by harnessing cutting-edge neural network technologies across various modalities (language, vision, and speech). Specifically, the thesis highlights how modern pretrained language models achieve closer brain alignment during narrative comprehension. It investigates the differential processing of language across brain regions, the timing of responses (Hemodynamic Response Function (HRF) delays), and the balance between syntactic and semantic information processing. It further explores how different linguistic features align with MEG brain responses over time and finds that the alignment depends on the amount of past context, indicating that the brain encodes words slightly behind the current one, awaiting more future context. Furthermore, it highlights grounded language acquisition through noisy supervision and offers a biologically plausible architecture for investigating cross-situational learning, providing interpretability, generalizability, and computational efficiency in sequence-based models. Ultimately, this research contributes valuable insights into neurolinguistics, cognitive neuroscience, and NLP.
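As a rough illustration of the brain-model alignment evaluated in this line of work, a voxel-wise encoding analysis regresses brain responses on language-model features. The sketch below uses random arrays standing in for fMRI recordings and model embeddings; it is only a generic example of that methodology, not the pipeline of the thesis.

```python
# Generic voxel-wise encoding sketch: fit a ridge regression from language-model
# features to brain responses, then score alignment on held-out stimuli with
# a per-voxel Pearson correlation ("brain score"). Toy data only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 768))   # 500 stimuli x 768 LM feature dims
responses = rng.normal(size=(500, 100))  # 500 stimuli x 100 voxels

train, test = slice(0, 400), slice(400, 500)
model = Ridge(alpha=10.0).fit(features[train], responses[train])
predicted = model.predict(features[test])

alignment = [np.corrcoef(predicted[:, v], responses[test][:, v])[0, 1]
             for v in range(responses.shape[1])]
print(np.mean(alignment))  # average alignment across voxels
```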
Chauveau, Dominique. "Étude d'une extension du langage synchrone SIGNAL aux modèles probabilistes : le langage SIGNalea". Rennes 1, 1996. http://www.theses.fr/1996REN10110.
Fleurey, Franck. "Langage et méthode pour une ingénierie des modèles fiable". Phd thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00538288.
Laborde-Huguet, Bénédicte. "Recherche sur les mécanismes moléculaires de l'instabilité protéique des vins blancs". Bordeaux 2, 2006. http://www.theses.fr/2006BOR21381.
Soluble proteins of white wines are heat-unstable and can precipitate during storage. We propose a new reaction model for protein haze formation, which involves not only protein denaturation but, essentially, the participation of non-proteinaceous compounds. These molecules, probably localized in the grape skin, are present in must and wine as precursors. Heat transforms these molecules into active factors able to react with proteins, probably through ionic interactions. A purification chain showed that these molecules do not seem to belong to the following molecular families: phenolic compounds, aldehydes and ketones, and probably peptides.
LABAT, GILLES. "Modélisation d'hémoprotéines, cytochrome P-450, chloroperoxydase et lignine peroxydase : modèles efficaces de la lignine peroxydase et développement de procédés d'oxydation par catalyse biomimétique". Toulouse 3, 1989. http://www.theses.fr/1989TOU30175.
Texto completoLopes, Marcos. "Modèles inductifs de la sémiotique textuelle". Paris 10, 2002. http://www.theses.fr/2002PA100145.
Texto completoEyssautier-Bavay, Carole. "Modèles, langage et outils pour la réutilisation de profils d'apprenants". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00327198.
Texto completoIl n'existe pas à l'heure actuelle de solution technique permettant de réutiliser ces profils hétérogènes. Cette thèse cherche donc à proposer des modèles et des outils permettant la réutilisation pour les différents acteurs de profils d'apprenants créés par d'autres.
Dans nos travaux, nous proposons le modèle de processus de gestion de profils REPro (Reuse of External Profiles). Pour permettre la réutilisation de profils hétérogènes, nous proposons de les réécrire selon un formalisme commun qui prend la forme d'un langage de modélisation de profils, le langage PMDL (Profiles MoDeling Language). Nous définissons ensuite un ensemble d'opérateurs permettant la transformation des profils ainsi harmonisés, ou de leur structure, tels que l'ajout d'éléments dans le profil, ou la création d'un profil de groupe à partir de profils individuels. Ces propositions ont été mises en œuvre au sein de l'environnement EPROFILEA du projet PERLEA (Profils d'Élèves Réutilisés pour L'Enseignant et l'Apprenant), avant d'être mises à l'essai auprès d'enseignants en laboratoire.
Swaileh, Wassim. "Des modèles de langage pour la reconnaissance de l'écriture manuscrite". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR024/document.
This thesis is about the design of a complete processing chain dedicated to unconstrained handwriting recognition. Three main difficulties are addressed: pre-processing, optical modeling and language modeling. The pre-processing stage is related to properly extracting the text lines to be recognized from the document image. An iterative text line segmentation method using oriented steerable filters was developed for this purpose. The difficulty in the optical modeling stage lies in the style diversity of the handwriting scripts. Statistical optical models are traditionally used to tackle this problem, such as Hidden Markov models (HMM-GMM) and, more recently, recurrent neural networks (BLSTM-CTC). Using BLSTM we achieve state-of-the-art performance on the RIMES (for French) and IAM (for English) datasets. The language modeling stage implies the integration of a lexicon and a statistical language model into the recognition processing chain in order to constrain the recognition hypotheses to the most probable sequence of words (sentence) from the language point of view. The difficulty at this stage is related to finding the optimal vocabulary with a minimum out-of-vocabulary (OOV) word rate. Enhanced language modeling approaches have been introduced by using sub-lexical units made of syllables or multigrams. The sub-lexical units cover an important portion of the OOV words. The language coverage then depends on the domain of the language model training corpus, hence the need to train the language model with in-domain data. The recognition system with sub-lexical units outperforms traditional recognition systems that use word or character language models in cases of high OOV rates; otherwise, equivalent performance is obtained with a more compact sub-lexical language model. Thanks to the compact lexicon size of the sub-lexical units, a unified multilingual recognition system has been designed, and its performance has been evaluated on the RIMES and IAM datasets. The unified multilingual system shows enhanced recognition performance over the specialized systems, especially when a unified optical model is used.
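The OOV issue discussed above is easy to illustrate: the sketch below computes an out-of-vocabulary rate for a word lexicon and for a sub-lexical (syllable-like) lexicon. The toy segmentation and lexicons are invented for the example and are not the units used in the thesis.

```python
# Sketch: OOV rate of a test text for a word lexicon versus a sub-lexical
# lexicon. An unseen word can still be covered by known sub-lexical pieces.
def oov_rate(units, lexicon):
    missing = sum(1 for u in units if u not in lexicon)
    return missing / max(len(units), 1)

words = "la reconnaissance des écritures manuscrites".split()
word_lexicon = {"la", "reconnaissance", "des", "écritures"}
print(oov_rate(words, word_lexicon))          # "manuscrites" is OOV

syllables = ["la", "re", "con", "nais", "sance", "des", "é", "cri",
             "tures", "ma", "nus", "cri", "tes"]
syllable_lexicon = {"la", "re", "con", "nais", "sance", "des", "é",
                    "cri", "tures", "ma", "nus", "tes"}
print(oov_rate(syllables, syllable_lexicon))  # 0.0: all pieces are known
```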
Beaufrere, Bernard. "Modèles d'étude du métabolisme protéique in vivo à l'aide de leucine marquée aux isotopes stables et radioactifs". Lyon 1, 1990. http://www.theses.fr/1990LYO1T074.
Ameur-Boulifa, Rabéa. "Génération de modèles comportementaux des applications réparties". Nice, 2004. http://www.theses.fr/2004NICE4094.
From the formal semantics of ProActive - a 100% Java library for concurrent, distributed, and mobile computing - we build, in a compositional way, finite models of finite abstract applications. These models are described mathematically and graphically. The construction procedure, whose termination we guarantee, is described by semantic rules applied to an intermediate form of programs obtained by static analysis. Afterwards, these rules are extended so as to build parameterized models of infinite applications. Practically, a prototype for analysing a core of Java and of the ProActive library has been constructed. Moreover, some realistic examples are studied.
Vantelon, Nadine. "Effet d'une acidose lactique sur la phase d'initiation de la synthèse protéique dans des primocultures d'astrocytes de rats". Poitiers, 2007. http://www.theses.fr/2007POIT1801.
Texto completoTrojet, Mohamed Wassim. "Approche de vérification formelle des modèles DEVS à base du langage Z". Aix-Marseille 3, 2010. http://www.theses.fr/2010AIX30040.
The general framework of this thesis is the improvement of the verification and validation of simulation models through the integration of formal methods. We propose an approach for the formal verification of DEVS models based on the Z language. DEVS is a formalism that allows the description and analysis of the behavior of discrete event systems, i.e., systems whose state change depends on the occurrence of an event. A DEVS model is essentially validated by simulation, which makes it possible to check whether it correctly describes the behavior of the system. However, simulation does not detect the presence of a possible inconsistency in the model (conflict, ambiguity or incompleteness). For this reason, we have integrated a formal specification language, known as Z, into the DEVS formalism. This integration consists in: (1) transforming a DEVS model into an equivalent Z specification and (2) verifying the consistency of the resulting specification using the tools developed by the Z community. Thus, a DEVS model is subjected to an automatic formal verification before its simulation.
Janiszek, David. "Adaptation des modèles de langage dans le cadre du dialogue homme-machine". Avignon, 2005. http://www.theses.fr/2005AVIG0144.
Currently, most automatic speech recognition (ASR) systems are based on statistical language models (SLMs). These models are estimated from sets of observations, so the implementation of an ASR system requires a corpus that matches the target application. Because of the difficulties in collecting such data, the available corpora may be insufficient to estimate SLMs correctly. To overcome this insufficiency, one may wish to use other data and adapt them to the application context, the main objective being to improve the performance of the corresponding dialogue system. Within this framework, we have defined and implemented a new paradigm: the matrix representation of linguistic data. This approach is the basis of our work; it allows new linguistic data processing thanks to the use of linear algebra. For example, we have defined a semantic and functional similarity between words. Moreover, we have studied and developed several adaptation techniques based on the matrix representation. During our study, we investigated several research directions. Filtering the data: we used the technique of minimal blocks. Linear transformation: this technique consists in defining an algebraic operator to transform the linguistic data. Data augmentation: this technique consists in re-estimating the occurrences of an observed word according to its functional similarity with other words. Selective combination of histories: this technique is a generalization of the linear interpolation of language models. Combining techniques: each technique having advantages and drawbacks, we sought the best combinations. The experimental results obtained within our framework show relative improvements in terms of word error rate. In particular, our experiments show that associating data augmentation with the selective combination of histories gives interesting results.
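Since the "selective combination of histories" generalizes linear interpolation of language models, a tiny sketch of that baseline may help; the probabilities below are invented toy values, not estimates from the thesis corpora.

```python
# Sketch: linear interpolation of two language models, the baseline that the
# "selective combination of histories" described above generalizes.
def interpolate(p_general, p_domain, lam=0.7):
    """P(w|h) = lam * P_general(w|h) + (1 - lam) * P_domain(w|h)"""
    return lam * p_general + (1.0 - lam) * p_domain

# Example: a domain word that is rare in the broad-coverage model but frequent
# in a small in-domain dialogue model.
p_general = 0.0002
p_domain = 0.03
print(interpolate(p_general, p_domain))  # adapted probability used by the decoder
```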
Oger, Stanislas. "Modèles de langage ad hoc pour la reconnaissance automatique de la parole". Phd thesis, Université d'Avignon, 2011. http://tel.archives-ouvertes.fr/tel-00954220.
Texto completoFichot, Jean. "Langage et signification : le cas des mathématiques constructives". Paris 1, 2002. http://www.theses.fr/2002PA010653.
Texto completoSourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Natural language processing (NLP) is a rapidly growing field focusing on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze natural language data has become increasingly important in recent years, as the volume of textual data generated by individuals, organizations, and society as a whole continues to grow significantly. One of the main challenges in NLP is the ability to represent and process knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them. They are a powerful tool that allows knowledge to be represented in a structured and formalized way, and they provide a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis explores the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework for learning entities and relations on multiple knowledge bases called KD-MKB. The key objective of multi-graph representation learning is to empower the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows for efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge. Grounded on this formulation, we propose and formalize a cooperative distillation framework where a set of KB models are jointly learned by using hard labels from their own context and soft labels provided by peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework to align the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our proposed approach provides a new direction to improve the ability of PLM-based slot-filling systems to handle entities.
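As a rough picture of the distillation loss mentioned above (soft labels exchanged between knowledge-base embedding models), the sketch below matches a student's posterior over candidate entities to a teacher's with a KL divergence. The scores are invented; the actual KD-MKB scoring functions and training loop are defined in the thesis.

```python
# Sketch of the distillation idea: one KB embedding model teaches another by
# matching posterior distributions over shared candidate entities for a
# (head, relation, ?) query. Toy logits only.
import torch
import torch.nn.functional as F

teacher_scores = torch.tensor([2.1, 0.3, -1.0, 0.8])  # teacher model, shared candidates
student_scores = torch.tensor([1.0, 0.1, -0.5, 0.2])  # student model, same candidates

teacher_posterior = F.softmax(teacher_scores, dim=0)
student_log_post = F.log_softmax(student_scores, dim=0)

distillation_loss = F.kl_div(student_log_post, teacher_posterior, reduction="sum")
print(distillation_loss)  # minimized jointly with each model's own link-prediction loss
```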
Halabi, Amira. "Formules infantiles modèles : relation entre structures protéiques et comportement en digestion". Thesis, Rennes, Agrocampus Ouest, 2020. http://www.theses.fr/2020NSARB340.
The heat treatments applied during the manufacture of infant milk formulas (IMFs) may alter the protein structures and so their behaviour during digestion. The aim of this PhD project was to study the relationship between protein structure within model IMFs and their behaviour during in vitro digestion. Three model IMFs were formulated, differing only in their whey protein (WP) profile, to be as close as possible to the protein profile of human milk. The IMFs, with different dry matter contents and therefore protein concentrations (1.3% or 5.5%), were heat-treated between 67.5°C and 80°C. The kinetics of heat-induced WP denaturation were studied, then the protein structures generated were characterised for an identical extent of WP denaturation (65%). The kinetics of protein hydrolysis were evaluated using static and then dynamic in vitro digestion methods at the infant stage. The results showed that the denaturation kinetics of WPs were slowed down for the IMF closest to human milk, due to the absence of β-LG, regardless of the dry matter content. For an identical extent of WP denaturation, the heat-induced protein structures varied according to the protein profile, the dry matter of the IMFs, and the heating conditions, which ultimately impacted the protein behaviour during in vitro digestion. The protein structure could therefore be a lever for IMF optimisation. These results must be complemented by the evaluation of the physiological impact of these different structures.
Fourty, Guillaume. "Recherche de contraintes structurales pour la modélisation ab initio du repliement protéique". Paris 7, 2006. http://www.theses.fr/2006PA077101.
Understanding the protein folding process and predicting protein structures from sequence data only remain two challenging questions for structural biologists. In this work, we first observe highly frequent proximities between the N- and C-termini of protein domains, probably reflecting early stages of folding. We then address the problem of polymer folding on regular lattices. We enumerate Hamiltonian orbits and cyclic Hamiltonian orbits on n x n square lattices to evaluate the conformational space reduction associated with the termini-contact constraint. Exhaustive exploration of those maximally compact structures provides a baseline for minimum-search algorithms in the HP folding problem. Finally, we study multiple alignments at low sequence identity and introduce a measure of topohydrophobicity conservation. We use it through decision trees to predict structural features such as the central/edge position of beta strands in beta sheets and solvent accessibility (RAPT - Relative Accessibility Prediction Tool). These data can be used in ab initio prediction procedures for protein structures.
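The lattice enumeration described above can be illustrated with a brute-force count of Hamiltonian paths on a tiny square lattice, separating those whose two ends (the chain termini) are in contact. This is only a toy sketch of the idea, practical for very small grids, and not the enumeration method of the thesis.

```python
# Sketch: enumerate Hamiltonian paths on an n x n lattice and count how many
# end with the two chain termini adjacent. Brute-force backtracking.
from itertools import product

def hamiltonian_paths(n):
    cells = list(product(range(n), range(n)))
    paths = []

    def extend(path, visited):
        if len(path) == n * n:
            paths.append(list(path))
            return
        x, y = path[-1]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in visited:
                visited.add((nx, ny))
                path.append((nx, ny))
                extend(path, visited)
                path.pop()
                visited.remove((nx, ny))

    for start in cells:
        extend([start], {start})
    return paths

paths = hamiltonian_paths(3)
termini_in_contact = [p for p in paths
                      if abs(p[0][0] - p[-1][0]) + abs(p[0][1] - p[-1][1]) == 1]
print(len(paths), len(termini_in_contact))  # all compact chains vs. constrained ones
```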
Boyarm, Aristide. "Contribution à l'élaboration d'un langage de simulation à événements discrets pour modèles continus". Aix-Marseille 3, 1999. http://www.theses.fr/1999AIX30050.
Texto completoNogier, Jean-François. "Un système de production de langage fondé sur le modèles des graphes conceptuels". Paris 7, 1990. http://www.theses.fr/1990PA077157.
Texto completoStrub, Florian. "Développement de modèles multimodaux interactifs pour l'apprentissage du langage dans des environnements visuels". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I030.
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy, and deep reinforcement learning is often limited to constrained environments. Yet, we ideally aim to develop large-scale multimodal and interactive models towards correctly apprehending the complexity of the world. As a first milestone, this thesis focuses on visually grounded language learning for three reasons: (i) language and vision are both well-studied modalities across different scientific fields, (ii) the work builds upon deep learning breakthroughs in natural language processing and computer vision, and (iii) the interplay between language and vision has been acknowledged in cognitive science. More precisely, we first designed the GuessWhat?! game for assessing visually grounded language understanding of the models: two players collaborate to locate a hidden object in an image by asking a sequence of questions. We then introduce modulation as a novel deep multimodal mechanism, and we show that it successfully fuses visual and linguistic representations by taking advantage of the hierarchical structure of neural networks. Finally, we investigate how reinforcement learning can support visually grounded language learning and cement the underlying multimodal representation. We show that such interactive learning leads to consistent language strategies but gives rise to new research issues.
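A minimal sketch of the feature-wise modulation idea mentioned above: a language embedding predicts per-channel scale and shift parameters applied to visual feature maps. The dimensions and layer shapes are illustrative assumptions, not those of the thesis models.

```python
# Sketch: language-conditioned modulation of visual features. The language
# embedding produces per-channel (gamma, beta) applied to the feature maps.
import torch
import torch.nn as nn

class ModulationLayer(nn.Module):
    def __init__(self, lang_dim: int, channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(lang_dim, 2 * channels)

    def forward(self, visual_feats, lang_embedding):
        # visual_feats: (batch, channels, H, W); lang_embedding: (batch, lang_dim)
        gamma, beta = self.to_gamma_beta(lang_embedding).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return gamma * visual_feats + beta  # language conditions the visual pathway

layer = ModulationLayer(lang_dim=256, channels=64)
out = layer(torch.randn(2, 64, 14, 14), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 64, 14, 14])
```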
Roque, Matthieu. "Contribution à la définition d'un langage générique de modélisation d'entreprise". Bordeaux 1, 2005. http://www.theses.fr/2005BOR13059.
Texto completoJuillet, Barbara. "Modélisation comportementale du métabolisme interrégional de l'azote alimentaire et des cinétiques de l'urée à l'état nourri non stationnaire chez l'homme". Phd thesis, INAPG (AgroParisTech), 2006. http://pastel.archives-ouvertes.fr/pastel-00002662.
Texto completoBrault, Julie. "Nouveaux modèles d’étude de la Granulomatose Septique Chronique grâce aux cellules souches pluripotentes induites – Application au développement de la thérapie protéique". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAS020/document.
Chronic Granulomatous Disease (CGD) is a rare inherited pathology of the innate immune system that affects the phagocytic cells (neutrophils, macrophages). This disease is caused by mutations in the subunits of the NADPH oxidase complex, composed of the membrane cytochrome b558 (NOX2 associated with p22phox) and the cytosolic components (p47phox, p67phox and p40phox). Dysfunction of this enzymatic complex leads to the absence of microbicidal reactive oxygen species (ROS) and therefore to the development of recurrent and life-threatening infections in early childhood. Life-long prophylaxis is used to protect these patients, but it may be responsible for side effects. Bone marrow transplantation is the only curative treatment, but it cannot be offered to all patients, and gene therapy has not been possible up to now. There is thus a real lack of new therapies for this disease. However, to develop new therapeutic approaches, relevant physiopathological models must be available, and existing models are imperfect or missing. Thus, the goal of our work is to produce cellular and animal models of CGD to develop a new proteoliposome-based therapy. Induced pluripotent stem cells (iPSCs) are a powerful tool for physiopathological modeling due to their pluripotency and self-renewal properties. Using CGD patient-specific iPSCs reprogrammed from fibroblasts, we developed an efficient protocol for in vitro hematopoietic differentiation into neutrophils and macrophages. We showed that the phagocytic cells produced are mature and reproduce the ROS-deficient phenotype found in CGD patients. Thus, we obtained relevant cellular models for three genetic forms of CGD: X-linked CGD and the two autosomal recessive forms AR22CGD and AR47CGD. Then, we demonstrated the proof of concept of the efficacy of therapeutic proteoliposomes on X-CGD iPS-derived macrophages. Indeed, X-CGD is the main form of the disease (70% of cases) and is caused by the absence of the membrane cytochrome b558 (NOX2/p22phox). Thanks to a collaboration with the start-up Synthelis SAS, liposomes integrating the cytochrome b558 into lipid bilayers were produced in an E. coli-based cell-free protein expression system. These NOX2/p22phox liposomes were able to reconstitute a functional NADPH oxidase enzyme in vitro and to deliver the cytochrome b558 to the plasma membrane of X-CGD macrophages, restoring NADPH oxidase activity and ROS production. Finally, we proposed to generate "humanized" mouse models with a human immune system after transplantation of CD34+ hematopoietic stem cells able to engraft and reconstitute long-term hematopoiesis in immunodeficient mice. Using healthy iPSCs, we successfully produced CD34+ hematopoietic cells with in vitro hematopoietic potential; however, in vivo engraftment has not yet been confirmed. In conclusion, during this project, we produced cellular models of three genetic forms of CGD using patient-specific iPSCs. X-CGD macrophages were then used to demonstrate in vitro the efficacy of a new therapy. This "liposomal replacement enzymotherapy" could, in the future, represent a curative alternative against life-threatening lung infections refractory to conventional antibiotic and antifungal therapy.
Tron, Cécile. "Modèles quantitatifs de machines parallèles : les réseaux d'interconnexion". Grenoble INPG, 1994. http://www.theses.fr/1994INPG0179.
Texto completoKettani, Omar. "Modèles du calcul sans changement d'état : quelques développements et résultats". Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX24005.
A Turing machine reads its information from one cell marked among several others. Reading information at once from parts of two contiguous cells makes the state disappear from the algorithm: the corresponding information is instead recorded in each adjacent part of the two cells, which makes it necessary to use a larger set of symbols. The equivalence of such machines with Turing machines is proved here. The author then considers parallel machines in which many contiguous cells can be marked simultaneously and in which the travelling marks scan parts of cells all at once, and applies them to several classical problems. When marking two contiguous cells at once, it is possible to read half of the first and half of the second, a third of the first and two thirds of the second, and so on. At the limit, at the utmost shift, only two values remain in common; in this case the author again proves the equivalence of such machines with Turing machines, proves the existence of a universal machine, and establishes a relation with cellular automata.
Boisson, Jean-Charles. "Modélisation et résolution par métaheuristiques coopératives : de l'atome à la séquence protéique". Electronic Thesis or Diss., Lille 1, 2008. http://www.theses.fr/2008LIL10154.
Texto completoLn this thesis, we show the importance of the modeling and the cooperation of metaheuristics for solving real problems in Bioinformatics. Two problems are studied: the first in the Proteomics domain for the protein identification from spectral data analysis and the second in the domain of the structural analysis of molecules for the flexible molecular docking problem. So, for the first problem, a new model has been designed based on a direct comparison of a raw experimental spectrum with protein from databases. This model has been included in an identification engine by peptide mass fingerprinting called ASCQ_ME. From this model, an approach for the de novo protein sequencing problem has been proposed and validated. ln this problem, a protein sequence has to be found with only spectral information. Our model is a three step approach called SSO for Sequence, Shape and Order. After a study of each step, SSO has been implemented and tested with three metaheuristics collaborating sequentially. For the second problem, a study of new multi-objective models has been made and has allowed to design eight different models tested with parallel multi-objective genetic algorithms. Twelve configurations of genetic operators has been tested in order to prove the efficiency of the hybridizing of genetic algorithms with local searches. For each part of this work, the ParadisEO platform has been used and more particularly the ParadisEO-MO part dedicated to single solution based metaheuristics for which we have substantially contributed. All this work has been funded by the "PPF Bio-Informatique" of the "Université des Sciences et Technologies de Lille" and by the ANR Dock project
Le, Gloahec Vincent. "Un langage et une plateforme pour la définition et l’exécution de bonnes pratiques de modélisation". Lorient, 2011. http://www.theses.fr/2011LORIS239.
The most valuable asset of an IT company lies in the knowledge and know-how acquired over the years by its employees. Unfortunately, lacking means they deem appropriate, most companies do not streamline the management of such knowledge. In the field of software engineering, this knowledge is usually collected in the form of best practices documented in an informal way, which is rather unfavorable to the effective and adequate use of these practices. In this area, modeling activities have become predominant, favoring the reduction of effort and development costs. The effective implementation of best practices related to modeling activities would help improve developer productivity and the final quality of software. The objective of this thesis, carried out as part of a collaboration between the Alkante company and the VALORIA laboratory, is to provide a framework, both theoretical and practical, favoring the capitalization of best modeling practices. An approach for the management of good modeling practices (GMPs) is proposed. This approach relies on the principles of model-driven engineering (MDE) by proposing a division into two levels of abstraction: a PIM level (Platform Independent Model) dedicated to the capitalization of GMPs independently of any specific platform, ensuring the sustainability of knowledge, and a PSM level (Platform Specific Model) dedicated to the verification of compliance with GMPs in modeling tools. To ensure the capitalization of good practices (GPs), a specific language dedicated to the description of GMPs has been developed on the basis of common characteristics identified by a detailed study of two types of GPs: those focusing on process aspects, and others more focused on the style or shape of models. This language, called GooMod, is defined by its abstract syntax, represented as a MOF-compliant metamodel (MOF stands for Meta Object Facility), a description of its semantics, and a graphical concrete syntax. A platform provides the two tools necessary for both the definition of GMPs (conforming to the GooMod language) and their effective application in modeling tools. The GMP Definition Tool is a graphical editor that facilitates the description of GMPs targeting any modeling language (e.g., GMPs for UML), independently of modeling tools. The GMP Execution Tool is a PSM-level implementation specifically targeting modeling tools based on the Graphical Modeling Framework (GMF) of the Eclipse integrated development environment. During modeling activities performed by designers, this tool automates the verification of compliance with the GMPs originally described in GooMod. This work has been validated on two aspects of the proposed approach: an industrial case study illustrates the definition, using the GooMod language, of a GMP specific to the modeling of Web applications developed by the Alkante company, and an experiment evaluating the effectiveness and usability of the GMP Execution Tool was conducted among students.
Boisson, Jean-Charles. "Modélisation et résolution par métaheuristiques coopératives : de l'atome à la séquence protéique". Phd thesis, Lille 1, 2008. http://tel.archives-ouvertes.fr/tel-00842054.
Texto completoGuihal, David. "Modélisation en langage VHDL-AMS des systèmes pluridisciplinaires". Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00157570.
Texto completoRamadour, Philippe. "Modèles et langage pour la conception et la manipulation de composants réutilisables de domaine". Aix-Marseille 3, 2001. http://www.theses.fr/2001AIX30092.
Texto completoRaibon, Audrey. "Le facteur d'initiation de la traduction eIF3f dans le muscle squelettique : étude in vitro et obtention de modèles animaux". Thesis, Montpellier 1, 2013. http://www.theses.fr/2013MON1T023/document.
The eukaryotic initiation factor eIF3f is one of the subunits of the translation initiation complex eIF3 required for several steps in the initiation of mRNA translation. In skeletal muscle, recent studies have demonstrated that eIF3f overexpression in myotubes exerts a hypertrophic activity associated with an increase in protein synthesis. This thesis sheds light on muscle eIF3f functions by (i) characterizing in vitro the antiproliferative activity of this factor in C2C12 myoblasts and the RNAs recruited by eIF3f onto polysomal fractions in hypertrophied myotubes, and (ii) generating mouse strains inactivated for eIF3f (eIF3f KO mice) and overexpressing eIF3f specifically in muscle (eIF3f K5-10R transgenic mice) to study in vivo the impact of eIF3f modulation on muscle mass homeostasis.
Beaumont, Jean-François. "Adaptation non supervisée des modèles de langage pour le sous-titrage de bulletins de nouvelles". Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79997.
Texto completoNemo, Clémentine. "Construction et validation de modèles guidées par l'application idempotente de transformations". Nice, 2010. http://www.theses.fr/2010NICE4090.
Model transformations play a critical role in Model-Driven Development because they automate recurrent software development tasks. Some of these transformations are refinements of models, adding or retracting elements to produce new models conforming to additional constraints. For example, such transformations are used to integrate non-functional properties. But modifications of the resulting model can break conformity to these properties. Our challenge is to detect and restore this conformity by applying the same transformation again. In this thesis, we argue that model transformation is the key concept for validating and restoring models, and we establish a system to define idempotent transformations.
Lokpo, Brahima. "Étude de la réutilisation dans les modèles parallèles à processus communicants". Toulouse, INPT, 1992. http://www.theses.fr/1992INPT033H.
Texto completoLespagne, Christian. "Traitement statistique de modèles numériques du terrain topographique". Paris 11, 1985. http://www.theses.fr/1985PA112376.
Texto completoTissot, Régis. "Contribution à la génération automatique de tests à partir de modèles et de schémas de test comme critères de sélection dynamiques". Besançon, 2009. http://www.theses.fr/2009BESA2015.
This PhD thesis is a contribution to the design of an automatic Model-Based Testing (MBT) approach for test generation. The framework of our work is the BZ-TT (BZ-Testing Tools) technology, which allows functional tests to be generated from models written in B. The test selection criteria implemented in BZ-TT ensure structural coverage of the model of the system to validate, taking into account the data and control structures of the model. This approach does not allow tests to be generated from properties expressing dynamic behaviors of the system, such as properties based on operation sequencing. To address this problem, some works propose to involve human expertise to define "dynamic" selection criteria. Such selection criteria make it possible for the validation engineer to define strategies based on the properties and aspects of the system that he wants to validate. Our contributions explore this direction and target complementarity with respect to the tests generated from the structural coverage of the model, in order to benefit from the resources and technology previously deployed for this goal. Our first contribution is the definition of a language for the formalization of test purposes, which allows the expression of test scenarios inspired by the properties to validate on the system. This language is based on a regular-expression-like formalism and aims at describing scenarios by means of operation calls and symbolic states. We define a test generation method integrated into BZ-TT, so that these tools can take these new selection criteria into account. This way, we can reuse the techniques of symbolic animation and constraint solving of BZ-TT, and we also benefit from the functionalities for exporting and concretizing the produced tests. With this method, the only additional work for the validation engineer is to define the test schemas used as selection criteria. Our last contribution is to assess the complementarity of our method with the automatic generation of tests by structural coverage of the model. We propose a method to assess the complementarity of two test suites, based on computing the coverage, in terms of states and transitions, of an abstraction of the system by the test suites. Finally, we apply this method to three case studies (two smart card applications and the POSIX file management system), and we show the complementarity brought by the method.
Delaunay, Jérôme. "Contribution à l'analyse d'un mécanisme de répression traductionnelle conservé entre le xénope et la drosophile : identification et caractérisation du facteur protéique Bru3 de liaison à l'élément EDEN". Montpellier 1, 2004. http://www.theses.fr/2004MON1T001.
Texto completoBrun, Armelle. "Détection de thème et adaptation des modèles de langage pour la reconnaissance automatique de la parole". Nancy 1, 2003. http://www.theses.fr/2003NAN10003.
One way to improve the performance of Automatic Speech Recognition (ASR) systems consists in adapting language models to the topic of the data. In this thesis, we propose a new vocabulary selection principle, resulting in a slight improvement in performance. We also present a new topic identification method, WSIM, based on the similarity between words and topics, reaching performance similar to the state of the art. We have studied the evolution of performance when methods are combined, reaching more than 93% correct topic identification. In the framework of ASR, adapting the language model to the topic results in a large improvement in perplexity.
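A generic illustration of scoring an utterance against topics via word/topic association weights is sketched below; the weights and word segmentation are invented toy values, and this is not the WSIM measure defined in the thesis.

```python
# Sketch: topic identification by summing word/topic association weights.
# Toy weights only; real systems estimate them from topic-labelled corpora.
topic_word_weights = {
    "sport":    {"match": 0.9, "équipe": 0.8, "but": 0.7},
    "économie": {"marché": 0.9, "taux": 0.8, "croissance": 0.7},
}

def identify_topic(words):
    scores = {topic: sum(weights.get(w, 0.0) for w in words)
              for topic, weights in topic_word_weights.items()}
    return max(scores, key=scores.get), scores

print(identify_topic("le marché et les taux remontent".split()))
```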
Lamine, Elyes. "Définition d'un modèle de propriété et proposition d'un langage de spécification associé : LUSP". Montpellier 2, 2001. http://www.theses.fr/2001MON20205.
Texto completoBarbier, Guillaume. "Contribution de l'ingénierie dirigée par les modèles à la conception de modèles grande culture". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00914318.
Texto completoMaran, Abdalhmed. "Une approche formelle pour la transformation de modèles UML-XML". Versailles-St Quentin en Yvelines, 2005. http://www.theses.fr/2005VERS0007.
UML (Unified Modeling Language) offers a set of diagrams for describing the structure of objects and their behaviour. XML (eXtensible Markup Language) is a data format: it provides a language for describing the structure (schema) of documents, and it distinguishes and separates data from their schemas, their descriptions and ontologies, and their possible representations. It quickly became apparent that the two standards are complementary and that moving from one to the other is necessary. The objective of this thesis is to propose an innovative and formal approach for the transformation between UML models and XML Schemas. Such transformations are not limited to UML and XML Schemas, and we note that they predate these notations; databases are a field where conversions between different levels of abstraction abound. The popularity of a language or format (as is the case for UML and XML) is a strong motivation for developing conversions from and to it. Our approach rests on two main points: it defines the conversion rules at the meta-model level in order to transform the models, and it adopts ADTs (Abstract Data Types) both to write the transformations and to specify the models to be converted. ADTs introduce abstraction into the systems they specify and thus provide independence from implementation languages and platforms. They are based on mathematical foundations and make it possible to verify the properties both of the models we transform and of the conversions themselves. The transformation machinery we have created consists of three libraries of ADTs: a first one for specifying UML models, a second one for specifying XML Schemas, and a third one defining the transformations. We implemented this architecture in the LOTOS language. In order to integrate, into the formal transformation process, UML models generated by the various available CASE tools, we adopted the XMI format and implemented XSLT conversions of UML models from XMI to a LOTOS representation. We also programmed, in Java, a parser for transforming XML Schemas from their LOTOS representation to XML. The approach we have adopted is not exclusive to transformations from UML to XML Schemas; it can be used with other source and target models, and in particular for the reverse generation of XML Schemas from UML models. Transformation-based model engineering is an active research field, and standardization efforts are under way, illustrated by MDA (Model Driven Architecture) and QVT (Query View Transformation). Our formal transformation approach conforms to the OMG's vision of transformations and to its layered architecture; moreover, through ADTs, it brings a formalism to the specification of models and of transformations.
Dumery, Jean-Jacques. "Un langage de spécification pour la conception structurée de la commande des systèmes à évènements discrets". Châtenay-Malabry, Ecole centrale de Paris, 1999. http://www.theses.fr/1999ECAP0644.
Texto completoSéguéla, Patrick. "Construction de modèles de connaissances par analyse lingustiques de relations lexicales dans les documents techniques". Toulouse 3, 2001. http://www.theses.fr/2001TOU30210.
Texto completoSardet, Éric. "Intégration des approches modélisation conceptuelle et structuration documentaire pour la saisie, la représentation, l'échange et l'exploitation d'informations ; application aux catalogues de composants industriels". Poitiers, 1999. http://www.theses.fr/1999POIT2311.
Texto completoWatrin, David. "Formalisation des modèles d'information d'administration de réseaux à l'aide de la méthode B : Application au langage GDMO". Ecole Nationale Supérieure des télécommunications, 2001. http://www.theses.fr/2001ENST0039.