A selection of scholarly literature on the topic "Modèles de langage protéique"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Modèles de langage protéique".
Next to every entry in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication in .pdf format and read its online abstract, provided these details are available in the metadata.
Journal articles on the topic "Modèles de langage protéique"
Delahaye, Jean-Paul. "Derrière les modèles massifs de langage." Pour la Science N° 555 – janvier, no. 1 (December 22, 2023): 80–85. http://dx.doi.org/10.3917/pls.555.0080.
Deschamps, Christophe. "Utiliser les grands modèles de langage au quotidien." Archimag 77, Hors série (September 24, 2024): 29–33. http://dx.doi.org/10.3917/arma.hs77.0029.
Gayral, Françoise, Daniel Kayser, and François Levy. "Logique et sémantique du langage naturel : modèles et interprétation." Intellectica. Revue de l'Association pour la Recherche Cognitive 23, no. 2 (1996): 303–25. http://dx.doi.org/10.3406/intel.1996.1539.
Franke, William. "Psychoanalysis as a Hermeneutics of the Subject: Freud, Ricoeur, Lacan." Dialogue 37, no. 1 (1998): 65–82. http://dx.doi.org/10.1017/s0012217300047594.
Reid, Wilfrid. "Les formes de l’expérience psychique. Une lecture de Freud revisitée." Filigrane 32, no. 1 (2024): 49–63. https://doi.org/10.7202/1114604ar.
Calin, Rodolphe. "À la charnière de l’image et du langage." Articles 41, no. 2 (November 6, 2014): 253–73. http://dx.doi.org/10.7202/1027218ar.
Godart-Wendling, Béatrice. "La philosophie du langage : une jungle de Calais pour la linguistique ?" Cahiers du Centre de Linguistique et des Sciences du Langage, no. 53 (March 4, 2018): 131–46. http://dx.doi.org/10.26034/la.cdclsl.2018.323.
Devilliers, Hélène, and Michael Tabone. "Bouche à bouche : pratiques intégratives, réanimation et développements." La psychiatrie de l'enfant Vol. 67, no. 2 (November 29, 2024): 111–16. https://doi.org/10.3917/psye.672.0111.
Devillers, Laurence. "Le langage non responsable des systèmes d’intelligence artificielle (IA) générative." Champ lacanien N° 28, no. 1 (October 2, 2024): 133–38. http://dx.doi.org/10.3917/chla.028.0133.
Hermet, Marie. "Traduction et Intelligence Artificielle." Raison présente N° 231, no. 3 (October 16, 2024): 65–74. http://dx.doi.org/10.3917/rpre.231.0065.
Повний текст джерелаДисертації з теми "Modèles de langage protéique"
Vander, Meersche Yann. "Étude de la flexibilité des protéines : analyse à grande échelle de simulations de dynamique moléculaire et prédiction par apprentissage profond." Electronic Thesis or Diss., Université Paris Cité, 2024. http://www.theses.fr/2024UNIP5147.
Proteins are essential to biological processes. Understanding their dynamics is crucial for elucidating their biological functions and interactions. However, experimentally measuring protein flexibility remains challenging due to technical limitations and associated costs. This thesis aims to deepen the understanding of protein dynamic properties and to propose computational methods for predicting their flexibility directly from their sequence. This work is organised in four main contributions: 1) Protein flexibility prediction in terms of B-factors. We have developed MEDUSA, a flexibility prediction method based on deep learning, which leverages the physicochemical and evolutionary information of amino acids to predict experimental flexibility classes from protein sequences. MEDUSA has outperformed previously available tools but shows limitations due to the variability of experimental data. 2) Large-scale analysis of in silico protein dynamics. We have released ATLAS, a database of standardised all-atom molecular dynamics simulations providing detailed information on protein flexibility for over 1.5k representative protein structures. ATLAS enables interactive analysis of protein dynamics at different levels and offers valuable insights into proteins exhibiting atypical dynamical behaviour, such as dual personality fragments. 3) An in-depth analysis of AlphaFold 2's pLDDT score and its relation to protein flexibility. We have assessed pLDDT correlation with different flexibility descriptors derived from molecular dynamics simulations and from NMR ensembles and demonstrated that confidence in 3D structure prediction does not necessarily reflect the expected flexibility of the protein region, in particular for protein fragments involved in molecular interactions. 4) Prediction of MD-derived flexibility descriptors using protein language embeddings. We introduce PEGASUS, a novel flexibility prediction tool developed using the ATLAS database. Using protein sequence encoding by protein language models and a simple deep learning model, PEGASUS provides precise predictions of flexibility metrics and effectively captures the impact of mutations on protein dynamics. The perspectives of this work include enriching simulations with varied environments and integrating membrane proteins to enhance PEGASUS and enable new analyses. We also highlight the emergence of methods capable of predicting conformational ensembles, offering promising advances for better capturing protein dynamics. This thesis offers new perspectives for the prediction and analysis of protein flexibility, paving the way for advances in areas such as biomedical research, mutation studies, and drug design.
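The general setup described in this abstract (per-residue embeddings from a protein language model fed to a lightweight deep learning model that regresses flexibility metrics such as MD-derived RMSF) can be illustrated with a minimal PyTorch sketch. The embedding dimension, network sizes, and the random tensors standing in for real protein language model outputs and RMSF targets are illustrative assumptions, not the published PEGASUS architecture.

import torch
import torch.nn as nn

class FlexibilityRegressor(nn.Module):
    """Minimal sketch: map per-residue protein language model embeddings
    of shape [batch, seq_len, embed_dim] to one flexibility value per residue."""

    def __init__(self, embed_dim: int = 1280, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, residue_embeddings: torch.Tensor) -> torch.Tensor:
        # residue_embeddings: [batch, seq_len, embed_dim] -> [batch, seq_len]
        return self.mlp(residue_embeddings).squeeze(-1)

# Toy training loop with random tensors standing in for real embeddings and targets.
batch, seq_len, embed_dim = 2, 150, 1280
embeddings = torch.randn(batch, seq_len, embed_dim)   # placeholder pLM output
target_rmsf = torch.rand(batch, seq_len)               # placeholder MD-derived RMSF

model = FlexibilityRegressor(embed_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5):                                  # a few optimisation steps
    loss = loss_fn(model(embeddings), target_rmsf)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(f"final MSE on toy data: {loss.item():.4f}")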
Hladiš, Matej. "Réseaux de neurones en graphes et modèle de langage des protéines pour révéler le code combinatoire de l'olfaction." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5024.
Mammals identify and interpret a myriad of olfactory stimuli using a complex coding mechanism involving interactions between odorant molecules and hundreds of olfactory receptors (ORs). These interactions generate unique combinations of activated receptors, called the combinatorial code, which the human brain interprets as the sensation we call smell. Until now, the vast number of possible receptor-molecule combinations has prevented a large-scale experimental study of this code and its link to odor perception. Therefore, revealing this code is crucial to answering the long-term question of how we perceive our intricate chemical environment. ORs belong to class A of G protein-coupled receptors (GPCRs) and constitute the largest known multigene family. To systematically study olfactory coding, we develop M2OR, a comprehensive database compiling the last 25 years of OR bioassays. Using this dataset, a tailored deep learning model is designed and trained. It combines the [CLS] token embedding from a protein language model with graph neural networks and multi-head attention. This model predicts the activation of ORs by odorants and reveals the resulting combinatorial code for any odorous molecule. This approach is refined by developing a novel model capable of predicting the activity of an odorant at a specific concentration, subsequently allowing the estimation of the EC50 value for any OR-odorant pair. Finally, the combinatorial codes derived from both models are used to predict the odor perception of molecules. By incorporating inductive biases inspired by olfactory coding theory, a machine learning model based on these codes outperforms the current state of the art in smell prediction. To the best of our knowledge, this is the most comprehensive and successful application of combinatorial coding to odor quality prediction. Overall, this work provides a link between complex molecule-receptor interactions and human perception.
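As a rough illustration of the architecture outlined in this abstract (a [CLS]-style receptor embedding from a protein language model combined with a graph neural network over the odorant molecule and multi-head attention), here is a minimal PyTorch sketch. All dimensions, the single hand-rolled message-passing step, and the class name are assumptions made for illustration; this is not the thesis model.

import torch
import torch.nn as nn

class ORActivationModel(nn.Module):
    """Sketch: predict whether an odorant activates an olfactory receptor from
    (a) a receptor [CLS] embedding and (b) a molecular graph of the odorant."""

    def __init__(self, receptor_dim=1024, atom_dim=32, hidden=128, heads=4):
        super().__init__()
        self.atom_proj = nn.Linear(atom_dim, hidden)
        self.receptor_proj = nn.Linear(receptor_dim, hidden)
        # The receptor representation attends over atom representations.
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                        nn.ReLU(),
                                        nn.Linear(hidden, 1))

    def message_passing(self, atom_h, adjacency):
        # One naive GNN step: each atom mixes in the mean of its neighbours' features.
        degree = adjacency.sum(-1, keepdim=True).clamp(min=1.0)
        return torch.relu(atom_h + adjacency @ atom_h / degree)

    def forward(self, receptor_cls, atom_feats, adjacency):
        # receptor_cls: [B, receptor_dim]; atom_feats: [B, N, atom_dim]; adjacency: [B, N, N]
        atoms = self.message_passing(self.atom_proj(atom_feats), adjacency)
        query = self.receptor_proj(receptor_cls).unsqueeze(1)        # [B, 1, hidden]
        context, _ = self.cross_attn(query, atoms, atoms)            # [B, 1, hidden]
        fused = torch.cat([query.squeeze(1), context.squeeze(1)], dim=-1)
        return torch.sigmoid(self.classifier(fused)).squeeze(-1)     # activation probability

# Toy forward pass with random tensors in place of real receptor/odorant encodings.
model = ORActivationModel()
prob = model(torch.randn(3, 1024),
             torch.randn(3, 12, 32),
             torch.randint(0, 2, (3, 12, 12)).float())
print(prob.shape)  # torch.Size([3])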
Alain, Pierre. "Contributions à l'évaluation des modèles de langage." Rennes 1, 2007. http://www.theses.fr/2007REN1S003.
This work deals with the evaluation of language models independently of any applicative task. A comparative study between several language models is generally tied to the role that a model plays within a complete system. Our objective is to be independent of the applicative system and thus to provide a true comparison of language models. Perplexity is a widely used criterion for comparing language models without any task assumptions. However, its main drawback is that perplexity assumes probability distributions and hence cannot compare heterogeneous models. As an evaluation framework, we went back to the definition of Shannon's game, which measures model prediction performance using rank-based statistics. Our methodology is able to predict joint word sequences independently of task or model assumptions. Experiments are carried out on French and English modeling with large vocabularies and compare different kinds of language models.
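To make the contrast in this abstract concrete: perplexity needs a normalised probability distribution over the vocabulary, whereas a Shannon-game style evaluation only needs the rank the model assigns to the word that actually occurs, so it can compare heterogeneous models. The toy unigram example below is an illustrative Python sketch, not the evaluation protocol of the thesis.

import math
from collections import Counter

corpus = "the cat sat on the mat and the dog sat on the rug".split()
test = "the cat sat on the rug".split()

# Toy unigram "language model": P(w) estimated from corpus counts with add-one smoothing.
counts = Counter(corpus)
vocab = sorted(counts)
total = sum(counts.values())
prob = {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

# Perplexity: exponential of the average negative log-probability of the test words.
log_prob = sum(math.log(prob[w]) for w in test)
perplexity = math.exp(-log_prob / len(test))

# Shannon-game style statistic: rank of each observed word when the model's
# predictions are sorted from most to least probable (1 = best guess).
ranking = sorted(vocab, key=lambda w: prob[w], reverse=True)
ranks = [ranking.index(w) + 1 for w in test]
mean_rank = sum(ranks) / len(ranks)

print(f"perplexity = {perplexity:.2f}, mean rank = {mean_rank:.2f}")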
Delot, Thierry. "Interrogation d'annuaires étendus : modèles, langage et optimisation." Versailles-St Quentin en Yvelines, 2001. http://www.theses.fr/2001VERS0028.
Повний текст джерелаOota, Subba Reddy. "Modèles neurocomputationnels de la compréhension du langage : caractérisation des similarités et des différences entre le traitement cérébral du langage et les modèles de langage." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0080.
This thesis explores the synergy between artificial intelligence (AI) and cognitive neuroscience to advance language processing capabilities. It builds on the insight that breakthroughs in AI, such as convolutional neural networks and mechanisms like experience replay, often draw inspiration from neuroscientific findings. This interconnection is beneficial in language, where a deeper comprehension of uniquely human cognitive abilities, such as processing complex linguistic structures, can pave the way for more sophisticated language processing systems. The emergence of rich naturalistic neuroimaging datasets (e.g., fMRI, MEG) alongside advanced language models opens new pathways for aligning computational language models with human brain activity. However, the challenge lies in discerning which model features best mirror the language comprehension processes in the brain, underscoring the importance of integrating biologically inspired mechanisms into computational models. In response to this challenge, the thesis introduces a data-driven framework bridging the gap between neurolinguistic processing observed in the human brain and the computational mechanisms of natural language processing (NLP) systems. By establishing a direct link between advanced imaging techniques and NLP processes, it conceptualizes brain information processing as a dynamic interplay of three critical components: "what," "where," and "when," offering insights into how the brain interprets language during engagement with naturalistic narratives. This study provides compelling evidence that enhancing the alignment between brain activity and NLP systems offers mutual benefits to the fields of neurolinguistics and NLP. The research showcases how these computational models can emulate the brain’s natural language processing capabilities by harnessing cutting-edge neural network technologies across various modalities—language, vision, and speech. Specifically, the thesis highlights how modern pretrained language models achieve closer brain alignment during narrative comprehension. It investigates the differential processing of language across brain regions, the timing of responses (Hemodynamic Response Function (HRF) delays), and the balance between syntactic and semantic information processing. Further, it explores how different linguistic features align with MEG brain responses over time and finds that the alignment depends on the amount of past context, indicating that the brain encodes words slightly behind the current one, awaiting more future context. Furthermore, it highlights grounded language acquisition through noisy supervision and offers a biologically plausible architecture for investigating cross-situational learning, providing interpretability, generalizability, and computational efficiency in sequence-based models. Ultimately, this research contributes valuable insights into neurolinguistics, cognitive neuroscience, and NLP.
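The core alignment methodology alluded to here (fitting a regularised linear encoding model from language-model-derived stimulus features to recorded brain responses, with several time-lagged copies of the features to absorb hemodynamic delay) can be sketched in Python as follows. The data shapes, the simulated data, and the finite-impulse-response style lagging are illustrative assumptions, not the exact pipeline of the thesis.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trs, n_features, n_voxels = 400, 50, 200
features = rng.standard_normal((n_trs, n_features))    # e.g. LM embeddings per fMRI TR
bold = rng.standard_normal((n_trs, n_voxels))           # simulated voxel time courses

def add_hrf_lags(x, lags=(1, 2, 3, 4)):
    """Stack time-shifted copies of the stimulus features so the linear model
    can pick up responses delayed by the hemodynamic response function (HRF)."""
    shifted = [np.roll(x, lag, axis=0) for lag in lags]
    for arr, lag in zip(shifted, lags):
        arr[:lag] = 0.0          # zero out samples rolled in from the end
    return np.hstack(shifted)

X = add_hrf_lags(features)
X_train, X_test, y_train, y_test = train_test_split(X, bold, test_size=0.2, shuffle=False)

# One ridge model fitted jointly over all voxels; alpha chosen by cross-validation.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
pred = encoder.predict(X_test)

# "Brain alignment" score per voxel: correlation between predicted and held-out BOLD.
scores = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxelwise correlation: {np.mean(scores):.3f}")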
Chauveau, Dominique. "Étude d'une extension du langage synchrone SIGNAL aux modèles probabilistes : le langage SIGNalea." Rennes 1, 1996. http://www.theses.fr/1996REN10110.
Fleurey, Franck. "Langage et méthode pour une ingénierie des modèles fiable." PhD thesis, Université Rennes 1, 2006. http://tel.archives-ouvertes.fr/tel-00538288.
Laborde-Huguet, Bénédicte. "Recherche sur les mécanismes moléculaires de l'instabilité protéique des vins blancs." Bordeaux 2, 2006. http://www.theses.fr/2006BOR21381.
Soluble proteins of white wines are heat-unstable and can precipitate during storage. We propose a new reaction model for protein haze formation, which involves not only protein denaturation but, above all, the participation of non-proteinaceous compounds. These molecules, probably localized in grape skin, are present in must and wine as precursors. Heat transforms these molecules into active factors able to react with proteins, probably through ionic interactions. A purification chain showed that these molecules do not seem to belong to the following molecular families: phenolic compounds, aldehydes and ketones, and probably peptides.
LABAT, GILLES. "Modélisation d'hémoprotéines, cytochrome P-450, chloroperoxydase et lignine peroxydase : modèles efficaces de la lignine peroxydase et développement de procédés d'oxydation par catalyse biomimétique." Toulouse 3, 1989. http://www.theses.fr/1989TOU30175.
Lopes, Marcos. "Modèles inductifs de la sémiotique textuelle." Paris 10, 2002. http://www.theses.fr/2002PA100145.
Повний текст джерелаКниги з теми "Modèles de langage protéique"
Laughton, Stephen. Le Courrier des affaires en anglais: 50 modèles de lettres. Alleur (Belgique): Marabout, 1989.
Barbier, Franck. UML 2 et MDE: Ingénierie des modèles avec études de cas. Paris: Dunod, 2005.
Novák, Vilém. The alternative mathematical model of linguistic semantics and pragmatics. New York: Plenum, 1992.
Novák, Vilém. The alternative mathematical model of linguistic semantics and pragmatics. New York: Plenum Press, 1992.
Grand, Mark. Patterns in Java. New York: John Wiley & Sons, Ltd., 2002.
Christopher, Gardner, ed. Financial modelling in Python. Chichester, West Sussex: John Wiley & Sons, 2009.
Schmid, Alexander, and Eberhard Wolff, eds. Server component patterns: Component infrastructures illustrated with EJB. Hoboken, NJ: J. Wiley, 2002.
Jeffrey, Conklin E., and Hill Jane Anne Collins, eds. From schema theory to language. New York: Oxford University Press, 1987.
Book chapters on the topic "Modèles de langage protéique"
Tabet, Emmanuelle. "Un langage «bouleversé comme le cœur»: conversion religieuse et conversion littéraire chez Chateaubriand." In Dynamiques de conversion: modèles et résistances, 151–59. Turnhout: Brepols Publishers, 2012. http://dx.doi.org/10.1484/m.behe-eb.4.00304.
Marot, Patrick. "Deux modèles métaphysiques de la théorie littéraire." In Interactions dans les Sciences du Langage. Interactions disciplinaires dans les Études littéraires, 259–69. Београд: Универзитет у Београду, Филолошки факултет, 2019. http://dx.doi.org/10.18485/efa.2019.11.ch19.
FERET, Jérôme. "Analyses des motifs accessibles dans les modèles Kappa." In Approches symboliques de la modélisation et de l’analyse des systèmes biologiques, 337–98. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9029.ch9.
Bonin, Patrick. "Chapitre 5. Modèles de la production verbale de mots." In Psychologie du langage, 249–305. De Boeck Supérieur, 2013. http://dx.doi.org/10.3917/dbu.bonin.2013.01.0249.
VERGARA, ANGIE RIVERA, FRANCE BEAUREGARD, and NATHALIE S. TRÉPANIER. "La classe de langage." In Des modèles de service pour favoriser l'intégration scolaire, 103–30. Presses de l'Université du Québec, 2010. http://dx.doi.org/10.2307/j.ctv18pgrj2.9.
Vergara, Angie Rivera, France Beauregard, and Nathalie S. Trépanier. "La classe de langage." In Des modèles de service pour favoriser l'intégration scolaire, 103–30. Presses de l'Université du Québec, 2010. http://dx.doi.org/10.1515/9782760525269-007.
Nespoulous, Jean-Luc. "11. La « mise en mots » ... De la phrase au discours : modèles psycholinguistiques et pathologie du langage." In Langage et aphasie, 251. De Boeck Supérieur, 1993. http://dx.doi.org/10.3917/dbu.eusta.1993.01.0251.
FLEURY SOARES, Gustavo, and Induraj PUDHUPATTU RAMAMURTHY. "Comparaison de modèles d’apprentissage automatique et d’apprentissage profond." In Optimisation et apprentissage, 153–71. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch6.
Radu-Lefebvre, Miruna, and Eric Michaël Laviolette. "Chapitre 12. L’impact des modèles de rôle positifs et négatifs, selon l’activation d’un but de promotion versus prévention." In Psychologie sociale, communication et langage, 217–37. De Boeck Supérieur, 2011. http://dx.doi.org/10.3917/dbu.caste.2011.01.0217.
Martin, Serge. "La voix comme sujet-relation : de la transmission des modèles de langue aux relations de voix." In Sens de la langue. Sens du langage, 127–38. Presses Universitaires de Bordeaux, 2011. http://dx.doi.org/10.4000/books.pub.8072.