A ready bibliography on the topic "Protein language models"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Protein language models".
An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.
Journal articles on the topic "Protein language models"
Tang, Lin. "Protein language models using convolutions". Nature Methods 21, no. 4 (April 2024): 550. http://dx.doi.org/10.1038/s41592-024-02252-3.
Ali, Sarwan, Prakash Chourasia, and Murray Patterson. "When Protein Structure Embedding Meets Large Language Models". Genes 15, no. 1 (23 December 2023): 25. http://dx.doi.org/10.3390/genes15010025.
Ferruz, Noelia, and Birte Höcker. "Controllable protein design with language models". Nature Machine Intelligence 4, no. 6 (June 2022): 521–32. http://dx.doi.org/10.1038/s42256-022-00499-z.
Li, Xiang, Zhuoyu Wei, Yueran Hu, and Xiaolei Zhu. "GraphNABP: Identifying nucleic acid-binding proteins with protein graphs and protein language models". International Journal of Biological Macromolecules 280 (November 2024): 135599. http://dx.doi.org/10.1016/j.ijbiomac.2024.135599.
Singh, Arunima. "Protein language models guide directed antibody evolution". Nature Methods 20, no. 6 (June 2023): 785. http://dx.doi.org/10.1038/s41592-023-01924-w.
Tran, Chau, Siddharth Khadkikar, and Aleksey Porollo. "Survey of Protein Sequence Embedding Models". International Journal of Molecular Sciences 24, no. 4 (14 February 2023): 3775. http://dx.doi.org/10.3390/ijms24043775.
Pokharel, Suresh, Pawel Pratyush, Hamid D. Ismail, Junfeng Ma, and Dukka B. KC. "Integrating Embeddings from Multiple Protein Language Models to Improve Protein O-GlcNAc Site Prediction". International Journal of Molecular Sciences 24, no. 21 (6 November 2023): 16000. http://dx.doi.org/10.3390/ijms242116000.
Weissenow, Konstantin, and Burkhard Rost. "Are protein language models the new universal key?" Current Opinion in Structural Biology 91 (April 2025): 102997. https://doi.org/10.1016/j.sbi.2025.102997.
Wang, Wenkai, Zhenling Peng, and Jianyi Yang. "Single-sequence protein structure prediction using supervised transformer protein language models". Nature Computational Science 2, no. 12 (19 December 2022): 804–14. http://dx.doi.org/10.1038/s43588-022-00373-3.
Kenlay, Henry, Frédéric A. Dreyer, Aleksandr Kovaltsuk, Dom Miketa, Douglas Pires, and Charlotte M. Deane. "Large scale paired antibody language models". PLOS Computational Biology 20, no. 12 (6 December 2024): e1012646. https://doi.org/10.1371/journal.pcbi.1012646.
Doctoral dissertations on the topic "Protein language models"
Meynard, Barthélémy. "Language Models towards Conditional Generative Models of Proteins Sequences". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS195.
This thesis explores the intersection of artificial intelligence (AI) and biology, focusing on how generative models can innovate in protein sequence design. Our research unfolds in three distinct yet interconnected stages, each building on the insights of the previous one to enhance the models' applicability and performance in protein engineering. We begin by examining what makes a generative model effective for protein sequences. In our first study, "Interpretable Pairwise Distillations for Generative Protein Sequence Models", we compare complex neural network models to simpler, pairwise distribution models. This comparison shows that deep learning strategies mainly model second-order interactions, highlighting the fundamental role of these interactions in modeling protein families. In the second part, we extend this principle of second-order interactions to inverse folding. We explore structure conditioning in "Uncovering Sequence Diversity from a Known Protein Structure", where we present InvMSAFold, a method that produces diverse protein sequences designed to fold into a specific structure. This approach combines two traditions of protein modeling: MSA-based models, which aim to capture the entire fitness landscape, and inverse-folding models, which focus on recovering one specific sequence. It is a first step towards conditioning the fitness landscape on the protein's final structure during design, enabling the generation of sequences that are not only diverse but also maintain their intended structural integrity. Finally, we delve into sequence conditioning with "Generating Interacting Protein Sequences using Domain-to-Domain Translation". This study introduces a novel approach to generating protein sequences that interact with specific partner proteins. By treating the task as a translation problem, similar to methods used in language processing, we create sequences with intended functionalities. Furthermore, we address the critical challenge of predicting T-cell receptor (TCR) and epitope interactions in "TULIP—a Transformer based Unsupervised Language model for Interacting Peptides and T-cell receptors". This study introduces an unsupervised learning approach that accurately predicts TCR-epitope binding, overcoming the limitations in data quality and training bias inherent in previous models. These advances underline the potential of sequence conditioning for creating functionally specific, interaction-aware protein designs.
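To make the notion of a pairwise (second-order) model concrete, here is a minimal sketch of the fields-plus-couplings (Potts-style) scorer that the first study distills deep generative models into; the random parameters, shapes, and names are illustrative assumptions, not code from the thesis. In a trained model, the fields and couplings would be fitted to a multiple sequence alignment of the protein family.

import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY-"  # 20 amino acids plus a gap symbol
Q, L = len(AA), 64            # alphabet size and example sequence length

rng = np.random.default_rng(0)
fields = rng.normal(0.0, 0.1, size=(L, Q))            # h_i(a): per-site preferences
couplings = rng.normal(0.0, 0.01, size=(L, L, Q, Q))  # J_ij(a, b): pairwise terms

def pairwise_energy(seq: str) -> float:
    """Energy of a sequence under the fields + couplings model (lower = more probable)."""
    idx = [AA.index(a) for a in seq]
    energy = -sum(fields[i, a] for i, a in enumerate(idx))
    for i in range(L):
        for j in range(i + 1, L):
            energy -= couplings[i, j, idx[i], idx[j]]
    return energy

example = "".join(rng.choice(list(AA[:20]), size=L))
print(f"E(example) = {pairwise_energy(example):.3f}")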
Vander Meersche, Yann. "Étude de la flexibilité des protéines : analyse à grande échelle de simulations de dynamique moléculaire et prédiction par apprentissage profond". Electronic Thesis or Diss., Université Paris Cité, 2024. http://www.theses.fr/2024UNIP5147.
Proteins are essential to biological processes, and understanding their dynamics is crucial for elucidating their biological functions and interactions. However, experimentally measuring protein flexibility remains challenging due to technical limitations and associated costs. This thesis aims to deepen the understanding of protein dynamic properties and to propose computational methods for predicting flexibility directly from sequence. The work is organised around four main contributions. 1) Protein flexibility prediction in terms of B-factors: we developed MEDUSA, a deep-learning flexibility prediction method that leverages the physicochemical and evolutionary information of amino acids to predict experimental flexibility classes from protein sequences. MEDUSA outperformed previously available tools but is limited by the variability of experimental data. 2) Large-scale analysis of in silico protein dynamics: we released ATLAS, a database of standardised all-atom molecular dynamics simulations providing detailed information on protein flexibility for over 1,500 representative protein structures. ATLAS enables interactive analysis of protein dynamics at different levels and offers valuable insights into proteins exhibiting atypical dynamical behaviour, such as dual-personality fragments. 3) An in-depth analysis of AlphaFold 2's pLDDT score and its relation to protein flexibility: we assessed the correlation of pLDDT with flexibility descriptors derived from molecular dynamics simulations and from NMR ensembles, and demonstrated that confidence in 3D structure prediction does not necessarily reflect the expected flexibility of a protein region, in particular for protein fragments involved in molecular interactions. 4) Prediction of MD-derived flexibility descriptors using protein language embeddings: we introduce PEGASUS, a novel flexibility prediction tool developed using the ATLAS database. Encoding protein sequences with protein language models and using a simple deep learning model, PEGASUS provides precise predictions of flexibility metrics and effectively captures the impact of mutations on protein dynamics. Perspectives of this work include enriching the simulations with varied environments and integrating membrane proteins to enhance PEGASUS and enable new analyses. We also highlight the emergence of methods capable of predicting conformational ensembles, offering promising advances for better capturing protein dynamics. This thesis offers new perspectives for the prediction and analysis of protein flexibility, paving the way for advances in areas such as biomedical research, mutation studies, and drug design.
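As a rough illustration of the pipeline PEGASUS is described as using, the sketch below extracts per-residue embeddings with a public ESM-2 checkpoint (via the Hugging Face transformers library) and feeds them to a small, untrained regression head standing in for a flexibility predictor; the checkpoint choice and head architecture are assumptions for illustration, not the thesis implementation.

import torch
from transformers import AutoTokenizer, EsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
encoder = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

# Small regression head: one flexibility score (e.g., RMSF-like) per residue.
head = torch.nn.Sequential(
    torch.nn.Linear(encoder.config.hidden_size, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # (1, length + 2, hidden)

scores = head(embeddings)[0, 1:-1, 0]  # drop the CLS/EOS positions
print(scores.shape)                    # torch.Size([33]): one score per residue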
Hladiš, Matej. "Réseaux de neurones en graphes et modèle de langage des protéines pour révéler le code combinatoire de l'olfaction". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5024.
Mammals identify and interpret a myriad of olfactory stimuli using a complex coding mechanism involving interactions between odorant molecules and hundreds of olfactory receptors (ORs). These interactions generate unique combinations of activated receptors, called the combinatorial code, which the human brain interprets as the sensation we call smell. Until now, the vast number of possible receptor-molecule combinations has prevented a large-scale experimental study of this code and its link to odor perception. Revealing this code is therefore crucial to answering the long-standing question of how we perceive our intricate chemical environment. ORs belong to class A of the G protein-coupled receptors (GPCRs) and constitute the largest known multigene family. To systematically study olfactory coding, we developed M2OR, a comprehensive database compiling the last 25 years of OR bioassays. Using this dataset, we designed and trained a tailored deep learning model that combines the [CLS] token embedding from a protein language model with graph neural networks and multi-head attention. The model predicts the activation of ORs by odorants and reveals the resulting combinatorial code for any odorous molecule. We refined this approach by developing a novel model capable of predicting the activity of an odorant at a specific concentration, which in turn allows the EC50 value to be estimated for any OR-odorant pair. Finally, the combinatorial codes derived from both models are used to predict the odor perception of molecules. By incorporating inductive biases inspired by olfactory coding theory, a machine learning model based on these codes outperforms the current state of the art in smell prediction. To the best of our knowledge, this is the most comprehensive and successful application of combinatorial coding to odor quality prediction. Overall, this work provides a link between complex molecule-receptor interactions and human perception.
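The fusion step described above, in which a receptor's [CLS] embedding from a protein language model is combined with an odorant representation through multi-head attention, could look roughly like the sketch below; the dimensions, the randomly stubbed GNN atom features, and the linear classifier are assumptions for illustration, not the model from the thesis.

import torch

D = 256                                # shared embedding width (assumed)
receptor_cls = torch.randn(1, 1, D)    # [CLS] vector from a protein language model
odorant_atoms = torch.randn(1, 12, D)  # per-atom features from a graph neural network (stubbed)

attention = torch.nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
classifier = torch.nn.Linear(D, 1)

# The receptor embedding attends over the odorant's atoms; the pooled
# context vector drives the activation prediction.
context, _ = attention(query=receptor_cls, key=odorant_atoms, value=odorant_atoms)
p_active = torch.sigmoid(classifier(context[:, 0]))
print(f"P(activation) = {p_active.item():.3f}")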
Books on the topic "Protein language models"
Beatriz, Solís Leree, ed. La Ley televisa y la lucha por el poder en México. México, D.F.: Universidad Autónoma Metropolitana, Unidad Xochimilco, 2009.
Yoshikawa, Saeko. William Wordsworth and Modern Travel. Liverpool University Press, 2020. http://dx.doi.org/10.3828/liverpool/9781789621181.001.0001.
Hardiman, David. The Nonviolent Struggle for Indian Freedom, 1905-19. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190920678.001.0001.
McNally, Michael D. Defend the Sacred. Princeton University Press, 2020. http://dx.doi.org/10.23943/princeton/9780691190907.001.0001.
Meddings, Jennifer, Vineet Chopra, and Sanjay Saint. Preventing Hospital Infections. 2nd ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/med/9780197509159.001.0001.
Halvorsen, Tar, and Peter Vale. One World, Many Knowledges: Regional experiences and cross-regional links in higher education. African Minds, 2016. http://dx.doi.org/10.47622/978-0-620-55789-4.
Pełny tekst źródłaCzęści książek na temat "Protein language models"
Xu, Yaoyao, Xinjian Zhao, Xiaozhuang Song, Benyou Wang, and Tianshu Yu. "Boosting Protein Language Models with Negative Sample Mining". In Lecture Notes in Computer Science, 199–214. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70381-2_13.
Zhao, Junming, Chao Zhang, and Yunan Luo. "Contrastive Fitness Learning: Reprogramming Protein Language Models for Low-N Learning of Protein Fitness Landscape". In Lecture Notes in Computer Science, 470–74. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-1-0716-3989-4_55.
Pratyush, Pawel, Suresh Pokharel, Hamid D. Ismail, Soufia Bahmani, and Dukka B. KC. "LMPTMSite: A Platform for PTM Site Prediction in Proteins Leveraging Transformer-Based Protein Language Models". In Methods in Molecular Biology, 261–97. New York, NY: Springer US, 2024. http://dx.doi.org/10.1007/978-1-0716-4196-5_16.
Ghazikhani, Hamed, and Gregory Butler. "A Study on the Application of Protein Language Models in the Analysis of Membrane Proteins". In Distributed Computing and Artificial Intelligence, Special Sessions, 19th International Conference, 147–52. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-23210-7_14.
Wu, Tianqi, Weihang Cheng, and Jianlin Cheng. "Improving Protein Secondary Structure Prediction by Deep Language Models and Transformer Networks". In Methods in Molecular Biology, 43–53. New York, NY: Springer US, 2024. http://dx.doi.org/10.1007/978-1-0716-4196-5_3.
Zeng, Shuai, Duolin Wang, Lei Jiang, and Dong Xu. "Prompt-Based Learning on Large Protein Language Models Improves Signal Peptide Prediction". In Lecture Notes in Computer Science, 400–405. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-1-0716-3989-4_40.
Fernández, Diego, Álvaro Olivera-Nappa, Roberto Uribe-Paredes, and David Medina-Ortiz. "Exploring Machine Learning Algorithms and Protein Language Models Strategies to Develop Enzyme Classification Systems". In Bioinformatics and Biomedical Engineering, 307–19. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34953-9_24.
Paaß, Gerhard, and Sven Giesselbach. "Foundation Models for Speech, Images, Videos, and Control". In Artificial Intelligence: Foundations, Theory, and Algorithms, 313–82. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-23190-2_7.
Shan, Kaixuan, Xiankun Zhang, and Chen Song. "Prediction of Protein-DNA Binding Sites Based on Protein Language Model and Deep Learning". In Advanced Intelligent Computing in Bioinformatics, 314–25. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5692-6_28.
Matsiunova, Antonina. "Semantic opposition of US versus THEM in late 2020 Russian-language Belarusian discourse". In Protest in Late Modern Societies, 42–55. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003270065-4.
Pełny tekst źródłaStreszczenia konferencji na temat "Protein language models"
Amjad, Maheera, Ayesha Munir, Usman Zia, and Rehan Zafar Paracha. "Pre-trained Language Models for Decoding Protein Language: a Survey". In 2024 4th International Conference on Digital Futures and Transformative Technologies (ICoDT2), 1–12. IEEE, 2024. http://dx.doi.org/10.1109/icodt262145.2024.10740205.
Liu, Xu, Yiming Li, Fuhao Zhang, Ruiqing Zheng, Fei Guo, Min Li, and Min Zeng. "ComLMEss: Combining multiple protein language models enables accurate essential protein prediction". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 67–72. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822205.
Sun, Xin, and Yuhao Wu. "Combinative Bio-feature Proteins Generation via Pre-trained Protein Large Language Models". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 99–104. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10821839.
Liang, Po-Yu, Xueting Huang, Tibo Duran, Andrew J. Wiemer, and Jun Bai. "Exploring Latent Space for Generating Peptide Analogs Using Protein Language Models". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 842–47. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10821777.
Jiang, Yanfeng, Ning Sun, Zhengxian Lu, Shuang Peng, Yi Zhang, Fei Yang, and Tao Li. "MEFold: Memory-Efficient Optimization for Protein Language Models via Chunk and Quantization". In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651470.
Kim, Yunsoo. "Foundation Model for Biomedical Graphs: Integrating Knowledge Graphs and Protein Structures to Large Language Models". In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), 346–55. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-srw.30.
Shen, Yiqing, Zan Chen, Michail Mamalakis, Luhan He, Haiyang Xia, Tianbin Li, Yanzhou Su, Junjun He, and Yu Guang Wang. "A Fine-tuning Dataset and Benchmark for Large Language Models for Protein Understanding". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2390–95. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10821894.
Zhang, Jun, Zhiqiang Yan, Hao Zeng, and Zexuan Zhu. "PAIR: protein-aptamer interaction prediction based on language models and contrastive learning framework". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 5426–32. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822859.
Engel, Ryan, and Gilchan Park. "Evaluating Large Language Models for Predicting Protein Behavior under Radiation Exposure and Disease Conditions". In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, 427–39. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.bionlp-1.34.
Tsutaoka, Takuya, Noriji Kato, Toru Nishino, Yuanzhong Li, and Masahito Ohue. "Predicting Antibody Stability pH Values from Amino Acid Sequences: Leveraging Protein Language Models for Formulation Optimization". In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 240–43. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822009.
Pełny tekst źródłaRaporty organizacyjne na temat "Protein language models"
Wu, Jyun-Jie. Improving Predictive Efficiency and Literature Quality Assessment for Lung Cancer Complications Post-Proton Therapy Through Large Language Models and Meta-Analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, August 2024. http://dx.doi.org/10.37766/inplasy2024.8.0103.
Shani, Uri, Lynn Dudley, Alon Ben-Gal, Menachem Moshelion, and Yajun Wu. Root Conductance, Root-soil Interface Water Potential, Water and Ion Channel Function, and Tissue Expression Profile as Affected by Environmental Conditions. United States Department of Agriculture, October 2007. http://dx.doi.org/10.32747/2007.7592119.bard.
Melnyk, Iurii. JUSTIFICATION OF OCCUPATION IN GERMAN (1938) AND RUSSIAN (2014) MEDIA: SUBSTITUTION OF AGGRESSOR AND VICTIM. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11101.
Yatsymirska, Mariya. KEY IMPRESSIONS OF 2020 IN JOURNALISTIC TEXTS. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11107.
Or, Etti, David Galbraith, and Anne Fennell. Exploring mechanisms involved in grape bud dormancy: Large-scale analysis of expression reprogramming following controlled dormancy induction and dormancy release. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7587232.bard.