Ready-made bibliography on "Large language model"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Large language model".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online whenever the relevant details are available in the work's metadata.
Journal articles on the topic "Large language model"
B, Dhanush. "CHATBOT USING LARGE LANGUAGE MODEL". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 14, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34001.
Zhang, Chengyi, Xingyu Wang, and Ziyun Wang. "Large language model in electrocatalysis". Chinese Journal of Catalysis 59 (April 2024): 7–14. http://dx.doi.org/10.1016/s1872-2067(23)64612-1.
Sagi, Sriram. "Advancing AI: Enhancing Large Language Model Performance through GPU Optimization Techniques". International Journal of Science and Research (IJSR) 13, no. 3 (March 5, 2024): 630–33. http://dx.doi.org/10.21275/sr24309100709.
Baral, Elina, and Sagar Shrestha. "Large Vocabulary Continuous Speech Recognition for Nepali Language". International Journal of Signal Processing Systems 8, no. 4 (December 2020): 68–73. http://dx.doi.org/10.18178/ijsps.8.4.68-73.
Garg, Prerak, and Divya Beeram. "Large Language Model-Based Autonomous Agents". International Journal of Computer Trends and Technology 72, no. 5 (May 30, 2024): 151–62. http://dx.doi.org/10.14445/22312803/ijctt-v72i5p118.
Huang, Sen, Kaixiang Yang, Sheng Qi, and Rui Wang. "When large language model meets optimization". Swarm and Evolutionary Computation 90 (October 2024): 101663. http://dx.doi.org/10.1016/j.swevo.2024.101663.
Shi, Zhouxing, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, and Cho-Jui Hsieh. "Red Teaming Language Model Detectors with Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 174–89. http://dx.doi.org/10.1162/tacl_a_00639.
Aman, Mussa. "Large Language Model Based Fake News Detection". Procedia Computer Science 231 (2024): 740–45. http://dx.doi.org/10.1016/j.procs.2023.12.144.
Singh, Pranaydeep, Orphée De Clercq, and Els Lefever. "Distilling Monolingual Models from Large Multilingual Transformers". Electronics 12, no. 4 (February 18, 2023): 1022. http://dx.doi.org/10.3390/electronics12041022.
Beurer-Kellner, Luca, Marc Fischer, and Martin Vechev. "Prompting Is Programming: A Query Language for Large Language Models". Proceedings of the ACM on Programming Languages 7, PLDI (June 6, 2023): 1946–69. http://dx.doi.org/10.1145/3591300.
Pełny tekst źródłaRozprawy doktorskie na temat "Large language model"
Jiang, Yuandong. "Large Scale Distributed Semantic N-gram Language Model". Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1316200173.
Tang, Haijiang. "Building phrase based language model from large corpus". View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20TANG.
Pełny tekst źródłaIncludes bibliographical references (leaves 74-79). Also available in electronic version. Access restricted to campus users.
McGreevy, Michael. "Statistical language modelling for large vocabulary speech recognition". Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16444/1/Michael_McGreevy_Thesis.pdf.
Pełny tekst źródłaTan, Ming. "A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation". Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1386111950.
Pełny tekst źródłaSusman, Derya. "Turkish Large Vocabulary Continuous Speech Recognition By Using Limited Audio Corpus". Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614207/index.pdf.
Pełny tekst źródłaComez, Murat Ali. "Large Vocabulary Continuous Speech Recogniton For Turkish Using Htk". Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1205491/index.pdf.
Pełny tekst źródłaSagen, Markus. "Large-Context Question Answering with Cross-Lingual Transfer". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440704.
Pełny tekst źródłaUzelac, Lawrence Stevan. "A Multiple Coupled Microstrip Transmission Line Model for High-Speed VLSI Interconnect Simulation". PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4526.
Pełny tekst źródłaLabeau, Matthieu. "Neural language models : Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
Pełny tekst źródłaThis work investigates practical methods to ease training and improve performances of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, ensuring that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise-contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives from which noise contrastive estimation is a particular case. Finally, we aim at improving performances on full vocabulary language models, by augmenting output words representation with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more
Books on the topic "Large language model"
Satō, Hideto. A data model, knowledge base, and natural language processing for sharing a large statistical database. Ibaraki, Osaka, Japan: Institute of Social and Economic Research, Osaka University, 1989.
Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.
Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.
Törnberg, Petter. How to Use Large-Language Models for Text Analysis. London: SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529683707.
Bashkatov, Alexander. Modeling in OpenSCAD: examples. INFRA-M Academic Publishing LLC, 2019. http://dx.doi.org/10.12737/959073.
Pełny tekst źródłaBuild a Large Language Model (from Scratch). Manning Publications Co. LLC, 2024.
Generative AI with LangChain: Build Large Language Model Apps with Python, ChatGPT and Other LLMs. Packt Publishing Limited, 2023.
Large Language Model-Based Solutions: How to Deliver Value with Cost-Effective Generative AI Applications. John Wiley & Sons, 2024.
Znajdź pełny tekst źródłaCzęści książek na temat "Large language model"
Wu, Yonghui. "Large Language Model and Text Generation". In Cognitive Informatics in Biomedicine and Healthcare, 265–97. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-55865-8_10.
Ruiu, Dragos. "LLMs Red Teaming". In Large Language Models in Cybersecurity, 213–23. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_24.
Kucharavy, Andrei. "Overview of Existing LLM Families". In Large Language Models in Cybersecurity, 31–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_3.
Dolamic, Ljiljana. "Conversational Agents". In Large Language Models in Cybersecurity, 45–53. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_4.
Kucharavy, Andrei. "Adapting LLMs to Downstream Applications". In Large Language Models in Cybersecurity, 19–29. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_2.
Schillaci, Zachary. "On-Site Deployment of LLMs". In Large Language Models in Cybersecurity, 205–11. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_23.
Kurimo, Mikko, and Krista Lagus. "An Efficiently Focusing Large Vocabulary Language Model". In Artificial Neural Networks — ICANN 2002, 1068–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_173.
Ji, Jianchao, Zelong Li, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Juntao Tan, and Yongfeng Zhang. "GenRec: Large Language Model for Generative Recommendation". In Lecture Notes in Computer Science, 494–502. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56063-7_42.
Majumdar, Subhabrata, and Terry Vogelsang. "Towards Safe LLMs Integration". In Large Language Models in Cybersecurity, 243–47. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_27.
Majumdar, Subhabrata. "Standards for LLM Security". In Large Language Models in Cybersecurity, 225–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_25.
Pełny tekst źródłaStreszczenia konferencji na temat "Large language model"
Huang, Jiaji, Yi Li, Wei Ping, and Liang Huang. "Large Margin Neural Language Model". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1150.
Chen, Kua, Yujing Yang, Boqi Chen, José Antonio Hernández López, Gunter Mussbacher, and Dániel Varró. "Automated Domain Modeling with Large Language Models: A Comparative Study". In 2023 ACM/IEEE 26th International Conference on Model Driven Engineering Languages and Systems (MODELS). IEEE, 2023. http://dx.doi.org/10.1109/models58315.2023.00037.
Meng, Ruijie, Martin Mirchev, Marcel Böhme, and Abhik Roychoudhury. "Large Language Model guided Protocol Fuzzing". In Network and Distributed System Security Symposium. Reston, VA: Internet Society, 2024. http://dx.doi.org/10.14722/ndss.2024.24556.
Hashimoto, Tomomi. "Ethical Judgment using Large Language Model". In 2024 16th International Conference on Computer and Automation Engineering (ICCAE). IEEE, 2024. http://dx.doi.org/10.1109/iccae59995.2024.10569797.
Singh, Aditi, Saket Kumar, Abul Ehtesham, Tala Talaei Khoei, and Deepshikha Bhati. "Large Language Model-Driven Immersive Agent". In 2024 IEEE World AI IoT Congress (AIIoT). IEEE, 2024. http://dx.doi.org/10.1109/aiiot61789.2024.10578948.
Galindo, José A., Antonio J. Dominguez, Jules White, and David Benavides. "Large Language Models to generate meaningful feature model instances". In SPLC '23: 27th ACM International Systems and Software Product Line Conference. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3579027.3608973.
Stammbach, Dominik, Vilém Zouhar, Alexander Hoyle, Mrinmaya Sachan, and Elliott Ash. "Revisiting Automated Topic Model Evaluation with Large Language Models". In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.emnlp-main.581.
Zhao, James, Yuxi Xie, Kenji Kawaguchi, Junxian He, and Michael Xie. "Automatic Model Selection with Large Language Models for Reasoning". In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.55.
Xu, Austin, Will Monroe, and Klinton Bicknell. "Large language model augmented exercise retrieval for personalized language learning". In LAK '24: The 14th Learning Analytics and Knowledge Conference. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3636555.3636883.
Mysore, Sheshera, Andrew Mccallum, and Hamed Zamani. "Large Language Model Augmented Narrative Driven Recommendations". In RecSys '23: Seventeenth ACM Conference on Recommender Systems. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3604915.3608829.
Pełny tekst źródłaRaporty organizacyjne na temat "Large language model"
Seymore, Kristie, and Ronald Rosenfeld. Large-Scale Topic Detection and Language Model Adaptation. Fort Belvoir, VA: Defense Technical Information Center, June 1997. http://dx.doi.org/10.21236/ada327553.
Zhang, Hao. Large Language Model (LLM) Monthly Report (2024 Apr). ResearchHub Technologies, Inc., May 2024. http://dx.doi.org/10.55277/researchhub.0ps6xenm.
Sun, Ruiqi, and Daniel Trefler. The Impact of AI and Cross-Border Data Regulation on International Trade in Digital Services: A Large Language Model. Cambridge, MA: National Bureau of Economic Research, November 2023. http://dx.doi.org/10.3386/w31925.
Lavadenz, Magaly, Sheila Cassidy, Elvira G. Armas, Rachel Salivar, Grecya V. Lopez, and Amanda A. Ross. Sobrato Early Academic Language (SEAL) Model: Final Report of Findings from a Four-Year Study. Center for Equity for English Learners, Loyola Marymount University, 2020. http://dx.doi.org/10.15365/ceel.seal2020.
Prasad, Jayanti. Large Language Models: AI Foundations and Applications in Python. Instats Inc., 2023. http://dx.doi.org/10.61700/85rfezw01y0q9521.
Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.
Marra de Artiñano, Ignacio, Franco Riottini Depetris, and Christian Volpe Martincus. Automatic Product Classification in International Trade: Machine Learning and Large Language Models. Inter-American Development Bank, July 2023. http://dx.doi.org/10.18235/0005012.
Windsor, Callan, and Max Zang. Firms' Price-setting Behaviour: Insights from Earnings Calls. Reserve Bank of Australia, September 2023. http://dx.doi.org/10.47688/rdp2023-06.
Horton, John. Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus? Cambridge, MA: National Bureau of Economic Research, April 2023. http://dx.doi.org/10.3386/w31122.
Gluckman, Peter, and Hema Sridhar. A framework for evaluating rapidly developing digital and related technologies: AI, Large Language Models and beyond. International Science Council, October 2023. http://dx.doi.org/10.24948/2023.11.