Journal articles on the topic "Large language models"

To see other types of publications on this topic, follow the link: Large language models.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Large language models."

Next to each work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the corresponding details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Cerf, Vinton G. "Large Language Models." Communications of the ACM 66, no. 8 (July 25, 2023): 7. http://dx.doi.org/10.1145/3606337.

2

Sharma Shria Verma, Dhananjai. "Automated Penetration Testing using Large Language Models." International Journal of Science and Research (IJSR) 13, no. 4 (April 5, 2024): 1826–31. http://dx.doi.org/10.21275/sr24427043741.

3

Mishra, Vinaytosh. "Large Language Models in Medical Education and Quality Concerns." Journal of Quality in Health Care & Economics 6, no. 1 (2023): 1–3. http://dx.doi.org/10.23880/jqhe-16000319.

4

Jain, Migul. "Future of Interacting with Computers and Large Language Models." International Journal of Science and Research (IJSR) 12, no. 10 (October 5, 2023): 1711–12. http://dx.doi.org/10.21275/sr231023121603.

5

Noever, David. "LARGE LANGUAGE MODELS FOR CIPHERS." International Journal of Artificial Intelligence & Applications 14, no. 03 (May 28, 2023): 1–20. http://dx.doi.org/10.5121/ijaia.2023.14301.

Abstract:
This study investigates whether transformer models like ChatGPT (GPT4, MAR2023) can generalize beyond their training data by examining their performance on the novel Cipher Dataset, which scrambles token order. The dataset consists of 654 test cases, and the analysis focuses on 51 text examples and 13 algorithmic choices. Results show that the models perform well on low-difficulty ciphers like Caesar and can unscramble tokens in 77% of the cipher examples. Despite their reliance on training data, the models' ability to generalize outside of token order is surprising, especially when leveraging large-scale models with hundreds of billions of weights and a comprehensive text corpus with few examples. The original contributions of the work focus on presenting a cipher challenge dataset and then scoring historically significant ciphers for large language models to descramble. The real challenge for these generational models lies in executing the complex algorithmic steps on new cipher inputs, potentially as a novel reasoning challenge that relies less on knowledge acquisition and more on trial-and-error or out-of-bounds responses.
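
As a concrete illustration of the "low-difficulty" Caesar case mentioned above, here is a minimal Python sketch of the cipher itself; it is an illustration only and does not reproduce the Cipher Dataset or its scoring.

    # A minimal Caesar cipher, the "low-difficulty" case named in the abstract.
    import string

    def caesar(text: str, shift: int) -> str:
        """Shift each letter by `shift` positions, leaving other characters untouched."""
        lower, upper = string.ascii_lowercase, string.ascii_uppercase
        k = shift % 26
        table = str.maketrans(lower + upper, lower[k:] + lower[:k] + upper[k:] + upper[:k])
        return text.translate(table)

    ciphertext = caesar("Large language models", 3)   # 'Odujh odqjxdjh prghov'
    plaintext = caesar(ciphertext, -3)                # shifts back to the original text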
6

D’Alessandro, William, Harry R. Lloyd, and Nathaniel Sharadin. "Large Language Models and Biorisk." American Journal of Bioethics 23, no. 10 (October 3, 2023): 115–18. http://dx.doi.org/10.1080/15265161.2023.2250333.

7

Shanahan, Murray. "Talking about Large Language Models." Communications of the ACM 67, no. 2 (January 25, 2024): 68–79. http://dx.doi.org/10.1145/3624724.

Abstract:
Interacting with a contemporary LLM-based conversational agent can create an illusion of being in the presence of a thinking creature. Yet, in their very nature, such systems are fundamentally not like us.
8

Cheon, Hyundeuk. "Do Large Language Models Understand?" CHUL HAK SA SANG : Journal of Philosophical Ideas 90 (November 30, 2023): 75–105. http://dx.doi.org/10.15750/chss.90.202311.003.

9

Veres, Csaba. "Large Language Models are Not Models of Natural Language: They are Corpus Models." IEEE Access 10 (2022): 61970–79. http://dx.doi.org/10.1109/access.2022.3182505.

10

Ross, Angela, Kathleen McGrow, Degui Zhi, and Laila Rasmy. "Foundation Models, Generative AI, and Large Language Models." CIN: Computers, Informatics, Nursing 42, no. 5 (May 2024): 377–87. http://dx.doi.org/10.1097/cin.0000000000001149.

Abstract:
We are in a booming era of artificial intelligence, particularly with the increased availability of technologies that can help generate content, such as ChatGPT. Healthcare institutions are discussing or have started utilizing these innovative technologies within their workflow. Major electronic health record vendors have begun to leverage large language models to process and analyze vast amounts of clinical natural language text, performing a wide range of tasks in healthcare settings to help alleviate clinicians' burden. Although such technologies can be helpful in applications such as patient education, drafting responses to patient questions and emails, medical record summarization, and medical research facilitation, there are concerns about the tools' readiness for use within the healthcare domain and acceptance by the current workforce. The goal of this article is to provide nurses with an understanding of the currently available foundation models and artificial intelligence tools, enabling them to evaluate the need for such tools and assess how they can impact current clinical practice. This will help nurses efficiently assess, implement, and evaluate these tools to ensure these technologies are ethically and effectively integrated into healthcare systems, while also rigorously monitoring their performance and impact on patient care.
11

y Arcas, Blaise Agüera. "Do Large Language Models Understand Us?" Daedalus 151, no. 2 (2022): 183–97. http://dx.doi.org/10.1162/daed_a_01909.

Abstract:
Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. It is sometimes claimed, though, that machine learning is “just statistics,” hence that, in this grander ambition, progress in AI is illusory. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics do amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind. Complex sequence learning and social interaction may be a sufficient basis for general intelligence, including theory of mind and consciousness. Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who,” but for many people, neural nets running on computers are likely to cross this threshold in the very near future.
12

Mochihashi, Daichi. "Large Language Models (LLM) and Robotics." Journal of the Robotics Society of Japan 40, no. 10 (2022): 863–66. http://dx.doi.org/10.7210/jrsj.40.863.

13

Long, Robert. "Introspective Capabilities in Large Language Models." Journal of Consciousness Studies 30, no. 9 (September 30, 2023): 143–53. http://dx.doi.org/10.53765/20512201.30.9.143.

Abstract:
This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' framework proposed by Kammerer and Frankish, and considers the ethical ramifications of introspection and self-report in AI systems.
14

Lin, Hsiao-Ying, and Jeffrey Voas. "Lower Energy Large Language Models (LLMs)." Computer 56, no. 10 (October 2023): 14–16. http://dx.doi.org/10.1109/mc.2023.3278160.

15

Kaye, Jofish. "L-Space and Large Language Models." Communications of the ACM 66, no. 8 (July 25, 2023): 116. http://dx.doi.org/10.1145/3596900.

Abstract:
From the intersection of computational science and technological speculation, with boundaries limited only by our ability to imagine what could be. Design fiction is an approach to understanding and speculating about alternate futures. One part of this can involve creating representative artifacts or prototypes from the future, as if they fell through a time warp to the present day. This column is a piece of such speculative fiction, set in 2025.
16

Friedman, Robert. "Large Language Models and Logical Reasoning." Encyclopedia 3, no. 2 (May 30, 2023): 687–97. http://dx.doi.org/10.3390/encyclopedia3020049.

Abstract:
In deep learning, large language models are typically trained on data from a corpus as representative of current knowledge. However, natural language is not an ideal form for the reliable communication of concepts. Instead, formal logical statements are preferable since they are subject to verifiability, reliability, and applicability. Another reason for this preference is that natural language is not designed for an efficient and reliable flow of information and knowledge, but is instead designed as an evolutionary adaptation as formed from a prior set of natural constraints. As a formally structured language, logical statements are also more interpretable. They may be informally constructed in the form of a natural language statement, but a formalized logical statement is expected to follow a stricter set of rules, such as with the use of symbols for representing the logic-based operators that connect multiple simple statements and form verifiable propositions.
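
A one-line illustration of the contrast the abstract draws, in generic textbook notation rather than the article's own example: the informal claim "if it rains, the ground is wet; it is raining; therefore the ground is wet" becomes a verifiable inference once its simple statements are connected by logical operators.

    % Modus ponens over two atomic propositions, R ("it rains") and W ("the ground is wet")
    (R \rightarrow W) \land R \;\vdash\; W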
17

Denning, Peter J. "The Smallness of Large Language Models." Communications of the ACM 66, no. 9 (August 23, 2023): 24–27. http://dx.doi.org/10.1145/3608966.

18

Rousseau, Ronald, Liying Yang, Johan Bollen, and Zhesi Shen. "Large language models and scientific publishing." Journal of Data and Information Science 8, no. 1 (February 1, 2023): 1. http://dx.doi.org/10.2478/jdis-2023-0007.

19

Borchers, Moritz. "Large Language Models in der Medizin." Uro-News 28, no. 1 (January 2024): 48–49. http://dx.doi.org/10.1007/s00092-023-6207-8.

20

Matzakos, Nikolaos, Spyridon Doukakis, and Maria Moundridou. "Learning Mathematics with Large Language Models." International Journal of Emerging Technologies in Learning (iJET) 18, no. 20 (October 17, 2023): 51–71. http://dx.doi.org/10.3991/ijet.v18i20.42979.

Abstract:
Artificial intelligence (AI) has permeated all human activities, bringing about significant changes and creating new scientific and ethical challenges. The field of education could not be an exception to this development. OpenAI’s unveiling of ChatGPT, their large language model (LLM), has sparked significant interest in the potential applications of this technology in education. This paper aims to contribute to the ongoing discussion on the role of AI in education and its potential implications for the future of learning by exploring how LLMs could be utilized in the teaching of mathematics in higher education and how they compare to the currently widely used computer algebra systems (CAS) and other mathematical tools. It argues that these innovative tools have the potential to provide functional and pedagogical opportunities that may influence changes in curriculum and assessment approaches.
21

Sabbatella, Antonio, Andrea Ponti, Ilaria Giordani, Antonio Candelieri, and Francesco Archetti. "Prompt Optimization in Large Language Models." Mathematics 12, no. 6 (March 21, 2024): 929. http://dx.doi.org/10.3390/math12060929.

Abstract:
Prompt optimization is a crucial task for improving the performance of large language models for downstream tasks. In this paper, a prompt is a sequence of n-grams selected from a vocabulary. Consequently, the aim is to select the optimal prompt concerning a certain performance metric. Prompt optimization can be considered as a combinatorial optimization problem, with the number of possible prompts (i.e., the combinatorial search space) given by the size of the vocabulary (i.e., all the possible n-grams) raised to the power of the length of the prompt. Exhaustive search is impractical; thus, an efficient search strategy is needed. We propose a Bayesian Optimization method performed over a continuous relaxation of the combinatorial search space. Bayesian Optimization is the dominant approach in black-box optimization for its sample efficiency, along with its modular structure and versatility. We use BoTorch, a library for Bayesian Optimization research built on top of PyTorch. Specifically, we focus on Hard Prompt Tuning, which directly searches for an optimal prompt to be added to the text input without requiring access to the Large Language Model, using it as a black-box (such as for GPT-4 which is available as a Model as a Service). Albeit preliminary and based on “vanilla” Bayesian Optimization algorithms, our experiments with RoBERTa as a large language model, on six benchmark datasets, show good performances when compared against other state-of-the-art black-box prompt optimization methods and enable an analysis of the trade-off between the size of the search space, accuracy, and wall-clock time.
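
The loop described above can be sketched with BoTorch roughly as follows; this is a minimal sketch, and the evaluate_prompt objective, prompt length, and iteration budget are placeholder assumptions rather than the authors' configuration.

    # Bayesian Optimization over a continuous relaxation of a hard-prompt space.
    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import ExpectedImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def evaluate_prompt(x: torch.Tensor) -> torch.Tensor:
        # Placeholder black-box metric: in practice, decode x to n-gram indices,
        # prepend the resulting prompt to the input, query the LLM, score the output.
        return -((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

    prompt_len = 4                                     # tokens in the hard prompt
    bounds = torch.stack([torch.zeros(prompt_len), torch.ones(prompt_len)]).double()
    X = torch.rand(8, prompt_len, dtype=torch.double)  # initial design
    Y = evaluate_prompt(X)

    for _ in range(20):                                # BO iterations (arbitrary budget)
        gp = SingleTaskGP(X, Y)
        fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
        acq = ExpectedImprovement(gp, best_f=Y.max())
        candidate, _ = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
        X = torch.cat([X, candidate])
        Y = torch.cat([Y, evaluate_prompt(candidate)])

    best_relaxed_prompt = X[Y.argmax()]                # decode back to n-grams outside this sketch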
22

Bent, Adam Allen. "Large Language Models: AI's Legal Revolution." Pace Law Review 44, no. 1 (December 20, 2023): 91. http://dx.doi.org/10.58948/2331-3528.2083.

23

Fang, Meng, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, and Jun Wang. "Large Language Models Are Neurosymbolic Reasoners." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17985–93. http://dx.doi.org/10.1609/aaai.v38i16.29754.

Abstract:
A wide range of real-world applications is characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning. This paper investigates the potential application of Large Language Models (LLMs) as symbolic reasoners. We focus on text-based games, significant benchmarks for agents with natural language capabilities, particularly in symbolic tasks like math, map reading, sorting, and applying common sense in text-based worlds. To facilitate these agents, we propose an LLM agent designed to tackle symbolic challenges and achieve in-game objectives. We begin by initializing the LLM agent and informing it of its role. The agent then receives observations and a set of valid actions from the text-based games, along with a specific symbolic module. With these inputs, the LLM agent chooses an action and interacts with the game environments. Our experimental results demonstrate that our method significantly enhances the capability of LLMs as automated agents for symbolic reasoning, and our LLM agent is effective in text-based games involving symbolic tasks, achieving an average performance of 88% across all tasks.
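
The interaction loop sketched in the abstract can be written schematically as below; env, query_llm, and symbolic_module are hypothetical stand-ins for the game environment, the LLM call, and the symbolic helper, not the paper's interfaces.

    # Schematic LLM-agent loop for a text-based game with a symbolic module.
    def run_episode(env, query_llm, symbolic_module, role_prompt, max_steps=50):
        history = [role_prompt]                      # 1. initialize the agent with its role
        observation, valid_actions = env.reset()
        for _ in range(max_steps):
            hint = symbolic_module(observation)      # e.g., arithmetic, map, or sorting result
            prompt = "\n".join(history + [
                f"Observation: {observation}",
                f"Symbolic module output: {hint}",
                f"Valid actions: {valid_actions}",
                "Choose one valid action.",
            ])
            action = query_llm(prompt)               # 2. the LLM picks an action
            observation, valid_actions, reward, done = env.step(action)
            history.append(f"Action taken: {action}")
            if done:
                return reward
        return 0.0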
24

Barta, Walter. "Will Large Language Models Overwrite Us?" Double Helix: A Journal of Critical Thinking and Writing 11, no. 1 (2023): 1–8. http://dx.doi.org/10.37514/dbh-j.2023.11.1.08.

25

Dvoichenkov, Danylo D. "Knowledge Graphs and Large Language Models." Control Systems and Computers, no. 3 (303) (2023): 54–60. http://dx.doi.org/10.15407/csc.2023.03.054.

Abstract:
Large Language Models (LLMs) based on the Transformer architecture are nowadays among the most widely used tools in the Natural Language Processing (NLP) field. Nonetheless, this approach has some limitations and flaws. In particular, these problems become crucial for NLP-based expert systems: LLMs may sometimes hallucinate and provide untrustworthy responses. We advocate the use of Knowledge Graphs for solving this problem.
26

Singh, Pranaydeep, Orphée De Clercq, and Els Lefever. "Distilling Monolingual Models from Large Multilingual Transformers." Electronics 12, no. 4 (February 18, 2023): 1022. http://dx.doi.org/10.3390/electronics12041022.

Abstract:
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this problem by proposing a novel methodology to apply knowledge distillation techniques to filter language-specific information from a large multilingual model into a small, fast monolingual model that can often outperform the teacher model. We demonstrate the viability of this methodology on two downstream tasks each for six languages. We further dive into the possible modifications to the basic setup for low-resourced languages by exploring ideas to tune the final vocabulary of the distilled models. Lastly, we perform a detailed ablation study to understand the different components of the setup better and find out what works best for the two under-resourced languages, Swahili and Slovene.
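
The teacher-student objective at the heart of this kind of distillation can be sketched as a standard soft-target loss; this is a generic formulation, and the paper's language-specific filtering and vocabulary tuning are not shown.

    # Generic knowledge-distillation loss: soft targets from the teacher plus hard labels.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)                                  # rescale to keep gradient magnitudes comparable
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard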
27

Youssef, Alaa, Samantha Stein, Justin Clapp, and David Magnus. "The Importance of Understanding Language in Large Language Models." American Journal of Bioethics 23, no. 10 (October 3, 2023): 6–7. http://dx.doi.org/10.1080/15265161.2023.2256614.

28

Hamaniuk, Vita A. "The potential of Large Language Models in language education." Educational Dimension 5 (December 9, 2021): 208–10. http://dx.doi.org/10.31812/ed.650.

Abstract:
This editorial explores the potential of Large Language Models (LLMs) in language education. It discusses the role of LLMs in machine translation, the concept of ‘prompt programming’, and the inductive bias of LLMs for abstract textual reasoning. The editorial also highlights using LLMs as creative writing tools and their effectiveness in paraphrasing tasks. It concludes by emphasizing the need for responsible and ethical use of these tools in language education.
29

Makridakis, Spyros, Fotios Petropoulos, and Yanfei Kang. "Large Language Models: Their Success and Impact." Forecasting 5, no. 3 (August 25, 2023): 536–49. http://dx.doi.org/10.3390/forecast5030030.

Abstract:
ChatGPT, a state-of-the-art large language model (LLM), is revolutionizing the AI field by exhibiting humanlike skills in a range of tasks that include understanding and answering natural language questions, translating languages, writing code, passing professional exams, and even composing poetry, among its other abilities. ChatGPT has gained an immense popularity since its launch, amassing 100 million active monthly users in just two months, thereby establishing itself as the fastest-growing consumer application to date. This paper discusses the reasons for its success as well as the future prospects of similar large language models (LLMs), with an emphasis on their potential impact on forecasting, a specialized and domain-specific field. This is achieved by first comparing the correctness of the answers of the standard ChatGPT and a custom one, trained using published papers from a subfield of forecasting where the answers to the questions asked are known, allowing us to determine their correctness compared to those of the two ChatGPT versions. Then, we also compare the responses of the two versions on how judgmental adjustments to the statistical/ML forecasts should be applied by firms to improve their accuracy. The paper concludes by considering the future of LLMs and their impact on all aspects of our life and work, as well as on the field of forecasting specifically. Finally, the conclusion section is generated by ChatGPT, which was provided with a condensed version of this paper and asked to write a four-paragraph conclusion.
30

Abid, Abubakar, Maheen Farooqi, and James Zou. "Large language models associate Muslims with violence." Nature Machine Intelligence 3, no. 6 (June 2021): 461–63. http://dx.doi.org/10.1038/s42256-021-00359-2.

31

Sorin, Vera, and Eyal Klang. "Large language models and the emergence phenomena." European Journal of Radiology Open 10 (2023): 100494. http://dx.doi.org/10.1016/j.ejro.2023.100494.

32

Zheng, Elise Li, and Sandra Soo-Jin Lee. "The Epistemological Danger of Large Language Models." American Journal of Bioethics 23, no. 10 (October 3, 2023): 102–4. http://dx.doi.org/10.1080/15265161.2023.2250294.

33

van Doorn, William, and Steef Kurstjens. "Large-language models binnen de klinische chemie." Laboratoriumgeneeskunde 6, no. 4 (October 2023): 9. http://dx.doi.org/10.24078/labgeneeskunde.2023.10.23776.

34

Polesie, Sam, and Olle Larkö. "Use of Large Language Models: Editorial Comments." Acta Dermato-Venereologica 103 (February 16, 2023): adv00874. http://dx.doi.org/10.2340/actadv.v103.9593.

35

Zhang, Tianyi, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. "Benchmarking Large Language Models for News Summarization." Transactions of the Association for Computational Linguistics 12 (2024): 39–57. http://dx.doi.org/10.1162/tacl_a_00632.

Abstract:
Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, not model size, is the key to the LLM’s zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries.
36

Boiko, Daniil A., Robert MacKnight, Ben Kline, and Gabe Gomes. "Autonomous chemical research with large language models." Nature 624, no. 7992 (December 20, 2023): 570–78. http://dx.doi.org/10.1038/s41586-023-06792-0.

Abstract:
Transformer-based large language models are making significant strides in various fields, such as natural language processing, biology, chemistry and computer programming. Here, we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.
37

Dias, Ana Laura, and Tiago Rodrigues. "Large language models direct automated chemistry laboratory." Nature 624, no. 7992 (December 20, 2023): 530–31. http://dx.doi.org/10.1038/d41586-023-03790-0.

38

Joshi, Himanshu, and Volkan Ustun. "Augmenting Cognitive Architectures with Large Language Models." Proceedings of the AAAI Symposium Series 2, no. 1 (January 22, 2024): 281–85. http://dx.doi.org/10.1609/aaaiss.v2i1.27689.

Abstract:
A particular fusion of generative models and cognitive architectures is discussed with the help of the Soar and Sigma cognitive architectures. After a brief introduction to cognitive architecture concepts and to Large Language Models as exemplar generative AI models, one approach towards their fusion is discussed. This is then analyzed with a summary of the potential benefits and of the extensions needed to the existing cognitive architecture that is closest to the proposal.
39

Kim, Sunkyu, Choong-kun Lee, and Seung-seob Kim. "Large Language Models: A Guide for Radiologists." Korean Journal of Radiology 25, no. 2 (2024): 126. http://dx.doi.org/10.3348/kjr.2023.0997.

40

Fralick, Michael, Chana A. Sacks, Daniel Muller, Tim Vining, Emily Ling, Jeffrey M. Drazen, and C. Corey Hardin. "Large Language Models." NEJM Evidence 2, no. 8 (July 25, 2023). http://dx.doi.org/10.1056/evidstat2300128.

41

Bakhshandeh, Sadra. "Benchmarking medical large language models." Nature Reviews Bioengineering, July 24, 2023. http://dx.doi.org/10.1038/s44222-023-00097-7.

42

Kleebayoon, Amnuay, and Viroj Wiwanitkit. "Large Language Models and Psychoeducation." Journal of ECT, August 3, 2023. http://dx.doi.org/10.1097/yct.0000000000000956.

43

Thirunavukarasu, Arun James, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. "Large language models in medicine." Nature Medicine, July 17, 2023. http://dx.doi.org/10.1038/s41591-023-02448-8.

44

Bezzi, Michele. "Large Language Models and Security." IEEE Security & Privacy, 2024, 2–10. http://dx.doi.org/10.1109/msec.2023.3345568.

45

Wang, Yu. "On Finetuning Large Language Models." Political Analysis, November 28, 2023, 1–5. http://dx.doi.org/10.1017/pan.2023.36.

Abstract:
A recent paper by Häffner et al. (2023, Political Analysis 31, 481–499) introduces an interpretable deep learning approach for domain-specific dictionary creation, where it is claimed that the dictionary-based approach outperforms finetuned language models in predictive accuracy while retaining interpretability. We show that the dictionary-based approach’s reported superiority over large language models, BERT specifically, is due to the fact that most of the parameters in the language models are excluded from finetuning. In this letter, we first discuss the architecture of BERT models, then explain the limitations of finetuning only the top classification layer, and lastly we report results where finetuned language models outperform the newly proposed dictionary-based approach by 27% in terms of $R^2$ and 46% in terms of mean squared error once we allow these parameters to learn during finetuning. Researchers interested in large language models, text classification, and text regression should find our results useful. Our code and data are publicly available.
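
The distinction at issue, finetuning only the top classification layer versus all parameters, can be illustrated with a short Hugging Face sketch; the model name and single-output regression head are assumptions for illustration, not the letter's exact configuration.

    # Head-only versus full finetuning of a BERT-style classifier.
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=1            # single output for text regression
    )

    # Variant 1: freeze the encoder so that only the top classification layer learns.
    for param in model.base_model.parameters():
        param.requires_grad = False

    # Variant 2: full finetuning -- every parameter remains trainable.
    for param in model.parameters():
        param.requires_grad = True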
46

Deibert, Christopher M. "Editorial on large language models." Translational Andrology and Urology, January 2023, 0. http://dx.doi.org/10.21037/tau-23-591.

47

Angelopoulos, Panagiotis, Kevin Lee, and Sanjog Misra. "Value Aligned Large Language Models." SSRN Electronic Journal, 2024. http://dx.doi.org/10.2139/ssrn.4781850.

48

Yoshikawa, Naruki, Marta Skreta, Kourosh Darvish, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Andrew Zou Li, et al. "Large language models for chemistry robotics." Autonomous Robots, October 25, 2023. http://dx.doi.org/10.1007/s10514-023-10136-2.

Abstract:
This paper proposes an approach to automate chemistry experiments using robots by translating natural language instructions into robot-executable plans, using large language models together with task and motion planning. Adding natural language interfaces to autonomous chemistry experiment systems lowers the barrier to using complicated robotics systems and increases utility for non-expert users, but translating natural language experiment descriptions from users into low-level robotics languages is nontrivial. Furthermore, while recent advances have used large language models to generate task plans, reliably executing those plans in the real world by an embodied agent remains challenging. To enable autonomous chemistry experiments and alleviate the workload of chemists, robots must interpret natural language commands, perceive the workspace, autonomously plan multi-step actions and motions, consider safety precautions, and interact with various laboratory equipment. Our approach, CLAIRify, combines automatic iterative prompting with program verification to ensure syntactically valid programs in a data-scarce domain-specific language that incorporates environmental constraints. The generated plan is executed through solving a constrained task and motion planning problem using PDDLStream solvers to prevent spillages of liquids as well as collisions in chemistry labs. We demonstrate the effectiveness of our approach in planning chemistry experiments, with plans successfully executed on a real robot using a repertoire of robot skills and lab tools. Specifically, we showcase the utility of our framework in pouring skills for various materials and two fundamental chemical experiments for materials synthesis: solubility and recrystallization. Further details about CLAIRify can be found at https://ac-rad.github.io/clairify/.
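
The iterative prompt-and-verify idea can be sketched as a simple loop; query_llm and verify_program are hypothetical placeholders rather than the CLAIRify API, and the downstream task-and-motion planning step is not shown.

    # Generate a plan, verify it, and feed errors back to the LLM until it passes.
    def generate_valid_plan(instruction, query_llm, verify_program, max_attempts=5):
        prompt = f"Translate this experiment description into the target plan language:\n{instruction}"
        for _ in range(max_attempts):
            program = query_llm(prompt)
            ok, error_message = verify_program(program)   # syntax and constraint checks
            if ok:
                return program                            # hand off to task-and-motion planning
            prompt += f"\nThe previous program was invalid: {error_message}\nPlease fix it."
        raise RuntimeError("no valid program generated within the attempt budget")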
49

Mahowald, Kyle, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. "Dissociating language and thought in large language models." Trends in Cognitive Sciences, March 2024. http://dx.doi.org/10.1016/j.tics.2024.01.011.

50

Choi, Jaewoong, and Byungju Lee. "Accelerating materials language processing with large language models." Communications Materials 5, no. 1 (February 15, 2024). http://dx.doi.org/10.1038/s43246-024-00449-9.

Abstract:
Materials language processing (MLP) can facilitate materials science research by automating the extraction of structured data from research papers. Despite the existence of deep learning models for MLP tasks, there are ongoing practical issues associated with complex model architectures, extensive fine-tuning, and substantial human-labelled datasets. Here, we introduce the use of large language models, such as generative pretrained transformer (GPT), to replace the complex architectures of prior MLP models with strategic designs of prompt engineering. We find that in-context learning of GPT models with few or zero-shots can provide high performance text classification, named entity recognition and extractive question answering with limited datasets, demonstrated for various classes of materials. These generative models can also help identify incorrect annotated data. Our GPT-based approach can assist material scientists in solving knowledge-intensive MLP tasks, even if they lack relevant expertise, by offering MLP guidelines applicable to any materials science domain. In addition, the outcomes of GPT models are expected to reduce the workload of researchers, such as manual labelling, by producing an initial labelling set and verifying human-annotations.
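
A zero-shot, in-context prompt of the kind described can be sketched as below; the prompt wording, entity types, and model choice are illustrative assumptions rather than the authors' exact setup (requires the openai Python package, v1 or later, and an API key).

    # Zero-shot named entity recognition for a materials-science sentence via a chat model.
    from openai import OpenAI

    client = OpenAI()

    def extract_entities(sentence: str) -> str:
        prompt = (
            "Extract the named entities from the materials-science sentence below. "
            "Return one line per entity in the form <entity> :: <type>, using the types "
            "MATERIAL, PROPERTY, or VALUE.\n\n"
            f"Sentence: {sentence}"
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return response.choices[0].message.content

    print(extract_entities("The bandgap of anatase TiO2 is approximately 3.2 eV."))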
