Journal articles on the topic "Large language models"

Follow this link to see other types of publications on the topic: Large language models.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for research on the topic "Large language models".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile an accurate bibliography.

1

Cerf, Vinton G. "Large Language Models". Communications of the ACM 66, no. 8 (July 25, 2023): 7. http://dx.doi.org/10.1145/3606337.

2

Sharma Shria Verma, Dhananjai. "Automated Penetration Testing using Large Language Models". International Journal of Science and Research (IJSR) 13, no. 4 (April 5, 2024): 1826–31. http://dx.doi.org/10.21275/sr24427043741.

3

Mishra, Vinaytosh. "Large Language Models in Medical Education and Quality Concerns". Journal of Quality in Health Care & Economics 6, no. 1 (2023): 1–3. http://dx.doi.org/10.23880/jqhe-16000319.

4

Jain, Migul. "Future of Interacting with Computers and Large Language Models". International Journal of Science and Research (IJSR) 12, no. 10 (October 5, 2023): 1711–12. http://dx.doi.org/10.21275/sr231023121603.

5

Noever, David. "LARGE LANGUAGE MODELS FOR CIPHERS". International Journal of Artificial Intelligence & Applications 14, no. 03 (May 28, 2023): 1–20. http://dx.doi.org/10.5121/ijaia.2023.14301.

Abstract:
This study investigates whether transformer models like ChatGPT (GPT4, MAR2023) can generalize beyond their training data by examining their performance on the novel Cipher Dataset, which scrambles token order. The dataset consists of 654 test cases, and the analysis focuses on 51 text examples and 13 algorithmic choices. Results show that the models perform well on low-difficulty ciphers like Caesar and can unscramble tokens in 77% of the cipher examples. Despite their reliance on training data, the model's ability to generalize outside of token order is surprising, especially when leveraging large-scale models with hundreds of billions of weights and a comprehensive text corpus with few examples. The original contributions of the work focus on presenting a cipher challenge dataset and then scoring historically significant ciphers for large language models to descramble. The real challenge for these generational models lies in executing the complex algorithmic steps on new cipher inputs, potentially as a novel reasoning challenge that relies less on knowledge acquisition and more on trial-and-error or out-of-bounds responses.
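
To make "low-difficulty ciphers like Caesar" concrete, here is a minimal, generic Caesar-shift sketch in Python; it is an illustration only, not code or data from the paper.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions; leave other characters unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("large language models", 3)   # -> "odujh odqjxdjh prghov"
print(ciphertext, "->", caesar(ciphertext, -3))   # applying the inverse shift recovers the text
```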
6

D’Alessandro, William, Harry R. Lloyd, and Nathaniel Sharadin. "Large Language Models and Biorisk". American Journal of Bioethics 23, no. 10 (October 3, 2023): 115–18. http://dx.doi.org/10.1080/15265161.2023.2250333.

7

Shanahan, Murray. "Talking about Large Language Models". Communications of the ACM 67, no. 2 (January 25, 2024): 68–79. http://dx.doi.org/10.1145/3624724.

Abstract:
Interacting with a contemporary LLM-based conversational agent can create an illusion of being in the presence of a thinking creature. Yet, in their very nature, such systems are fundamentally not like us.
8

Cheon, Hyundeuk. "Do Large Language Models Understand?" CHUL HAK SA SANG: Journal of Philosophical Ideas 90 (November 30, 2023): 75–105. http://dx.doi.org/10.15750/chss.90.202311.003.

9

Veres, Csaba. "Large Language Models are Not Models of Natural Language: They are Corpus Models". IEEE Access 10 (2022): 61970–79. http://dx.doi.org/10.1109/access.2022.3182505.

10

Ross, Angela, Kathleen McGrow, Degui Zhi, and Laila Rasmy. "Foundation Models, Generative AI, and Large Language Models". CIN: Computers, Informatics, Nursing 42, no. 5 (May 2024): 377–87. http://dx.doi.org/10.1097/cin.0000000000001149.

Abstract:
We are in a booming era of artificial intelligence, particularly with the increased availability of technologies that can help generate content, such as ChatGPT. Healthcare institutions are discussing or have started utilizing these innovative technologies within their workflow. Major electronic health record vendors have begun to leverage large language models to process and analyze vast amounts of clinical natural language text, performing a wide range of tasks in healthcare settings to help alleviate clinicians' burden. Although such technologies can be helpful in applications such as patient education, drafting responses to patient questions and emails, medical record summarization, and medical research facilitation, there are concerns about the tools' readiness for use within the healthcare domain and acceptance by the current workforce. The goal of this article is to provide nurses with an understanding of the currently available foundation models and artificial intelligence tools, enabling them to evaluate the need for such tools and assess how they can impact current clinical practice. This will help nurses efficiently assess, implement, and evaluate these tools to ensure these technologies are ethically and effectively integrated into healthcare systems, while also rigorously monitoring their performance and impact on patient care.
11

Agüera y Arcas, Blaise. "Do Large Language Models Understand Us?" Daedalus 151, no. 2 (2022): 183–97. http://dx.doi.org/10.1162/daed_a_01909.

Abstract:
Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. It is sometimes claimed, though, that machine learning is “just statistics,” hence that, in this grander ambition, progress in AI is illusory. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics do amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind. Complex sequence learning and social interaction may be a sufficient basis for general intelligence, including theory of mind and consciousness. Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who,” but for many people, neural nets running on computers are likely to cross this threshold in the very near future.
12

Mochihashi, Daichi. "Large Language Models (LLM) and Robotics". Journal of the Robotics Society of Japan 40, no. 10 (2022): 863–66. http://dx.doi.org/10.7210/jrsj.40.863.

13

Long, Robert. "Introspective Capabilities in Large Language Models". Journal of Consciousness Studies 30, no. 9 (September 30, 2023): 143–53. http://dx.doi.org/10.53765/20512201.30.9.143.

Abstract:
This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' framework proposed by Kammerer and Frankish, and considers the ethical ramifications of introspection and self-report in AI systems.
14

Lin, Hsiao-Ying, and Jeffrey Voas. "Lower Energy Large Language Models (LLMs)". Computer 56, no. 10 (October 2023): 14–16. http://dx.doi.org/10.1109/mc.2023.3278160.

15

Kaye, Jofish. "L-Space and Large Language Models". Communications of the ACM 66, no. 8 (July 25, 2023): 116. http://dx.doi.org/10.1145/3596900.

Abstract:
From the intersection of computational science and technological speculation, with boundaries limited only by our ability to imagine what could be. Design fiction is an approach to understanding and speculating about alternate futures. One part of this can involve creating representative artifacts or prototypes from the future, as if they fell through a time warp to the present day. This column is a piece of such speculative fiction, set in 2025.
16

Friedman, Robert. "Large Language Models and Logical Reasoning". Encyclopedia 3, no. 2 (May 30, 2023): 687–97. http://dx.doi.org/10.3390/encyclopedia3020049.

Abstract:
In deep learning, large language models are typically trained on data from a corpus as representative of current knowledge. However, natural language is not an ideal form for the reliable communication of concepts. Instead, formal logical statements are preferable since they are subject to verifiability, reliability, and applicability. Another reason for this preference is that natural language is not designed for an efficient and reliable flow of information and knowledge, but is instead designed as an evolutionary adaptation as formed from a prior set of natural constraints. As a formally structured language, logical statements are also more interpretable. They may be informally constructed in the form of a natural language statement, but a formalized logical statement is expected to follow a stricter set of rules, such as with the use of symbols for representing the logic-based operators that connect multiple simple statements and form verifiable propositions.
17

Denning, Peter J. "The Smallness of Large Language Models". Communications of the ACM 66, no. 9 (August 23, 2023): 24–27. http://dx.doi.org/10.1145/3608966.

18

Rousseau, Ronald, Liying Yang, Johan Bollen, and Zhesi Shen. "Large language models and scientific publishing". Journal of Data and Information Science 8, no. 1 (February 1, 2023): 1. http://dx.doi.org/10.2478/jdis-2023-0007.

19

Borchers, Moritz. "Large Language Models in der Medizin". Uro-News 28, no. 1 (January 2024): 48–49. http://dx.doi.org/10.1007/s00092-023-6207-8.

20

Matzakos, Nikolaos, Spyridon Doukakis, and Maria Moundridou. "Learning Mathematics with Large Language Models". International Journal of Emerging Technologies in Learning (iJET) 18, no. 20 (October 17, 2023): 51–71. http://dx.doi.org/10.3991/ijet.v18i20.42979.

Abstract:
Artificial intelligence (AI) has permeated all human activities, bringing about significant changes and creating new scientific and ethical challenges. The field of education could not be an exception to this development. OpenAI’s unveiling of ChatGPT, their large language model (LLM), has sparked significant interest in the potential applications of this technology in education. This paper aims to contribute to the ongoing discussion on the role of AI in education and its potential implications for the future of learning by exploring how LLMs could be utilized in the teaching of mathematics in higher education and how they compare to the currently widely used computer algebra systems (CAS) and other mathematical tools. It argues that these innovative tools have the potential to provide functional and pedagogical opportunities that may influence changes in curriculum and assessment approaches.
21

Sabbatella, Antonio, Andrea Ponti, Ilaria Giordani, Antonio Candelieri, and Francesco Archetti. "Prompt Optimization in Large Language Models". Mathematics 12, no. 6 (March 21, 2024): 929. http://dx.doi.org/10.3390/math12060929.

Abstract:
Prompt optimization is a crucial task for improving the performance of large language models for downstream tasks. In this paper, a prompt is a sequence of n-grams selected from a vocabulary. Consequently, the aim is to select the optimal prompt concerning a certain performance metric. Prompt optimization can be considered as a combinatorial optimization problem, with the number of possible prompts (i.e., the combinatorial search space) given by the size of the vocabulary (i.e., all the possible n-grams) raised to the power of the length of the prompt. Exhaustive search is impractical; thus, an efficient search strategy is needed. We propose a Bayesian Optimization method performed over a continuous relaxation of the combinatorial search space. Bayesian Optimization is the dominant approach in black-box optimization for its sample efficiency, along with its modular structure and versatility. We use BoTorch, a library for Bayesian Optimization research built on top of PyTorch. Specifically, we focus on Hard Prompt Tuning, which directly searches for an optimal prompt to be added to the text input without requiring access to the Large Language Model, using it as a black-box (such as for GPT-4 which is available as a Model as a Service). Albeit preliminary and based on “vanilla” Bayesian Optimization algorithms, our experiments with RoBERTa as a large language model, on six benchmark datasets, show good performances when compared against other state-of-the-art black-box prompt optimization methods and enable an analysis of the trade-off between the size of the search space, accuracy, and wall-clock time.
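
As a rough illustration of the approach described above, the sketch below runs "vanilla" Bayesian Optimization with BoTorch over a continuous relaxation of a tiny prompt space. This is a generic sketch, not the authors' implementation: `VOCAB`, `PROMPT_LEN`, and the `score_prompt` objective are invented placeholders, and in practice `score_prompt` would query the black-box LLM with the candidate prompt.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

VOCAB = ["Classify", "sentiment", "review", "helpful", "answer", "briefly"]  # toy vocabulary
PROMPT_LEN = 3  # a prompt is a sequence of PROMPT_LEN tokens from VOCAB

def decode(x: torch.Tensor) -> str:
    """Map a point in [0,1]^PROMPT_LEN to a concrete prompt (nearest token index)."""
    idx = (x * (len(VOCAB) - 1)).round().long().tolist()
    return " ".join(VOCAB[i] for i in idx)

def score_prompt(prompt: str) -> float:
    """Placeholder black-box metric; a real setup would call the LLM here."""
    return float(len(set(prompt.split())))  # dummy objective for the sketch

bounds = torch.stack([torch.zeros(PROMPT_LEN, dtype=torch.double),
                      torch.ones(PROMPT_LEN, dtype=torch.double)])
X = torch.rand(5, PROMPT_LEN, dtype=torch.double)                 # initial design
Y = torch.tensor([[score_prompt(decode(x))] for x in X], dtype=torch.double)

for _ in range(10):  # BO loop: fit surrogate, maximize acquisition, evaluate
    gp = SingleTaskGP(X, Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    ei = ExpectedImprovement(gp, best_f=Y.max())
    cand, _ = optimize_acqf(ei, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
    X = torch.cat([X, cand])
    Y = torch.cat([Y, torch.tensor([[score_prompt(decode(cand[0]))]], dtype=torch.double)])

print("best prompt found:", decode(X[Y.argmax()]))
```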
22

Bent, Adam Allen. "Large Language Models: AI's Legal Revolution". Pace Law Review 44, no. 1 (December 20, 2023): 91. http://dx.doi.org/10.58948/2331-3528.2083.

23

Fang, Meng, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, and Jun Wang. "Large Language Models Are Neurosymbolic Reasoners". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17985–93. http://dx.doi.org/10.1609/aaai.v38i16.29754.

Abstract:
A wide range of real-world applications is characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning. This paper investigates the potential application of Large Language Models (LLMs) as symbolic reasoners. We focus on text-based games, significant benchmarks for agents with natural language capabilities, particularly in symbolic tasks like math, map reading, sorting, and applying common sense in text-based worlds. To facilitate these agents, we propose an LLM agent designed to tackle symbolic challenges and achieve in-game objectives. We begin by initializing the LLM agent and informing it of its role. The agent then receives observations and a set of valid actions from the text-based games, along with a specific symbolic module. With these inputs, the LLM agent chooses an action and interacts with the game environments. Our experimental results demonstrate that our method significantly enhances the capability of LLMs as automated agents for symbolic reasoning, and our LLM agent is effective in text-based games involving symbolic tasks, achieving an average performance of 88% across all tasks.
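
To make the loop described in the abstract concrete, here is a schematic sketch (not the authors' code): the agent is told its role, then at each step receives an observation plus the valid actions and asks the LLM to choose one. `call_llm` and `TextGame` are assumed placeholders.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for a chat/completion call to an LLM."""
    raise NotImplementedError

class TextGame:
    """Placeholder interface for a text-based game environment."""
    def reset(self) -> str: ...
    def valid_actions(self) -> List[str]: ...
    def step(self, action: str): ...  # -> (observation, reward, done)

SYSTEM_ROLE = ("You are an agent in a text-based game. Solve the symbolic task "
               "(e.g. arithmetic, sorting, map reading) by choosing actions.")

def play_episode(game: TextGame, max_steps: int = 50) -> float:
    obs, total = game.reset(), 0.0
    for _ in range(max_steps):
        actions = game.valid_actions()
        prompt = (f"{SYSTEM_ROLE}\nObservation: {obs}\n"
                  f"Valid actions: {actions}\nReply with exactly one action.")
        choice = call_llm(prompt).strip()
        if choice not in actions:        # fall back if the reply is not a valid action
            choice = actions[0]
        obs, reward, done = game.step(choice)
        total += reward
        if done:
            break
    return total
```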
24

Barta, Walter. "Will Large Language Models Overwrite Us?" Double Helix: A Journal of Critical Thinking and Writing 11, no. 1 (2023): 1–8. http://dx.doi.org/10.37514/dbh-j.2023.11.1.08.

25

Dvoichenkov, Danylo D. "Knowledge Graphs and Large Language Models". Control Systems and Computers, no. 3 (303) (2023): 54–60. http://dx.doi.org/10.15407/csc.2023.03.054.

Abstract:
Large Language Models (LLMs) based on the Transformer architecture are nowadays among the most widely used tools in the Natural Language Processing (NLP) field. Nonetheless, this approach has some limitations and flaws. In particular, these problems become crucial for NLP-based expert systems. LLMs may sometimes hallucinate and provide untrustworthy responses. We advocate the use of Knowledge Graphs for solving this problem.
26

Singh, Pranaydeep, Orphée De Clercq, and Els Lefever. "Distilling Monolingual Models from Large Multilingual Transformers". Electronics 12, no. 4 (February 18, 2023): 1022. http://dx.doi.org/10.3390/electronics12041022.

Abstract:
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this problem by proposing a novel methodology to apply knowledge distillation techniques to filter language-specific information from a large multilingual model into a small, fast monolingual model that can often outperform the teacher model. We demonstrate the viability of this methodology on two downstream tasks each for six languages. We further dive into the possible modifications to the basic setup for low-resourced languages by exploring ideas to tune the final vocabulary of the distilled models. Lastly, we perform a detailed ablation study to understand the different components of the setup better and find out what works best for the two under-resourced languages, Swahili and Slovene.
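
A minimal sketch of the soft-target distillation objective this line of work generally builds on (a generic formulation, not necessarily the paper's exact loss): the small monolingual student is trained to match the multilingual teacher's softened output distribution while still fitting the task labels. The temperature `T` and mixing weight `alpha` are illustrative hyperparameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher guidance) with ordinary cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)   # standard temperature scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```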
27

Youssef, Alaa, Samantha Stein, Justin Clapp, and David Magnus. "The Importance of Understanding Language in Large Language Models". American Journal of Bioethics 23, no. 10 (October 3, 2023): 6–7. http://dx.doi.org/10.1080/15265161.2023.2256614.

28

Hamaniuk, Vita A. "The potential of Large Language Models in language education". Educational Dimension 5 (December 9, 2021): 208–10. http://dx.doi.org/10.31812/ed.650.

Abstract:
This editorial explores the potential of Large Language Models (LLMs) in language education. It discusses the role of LLMs in machine translation, the concept of ‘prompt programming’, and the inductive bias of LLMs for abstract textual reasoning. The editorial also highlights using LLMs as creative writing tools and their effectiveness in paraphrasing tasks. It concludes by emphasizing the need for responsible and ethical use of these tools in language education.
29

Makridakis, Spyros, Fotios Petropoulos, and Yanfei Kang. "Large Language Models: Their Success and Impact". Forecasting 5, no. 3 (August 25, 2023): 536–49. http://dx.doi.org/10.3390/forecast5030030.

Abstract:
ChatGPT, a state-of-the-art large language model (LLM), is revolutionizing the AI field by exhibiting humanlike skills in a range of tasks that include understanding and answering natural language questions, translating languages, writing code, passing professional exams, and even composing poetry, among its other abilities. ChatGPT has gained an immense popularity since its launch, amassing 100 million active monthly users in just two months, thereby establishing itself as the fastest-growing consumer application to date. This paper discusses the reasons for its success as well as the future prospects of similar large language models (LLMs), with an emphasis on their potential impact on forecasting, a specialized and domain-specific field. This is achieved by first comparing the correctness of the answers of the standard ChatGPT and a custom one, trained using published papers from a subfield of forecasting where the answers to the questions asked are known, allowing us to determine their correctness compared to those of the two ChatGPT versions. Then, we also compare the responses of the two versions on how judgmental adjustments to the statistical/ML forecasts should be applied by firms to improve their accuracy. The paper concludes by considering the future of LLMs and their impact on all aspects of our life and work, as well as on the field of forecasting specifically. Finally, the conclusion section is generated by ChatGPT, which was provided with a condensed version of this paper and asked to write a four-paragraph conclusion.
30

Abid, Abubakar, Maheen Farooqi, and James Zou. "Large language models associate Muslims with violence". Nature Machine Intelligence 3, no. 6 (June 2021): 461–63. http://dx.doi.org/10.1038/s42256-021-00359-2.

31

Sorin, Vera, and Eyal Klang. "Large language models and the emergence phenomena". European Journal of Radiology Open 10 (2023): 100494. http://dx.doi.org/10.1016/j.ejro.2023.100494.

32

Zheng, Elise Li, and Sandra Soo-Jin Lee. "The Epistemological Danger of Large Language Models". American Journal of Bioethics 23, no. 10 (October 3, 2023): 102–4. http://dx.doi.org/10.1080/15265161.2023.2250294.

33

van Doorn, William, and Steef Kurstjens. "Large-language models binnen de klinische chemie". Laboratoriumgeneeskunde 6, no. 4 (October 2023): 9. http://dx.doi.org/10.24078/labgeneeskunde.2023.10.23776.

34

Polesie, Sam, and Olle Larkö. "Use of Large Language Models: Editorial Comments". Acta Dermato-Venereologica 103 (February 16, 2023): adv00874. http://dx.doi.org/10.2340/actadv.v103.9593.

35

Zhang, Tianyi, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. "Benchmarking Large Language Models for News Summarization". Transactions of the Association for Computational Linguistics 12 (2024): 39–57. http://dx.doi.org/10.1162/tacl_a_00632.

Abstract:
Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, not model size, is the key to the LLM’s zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries.
36

Boiko, Daniil A., Robert MacKnight, Ben Kline, and Gabe Gomes. "Autonomous chemical research with large language models". Nature 624, no. 7992 (December 20, 2023): 570–78. http://dx.doi.org/10.1038/s41586-023-06792-0.

Abstract:
Transformer-based large language models are making significant strides in various fields, such as natural language processing [1–5], biology [6,7], chemistry [8–10] and computer programming [11,12]. Here, we show the development and capabilities of Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation. Coscientist showcases its potential for accelerating research across six diverse tasks, including the successful reaction optimization of palladium-catalysed cross-couplings, while exhibiting advanced capabilities for (semi-)autonomous experimental design and execution. Our findings demonstrate the versatility, efficacy and explainability of artificial intelligence systems like Coscientist in advancing research.
37

Dias, Ana Laura, and Tiago Rodrigues. "Large language models direct automated chemistry laboratory". Nature 624, no. 7992 (December 20, 2023): 530–31. http://dx.doi.org/10.1038/d41586-023-03790-0.

38

Joshi, Himanshu, and Volkan Ustun. "Augmenting Cognitive Architectures with Large Language Models". Proceedings of the AAAI Symposium Series 2, no. 1 (January 22, 2024): 281–85. http://dx.doi.org/10.1609/aaaiss.v2i1.27689.

Abstract:
A particular fusion of generative models and cognitive architectures is discussed with the help of the Soar and Sigma cognitive architectures. After a brief introduction to cognitive architecture concepts and Large Language Models as exemplar generative AI models, one approach towards their fusion is discussed. This is then analyzed with a summary of potential benefits and extensions needed to existing cognitive architecture that is closest to the proposal.
39

Kim, Sunkyu, Choong-kun Lee, and Seung-seob Kim. "Large Language Models: A Guide for Radiologists". Korean Journal of Radiology 25, no. 2 (2024): 126. http://dx.doi.org/10.3348/kjr.2023.0997.

40

Fralick, Michael, Chana A. Sacks, Daniel Muller, Tim Vining, Emily Ling, Jeffrey M. Drazen, and C. Corey Hardin. "Large Language Models". NEJM Evidence 2, no. 8 (July 25, 2023). http://dx.doi.org/10.1056/evidstat2300128.

41

Bakhshandeh, Sadra. "Benchmarking medical large language models". Nature Reviews Bioengineering, July 24, 2023. http://dx.doi.org/10.1038/s44222-023-00097-7.

42

Kleebayoon, Amnuay, and Viroj Wiwanitkit. "Large Language Models and Psychoeducation". Journal of ECT, August 3, 2023. http://dx.doi.org/10.1097/yct.0000000000000956.

43

Thirunavukarasu, Arun James, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. "Large language models in medicine". Nature Medicine, July 17, 2023. http://dx.doi.org/10.1038/s41591-023-02448-8.

44

Bezzi, Michele. "Large Language Models and Security". IEEE Security & Privacy, 2024, 2–10. http://dx.doi.org/10.1109/msec.2023.3345568.

45

Wang, Yu. "On Finetuning Large Language Models". Political Analysis, November 28, 2023, 1–5. http://dx.doi.org/10.1017/pan.2023.36.

Abstract:
A recent paper by Häffner et al. (2023, Political Analysis 31, 481–499) introduces an interpretable deep learning approach for domain-specific dictionary creation, where it is claimed that the dictionary-based approach outperforms finetuned language models in predictive accuracy while retaining interpretability. We show that the dictionary-based approach’s reported superiority over large language models, BERT specifically, is due to the fact that most of the parameters in the language models are excluded from finetuning. In this letter, we first discuss the architecture of BERT models, then explain the limitations of finetuning only the top classification layer, and lastly we report results where finetuned language models outperform the newly proposed dictionary-based approach by 27% in terms of $R^2$ and 46% in terms of mean squared error once we allow these parameters to learn during finetuning. Researchers interested in large language models, text classification, and text regression should find our results useful. Our code and data are publicly available.
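
The contrast the letter draws, finetuning only the top classification layer versus letting all of BERT's parameters learn, can be sketched with the Hugging Face transformers API. This is a generic illustration, not the authors' replication code; the model checkpoint and regression-style head are assumptions.

```python
from transformers import AutoModelForSequenceClassification

# num_labels=1 gives a regression-style head on top of BERT
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def freeze_encoder(m, freeze: bool = True):
    """freeze=True reproduces the 'top classification layer only' setup; False finetunes everything."""
    for p in m.bert.parameters():
        p.requires_grad = not freeze

freeze_encoder(model, freeze=False)   # full finetuning, the setting the letter argues for
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```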
46

Deibert, Christopher M. "Editorial on large language models". Translational Andrology and Urology, January 2023, 0. http://dx.doi.org/10.21037/tau-23-591.

47

Angelopoulos, Panagiotis, Kevin Lee, and Sanjog Misra. "Value Aligned Large Language Models". SSRN Electronic Journal, 2024. http://dx.doi.org/10.2139/ssrn.4781850.

48

Yoshikawa, Naruki, Marta Skreta, Kourosh Darvish, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Andrew Zou Li, et al. "Large language models for chemistry robotics". Autonomous Robots, October 25, 2023. http://dx.doi.org/10.1007/s10514-023-10136-2.

Abstract:
This paper proposes an approach to automate chemistry experiments using robots by translating natural language instructions into robot-executable plans, using large language models together with task and motion planning. Adding natural language interfaces to autonomous chemistry experiment systems lowers the barrier to using complicated robotics systems and increases utility for non-expert users, but translating natural language experiment descriptions from users into low-level robotics languages is nontrivial. Furthermore, while recent advances have used large language models to generate task plans, reliably executing those plans in the real world by an embodied agent remains challenging. To enable autonomous chemistry experiments and alleviate the workload of chemists, robots must interpret natural language commands, perceive the workspace, autonomously plan multi-step actions and motions, consider safety precautions, and interact with various laboratory equipment. Our approach, CLAIRify, combines automatic iterative prompting with program verification to ensure syntactically valid programs in a data-scarce domain-specific language that incorporates environmental constraints. The generated plan is executed through solving a constrained task and motion planning problem using PDDLStream solvers to prevent spillages of liquids as well as collisions in chemistry labs. We demonstrate the effectiveness of our approach in planning chemistry experiments, with plans successfully executed on a real robot using a repertoire of robot skills and lab tools. Specifically, we showcase the utility of our framework in pouring skills for various materials and two fundamental chemical experiments for materials synthesis: solubility and recrystallization. Further details about CLAIRify can be found at https://ac-rad.github.io/clairify/.
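
The "automatic iterative prompting with program verification" idea can be read schematically as a generate-verify-repair loop, sketched below. This is a generic reading of the abstract, not the CLAIRify implementation; `call_llm` and `verify_program` are assumed placeholders.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call that returns a candidate task-language program."""
    raise NotImplementedError

def verify_program(program: str) -> list[str]:
    """Placeholder verifier: return a list of syntax/constraint errors (empty if valid)."""
    raise NotImplementedError

def generate_valid_plan(instruction: str, max_retries: int = 5) -> str:
    prompt = f"Translate this chemistry instruction into the task language:\n{instruction}"
    for _ in range(max_retries):
        program = call_llm(prompt)
        errors = verify_program(program)
        if not errors:
            return program                 # hand off to task and motion planning
        prompt += f"\nThe previous program had errors: {errors}. Please fix them."
    raise RuntimeError("no valid program within the retry budget")
```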
49

Mahowald, Kyle, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. "Dissociating language and thought in large language models". Trends in Cognitive Sciences, March 2024. http://dx.doi.org/10.1016/j.tics.2024.01.011.

50

Choi, Jaewoong, and Byungju Lee. "Accelerating materials language processing with large language models". Communications Materials 5, no. 1 (February 15, 2024). http://dx.doi.org/10.1038/s43246-024-00449-9.

Abstract:
Materials language processing (MLP) can facilitate materials science research by automating the extraction of structured data from research papers. Despite the existence of deep learning models for MLP tasks, there are ongoing practical issues associated with complex model architectures, extensive fine-tuning, and substantial human-labelled datasets. Here, we introduce the use of large language models, such as generative pretrained transformer (GPT), to replace the complex architectures of prior MLP models with strategic designs of prompt engineering. We find that in-context learning of GPT models with few or zero-shots can provide high performance text classification, named entity recognition and extractive question answering with limited datasets, demonstrated for various classes of materials. These generative models can also help identify incorrect annotated data. Our GPT-based approach can assist material scientists in solving knowledge-intensive MLP tasks, even if they lack relevant expertise, by offering MLP guidelines applicable to any materials science domain. In addition, the outcomes of GPT models are expected to reduce the workload of researchers, such as manual labelling, by producing an initial labelling set and verifying human-annotations.
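
As a generic illustration of the few-shot, in-context prompting the abstract describes for named entity recognition: the example sentence, entity schema, and `call_llm` placeholder below are assumptions for the sketch, not taken from the paper.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a GPT-style completion call."""
    raise NotImplementedError

FEW_SHOT = (
    'Sentence: "LiFePO4 cathodes were annealed at 600 C."\n'
    'Entities: {"material": ["LiFePO4"], "property": [], "value": ["600 C"]}\n'
)

def extract_entities(sentence: str) -> str:
    """Build an in-context prompt with one worked example and ask for JSON entities."""
    prompt = ('Extract materials-science entities as JSON with keys '
              '"material", "property", and "value".\n' + FEW_SHOT +
              f'Sentence: "{sentence}"\nEntities:')
    return call_llm(prompt)
```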