Follow this link to see other types of publications on the topic: Large language model.

Journal articles on the topic "Large language model"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the top 50 journal articles for your research on the topic "Large language model".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Browse journal articles from many areas of science and compile an accurate bibliography.

1

B, Mr DHANUSH. "CHATBOT USING LARGE LANGUAGE MODEL". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 14, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34001.

Full text
Abstract:
The concept of Natural Language Processing has seen remarkable advancement in recent years, particularly with the development of Large Language Models (LLMs). Large Language Models are used to develop human-like conversations. LLMs are part of Natural Language Processing, which focuses on enabling computers to understand, interpret, and generate human language. Existing chatbot systems do not generate human-like responses. The proposed chatbot system uses the power of Large Language Models to generate more human-like responses, making the conversation feel natural to the user. To enhance user experience, the chatbot uses a dynamic learning mechanism, by which it continuously adapts to user preferences and evolving conversational patterns. The system uses feedback from users to refine its responses over time. Moreover, the chatbot is designed with multi-turn conversational context awareness, allowing it to maintain coherence and relevance throughout extended dialogues. The effectiveness of the proposed chatbot is evaluated through user testing, comparing its performance against traditional rule-based chatbots and existing conversational agents. This report explains the use of Large Language Models in the design and implementation of conversational chatbots. The outcomes of this research contribute to the advancement of intelligent chatbot systems, demonstrating the potential of large language models to significantly enhance conversational AI applications.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Chengyi, Xingyu Wang, and Ziyun Wang. "Large language model in electrocatalysis". Chinese Journal of Catalysis 59 (April 2024): 7–14. http://dx.doi.org/10.1016/s1872-2067(23)64612-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sagi, Sriram. "Advancing AI: Enhancing Large Language Model Performance through GPU Optimization Techniques". International Journal of Science and Research (IJSR) 13, no. 3 (March 5, 2024): 630–33. http://dx.doi.org/10.21275/sr24309100709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baral, Elina, and Sagar Shrestha. "Large Vocabulary Continuous Speech Recognition for Nepali Language". International Journal of Signal Processing Systems 8, no. 4 (December 2020): 68–73. http://dx.doi.org/10.18178/ijsps.8.4.68-73.

Full text
Abstract:
Speech Recognition is a widely studied topic for high-resource languages like English and Mandarin. A plethora of publications exist that study the performance of several recognition methods for these languages. However, differences in phonetics, accent, language model, etc. between any two languages demand a separate study of speech recognition methodologies and components for each language. In this paper, we present a comparative study of popular speech recognition methods for Nepali, a low-resource Indo-Aryan language. We describe our approach to building the phonetic dictionary and present our findings for DNN- and GMM-based techniques with speaker adaptation on a 50K-vocabulary speech recognition task.
APA, Harvard, Vancouver, ISO, and other styles
5

Garg, Prerak, and Divya Beeram. "Large Language Model-Based Autonomous Agents". International Journal of Computer Trends and Technology 72, no. 5 (May 30, 2024): 151–62. http://dx.doi.org/10.14445/22312803/ijctt-v72i5p118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Sen, Kaixiang Yang, Sheng Qi, and Rui Wang. "When large language model meets optimization". Swarm and Evolutionary Computation 90 (October 2024): 101663. http://dx.doi.org/10.1016/j.swevo.2024.101663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shi, Zhouxing, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, and Cho-Jui Hsieh. "Red Teaming Language Model Detectors with Language Models". Transactions of the Association for Computational Linguistics 12 (2024): 174–89. http://dx.doi.org/10.1162/tacl_a_00639.

Full text
Abstract:
The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent work has proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM’s output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems. Code is available at https://github.com/shizhouxing/LLM-Detector-Robustness.
APA, Harvard, Vancouver, ISO, and other styles
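The first attack strategy in the abstract above (context-aware synonym substitution) amounts to a greedy black-box search against the detector's score. Below is a minimal Python sketch; the `propose_synonyms` callable is an illustrative stand-in for the auxiliary LLM the paper uses, and all names are hypothetical, not taken from the paper's code.

```python
def synonym_attack(text, propose_synonyms, detector_score, budget=5):
    """Greedy word-substitution attack on an LLM-text detector:
    replace up to `budget` words with context-plausible synonyms,
    keeping each swap only if it lowers the detector's score.
    `propose_synonyms` stands in for the auxiliary LLM."""
    words = text.split()
    best = detector_score(" ".join(words))
    swaps = 0
    for i, w in enumerate(words):
        if swaps >= budget:
            break
        for cand in propose_synonyms(w):
            trial = words.copy()
            trial[i] = cand
            score = detector_score(" ".join(trial))
            if score < best:  # keep the swap only if it helps evade detection
                best, words = score, trial
                swaps += 1
                break
    return " ".join(words), best
```

With a real detector, `detector_score` would wrap the classifier's probability that the text is machine-generated.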
8

Aman, Mussa. "Large Language Model Based Fake News Detection". Procedia Computer Science 231 (2024): 740–45. http://dx.doi.org/10.1016/j.procs.2023.12.144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Singh, Pranaydeep, Orphée De Clercq, and Els Lefever. "Distilling Monolingual Models from Large Multilingual Transformers". Electronics 12, no. 4 (February 18, 2023): 1022. http://dx.doi.org/10.3390/electronics12041022.

Full text
Abstract:
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this problem by proposing a novel methodology to apply knowledge distillation techniques to filter language-specific information from a large multilingual model into a small, fast monolingual model that can often outperform the teacher model. We demonstrate the viability of this methodology on two downstream tasks each for six languages. We further dive into the possible modifications to the basic setup for low-resourced languages by exploring ideas to tune the final vocabulary of the distilled models. Lastly, we perform a detailed ablation study to understand the different components of the setup better and find out what works best for the two under-resourced languages, Swahili and Slovene.
APA, Harvard, Vancouver, ISO, and other styles
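Teacher-student distillation of the kind the abstract describes typically minimizes a KL divergence between temperature-softened teacher and student output distributions. A minimal plain-Python sketch of that objective (the Hinton-style T² scaling is a common convention, not necessarily this paper's exact loss):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # the core objective used to transfer the teacher's "dark knowledge"
    # into a smaller student model; T*T rescales gradients to a
    # magnitude comparable with the hard-label loss.
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
```

In practice this term is combined with an ordinary cross-entropy loss on the gold labels, weighted by a mixing coefficient.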
10

Beurer-Kellner, Luca, Marc Fischer, and Martin Vechev. "Prompting Is Programming: A Query Language for Large Language Models". Proceedings of the ACM on Programming Languages 7, PLDI (June 6, 2023): 1946–69. http://dx.doi.org/10.1145/3591300.

Full text
Abstract:
Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad-hoc interaction. Based on this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaptation to many tasks while abstracting language model internals and providing high-level semantics. To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model. We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (26-85% cost savings).
APA, Harvard, Vancouver, ISO, and other styles
11

Kinoshita, Shotaro, and Hiromi Yokoyama. "Large language model is a flagship for Japan". Nature 619, no. 7969 (July 11, 2023): 252. http://dx.doi.org/10.1038/d41586-023-02230-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Meuwese, A. C. M. "WetGPT? Kan een Large Language Model wetten schrijven?" RegelMaat 39, no. 1 (April 2023): 90–97. http://dx.doi.org/10.5553/rm/0920055x2023039001008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Zhu, Feiyu, and Reid Simmons. "Bootstrapping Cognitive Agents with a Large Language Model". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (March 24, 2024): 655–63. http://dx.doi.org/10.1609/aaai.v38i1.27822.

Full text
Abstract:
Large language models contain noisy general knowledge of the world, yet are hard to train or fine-tune. In contrast, cognitive architectures have excellent interpretability and are flexible to update, but require a lot of manual work to instantiate. In this work, we combine the best of both worlds: bootstrapping a cognitive-based model with the noisy knowledge encoded in large language models. Through an embodied agent doing kitchen tasks, we show that our proposed framework yields better efficiency compared to an agent entirely based on large language models. Our experiments also indicate that the cognitive agent bootstrapped using this framework can generalize to novel environments and be scaled to complex tasks.
APA, Harvard, Vancouver, ISO, and other styles
14

Xu, Yaoxun, Hangting Chen, Jianwei Yu, Qiaochu Huang, Zhiyong Wu, Shi-Xiong Zhang, Guangzhi Li, Yi Luo, and Rongzhi Gu. "SECap: Speech Emotion Captioning with Large Language Model". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19323–31. http://dx.doi.org/10.1609/aaai.v38i17.29902.

Full text
Abstract:
Speech emotions are crucial in human communication and are extensively used in fields like speech synthesis and natural language understanding. Most prior studies, such as speech emotion recognition, have categorized speech emotions into a fixed set of classes. Yet, emotions expressed in human speech are often complex, and categorizing them into predefined groups can be insufficient to adequately represent speech emotions. On the contrary, describing speech emotions directly by means of natural language may be a more effective approach. Regrettably, there are not many studies available that have focused on this direction. Therefore, this paper proposes a speech emotion captioning framework named SECap, aiming at effectively describing speech emotions using natural language. Owing to the impressive capabilities of large language models in language comprehension and text generation, SECap employs LLaMA as the text decoder to allow the production of coherent speech emotion captions. In addition, SECap leverages HuBERT as the audio encoder to extract general speech features and Q-Former as the Bridge-Net to provide LLaMA with emotion-related speech features. To accomplish this, Q-Former utilizes mutual information learning to disentangle emotion-related speech features and speech contents, while implementing contrastive learning to extract more emotion-related speech features. The results of objective and subjective evaluations demonstrate that: 1) the SECap framework outperforms the HTSAT-BART baseline in all objective evaluations; 2) SECap can generate high-quality speech emotion captions that attain performance on par with human annotators in subjective mean opinion score tests.
APA, Harvard, Vancouver, ISO, and other styles
15

Singh, Nina, Katharine Lawrence, Safiya Richardson, and Devin M. Mann. "Centering health equity in large language model deployment". PLOS Digital Health 2, no. 10 (October 24, 2023): e0000367. http://dx.doi.org/10.1371/journal.pdig.0000367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

McGraw, Gary, Richie Bonett, Harold Figueroa, and Katie McMahon. "23 Security Risks in Black-Box Large Language Model Foundation Models". Computer 57, no. 4 (April 2024): 160–64. http://dx.doi.org/10.1109/mc.2024.3363250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Sabbatella, Antonio, Andrea Ponti, Ilaria Giordani, Antonio Candelieri, and Francesco Archetti. "Prompt Optimization in Large Language Models". Mathematics 12, no. 6 (March 21, 2024): 929. http://dx.doi.org/10.3390/math12060929.

Full text
Abstract:
Prompt optimization is a crucial task for improving the performance of large language models for downstream tasks. In this paper, a prompt is a sequence of n-grams selected from a vocabulary. Consequently, the aim is to select the optimal prompt concerning a certain performance metric. Prompt optimization can be considered as a combinatorial optimization problem, with the number of possible prompts (i.e., the combinatorial search space) given by the size of the vocabulary (i.e., all the possible n-grams) raised to the power of the length of the prompt. Exhaustive search is impractical; thus, an efficient search strategy is needed. We propose a Bayesian Optimization method performed over a continuous relaxation of the combinatorial search space. Bayesian Optimization is the dominant approach in black-box optimization for its sample efficiency, along with its modular structure and versatility. We use BoTorch, a library for Bayesian Optimization research built on top of PyTorch. Specifically, we focus on Hard Prompt Tuning, which directly searches for an optimal prompt to be added to the text input without requiring access to the Large Language Model, using it as a black-box (such as for GPT-4 which is available as a Model as a Service). Albeit preliminary and based on “vanilla” Bayesian Optimization algorithms, our experiments with RoBERTa as a large language model, on six benchmark datasets, show good performances when compared against other state-of-the-art black-box prompt optimization methods and enable an analysis of the trade-off between the size of the search space, accuracy, and wall-clock time.
APA, Harvard, Vancouver, ISO, and other styles
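As a toy illustration of the black-box setting the abstract describes, hard-prompt search reduces to sampling candidate token sequences and querying an opaque score function. The sketch below uses random search as a deliberately simplified stand-in for the paper's Bayesian Optimization (the function names and parameters are illustrative):

```python
import random

def search_hard_prompt(vocab, length, score, n_samples=200, seed=0):
    """Black-box hard-prompt search: sample candidate prompts
    (sequences of `length` tokens drawn from `vocab`), query the
    opaque `score` function, and keep the best candidate found.
    The search space has len(vocab) ** length points, which is why
    exhaustive search is impractical and a sample-efficient strategy
    such as Bayesian Optimization is preferred in the paper."""
    rng = random.Random(seed)
    best_prompt, best_score = None, float("-inf")
    for _ in range(n_samples):
        cand = tuple(rng.choice(vocab) for _ in range(length))
        s = score(cand)  # one expensive black-box evaluation
        if s > best_score:
            best_prompt, best_score = cand, s
    return best_prompt, best_score
```

A Bayesian Optimization variant would replace the uniform sampler with an acquisition function fitted on the evaluations observed so far, spending far fewer calls to `score`.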
18

Shu, Lei, Liangchen Luo, Jayakumar Hoskere, Yun Zhu, Yinxiao Liu, Simon Tong, Jindong Chen, and Lei Meng. "RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18970–80. http://dx.doi.org/10.1609/aaai.v38i17.29863.

Full text
Abstract:
Large Language Models (LLMs) have demonstrated impressive capabilities in creative tasks such as storytelling and E-mail generation. However, as LLMs are primarily trained on final text results rather than intermediate revisions, it might be challenging for them to perform text rewriting tasks. Most studies in the rewriting tasks focus on a particular transformation type within the boundaries of single sentences. In this work, we develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks using diverse wording and structures expressed through natural languages, including 1) generating rewriting instruction data from Wiki edits and public corpus through instruction generation and chain-of-thought prompting; 2) collecting comparison data for reward model training through a new ranking function. To facilitate this research, we introduce OpenRewriteEval, a novel benchmark that covers a wide variety of rewriting types expressed through natural language instructions. Our results show significant improvements over a variety of baselines.
APA, Harvard, Vancouver, ISO, and other styles
19

Luo, Yawei, and Yi Yang. "Large language model and domain-specific model collaboration for smart education". Frontiers of Information Technology & Electronic Engineering 25, no. 3 (March 2024): 333–41. http://dx.doi.org/10.1631/fitee.2300747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Duan, Gaoxiang, Jiajie Chen, Yueying Zhou, Xiaoying Zheng, and Yongxin Zhu. "Large Language Model Inference Acceleration Based on Hybrid Model Branch Prediction". Electronics 13, no. 7 (April 5, 2024): 1376. http://dx.doi.org/10.3390/electronics13071376.

Full text
Abstract:
As the size of deep learning models continues to expand, the elongation of inference time has gradually evolved into a significant challenge to efficiency and practicality for autoregressive models. This work introduces a hybrid model acceleration strategy based on branch prediction, which accelerates autoregressive model inference without requiring retraining and ensures output consistency with the original model. Specifically, the algorithm employs two models with different parameter sizes aimed at the same task. The smaller model generates a series of potential tokens that are then validated in parallel by the larger model to determine their acceptability. By orchestrating the workflow of the large and small models through a branch-prediction strategy, the algorithm conceals the validation time of the larger model when predictions are successful, thereby accelerating inference. We propose a binomial distribution-based prediction function that blends theoretical principles with empirical evidence, specifically designed for the nuanced requirements of accelerating inference within a hybrid model framework. The entire algorithm was designed and implemented on the llama model for text generation and translation tasks. The experimental results indicate significant improvements. The proposed algorithm achieves a 1.2× to 3.4× increase in inference speed compared to the original model, consistently outperforming the speculative sampling inference acceleration algorithm.
APA, Harvard, Vancouver, ISO, and other styles
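The draft-and-verify loop the abstract describes can be sketched abstractly: a small model proposes a block of tokens, and the large model validates them in one parallel pass, falling back to its own token on the first mismatch. A minimal sketch with the two model calls abstracted as callables (illustrative, not the paper's implementation):

```python
def speculative_decode(draft_next, verify_accepts, prompt, k=4, max_tokens=12):
    """Draft-and-verify decoding: the small model (`draft_next`) proposes
    k tokens autoregressively; the large model (`verify_accepts`) checks
    the whole draft in a single parallel pass and returns the accepted
    prefix plus, on rejection, its own correction token. Output matches
    what the large model alone would produce, while its validation
    latency is hidden whenever the draft is accepted."""
    out = list(prompt)
    while len(out) - len(prompt) < max_tokens:
        # 1) small model drafts k candidate tokens autoregressively
        draft, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) large model validates the draft; `verify_accepts` must
        #    return a correction token whenever it rejects part of the
        #    draft, so every iteration makes progress
        accepted, correction = verify_accepts(out, draft)
        out.extend(accepted)
        if correction is not None:
            out.append(correction)
    return out[len(prompt):][:max_tokens]
```

In a real system `verify_accepts` is one batched forward pass of the large model over the drafted positions, which is where the parallelism pays off.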
21

Long, Robert. "Introspective Capabilities in Large Language Models". Journal of Consciousness Studies 30, no. 9 (September 30, 2023): 143–53. http://dx.doi.org/10.53765/20512201.30.9.143.

Full text
Abstract:
This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' framework proposed by Kammerer and Frankish, and considers the ethical ramifications of introspection and self-report in AI systems.
APA, Harvard, Vancouver, ISO, and other styles
22

Matzakos, Nikolaos, Spyridon Doukakis, and Maria Moundridou. "Learning Mathematics with Large Language Models". International Journal of Emerging Technologies in Learning (iJET) 18, no. 20 (October 17, 2023): 51–71. http://dx.doi.org/10.3991/ijet.v18i20.42979.

Full text
Abstract:
Artificial intelligence (AI) has permeated all human activities, bringing about significant changes and creating new scientific and ethical challenges. The field of education could not be an exception to this development. OpenAI’s unveiling of ChatGPT, their large language model (LLM), has sparked significant interest in the potential applications of this technology in education. This paper aims to contribute to the ongoing discussion on the role of AI in education and its potential implications for the future of learning by exploring how LLMs could be utilized in the teaching of mathematics in higher education and how they compare to the currently widely used computer algebra systems (CAS) and other mathematical tools. It argues that these innovative tools have the potential to provide functional and pedagogical opportunities that may influence changes in curriculum and assessment approaches.
APA, Harvard, Vancouver, ISO, and other styles
23

Long, Xinwei, Jiali Zeng, Fandong Meng, Zhiyuan Ma, Kaiyan Zhang, Bowen Zhou, and Jie Zhou. "Generative Multi-Modal Knowledge Retrieval with Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 18733–41. http://dx.doi.org/10.1609/aaai.v38i17.29837.

Full text
Abstract:
Knowledge retrieval with multi-modal queries plays a crucial role in supporting knowledge-intensive multi-modal applications. However, existing methods face challenges in terms of their effectiveness and training efficiency, especially when it comes to training and integrating multiple retrievers to handle multi-modal queries. In this paper, we propose an innovative end-to-end generative framework for multi-modal knowledge retrieval. Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases, even when trained with limited data. We retrieve knowledge via a two-step process: 1) generating knowledge clues related to the queries, and 2) obtaining the relevant document by searching databases using the knowledge clue. In particular, we first introduce an object-aware prefix-tuning technique to guide multi-grained visual learning. Then, we align multi-grained visual features into the textual feature space of the LLM, employing the LLM to capture cross-modal interactions. Subsequently, we construct instruction data with a unified format for model training. Finally, we propose the knowledge-guided generation strategy to impose prior constraints in the decoding steps, thereby promoting the generation of distinctive knowledge clues. Through experiments conducted on three benchmarks, we demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
APA, Harvard, Vancouver, ISO, and other styles
24

Gupta, V., M. Lennig, and P. Mermelstein. "A language model for very large-vocabulary speech recognition". Computer Speech & Language 6, no. 4 (October 1992): 331–44. http://dx.doi.org/10.1016/0885-2308(92)90027-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Bezirhan, Ummugul, and Matthias von Davier. "Automated reading passage generation with OpenAI's large language model". Computers and Education: Artificial Intelligence 5 (2023): 100161. http://dx.doi.org/10.1016/j.caeai.2023.100161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Yin, Xunjian, Jin Jiang, Liming Yang, and Xiaojun Wan. "History Matters: Temporal Knowledge Editing in Large Language Model". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19413–21. http://dx.doi.org/10.1609/aaai.v38i17.29912.

Full text
Abstract:
The imperative task of revising or updating the knowledge stored within large language models arises from two distinct sources: intrinsic errors inherent in the model which should be corrected and outdated knowledge due to external shifts in the real world which should be updated. Prevailing efforts in model editing conflate these two distinct categories of edits arising from distinct reasons and directly modify the original knowledge in models into new knowledge. However, we argue that preserving the model's original knowledge remains pertinent. Specifically, if a model's knowledge becomes outdated due to evolving worldly dynamics, it should retain recollection of the historical knowledge while integrating the newfound knowledge. In this work, we introduce the task of Temporal Knowledge Editing (TKE) and establish a benchmark AToKe (Assessment of TempOral Knowledge Editing) to evaluate current model editing methods. We find that while existing model editing methods are effective at making models remember new knowledge, the edited model catastrophically forgets historical knowledge. To address this gap, we propose a simple and general framework termed Multi-Editing with Time Objective (METO) for enhancing existing editing models, which edits both historical and new knowledge concurrently and optimizes the model's prediction for the time of each fact. Our assessments demonstrate that while AToKe is still difficult, METO maintains the effectiveness of learning new knowledge and meanwhile substantially improves the performance of edited models on utilizing historical knowledge.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Yan, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, et al. "LLMRG: Improving Recommendations through Large Language Model Reasoning Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (March 24, 2024): 19189–96. http://dx.doi.org/10.1609/aaai.v38i17.29887.

Full text
Abstract:
Recommendation systems aim to provide users with relevant suggestions, but often lack interpretability and fail to capture higher-level semantic relationships between user behaviors and profiles. In this paper, we propose a novel approach that leverages large language models (LLMs) to construct personalized reasoning graphs. These graphs link a user's profile and behavioral sequences through causal and logical inferences, representing the user's interests in an interpretable way. Our approach, LLM reasoning graphs (LLMRG), has four components: chained graph reasoning, divergent extension, self-verification and scoring, and knowledge base self-improvement. The resulting reasoning graph is encoded using graph neural networks, which serves as additional input to improve conventional recommender systems, without requiring extra user or item information. Our approach demonstrates how LLMs can enable more logical and interpretable recommender systems through personalized reasoning graphs. LLMRG allows recommendations to benefit from both engineered recommendation systems and LLM-derived reasoning graphs. We demonstrate the effectiveness of LLMRG on benchmarks and real-world scenarios in enhancing base recommendation models.
APA, Harvard, Vancouver, ISO, and other styles
28

Bakken, Suzanne. "What can you do with a large language model?" Journal of the American Medical Informatics Association 31, no. 6 (May 20, 2024): 1217–18. http://dx.doi.org/10.1093/jamia/ocae106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Oami, Takehiko, Yohei Okada, and Taka-aki Nakada. "Performance of a Large Language Model in Screening Citations". JAMA Network Open 7, no. 7 (July 8, 2024): e2420496. http://dx.doi.org/10.1001/jamanetworkopen.2024.20496.

Full text
Abstract:
Importance: Large language models (LLMs) are promising as tools for citation screening in systematic reviews. However, their applicability has not yet been determined.
Objective: To evaluate the accuracy and efficiency of an LLM in title and abstract literature screening.
Design, Setting, and Participants: This prospective diagnostic study used the data from the title and abstract screening process for 5 clinical questions (CQs) in the development of the Japanese Clinical Practice Guidelines for Management of Sepsis and Septic Shock. The LLM decided to include or exclude citations based on the inclusion and exclusion criteria in terms of patient, population, problem; intervention; comparison; and study design of the selected CQ, and was compared with the conventional method for title and abstract screening. This study was conducted from January 7 to 15, 2024.
Exposures: LLM (GPT-4 Turbo)–assisted citation screening or the conventional method.
Main Outcomes and Measures: The sensitivity and specificity of the LLM-assisted screening process were calculated, with the full-text screening result from the conventional method set as the reference standard in the primary analysis. Pooled sensitivity and specificity were also estimated, and screening times of the 2 methods were compared.
Results: In the conventional citation screening process, 8 of 5634 publications in CQ 1, 4 of 3418 in CQ 2, 4 of 1038 in CQ 3, 17 of 4326 in CQ 4, and 8 of 2253 in CQ 5 were selected. In the primary analysis of 5 CQs, LLM-assisted citation screening demonstrated an integrated sensitivity of 0.75 (95% CI, 0.43 to 0.92) and specificity of 0.99 (95% CI, 0.99 to 0.99). Post hoc modifications to the command prompt improved the integrated sensitivity to 0.91 (95% CI, 0.77 to 0.97) without substantially compromising specificity (0.98 [95% CI, 0.96 to 0.99]). Additionally, LLM-assisted screening was associated with reduced time for processing 100 studies (1.3 minutes vs 17.2 minutes for conventional screening methods; mean difference, −15.25 minutes [95% CI, −17.70 to −12.79 minutes]).
Conclusions and Relevance: In this prospective diagnostic study investigating the performance of LLM-assisted citation screening, the model demonstrated acceptable sensitivity and reasonably high specificity with reduced processing time. This novel method could potentially enhance efficiency and reduce workload in systematic reviews.
APA, Harvard, Vancouver, ISO, and other styles
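The sensitivity and specificity figures in the abstract above follow from the standard confusion-matrix definitions, with the conventional full-text screening result as the reference standard. A minimal sketch (function and variable names are illustrative):

```python
def screening_metrics(llm_decisions, reference):
    """Sensitivity and specificity of LLM-assisted citation screening.
    Both inputs are parallel lists of include (truthy) / exclude (falsy)
    decisions; `reference` is the reference-standard result."""
    pairs = list(zip(llm_decisions, reference))
    tp = sum(1 for l, r in pairs if l and r)          # correctly included
    fn = sum(1 for l, r in pairs if not l and r)      # wrongly excluded
    tn = sum(1 for l, r in pairs if not l and not r)  # correctly excluded
    fp = sum(1 for l, r in pairs if l and not r)      # wrongly included
    sensitivity = tp / (tp + fn)  # share of relevant citations caught
    specificity = tn / (tn + fp)  # share of irrelevant citations rejected
    return sensitivity, specificity
```

In screening, sensitivity is the critical quantity: a missed relevant citation (false negative) is far more costly to a systematic review than an extra citation passed on to full-text review.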
30

Zhang, Tianyi, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. "Benchmarking Large Language Models for News Summarization". Transactions of the Association for Computational Linguistics 12 (2024): 39–57. http://dx.doi.org/10.1162/tacl_a_00632.

Full text
Abstract:
Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, not model size, is the key to the LLM’s zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human written summaries.
APA, Harvard, Vancouver, ISO, and other styles
31

Makridakis, Spyros, Fotios Petropoulos, and Yanfei Kang. "Large Language Models: Their Success and Impact". Forecasting 5, no. 3 (25 August 2023): 536–49. http://dx.doi.org/10.3390/forecast5030030.

Full text
Abstract (summary):
ChatGPT, a state-of-the-art large language model (LLM), is revolutionizing the AI field by exhibiting humanlike skills in a range of tasks, including understanding and answering natural language questions, translating languages, writing code, passing professional exams, and even composing poetry, among its other abilities. ChatGPT has gained immense popularity since its launch, amassing 100 million monthly active users in just two months, thereby establishing itself as the fastest-growing consumer application to date. This paper discusses the reasons for its success as well as the future prospects of similar large language models (LLMs), with an emphasis on their potential impact on forecasting, a specialized and domain-specific field. This is achieved by first comparing the answers of the standard ChatGPT and of a custom version trained on published papers from a subfield of forecasting where the answers to the questions asked are known, allowing us to determine the correctness of the two ChatGPT versions. Then, we also compare the responses of the two versions on how judgmental adjustments to statistical/ML forecasts should be applied by firms to improve their accuracy. The paper concludes by considering the future of LLMs and their impact on all aspects of our life and work, as well as on the field of forecasting specifically. Finally, the conclusion section is generated by ChatGPT, which was provided with a condensed version of this paper and asked to write a four-paragraph conclusion.
APA, Harvard, Vancouver, ISO, and other styles
32

Head, Cari Beth, Paul Jasper, Matthew McConnachie, Linda Raftree, and Grace Higdon. "Large language model applications for evaluation: Opportunities and ethical implications". New Directions for Evaluation 2023, no. 178-179 (June 2023): 33–46. http://dx.doi.org/10.1002/ev.20556.

Full text
Abstract (summary):
Large language models (LLMs) are a type of generative artificial intelligence (AI) designed to produce text-based content. LLMs use deep learning techniques and massive data sets to understand, summarize, generate, and predict new text. LLMs caught the public eye in early 2023 when ChatGPT (the first consumer-facing LLM) was released. LLM technologies are driven by recent advances in deep-learning AI techniques, in which language models are trained on extremely large text data from the internet and then reused for downstream tasks with limited fine-tuning required. They offer exciting opportunities for evaluators to automate and accelerate time-consuming tasks involving text analytics and text generation. We estimate that over two-thirds of evaluation tasks will be affected by LLMs in the next 5 years. Use-case examples include summarizing text data, extracting key information from text, analyzing and classifying text content, writing text, and translation. Despite these advances, the technologies pose significant challenges and risks. Because LLM technologies are generally trained on text from the internet, they tend to perpetuate biases (racism, sexism, ethnocentrism, and more) and the exclusion of non-majority languages. Current tools like ChatGPT have not been specifically developed for monitoring, evaluation, research, and learning (MERL) purposes, possibly limiting their accuracy and usefulness for evaluation. In addition, technical limitations and challenges with bias can lead to real-world harm. To overcome these technical challenges and ethical risks, the evaluation community will need to work collaboratively with the data science community to co-develop tools and processes and to ensure the application of quality and ethical standards.
APA, Harvard, Vancouver, ISO, and other styles
33

Kumar, Deepak, Yousef Anees AbuHashem, and Zakir Durumeric. "Watch Your Language: Investigating Content Moderation with Large Language Models". Proceedings of the International AAAI Conference on Web and Social Media 18 (28 May 2024): 865–78. http://dx.doi.org/10.1609/icwsm.v18i1.31358.

Full text
Abstract (summary):
Large language models (LLMs) have exploded in popularity due to their ability to perform a wide array of natural language tasks. Text-based content moderation is one LLM use case that has received recent enthusiasm; however, there is little research investigating how LLMs can help in content moderation settings. In this work, we evaluate a suite of commodity LLMs on two common content moderation tasks: rule-based community moderation and toxic content detection. For rule-based community moderation, we instantiate 95 subcommunity-specific LLMs by prompting GPT-3.5 with rules from 95 Reddit subcommunities. We find that GPT-3.5 is effective at rule-based moderation for many communities, achieving a median accuracy of 64% and a median precision of 83%. For toxicity detection, we evaluate a range of LLMs (GPT-3, GPT-3.5, GPT-4, Gemini Pro, LLAMA 2) and show that LLMs significantly outperform currently widespread toxicity classifiers. However, we also find that increases in model size add only marginal benefit to toxicity detection, suggesting a potential performance plateau for LLMs on toxicity detection tasks. We conclude by outlining avenues for future work in studying LLMs and content moderation.
APA, Harvard, Vancouver, ISO, and other styles
34

Singh Negi, Dr. Pritam. "Language Model for Translating the English Language to Garhwali Language". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 11 (1 November 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26775.

Full text
Abstract (summary):
The main aim of this paper is to develop a language model for translating English to Garhwali, a language spoken in the Garhwal region of Uttarakhand, India. The model will use state-of-the-art natural language processing techniques, including machine learning and neural networks, to enable accurate and efficient translation of English sentences into Garhwali and vice versa. The model will be trained on a large dataset of parallel English-Garhwali text to ensure high accuracy and fluency of the translations. The successful development of this language model will help bridge the language barrier between English speakers and Garhwali speakers, facilitate communication and the exchange of ideas, and promote cultural exchange and understanding. A local language conversion system can help to improve the user experience by providing content in the language with which the user is most comfortable. It can help to reduce confusion and frustration and make the user experience more enjoyable. Keywords: Statistical Machine Translation, Mathematical model, English-Garhwali, Evaluation.
APA, Harvard, Vancouver, ISO, and other styles
35

Hu, Linmei, Hongyu He, Duokang Wang, Ziwang Zhao, Yingxia Shao, and Liqiang Nie. "LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (24 March 2024): 18234–42. http://dx.doi.org/10.1609/aaai.v38i16.29782.

Full text
Abstract (summary):
Personality detection aims to detect the personality traits underlying social media posts. One challenge of this task is the scarcity of ground-truth personality traits, which are collected from self-report questionnaires. Most existing methods learn post features directly by fine-tuning pre-trained language models under the supervision of limited personality labels. This leads to inferior quality of post features and consequently affects performance. In addition, they treat personality traits as one-hot classification labels, overlooking the semantic information within them. In this paper, we propose a large language model (LLM) based text augmentation enhanced personality detection model, which distills the LLM's knowledge to enhance a small model for personality detection, even when the LLM fails at this task. Specifically, we enable the LLM to generate post analyses (augmentations) from the semantic, sentiment, and linguistic aspects, which are critical for personality detection. By using contrastive learning to pull them together in the embedding space, the post encoder can better capture the psycho-linguistic information within the post representations, thus improving personality detection. Furthermore, we utilize the LLM to enrich the information of personality labels to enhance detection performance. Experimental results on the benchmark datasets demonstrate that our model outperforms the state-of-the-art methods on personality detection.
APA, Harvard, Vancouver, ISO, and other styles
36

Gomez, Alejandro Pradas, Petter Krus, Massimo Panarotto, and Ola Isaksson. "Large language models in complex system design". Proceedings of the Design Society 4 (May 2024): 2197–206. http://dx.doi.org/10.1017/pds.2024.222.

Full text
Abstract (summary):
This paper investigates the use of Large Language Models (LLMs) in engineering complex systems, demonstrating how they can support designers in detail design phases. Two aerospace cases, a system architecture definition activity and a CAD model generation activity, are studied. The research reveals the challenges and opportunities of LLMs in supporting designers, and identifies future research areas to further improve their application in engineering tasks. It emphasizes the new support paradigm of LLMs compared with traditional Machine Learning techniques, as they can successfully perform tasks with just a few examples.
APA, Harvard, Vancouver, ISO, and other styles
37

Oswald, James, Kavitha Srinivas, Harsha Kokel, Junkyu Lee, Michael Katz, and Shirin Sohrabi. "Large Language Models as Planning Domain Generators". Proceedings of the International Conference on Automated Planning and Scheduling 34 (30 May 2024): 423–31. http://dx.doi.org/10.1609/icaps.v34i1.31502.

Full text
Abstract (summary):
Developing domain models is one of the few remaining tasks in AI planning that require manual human labor. Thus, in order to make planning more accessible, it is desirable to automate the process of domain model generation. To this end, we investigate whether large language models (LLMs) can be used to generate planning domain models from simple textual descriptions. Specifically, we introduce a framework for automated evaluation of LLM-generated domains by comparing the sets of plans for domain instances. Finally, we perform an empirical analysis of 7 large language models, including coding and chat models, across 9 different planning domains and under three classes of natural language domain descriptions. Our results indicate that LLMs, particularly those with high parameter counts, exhibit a moderate level of proficiency in generating correct planning domains from natural language descriptions. Our code is available at https://github.com/IBM/NL2PDDL.
APA, Harvard, Vancouver, ISO, and other styles
38

Fanelli, C., J. Giroux, P. Moran, H. Nayak, K. Suresh, and E. Walter. "Physics event classification using Large Language Models". Journal of Instrumentation 19, no. 07 (1 July 2024): C07011. http://dx.doi.org/10.1088/1748-0221/19/07/c07011.

Full text
Abstract (summary):
The 2023 AI4EIC hackathon was the culmination of the third annual AI4EIC workshop at The Catholic University of America. This workshop brought together researchers from physics, data science, and computer science to discuss the latest developments in Artificial Intelligence (AI) and Machine Learning (ML) for the Electron Ion Collider (EIC), including applications for detectors, accelerators, and experimental control. The hackathon, held on the final day of the workshop, involved using a chatbot powered by a Large Language Model, ChatGPT-3.5, to train a binary classifier for neutrons and photons in simulated data from the GlueX Barrel Calorimeter. In total, six teams of up to four participants from all over the world took part in this intense educational and research event. This article highlights the hackathon challenge, the resources and methodology used, and the results and insights gained from analyzing physics data using the most cutting-edge tools in AI/ML.
APA, Harvard, Vancouver, ISO, and other styles
39

Viswanathan, Vijay, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, and Graham Neubig. "Large Language Models Enable Few-Shot Clustering". Transactions of the Association for Computational Linguistics 12 (2024): 321–33. http://dx.doi.org/10.1162/tacl_a_00648.

Full text
Abstract (summary):
Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide meaningful structure to the data, which helps the clustering algorithm match the user’s intent. Existing approaches to semi-supervised clustering require a significant amount of feedback from an expert to improve the clusters. In this paper, we ask whether a large language model (LLM) can amplify an expert’s guidance to enable query-efficient, few-shot semi-supervised text clustering. We show that LLMs are surprisingly effective at improving clustering. We explore three stages where LLMs can be incorporated into clustering: before clustering (improving input features), during clustering (by providing constraints to the clusterer), and after clustering (using LLMs for post-correction). We find that incorporating LLMs in the first two stages routinely provides significant improvements in cluster quality, and that LLMs enable a user to make trade-offs between cost and accuracy to produce desired clusters. We release our code and LLM prompts for the public to use.
APA, Harvard, Vancouver, ISO, and other styles
40

Pendyala, Vishnu S., and Christopher E. Hall. "Explaining Misinformation Detection Using Large Language Models". Electronics 13, no. 9 (26 April 2024): 1673. http://dx.doi.org/10.3390/electronics13091673.

Full text
Abstract (summary):
Large language models (LLMs) are a compressed repository of the vast corpus of valuable information on which they are trained. Therefore, this work hypothesizes that LLMs such as Llama, Orca, Falcon, and Mistral can be used for misinformation detection by making them cross-check new information against the repository on which they are trained. Accordingly, this paper describes the findings from an investigation of the abilities of LLMs in detecting misinformation on multiple datasets. The results are interpreted using explainable AI techniques such as Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Integrated Gradients. The LLMs themselves are also asked to explain their classifications. These complementary approaches aid in better understanding the inner workings of misinformation detection using LLMs and lead to conclusions about their effectiveness at the task. The methodology is generic and nothing specific is assumed for any of the LLMs, so the conclusions apply generally. Primarily, when it comes to misinformation detection, the experiments show that the LLMs are limited by the data on which they are trained.
APA, Harvard, Vancouver, ISO, and other styles
41

Tian, Yijun, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, and Panpan Xu. "Graph Neural Prompting with Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (24 March 2024): 19080–88. http://dx.doi.org/10.1609/aaai.v38i17.29875.

Full text
Abstract (summary):
Large language models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit inherent limitations in precisely capturing and returning grounded knowledge. While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost. Therefore, how to enhance pre-trained LLMs using grounded knowledge, e.g., retrieval-augmented generation, remains an open question. In this work, we propose Graph Neural Prompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in learning beneficial knowledge from KGs. GNP encompasses various designs, including a standard graph neural network encoder, a cross-modality pooling module, a domain projector, and a self-supervised link prediction objective. Extensive experiments on multiple datasets demonstrate the superiority of GNP on both commonsense and biomedical reasoning tasks across different LLM sizes and settings. Code is available at https://github.com/meettyj/GNP.
APA, Harvard, Vancouver, ISO, and other styles
42

Sugiyama, Kohei, and Tsukasa Yamanaka. "Proposals and Methods for Foreign Language Learning Using Machine Translation and Large Language Model". Procedia Computer Science 225 (2023): 4750–57. http://dx.doi.org/10.1016/j.procs.2023.10.474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Farande, Trupti, Vishal Waghmare, Rushikesh Barkade, Adesh Shinde, and Omkar Naikade. "Large Language Model for Chatbot". International Journal of Advanced Research in Science, Communication and Technology, 15 March 2024, 291–93. http://dx.doi.org/10.48175/ijarsct-15750.

Full text
Abstract (summary):
The integration of Artificial Intelligence (AI) and Natural Language Processing (NLP) has led to the development of sophisticated Chatbots capable of mimicking human conversation and providing automated responses. In the context of the mining industry, which operates under a complex framework of Acts, Rules, and Regulations, there is a growing need for a comprehensive and easily accessible information system. This research proposes the implementation of a 24/7 available Chatbot, equipped with the ability to address stakeholder and customer queries regarding various legal aspects, including the Coal Mines Act, 1952; the Indian Explosives Act, 1884; the Colliery Control Order, 2000; the Colliery Control Rules, 2004; The Coal Mines Regulations, 2017; and The Payment of Wages (Mines) Rules, 1956. Furthermore, the Chatbot's scope will encompass land-related laws, such as the Community Benefits Agreement (CBA), Land Acquisition (LA), and Resettlement and Rehabilitation (R&R), thereby establishing a robust Management Information System tailored to the specific needs of the mining industry.
APA, Harvard, Vancouver, ISO, and other styles
44

Blank, Idan A. "What are large language models supposed to model?" Trends in Cognitive Sciences, August 2023. http://dx.doi.org/10.1016/j.tics.2023.08.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

"Detecting Propaganda in News Articles Using Large Language Models". Engineering: Open Access 2, no. 1 (13 February 2024): 01–12. http://dx.doi.org/10.33140/eoa.01.02.10.

Full text
Abstract (summary):
The proliferation of media channels as a result of the information age has ushered in a new era of communication and access to information. However, this increased accessibility has also opened up new avenues for propaganda and the manipulation of public opinion. With the recent release of OpenAI's artificial intelligence chatbot, ChatGPT, users and the media are increasingly discovering and reporting on its range of novel capabilities. The most notable of these, such as answering technical questions, stem from its ability to perform advanced natural language processing and text generation. In this paper, we aim to assess the feasibility of using the underlying technology behind ChatGPT, Large Language Models (LLMs), to detect features of propaganda in news articles. The features we consider leverage the work of Martino et al., who define a list of 18 distinct propaganda techniques. For example, they outline the 'straw man' technique, which refers to 'refuting an argument that was not presented' [1]. Based on these techniques, we develop a refined prompt that is coupled with news articles from Russia Today (RT), a prominent state-controlled news network, and from the labelled SemEval-2020 Task 11 dataset [2]. The prompt and article content are then sent to OpenAI's gpt-3.5-turbo model to determine which propaganda techniques are present and to make a final judgement on whether the article is propaganda or not. We then qualitatively analyse a subset of the resulting output to determine whether LLMs can be used effectively in this way. With the results of the study, we aim to uncover whether such technologies show promise in detecting propaganda, and what sort of prompts lead to the most useful output. This has the potential to be useful for media consumers, for example, who could use our prompts to detect signs of propaganda in the articles they read.
APA, Harvard, Vancouver, ISO, and other styles
46

Pan, Jie. "Large language model for molecular chemistry". Nature Computational Science, 23 January 2023. http://dx.doi.org/10.1038/s43588-023-00399-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

"Large Language Model Hallucination Mitigation Techniques". مجلة الجمعية المصرية لنظم المعلومات وتکنولوجيا الحاسبات 35, no. 35 (1 May 2024): 68–69. http://dx.doi.org/10.21608/jstc.2024.355414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zang, Amy, Jiexin Zheng, and Rong Zheng. "Measuring Readability with Language Predictability: A Large Language Model Approach". SSRN Electronic Journal, 2024. http://dx.doi.org/10.2139/ssrn.4764707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Tianxiang, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, et al. "MOSS: An Open Conversational Large Language Model". Machine Intelligence Research, 20 May 2024. http://dx.doi.org/10.1007/s11633-024-1502-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Cheng, Siqi Wang, Jiawei Zhang, Jingwei Ge, and Li Li. "LLMScenario: Large Language Model Driven Scenario Generation". IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, 1–14. http://dx.doi.org/10.1109/tsmc.2024.3392930.

Full text
APA, Harvard, Vancouver, ISO, and other styles