Books on the topic “Large language model”

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.

Consult the top 50 scholarly books on the topic “Large language model”.

An “Add to bibliography” button appears next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a “.pdf” file and read its abstract online, whenever these are available in the work's metadata.

Browse books from a wide range of disciplines and compile your bibliography accurately.

1. Satō, Hideto. A data model, knowledge base, and natural language processing for sharing a large statistical database. Ibaraki, Osaka, Japan: Institute of Social and Economic Research, Osaka University, 1989.
2. Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.
3. Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.
4. Törnberg, Petter. How to Use Large-Language Models for Text Analysis. London: SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529683707.
5. Bashkatov, Alexander. Modeling in OpenSCAD: Examples. ru: INFRA-M Academic Publishing LLC, 2019. http://dx.doi.org/10.12737/959073.
Abstract: The tutorial is an introductory course on the basics of geometric modeling for 3D printing using the OpenSCAD programming language. It is built around descriptions of the instructions for creating primitives, setting their properties, and carrying out transformations and other service operations. It contains a large number of examples with detailed comments and descriptions of the actions performed, which allows the reader to acquire basic skills in creating three-dimensional and flat models and in exporting and importing graphical data. Meets the requirements of the latest generation of the Federal state educational standards of higher education. It can be useful for computer science teachers, students, and anyone interested in three-dimensional modeling and in preparing products for 3D printing.
6. Build a Large Language Model (from Scratch). Manning Publications Co. LLC, 2024.
7. Generative AI with LangChain: Build Large Language Model Apps with Python, ChatGPT and Other LLMs. Packt Publishing, Limited, 2023.
8. Generative AI with LangChain: Build Large Language Model Apps with Python, ChatGPT, and Other LLMs. Walter de Gruyter GmbH, 2023.
9. Large Language Model-Based Solutions: How to Deliver Value with Cost-Effective Generative AI Applications. John Wiley & Sons, Limited, 2024.
10. Large Language Model-Based Solutions: How to Deliver Value with Cost-Effective Generative AI Applications. John Wiley & Sons, Incorporated, 2024.
11. Large Language Model-Based Solutions: How to Deliver Value with Cost-Effective Generative AI Applications. John Wiley & Sons, Incorporated, 2024.
12. Alto, Valentina. Ultimate Guide to ChatGPT and OpenAI: Harness the Capabilities of OpenAI's Large Language Model for Productivity and Innovation with GPT Technologies. Packt Publishing, Limited, 2023.
13. The Power of Prompt. Higher Institute of Science and Technology, 2023.
14. Mulder, Valentin, Vincent Lenders, Andrei Kucharavy, and Octave Plancherel. Large Language Models in Cybersecurity: Threats, Exposure and Mitigation. Springer, 2024.
15. Mulder, Valentin, Vincent Lenders, Andrei Kucharavy, and Octave Plancherel. Large Language Models in Cybersecurity: Threats, Exposure and Mitigation. Springer, 2024.
16. Last Words: Large Language Models and the AI Apocalypse. Prickly Paradigm Press, LLC, 2024.
17. Last Words: Large Language Models and the AI Apocalypse. Prickly Paradigm Press, LLC, 2024.
18. Quick Start Guide to Large Language Models, 2nd Edition. Addison Wesley Professional, 2024.
19. Lamel, Lori, and Jean-Luc Gauvain. Speech Recognition. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0016.
Abstract: Speech recognition is concerned with converting the speech waveform, an acoustic signal, into a sequence of words. Today's approaches are based on statistical modelling of the speech signal. This article provides an overview of the main topics addressed in speech recognition: acoustic-phonetic modelling, lexical representation, language modelling, decoding, and model adaptation. Language models are used in speech recognition to estimate the probability of word sequences. The main components of a generic speech recognition system are the main knowledge sources, feature analysis, the acoustic and language models estimated in a training phase, and the decoder. The focus of this article is on methods used in state-of-the-art speaker-independent, large-vocabulary continuous speech recognition (LVCSR). Primary application areas for such technology are dictation, spoken language dialogue, and transcription for information archival and retrieval systems. Finally, this article discusses issues and directions of future research.
20. Natural Language Understanding with Python: Building Human-Like Understanding with Large Language Models. Packt Publishing, Limited, 2023.
21. Wu, Shisong. Prompt Engineering: Unleashing the Infinite Potential of Large Language Models. Chinese Culture Publishing Group Co., Ltd., 2023.
22. GPT-3: Building Innovative NLP Products Using Large Language Models. O'Reilly Media, Incorporated, 2022.
23. Understanding Large Language Models: Learning Their Underlying Concepts and Technologies. Apress L. P., 2023.
24. Sharma, Avinash Kumar, Nitin Chanderwal, Amarjeet Prajapati, Pancham Singh, and Mrignainy Kansal. Advancing Software Engineering Through AI, Federated Learning, and Large Language Models. IGI Global, 2024.
25. SocraSynth: Socratic Synthesis with Multiple Large Language Models - Principles and Practices. Socrasynth, 2024.
26. SocraSynth: Socratic Synthesis with Multiple Large Language Models - Principles and Practices. Socrasynth, 2024.
27. Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. Walter de Gruyter GmbH, 2024.
28. Ashwin, Julian, Aditya Chhabra, and Vijayendra Rao. Using Large Language Models for Qualitative Analysis can Introduce Serious Bias. Washington, DC: World Bank, 2023. http://dx.doi.org/10.1596/1813-9450-10597.
29. Sharma, Avinash Kumar, Nitin Chanderwal, Amarjeet Prajapati, Pancham Singh, and Mrignainy Kansal. Advancing Software Engineering Through AI, Federated Learning, and Large Language Models. IGI Global, 2024.
30. Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. Walter de Gruyter GmbH, 2024.
31. Sharma, Avinash Kumar, Nitin Chanderwal, Amarjeet Prajapati, Pancham Singh, and Mrignainy Kansal. Advancing Software Engineering Through AI, Federated Learning, and Large Language Models. IGI Global, 2024.
32. Dimitoglou, George, and Ahmad Tafti. Artificial Intelligence: Machine Learning, Convolutional Neural Networks and Large Language Models. Walter de Gruyter GmbH, 2024.
33. Sharma, Avinash Kumar, Nitin Chanderwal, Amarjeet Prajapati, Pancham Singh, and Mrignainy Kansal. Advancing Software Engineering Through AI, Federated Learning, and Large Language Models. IGI Global, 2024.
34. Whitesell, Lloyd. Style Modes. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190843816.003.0001.
Abstract: This chapter introduces a new index for the analysis of individual musical numbers, specifically in the genre of film musicals: “style mode,” which refers to background orientations of stylistic treatment in both sonic and visual design. It defines the genre's primary style modes: ordinary, children's, burlesque, razzle-dazzle, and glamour, by way of well-known examples, and illustrates their effectiveness as analytical categories, providing insight into large-scale planning as well as the meanings projected within individual numbers. Because the projection of a style mode takes place independently of the musical “language” being spoken (e.g., jazz, blues, musical theater, rock), style modes are clearly distinguished from musical topics and idioms.
35. Deep Reinforcement Learning with Python: Understand RLHF, Chatbots, and Large Language Models. Apress L. P., 2024.
36. Artificial Intelligence and Large Language Models: An Introduction to the Technological Future. CRC Press LLC, 2024.
37. Artificial Intelligence and Large Language Models: An Introduction to the Technological Future. CRC Press LLC, 2024.
38. Alto, Valentina. Building LLM Apps: Create Intelligent Apps and Agents with Large Language Models. Packt Publishing, Limited, 2024.
39. Defoe, Daniel. Robinson Crusoe: An Adventure Story with a Press-out Model to Make. Collins, 1986.
40. Hickmann, Maya, and Dominique Bassano. Modality and Mood in First Language Acquisition. Edited by Jan Nuyts and Johan Van Der Auwera. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199591435.013.20.
Abstract: This chapter aims to provide a broad overview of research focusing on the development of modality and mood during first language acquisition. This overview synthesizes results concerning both early and later phases of development, within and across a large number of languages, and including some more peripheral categories, such as evidentials and tense-aspect markings. Results recurrently show the earlier acquisition of agent-oriented modality as compared to epistemic modality. However, cross-linguistic variation has raised some questions about this acquisition sequence, suggesting that language-specific properties may partially impact timing during acquisition. In addition, findings about later phases show a long developmental process whereby children gradually come to master complex semantic and pragmatic modal distinctions. The discussion highlights the contribution of these conclusions to current theoretical debates, such as the role of input factors and the relation between language and cognition during ontogenesis.
41. Natural Language Understanding with Python: Combine Natural Language Technology, Deep Learning, and Large Language Models to Create Human-Like Language Comprehension in Computer Systems. Walter de Gruyter GmbH, 2023.
42. Liang, Percy, Michael Jordan, and Dan Klein. Probabilistic grammars and hierarchical Dirichlet processes. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.27.
Abstract: This article focuses on the use of probabilistic context-free grammars (PCFGs) in natural language processing involving a large-scale natural language parsing task. It describes detailed, highly structured Bayesian modelling in which model dimension and complexity respond naturally to observed data. The framework, termed hierarchical Dirichlet process probabilistic context-free grammar (HDP-PCFG), involves structured hierarchical Dirichlet process modelling and customized model fitting via variational methods to address the problem of syntactic parsing and the underlying problems of grammar induction and grammar refinement. The central object of study is the parse tree, which can be used to describe a substantial amount of the syntactic structure and relational semantics of natural language sentences. The article provides an overview of the formal probabilistic specification of the HDP-PCFG, algorithms for posterior inference under the HDP-PCFG, and experiments on grammar learning run on the Wall Street Journal portion of the Penn Treebank.
43. Association for Computational Linguistics. Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models. Association for Computational Linguistics, 2022.
44. McTear, Michael Frederick, and Marina Ashurkina. Transforming Conversational AI: Exploring the Power of Large Language Models in Interactive Conversational Agents. Apress L. P., 2024.
45. Programming Large Language Models with Azure Open AI: Conversational Programming and Prompt Engineering with LLMs. Microsoft Press, 2024.
46. Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Walter de Gruyter GmbH, 2023.
47. Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Packt Publishing, Limited, 2023.
48. Everything You Always Wanted to Know about ChatGPT: Large Language Models and the Future of AI. MIT Press, 2024.
49. Stemmer, Brigitte. Neuropragmatics. Edited by Yan Huang. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199697960.013.003.
Abstract: This essay summarizes the findings of studies investigating aspects of linguistic pragmatic behaviour and the brain correlates underlying such behaviour. Although pragmatics is a large field, most brain-oriented studies have focused on specific aspects of linguistic pragmatics such as structural discourse and figurative language. Research indicates that linguistic pragmatic behaviour relies on brain correlates that are routinely activated during word and sentence processing (the default language network). Although no agreement has yet been reached concerning questions such as whether these correlates are qualitatively and/or quantitatively different, whether additional brain areas/networks are implicated, and, if so, what these are, some concrete suggestions have emerged. At a more general level, there is consensus that the classical standard pragmatic model is not supported by most neuroimaging studies and that the right-hemisphere hypothesis on figurative language processing needs revision. The essay ends with some speculations on interpreting pragmatic behaviour within a microgenetic framework.
50. Arregui, Ana, María Luisa Rivero, and Andrés Salanova, eds. Modality Across Syntactic Categories. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198718208.001.0001.
Abstract: This volume explores the extremely rich diversity found under the “modal umbrella” in natural language. Offering a cross-linguistic perspective on the encoding of modal meanings that draws on novel data from an extensive set of languages, the book supports a view according to which modality infuses a much more extensive number of syntactic categories and levels of syntactic structure than has traditionally been thought. The volume distinguishes between “low modality,” which concerns modal interpretations associated with the verbal and nominal cartographies in syntax; “middle modality,” or modal interpretation associated with the syntactic cartography internal to the clause; and “high modality,” which relates to the cartography known as the left periphery. By offering enticing combinations of cross-linguistic discussions of the more studied sources of modality together with novel or unexpected sources of modality, the volume presents specific case studies that show how meanings associated with low, middle, and high modality crystallize across a large variety of languages. The chapters on low modality explore modal meanings in structures that lack the complexity of full clauses, including conditional readings in noun phrases and modal features in lexical verbs. The chapters on middle modality examine the effects of tense and aspect on constructions with counterfactual readings, and on those that contain canonical modal verbs. The chapters on high modality are dedicated to constructions with imperative, evidential, and epistemic readings, examining, and at times challenging, traditional perspectives that syntactically associate these interpretations with the left periphery of the clause.