Academic literature on the topic 'Language models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Language models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Language models"

1

Li, Hang. "Language models." Communications of the ACM 65, no. 7 (July 2022): 56–63. http://dx.doi.org/10.1145/3490443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Begnarovich, Uralov Azamat. "The Inconsistency Of Language Models." American Journal of Social Science and Education Innovations 03, no. 09 (September 30, 2021): 39–44. http://dx.doi.org/10.37547/tajssei/volume03issue09-09.

Full text
Abstract:
The article deals with the problem of disproportion in the morpheme units and patterns of linguistics. Based on this disproportion, information is given on the combined affixes formed in the morphemes, the expanded forms, and the analytic and synthetic forms. The data are based on the opinions of the world's leading linguists, and the ideas are proven using examples. The formation of a particular linguistic model reflects a disproportion in the language system (meaning-function-methodological features): confusion of meanings, multifunctionality, semantics, and competition in the use of forms (one form gains more and more privileges while another form becomes archaic).
APA, Harvard, Vancouver, ISO, and other styles
3

Shimi, G., C. Jerin Mahibha, and Durairaj Thenmozhi. "An Empirical Analysis of Language Detection in Dravidian Languages." Indian Journal Of Science And Technology 17, no. 15 (April 16, 2024): 1515–26. http://dx.doi.org/10.17485/ijst/v17i15.765.

Full text
Abstract:
Objectives: Language detection is the process of identifying the language associated with a text. The proposed system aims to detect the Dravidian language associated with a given text using different machine learning and deep learning algorithms. The paper presents an empirical analysis of the results obtained using the different models. It also aims to evaluate the performance of a language-agnostic model for the purpose of language detection. Method: An empirical analysis of Dravidian language identification in social media text using machine learning and deep learning approaches with k-fold cross-validation has been implemented. The identification of Dravidian languages, including Tamil, Malayalam, Tamil Code Mix, and Malayalam Code Mix, is performed using both machine learning (ML) and deep learning algorithms. The machine learning algorithms used for language detection are Naive Bayes (NB), Multinomial Logistic Regression (MLR), Support Vector Machine (SVM), and Random Forest (RF). The supervised deep learning (DL) models used include BERT, mBERT, and language-agnostic models. Findings: The language-agnostic model outperforms all other models on the task of language detection in Dravidian languages. The results of both the ML and DL models are analyzed empirically with performance measures such as accuracy, precision, recall, and F1-score. The accuracy associated with the different machine learning algorithms varies from 85% to 89%, and it is evident from the experimental results that the deep learning model outperformed them with an accuracy of 98%. Novelty: The proposed system emphasizes the use of a language-agnostic model for detecting the Dravidian language associated with a given text, yielding a promising accuracy of 98%, which is higher than existing methodologies. Keywords: Language, Machine learning, Deep learning, Transformer model, Encoder, Decoder
APA, Harvard, Vancouver, ISO, and other styles
4

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Babb, Robert G. "Language and models." ACM SIGSOFT Software Engineering Notes 13, no. 1 (January 3, 1988): 43–45. http://dx.doi.org/10.1145/43857.43872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, X., M. J. F. Gales, and P. C. Woodland. "Paraphrastic language models." Computer Speech & Language 28, no. 6 (November 2014): 1298–316. http://dx.doi.org/10.1016/j.csl.2014.04.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cerf, Vinton G. "Large Language Models." Communications of the ACM 66, no. 8 (July 25, 2023): 7. http://dx.doi.org/10.1145/3606337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nederhof, Mark-Jan. "A General Technique to Train Language Models on Language Models." Computational Linguistics 31, no. 2 (June 2005): 173–85. http://dx.doi.org/10.1162/0891201054223986.

Full text
Abstract:
We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton is provably minimal. This is a substantial generalization of an existing algorithm to train an n-gram model on the basis of a probabilistic context-free grammar.
APA, Harvard, Vancouver, ISO, and other styles
10

Veres, Csaba. "Large Language Models are Not Models of Natural Language: They are Corpus Models." IEEE Access 10 (2022): 61970–79. http://dx.doi.org/10.1109/access.2022.3182505.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Language models"

1

Livingstone, Daniel Jack. "Computer models of the evolution of language and languages." Thesis, University of the West of Scotland, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ryder, Robin Jeremy. "Phylogenetic models of language diversification." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Waegner, Nicholas Paul. "Stochastic models for language acquisition." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Niesler, Thomas Richard. "Category-based statistical language models." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wallach, Hanna Megan. "Structured topic models for language." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Douzon, Thibault. "Language models for document understanding." Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.

Full text
Abstract:
Every day, companies worldwide receive and process enormous volumes of documents. In an effort to reduce the cost of processing each document, the largest companies have resorted to document automation technologies. In an ideal world, a document can be automatically processed without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. The state-of-the-art techniques have evolved quickly in recent decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task far more efficiently than recurrent networks, needing only a small proportion of the training data to reach close to maximum performance. This highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful to the model in its future tasks. We show that these new tasks improve the model's downstream performance, even with small models. Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost when used on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, due to how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This incentivizes the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
APA, Harvard, Vancouver, ISO, and other styles
7

Townsend, Duncan Clarke McIntire. "Using a symbolic language parser to Improve Markov language models." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100621.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 31-32).
This thesis presents a hybrid approach to natural language processing that combines an n-gram (Markov) model with a symbolic parser. In concert, these two techniques are applied to the problem of sentence simplification. The n-gram system comprises a relational database backend with a frontend application that presents a homogeneous interface for both direct n-gram lookup and Markov approximation. The query language exposed by the frontend also applies lexical information from the START natural language system to allow queries based on part of speech. Using the START natural language system's parser, English sentences are transformed into a collection of structural, syntactic, and lexical statements that are uniquely well-suited to the process of simplification. After reducing the parse of the sentence, the resulting expressions can be processed back into English. These reduced sentences are ranked by likelihood by the n-gram model.
by Duncan Clarke McIntire Townsend.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
8

Buttery, P. J. "Computational models for first language acquisition." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597195.

Full text
Abstract:
This work investigates a computational model of first language acquisition: the Categorial Grammar Learner, or CGL. The model builds on the work of Villavicencio, who created a parametric Categorial Grammar learner that organises its parameters into an inheritance hierarchy, and also on the work of Buszkowski and Kanazawa, who demonstrated the learnability of a k-valued Classic Categorial Grammar (which uses only the rules of function application) from strings. The CGL is able to learn a k-valued General Categorial Grammar (which uses the rules of function application, function composition, and Generalised Weak Permutation). The novel concept of Sentence Objects (simple strings, augmented strings, unlabelled structures, and functor-argument structures) is presented as potential points from which learning may commence. Augmented strings (which are strings augmented with some basic syntactic information) are suggested as a sensible input to the CGL, as they are cognitively plausible objects and have greater information content than strings alone. Building on the work of Siskind, a method for constructing augmented strings from unordered logic forms is detailed, and it is suggested that augmented strings are simply a representation of the constraints placed on the space of possible parses by a string's associated semantic content. The CGL makes crucial use of a statistical Memory Module (constructed from a type memory and a word order memory) that is used both to constrain hypotheses and to handle data which is noisy or parametrically ambiguous. A consequence of the Memory Module is that the CGL learns in an incremental fashion. This echoes real child learning as documented in Brown's Stages of Language Development and also as alluded to by an included corpus study of child speech. Furthermore, the CGL learns faster when initially presented with simpler linguistic data; a further corpus study of child-directed speech suggests that this echoes the input provided to children. The CGL is demonstrated to learn from real data. It is evaluated against previous parametric learners (the Triggering Learning Algorithm of Gibson and Wexler and the Structural Triggers Learner of Fodor and Sakas) and is found to be more efficient.
APA, Harvard, Vancouver, ISO, and other styles
9

Nkadimeng, Calvin. "Language identification using Gaussian mixture models." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4170.

Full text
Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: The importance of language identification for African languages is increasing dramatically due to the development of telecommunication infrastructure and, as a result, an increase in the volumes of data and speech traffic in public networks. By automatically processing the raw speech data, the vital assistance given to people in distress can be sped up by referring their calls to a person knowledgeable in that language. To this effect, a speech corpus was developed and various algorithms were implemented and tested on raw telephone speech data. These algorithms entailed data preparation, signal processing, and statistical analysis aimed at discriminating between languages. The statistical model of Gaussian Mixture Models (GMMs) was chosen for this research due to its ability to represent an entire language with a single stochastic model that does not require phonetic transcription. Language identification for African languages using GMMs is feasible, although there are a few challenges, such as proper classification and an accurate study of the relationships between languages, that need to be overcome. Other methods that make use of phonetically transcribed data need to be explored and tested with the new corpus for the research to be more rigorous.
APA, Harvard, Vancouver, ISO, and other styles
10

Schuster, Ingmar. "Probabilistic models of natural language semantics." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-204503.

Full text
Abstract:
This thesis tackles the problem of modeling the semantics of natural language. Neural network models are reviewed, and a new Bayesian approach is developed and evaluated. As the performance of standard Monte Carlo algorithms proved to be unsatisfactory for the developed models, the main focus lies on a new adaptive algorithm from the Sequential Monte Carlo (SMC) family. The Gradient Importance Sampling (GRIS) algorithm developed in the thesis is shown to give very good performance compared to many adaptive Markov Chain Monte Carlo (MCMC) algorithms on a range of complex target distributions. Another advantage compared to MCMC is that GRIS provides a straightforward estimate of model evidence. Finally, Sample Inflation is introduced as a means to reduce variance and speed up mode finding in Importance Sampling and SMC algorithms. Sample Inflation provides provably consistent estimates and is empirically found to improve the convergence of integral estimates.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Language models"

1

Stevenson, Rosemary J. Models of language development. Milton Keynes [England]: Open University Press, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ryan, Ann, Alison Wray, and British Association for Applied Linguistics, eds. Evolving models of language. Clevedon, England: British Association for Applied Linguistics in association with Multilingual Matters, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Giora, Rachel, ed. Models of figurative language. Hillsdale, N.J.: Lawrence Erlbaum Associates, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Meduna, Alexander, and Ondřej Soukup. Modern Language Models and Computation. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63100-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Goossens, Louis. English modals and functional models: A confrontation. [Wilrijk, Belgium]: Universiteit Antwerpen, Universitaire Instelling Antwerpen, Departement Germaanse, Afd. Linguïstiek, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dirven, René, Roslyn Frank, and Martin Pütz, eds. Cognitive Models in Language and Thought. Berlin, Boston: DE GRUYTER, 2003. http://dx.doi.org/10.1515/9783110892901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Leopold, Henrik, ed. Natural Language in Business Process Models. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-04175-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kornai, András, ed. Extended finite state models of language. Cambridge, UK: Cambridge University Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Language models"

1

Hiemstra, Djoerd. "Language Models." In Encyclopedia of Database Systems, 1–5. New York, NY: Springer New York, 2017. http://dx.doi.org/10.1007/978-1-4899-7993-3_923-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hiemstra, Djoerd. "Language Models." In Encyclopedia of Database Systems, 1591–94. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tanaka-Ishii, Kumiko. "Language Models." In Mathematics in Mind, 173–82. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-59377-3_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hiemstra, Djoerd. "Language Models." In Encyclopedia of Database Systems, 2061–65. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Turner, Raymond. "Programming Language Specification." In Computable Models, 1–7. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-052-4_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mark, Kevin E., Michael I. Miller, and Ulf Grenander. "Constrained Stochastic Language Models." In Image Models (and their Speech Model Cousins), 131–40. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-4056-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Skansi, Sandro. "Neural Language Models." In Undergraduate Topics in Computer Science, 165–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sobernig, Stefan. "Variable Language Models." In Variable Domain-specific Software Languages with DjDSL, 73–136. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42152-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McTear, Michael, and Marina Ashurkina. "Large Language Models." In Transforming Conversational AI, 61–84. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0110-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Peng, and David B. Sawyer. "Predicative Language Models." In Machine Learning in Translation, 52–68. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003321538-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Language models"

1

Zaytsev, Vadim. "Language Design with Intent." In 2017 ACM/IEEE 20th International Conference on Model-Driven Engineering Languages and Systems (MODELS). IEEE, 2017. http://dx.doi.org/10.1109/models.2017.16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Torroba Hennigen, Lucas, and Yoon Kim. "Deriving Language Models from Masked Language Models." In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-short.99.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Perez, Ethan, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. "Red Teaming Language Models with Language Models." In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.emnlp-main.225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pfeiffer, Jérôme. "Systematic Component-Oriented Language Reuse." In 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). IEEE, 2023. http://dx.doi.org/10.1109/models-c59198.2023.00043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schiedermeier, Maximilian, Bowen Li, Ryan Languay, Greta Freitag, Qiutan Wu, Jorg Kienzle, Hyacinth Ali, Ian Gauthier, and Gunter Mussbacher. "Multi-Language Support in TouchCORE." In 2021 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). IEEE, 2021. http://dx.doi.org/10.1109/models-c53483.2021.00096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Brychcin, Tomas, and Miloslav Konopik. "Morphological based language models for inflectional languages." In 2011 IEEE 6th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS). IEEE, 2011. http://dx.doi.org/10.1109/idaacs.2011.6072829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

"Natural-language Scenario Descriptions for Testing Core Language Models of Domain-Specific Languages." In International Conference on Model-Driven Engineering and Software Development. SCITEPRESS - Science and and Technology Publications, 2014. http://dx.doi.org/10.5220/0004713703560367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vara Larsen, Matias Ezequiel, Julien DeAntoni, Benoit Combemale, and Frederic Mallet. "A Behavioral Coordination Operator Language (BCOoL)." In 2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS). IEEE, 2015. http://dx.doi.org/10.1109/models.2015.7338249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Seidewitz, Ed, Arnaud Blouin, and Jérôme Pfeiffer. "Modeling Language Engineering Workshop (MLE 2023)." In 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). IEEE, 2023. http://dx.doi.org/10.1109/models-c59198.2023.00063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gaudin, Emmanuel, Eric Brunel, and Mihal Brumbulli. "Language Agnostic Model Checking for SDL." In 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). IEEE, 2023. http://dx.doi.org/10.1109/models-c59198.2023.00052.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Language models"

1

Seymore, Kristie, and Ronald Rosenfeld. Scalable Trigram Backoff Language Models. Fort Belvoir, VA: Defense Technical Information Center, May 1996. http://dx.doi.org/10.21236/ada310721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hacioglu, Kadri, and Wayne Ward. On Combining Language Models: Oracle Approach. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada460991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lavrenko, Victor. Localized Smoothing for Multinomial Language Models. Fort Belvoir, VA: Defense Technical Information Center, May 2000. http://dx.doi.org/10.21236/ada477851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Raimondo, S., T. Chen, A. Zakharov, L. Brin, D. Kur, J. Hui, S. L. Burgoyne, G. Newton, and C. J. M. Lawley. Datasets to support geological language models. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Howland, Scott, Jessica Yaros, and Noriaki Kono. MetaText: Compositional Generalization in Deep Language Models. Office of Scientific and Technical Information (OSTI), October 2022. http://dx.doi.org/10.2172/1987883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cullen, Cabot, and Zhehui Wang. Exploration of Language Models for ICF Design. Office of Scientific and Technical Information (OSTI), July 2023. http://dx.doi.org/10.2172/1992244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. Truth, Lies, and Automation: How Language Models Could Change Disinformation. Center for Security and Emerging Technology, May 2021. http://dx.doi.org/10.51593/2021ca003.

Full text
Abstract:
Growing popular and industry interest in high-performing natural language generation models has led to concerns that such models could be used to generate automated disinformation at scale. This report examines the capabilities of GPT-3, a cutting-edge AI system that writes text, to analyze its potential misuse for disinformation. A model like GPT-3 may be able to help disinformation actors substantially reduce the work necessary to write disinformation while expanding its reach and potentially also its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
8

Borders, Tammie, and Svitlana Volkova. An Introduction to Word Embeddings and Language Models. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1773690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Korinek, Anton. Language Models and Cognitive Automation for Economic Research. Cambridge, MA: National Bureau of Economic Research, February 2023. http://dx.doi.org/10.3386/w30957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Benjamin Yen Kit. Automated neuron explanation for code-trained language models. Ames (Iowa): Iowa State University, May 2024. http://dx.doi.org/10.31274/cc-20240624-267.

Full text
APA, Harvard, Vancouver, ISO, and other styles