Academic literature on the topic 'Neural language models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural language models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Neural language models"
Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models." Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.
Bengio, Yoshua. "Neural net language models." Scholarpedia 3, no. 1 (2008): 3881. http://dx.doi.org/10.4249/scholarpedia.3881.
Dong, Li. "Learning natural language interfaces with neural models." AI Matters 7, no. 2 (June 2021): 14–17. http://dx.doi.org/10.1145/3478369.3478375.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
Chang, Tyler A., and Benjamin K. Bergen. "Word Acquisition in Neural Language Models." Transactions of the Association for Computational Linguistics 10 (2022): 1–16. http://dx.doi.org/10.1162/tacl_a_00444.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models." International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.
Lau, Mandy. "Artificial intelligence language models and the false fantasy of participatory language policies." Working Papers in Applied Linguistics and Linguistics at York 1 (September 13, 2021): 4–15. http://dx.doi.org/10.25071/2564-2855.5.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Park, Myung-Kwan, Keonwoo Koo, Jaemin Lee, and Wonil Chung. "Investigating Syntactic Transfer from English to Korean in Neural L2 Language Models." Studies in Modern Grammar 121 (March 30, 2024): 177–201. http://dx.doi.org/10.14342/smog.2024.121.177.
Full textDissertations / Theses on the topic "Neural language models"
Lei, Tao. "Interpretable neural models for natural language processing." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.
The success of neural network models often comes at the cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justification, we relate this component to string kernels, i.e., functions measuring the similarity between sequences, and demonstrate how it encodes an efficient kernel computation algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications. In the second part, we learn rationales behind the model's predictions by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training; instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves performance above 90% when evaluated against manually annotated rationales.
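To make the generator-encoder idea in this abstract concrete, here is a minimal sketch of rationale extraction in PyTorch. It is an illustrative reconstruction under simplifying assumptions (a bag-of-words encoder, a per-token linear generator, made-up hyperparameters), not the architecture or code from the thesis: the generator samples a binary mask over input tokens, the encoder predicts the label from the masked text alone, and the non-differentiable sampling step is trained with a REINFORCE-style gradient on the prediction cost plus sparsity and coherence penalties.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleModel(nn.Module):
    """Generator samples a subset of input tokens (the rationale); the encoder
    must predict the label from that subset alone. Rationales are unsupervised."""

    def __init__(self, vocab_size, embed_dim=100, num_classes=2,
                 sparsity=0.01, coherence=0.02):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.generator = nn.Linear(embed_dim, 1)           # per-token keep probability
        self.encoder = nn.Linear(embed_dim, num_classes)   # simple bag-of-words classifier
        self.sparsity = sparsity      # penalty on rationale length
        self.coherence = coherence    # penalty on fragmented selections

    def forward(self, tokens, labels):
        emb = self.embed(tokens)                                      # (batch, seq, dim)
        keep_prob = torch.sigmoid(self.generator(emb)).squeeze(-1)    # (batch, seq)
        mask = torch.bernoulli(keep_prob)                             # sampled rationale

        # The encoder sees only the selected tokens (masked mean pooling).
        pooled = (emb * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True).clamp(min=1.0)
        pred_loss = F.cross_entropy(self.encoder(pooled), labels, reduction="none")

        # Desiderata: short rationales with few on/off transitions.
        length = mask.sum(1)
        transitions = (mask[:, 1:] - mask[:, :-1]).abs().sum(1)
        cost = pred_loss + self.sparsity * length + self.coherence * transitions

        # REINFORCE-style update: the sampled mask is non-differentiable, so the
        # generator is trained with the (detached) cost as a reward signal.
        log_p_mask = (mask * torch.log(keep_prob + 1e-8)
                      + (1 - mask) * torch.log(1 - keep_prob + 1e-8)).sum(1)
        return pred_loss.mean() + (cost.detach() * log_p_mask).mean(), mask

# Hypothetical usage: a batch of padded token ids and binary labels.
model = RationaleModel(vocab_size=10000)
tokens = torch.randint(0, 10000, (4, 50))
labels = torch.randint(0, 2, (4,))
loss, rationale_mask = model(tokens, labels)
loss.backward()
```

In a real system both components would be stronger sequence encoders; the sketch only shows the training signal that couples the generator and the encoder.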
Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.
Labeau, Matthieu. "Neural language models: Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their high computational cost, which grows linearly with the size of the vocabulary. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation, which allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise-contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method, and we show experimentally that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives: we use Beta divergences to derive a set of objectives of which noise contrastive estimation is a particular case. Finally, we aim at improving the performance of full-vocabulary language models by augmenting the output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations alongside word embeddings for the output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
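The sampling-based training that this abstract describes can be illustrated with a short noise-contrastive estimation (NCE) sketch for a language model output layer, written in PyTorch. The class name, the smoothed unigram noise distribution, and the choice of k = 20 noise samples are assumptions made for the example; this is a generic rendering of NCE, not the thesis's implementation or its Bregman-divergence generalizations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCELoss(nn.Module):
    """Noise-contrastive estimation for a language model output layer.

    Rather than normalising scores over the whole vocabulary (the partition
    function), each true next word is contrasted with k words drawn from a
    fixed noise distribution, here a smoothed unigram distribution."""

    def __init__(self, hidden_dim, vocab_size, unigram_counts, k=20):
        super().__init__()
        self.out_embed = nn.Embedding(vocab_size, hidden_dim)  # output word vectors
        self.out_bias = nn.Parameter(torch.zeros(vocab_size))
        noise = unigram_counts.float() + 1.0                   # add-one smoothing
        self.register_buffer("noise_dist", noise / noise.sum())
        self.k = k

    def score(self, hidden, words):
        # Unnormalised log-score s(w, h) = <e_w, h> + b_w
        return torch.einsum("bnd,bd->bn", self.out_embed(words), hidden) + self.out_bias[words]

    def forward(self, hidden, targets):
        """hidden: (batch, dim) context states; targets: (batch,) next-word ids."""
        batch = targets.size(0)
        noise_words = torch.multinomial(self.noise_dist, batch * self.k,
                                        replacement=True).view(batch, self.k)

        s_data = self.score(hidden, targets.unsqueeze(1))       # (batch, 1)
        s_noise = self.score(hidden, noise_words)               # (batch, k)
        log_kpn_data = torch.log(self.k * self.noise_dist[targets]).unsqueeze(1)
        log_kpn_noise = torch.log(self.k * self.noise_dist[noise_words])

        # Binary classification: true next word vs. sampled noise words.
        return -(F.logsigmoid(s_data - log_kpn_data).sum()
                 + F.logsigmoid(-(s_noise - log_kpn_noise)).sum()) / batch

# Hypothetical usage with random context vectors standing in for RNN states.
vocab_size, hidden_dim = 50000, 256
counts = torch.randint(1, 100, (vocab_size,))
nce = NCELoss(hidden_dim, vocab_size, counts)
hidden = torch.randn(8, hidden_dim)
targets = torch.randint(0, vocab_size, (8,))
nce(hidden, targets).backward()
```

The point of the sketch is that the loss touches only the target word and k sampled words per position, so its cost no longer grows with the vocabulary size; at evaluation time one either computes the full softmax or relies on the self-normalization behaviour the abstract mentions.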
Bayer, Ali Orkan. "Semantic language models with deep neural networks." Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367784.
Bayer, Ali Orkan. "Semantic language models with deep neural networks." Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1578/1/bayer_thesis.pdf.
Li, Zhongliang. "Slim Embedding Layers for Recurrent Neural Language Models." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1531950458646138.
Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.
Scarcella, Alessandro. "Recurrent neural network language models in the context of under-resourced South African languages." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29431.
Le, Hai Son. "Continuous space models with neural networks in natural language processing." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.
Miao, Yishu. "Deep generative models for natural language processing." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.
Full textBooks on the topic "Neural language models"
Houghton, George, ed. Connectionist Models in Cognitive Psychology. Hove: Psychology Press, 2004.
Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge, Mass: MIT Press, 1993.
Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. INFRA-M Academic Publishing LLC, 2022. http://dx.doi.org/10.12737/1858259.
Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.
Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.
Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.
Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.
Book chapters on the topic "Neural language models"
Skansi, Sandro. "Neural Language Models." In Undergraduate Topics in Computer Science, 165–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.
Delasalles, Edouard, Sylvain Lamprier, and Ludovic Denoyer. "Dynamic Neural Language Models." In Neural Information Processing, 282–94. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_24.
Hampton, Peter John, Hui Wang, and Zhiwei Lin. "Knowledge Transfer in Neural Language Models." In Artificial Intelligence XXXIV, 143–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_12.
O’Neill, James, and Danushka Bollegala. "Learning to Evaluate Neural Language Models." In Communications in Computer and Information Science, 123–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_11.
Goldrick, Matthew. "Neural Network Models of Speech Production." In The Handbook of the Neuropsychology of Language, 125–45. Oxford, UK: Wiley-Blackwell, 2012. http://dx.doi.org/10.1002/9781118432501.ch7.
G, Santhosh Kumar. "Neural Language Models for (Fake?) News Generation." In Data Science for Fake News, 129–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62696-9_6.
Huang, Yue, and Xiaodong Gu. "Temporal Modeling Approach for Video Action Recognition Based on Vision-language Models." In Neural Information Processing, 512–23. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8067-3_38.
Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons." In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.
Shen, Tongtong, Longbiao Wang, Xie Chen, Kuntharrgyal Khysru, and Jianwu Dang. "Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models." In Neural Information Processing, 266–75. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_28.
Taylor, N. R., and J. G. Taylor. "The Neural Networks for Language in the Brain: Creating LAD." In Computational Models for Neuroscience, 245–65. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0085-0_9.
Conference papers on the topic "Neural language models"
Ragni, Anton, Edgar Dakin, Xie Chen, Mark J. F. Gales, and Kate M. Knill. "Multi-Language Neural Network Language Models." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-371.
Kuznetsov, Aleksei Valerievich. "Neural Language Models for Historical Research." In High Technologies and Innovations in Science: Collection of Selected Papers of the International Scientific Conference (St. Petersburg, May 2022). Crossref, 2022. http://dx.doi.org/10.37539/vt197.2022.25.51.002.
Alexandrescu, Andrei, and Katrin Kirchhoff. "Factored neural language models." In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1614049.1614050.
Arisoy, Ebru, and Murat Saraclar. "Compositional Neural Network Language Models for Agglutinative Languages." In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1239.
Gandhe, Ankur, Florian Metze, and Ian Lane. "Neural network language models for low resource languages." In Interspeech 2014. ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-560.
Chen, Zihao. "Neural Language Models in Natural Language Processing." In 2023 2nd International Conference on Data Analytics, Computing and Artificial Intelligence (ICDACAI). IEEE, 2023. http://dx.doi.org/10.1109/icdacai59742.2023.00104.
Oba, Miyu, Tatsuki Kuribayashi, Hiroki Ouchi, and Taro Watanabe. "Second Language Acquisition of Neural Language Models." In Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-acl.856.
Liu, X., M. J. F. Gales, and P. C. Woodland. "Paraphrastic language models and combination with neural network language models." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639308.
Vazquez Martinez, Hector Javier, Annika Lea Heuser, Charles Yang, and Jordan Kodner. "Evaluating Neural Language Models as Cognitive Models of Language Acquisition." In Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.genbench-1.4.
Huang, Yinghui, Abhinav Sethy, Kartik Audhkhasi, and Bhuvana Ramabhadran. "Whole Sentence Neural Language Models." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461734.
Full textReports on the topic "Neural language models"
Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [n.p.], November 2018. http://dx.doi.org/10.31812/123456789/2648.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 10. Neutral Data Manipulation Language (NDML) Precompiler Control Module Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250451.
Althoff, J. L., M. L. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 5. Neutral Data Definition Language (NDDL) Development Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252450.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250453.
Althoff, J., and M. Apicella. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 9. Neutral Data Manipulation Language (NDML) Precompiler Development Specification. Section 2. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252526.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250452.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 23. Neutral Data Manipulation Language (NDML) Precompiler Build Source Code Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250460.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 24. Neutral Data Manipulation Language (NDML) Precompiler Generator Support Routines Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250461.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 3 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251997.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 4 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251998.