Academic literature on the topic "Neural language models"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference proceedings, and other scholarly sources on the topic "Neural language models".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Neural language models"
Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models". Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.
Bengio, Yoshua. "Neural net language models". Scholarpedia 3, no. 1 (2008): 3881. http://dx.doi.org/10.4249/scholarpedia.3881.
Dong, Li. "Learning natural language interfaces with neural models". AI Matters 7, no. 2 (June 2021): 14–17. http://dx.doi.org/10.1145/3478369.3478375.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation". Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
Chang, Tyler A., and Benjamin K. Bergen. "Word Acquisition in Neural Language Models". Transactions of the Association for Computational Linguistics 10 (2022): 1–16. http://dx.doi.org/10.1162/tacl_a_00444.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.
Lau, Mandy. "Artificial intelligence language models and the false fantasy of participatory language policies". Working Papers in Applied Linguistics and Linguistics at York 1 (September 13, 2021): 4–15. http://dx.doi.org/10.25071/2564-2855.5.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Park, Myung-Kwan, Keonwoo Koo, Jaemin Lee, and Wonil Chung. "Investigating Syntactic Transfer from English to Korean in Neural L2 Language Models". Studies in Modern Grammar 121 (March 30, 2024): 177–201. http://dx.doi.org/10.14342/smog.2024.121.177.
Theses on the topic "Neural language models"
Lei, Tao (Ph. D., Massachusetts Institute of Technology). "Interpretable neural models for natural language processing". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.
Texto completoCataloged from PDF version of thesis.
Includes bibliographical references (pages 109-119).
The success of neural network models often comes at a cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justification, we relate this component to string kernels, i.e. functions measuring the similarity between sequences, and demonstrate how it encodes the efficient kernel computing algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications. In the second part, we learn rationales behind the model's prediction by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves performance of over 90% when evaluated against manually annotated rationales.
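The generator/encoder setup described in this abstract can be illustrated compactly. The following is a minimal sketch, not code from the thesis: it assumes a PyTorch-style model in which the generator samples a binary token mask (the rationale) and the encoder predicts only from the selected tokens, with a regularizer encouraging short, contiguous selections; all class, variable, and hyperparameter names are illustrative.

import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    # Generator proposes per-token selection probabilities; a binary mask is sampled
    # from them, and the encoder predicts from the masked (rationale-only) input.
    def __init__(self, vocab_size, emb_dim=100, hidden=200, n_outputs=1):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gen_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.gen_out = nn.Linear(2 * hidden, 1)
        self.enc_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.enc_out = nn.Linear(hidden, n_outputs)

    def forward(self, tokens, sparsity=3e-4, coherence=2.0):  # illustrative hyperparameters
        x = self.emb(tokens)                                   # (batch, time, emb_dim)
        probs = torch.sigmoid(self.gen_out(self.gen_rnn(x)[0])).squeeze(-1)
        mask = torch.bernoulli(probs)                          # sampled rationale, (batch, time)
        masked = x * mask.unsqueeze(-1)                        # keep only the selected tokens
        _, (h, _) = self.enc_rnn(masked)
        pred = self.enc_out(h[-1])                             # prediction made from the rationale alone
        # Regularizer expressing the desiderata: short rationales, contiguous selections.
        omega = sparsity * mask.sum(1) + coherence * (mask[:, 1:] - mask[:, :-1]).abs().sum(1)
        return pred, mask, omega

Because the mask is sampled, the generator cannot be trained by plain backpropagation; the published approach optimizes the expected loss with a sampled-gradient (REINFORCE-style) estimate, which this sketch leaves out.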
Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision". Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.
Labeau, Matthieu. "Neural language models: Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise-contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise-contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and show experimentally that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives of which noise-contrastive estimation is a particular case. Finally, we aim at improving performance on full-vocabulary language models by augmenting the output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
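As a concrete illustration of the sampling-based training this abstract refers to, here is a minimal noise-contrastive estimation loss for the output layer of a neural language model. It is a sketch under assumed PyTorch conventions, not code from the thesis; the function name, tensor shapes, and the choice of a unigram-style noise distribution are all illustrative.

import torch
import torch.nn.functional as F

def nce_loss(hidden, targets, out_emb, out_bias, noise_dist, k=64):
    # hidden:     (B, D) context vectors from the network
    # targets:    (B,)   gold next-word ids
    # out_emb:    (V, D) output word embeddings; out_bias: (V,)
    # noise_dist: (V,)   noise distribution q(w), e.g. a smoothed unigram distribution
    B = hidden.size(0)
    noise = torch.multinomial(noise_dist, B * k, replacement=True).view(B, k)

    def log_score(ids):
        # log s(w) = h . e_w + b_w, used as if already normalized (self-normalization)
        return (hidden.unsqueeze(1) * out_emb[ids]).sum(-1) + out_bias[ids]

    log_s_pos = log_score(targets.unsqueeze(1)).squeeze(1)          # (B,)
    log_s_neg = log_score(noise)                                    # (B, k)
    log_kq_pos = torch.log(k * noise_dist[targets] + 1e-10)
    log_kq_neg = torch.log(k * noise_dist[noise] + 1e-10)

    # P(data | w) = s(w) / (s(w) + k q(w)); the log-odds go through logsigmoid for stability.
    pos = F.logsigmoid(log_s_pos - log_kq_pos)
    neg = F.logsigmoid(-(log_s_neg - log_kq_neg)).sum(1)
    # The partition function is never computed: only 1 + k scores per target word.
    return -(pos + neg).mean()

Only 1 + k output scores are computed per position, so the training cost no longer grows linearly with the vocabulary size, which is the limitation the abstract starts from.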
Bayer, Ali Orkan. "Semantic language models with deep neural networks". Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367784.
Bayer, Ali Orkan. "Semantic language models with deep neural networks". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1578/1/bayer_thesis.pdf.
Li, Zhongliang. "Slim Embedding Layers for Recurrent Neural Language Models". Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1531950458646138.
Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.
Scarcella, Alessandro. "Recurrent neural network language models in the context of under-resourced South African languages". Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29431.
Le, Hai Son. "Continuous space models with neural networks in natural language processing". PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.
Miao, Yishu. "Deep generative models for natural language processing". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.
Texto completoLibros sobre el tema "Neural language models"
Houghton, George, ed. Connectionist models in cognitive psychology. Hove: Psychology Press, 2004.
Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge, Mass.: MIT Press, 1993.
Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. ru: INFRA-M Academic Publishing LLC, 2022. http://dx.doi.org/10.12737/1858259.
Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.
Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.
Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.
Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.
Book chapters on the topic "Neural language models"
Skansi, Sandro. "Neural Language Models". In Undergraduate Topics in Computer Science, 165–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.
Delasalles, Edouard, Sylvain Lamprier, and Ludovic Denoyer. "Dynamic Neural Language Models". In Neural Information Processing, 282–94. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_24.
Hampton, Peter John, Hui Wang, and Zhiwei Lin. "Knowledge Transfer in Neural Language Models". In Artificial Intelligence XXXIV, 143–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_12.
O’Neill, James, and Danushka Bollegala. "Learning to Evaluate Neural Language Models". In Communications in Computer and Information Science, 123–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_11.
Goldrick, Matthew. "Neural Network Models of Speech Production". In The Handbook of the Neuropsychology of Language, 125–45. Oxford, UK: Wiley-Blackwell, 2012. http://dx.doi.org/10.1002/9781118432501.ch7.
G, Santhosh Kumar. "Neural Language Models for (Fake?) News Generation". In Data Science for Fake News, 129–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62696-9_6.
Huang, Yue, and Xiaodong Gu. "Temporal Modeling Approach for Video Action Recognition Based on Vision-language Models". In Neural Information Processing, 512–23. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8067-3_38.
Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons". In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.
Shen, Tongtong, Longbiao Wang, Xie Chen, Kuntharrgyal Khysru, and Jianwu Dang. "Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models". In Neural Information Processing, 266–75. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_28.
Taylor, N. R., and J. G. Taylor. "The Neural Networks for Language in the Brain: Creating LAD". In Computational Models for Neuroscience, 245–65. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0085-0_9.
Conference papers on the topic "Neural language models"
Ragni, Anton, Edgar Dakin, Xie Chen, Mark J. F. Gales, and Kate M. Knill. "Multi-Language Neural Network Language Models". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-371.
Кузнецов, Алексей Валерьевич. "NEURAL LANGUAGE MODELS FOR HISTORICAL RESEARCH". In Высокие технологии и инновации в науке: сборник избранных статей Международной научной конференции (Санкт-Петербург, Май 2022). Crossref, 2022. http://dx.doi.org/10.37539/vt197.2022.25.51.002.
Alexandrescu, Andrei, and Katrin Kirchhoff. "Factored neural language models". In the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1614049.1614050.
Arisoy, Ebru, and Murat Saraclar. "Compositional Neural Network Language Models for Agglutinative Languages". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1239.
Gandhe, Ankur, Florian Metze, and Ian Lane. "Neural network language models for low resource languages". In Interspeech 2014. ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-560.
Chen, Zihao. "Neural Language Models in Natural Language Processing". In 2023 2nd International Conference on Data Analytics, Computing and Artificial Intelligence (ICDACAI). IEEE, 2023. http://dx.doi.org/10.1109/icdacai59742.2023.00104.
Oba, Miyu, Tatsuki Kuribayashi, Hiroki Ouchi, and Taro Watanabe. "Second Language Acquisition of Neural Language Models". In Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-acl.856.
Liu, X., M. J. F. Gales, and P. C. Woodland. "Paraphrastic language models and combination with neural network language models". In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639308.
Javier Vazquez Martinez, Hector, Annika Lea Heuser, Charles Yang, and Jordan Kodner. "Evaluating Neural Language Models as Cognitive Models of Language Acquisition". In Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.genbench-1.4.
Huang, Yinghui, Abhinav Sethy, Kartik Audhkhasi, and Bhuvana Ramabhadran. "Whole Sentence Neural Language Models". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461734.
Reports on the topic "Neural language models"
Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [б. в.], November 2018. http://dx.doi.org/10.31812/123456789/2648.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 10. Neutral Data Manipulation Language (NDML) Precompiler Control Module Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250451.
Althoff, J. L., M. L. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 5. Neutral Data Definition Language (NDDL) Development Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252450.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250453.
Althoff, J., and M. Apicella. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 9. Neutral Data Manipulation Language (NDML) Precompiler Development Specification. Section 2. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252526.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250452.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 23. Neutral Data Manipulation Language (NDML) Precompiler Build Source Code Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250460.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 24. Neutral Data Manipulation Language (NDML) Precompiler Generator Support Routines Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250461.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 3 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251997.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 4 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251998.