Ready-made bibliography on the topic "Neural language models"
Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Neural language models".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, when the corresponding details are available in the source's metadata.
Journal articles on the topic "Neural language models"
Buckman, Jacob, and Graham Neubig. "Neural Lattice Language Models". Transactions of the Association for Computational Linguistics 6 (December 2018): 529–41. http://dx.doi.org/10.1162/tacl_a_00036.
Bengio, Yoshua. "Neural net language models". Scholarpedia 3, no. 1 (2008): 3881. http://dx.doi.org/10.4249/scholarpedia.3881.
Dong, Li. "Learning natural language interfaces with neural models". AI Matters 7, no. 2 (June 2021): 14–17. http://dx.doi.org/10.1145/3478369.3478375.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation". Information 13, no. 5 (April 23, 2022): 220. http://dx.doi.org/10.3390/info13050220.
Chang, Tyler A., and Benjamin K. Bergen. "Word Acquisition in Neural Language Models". Transactions of the Association for Computational Linguistics 10 (2022): 1–16. http://dx.doi.org/10.1162/tacl_a_00444.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.095762.
Mezzoudj, Freha, and Abdelkader Benyettou. "An empirical study of statistical language models: n-gram language models vs. neural network language models". International Journal of Innovative Computing and Applications 9, no. 4 (2018): 189. http://dx.doi.org/10.1504/ijica.2018.10016827.
Mandy Lau. "Artificial intelligence language models and the false fantasy of participatory language policies". Working papers in Applied Linguistics and Linguistics at York 1 (September 13, 2021): 4–15. http://dx.doi.org/10.25071/2564-2855.5.
Qi, Kunxun, and Jianfeng Du. "Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8632–39. http://dx.doi.org/10.1609/aaai.v34i05.6387.
Park, Myung-Kwan, Keonwoo Koo, Jaemin Lee, and Wonil Chung. "Investigating Syntactic Transfer from English to Korean in Neural L2 Language Models". Studies in Modern Grammar 121 (March 30, 2024): 177–201. http://dx.doi.org/10.14342/smog.2024.121.177.
Doctoral dissertations on the topic "Neural language models"
Lei, Tao. "Interpretable neural models for natural language processing". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108990.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 109-119).
The success of neural network models often comes at the cost of interpretability. This thesis addresses the problem by providing justifications behind the model's structure and predictions. In the first part of this thesis, we present a class of sequence operations for text processing. The proposed component generalizes from convolution operations and gated aggregations. As justification, we relate this component to string kernels, i.e. functions measuring the similarity between sequences, and demonstrate how it encodes the efficient kernel computation algorithm into its structure. The proposed model achieves state-of-the-art or competitive results compared to alternative architectures (such as LSTMs and CNNs) across several NLP applications. In the second part, we learn rationales behind the model's prediction by extracting input pieces as supporting evidence. Rationales are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by the desiderata for rationales. We demonstrate the effectiveness of this learning framework in applications such as multi-aspect sentiment analysis. Our method achieves performance over 90% when evaluated against manually annotated rationales.
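The training signal the abstract describes (rationales that are short and coherent, yet sufficient for the prediction) can be illustrated with a minimal sketch. The penalty forms and weights below are illustrative assumptions, not the thesis's exact formulation:

```python
def rationale_objective(pred_loss, mask, sparsity_weight=0.1, coherence_weight=0.05):
    """Regularized objective for rationale extraction (illustrative sketch).

    pred_loss: the encoder's prediction loss when it sees only the tokens
               selected by the generator's binary mask.
    mask:      0/1 selection over input tokens (the candidate rationale).
    The regularizers encode the desiderata from the abstract: rationales
    should be short (sparsity) and coherent (few on/off transitions).
    """
    sparsity = sum(mask)  # number of selected tokens
    coherence = sum(abs(a - b) for a, b in zip(mask, mask[1:]))  # fragmentation
    return pred_loss + sparsity_weight * sparsity + coherence_weight * coherence

# For equal prediction loss and equal length, a contiguous selection
# scores better (lower objective) than a scattered one:
contiguous = [0, 1, 1, 1, 0]
scattered = [1, 0, 1, 0, 1]
assert rationale_objective(0.5, scattered) > rationale_objective(0.5, contiguous)
```

In the actual framework the mask is sampled from the generator's distribution and the non-differentiable selection is handled with policy-gradient-style training; the sketch only shows the shape of the regularized objective.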
Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision". Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.
Labeau, Matthieu. "Neural language models: Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
Pełny tekst źródłaThis work investigates practical methods to ease training and improve performances of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, ensuring that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise-contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives from which noise contrastive estimation is a particular case. Finally, we aim at improving performances on full vocabulary language models, by augmenting output words representation with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more
Bayer, Ali Orkan. "Semantic Language models with deep neural Networks". Doctoral thesis, Università degli studi di Trento, 2015. https://hdl.handle.net/11572/367784.
Bayer, Ali Orkan. "Semantic Language models with deep neural Networks". Doctoral thesis, University of Trento, 2015. http://eprints-phd.biblio.unitn.it/1578/1/bayer_thesis.pdf.
Li, Zhongliang. "Slim Embedding Layers for Recurrent Neural Language Models". Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1531950458646138.
Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.
Scarcella, Alessandro. "Recurrent neural network language models in the context of under-resourced South African languages". Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29431.
Le, Hai Son. "Continuous space models with neural networks in natural language processing". PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00776704.
Miao, Yishu. "Deep generative models for natural language processing". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e4e1f1f9-e507-4754-a0ab-0246f1e1e258.
Books on the topic "Neural language models"
Houghton, George, ed. Connectionist models in cognitive psychology. Hove: Psychology Press, 2004.
Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge, Mass: MIT Press, 1993.
Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1858259.
Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.
Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.
Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.
Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.
Book chapters on the topic "Neural language models"
Skansi, Sandro. "Neural Language Models". In Undergraduate Topics in Computer Science, 165–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73004-2_9.
Delasalles, Edouard, Sylvain Lamprier, and Ludovic Denoyer. "Dynamic Neural Language Models". In Neural Information Processing, 282–94. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36718-3_24.
Hampton, Peter John, Hui Wang, and Zhiwei Lin. "Knowledge Transfer in Neural Language Models". In Artificial Intelligence XXXIV, 143–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_12.
O’Neill, James, and Danushka Bollegala. "Learning to Evaluate Neural Language Models". In Communications in Computer and Information Science, 123–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_11.
Goldrick, Matthew. "Neural Network Models of Speech Production". In The Handbook of the Neuropsychology of Language, 125–45. Oxford, UK: Wiley-Blackwell, 2012. http://dx.doi.org/10.1002/9781118432501.ch7.
G, Santhosh Kumar. "Neural Language Models for (Fake?) News Generation". In Data Science for Fake News, 129–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62696-9_6.
Huang, Yue, and Xiaodong Gu. "Temporal Modeling Approach for Video Action Recognition Based on Vision-language Models". In Neural Information Processing, 512–23. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8067-3_38.
Goldberg, Yoav. "From Linear Models to Multi-layer Perceptrons". In Neural Network Methods for Natural Language Processing, 37–39. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-031-02165-7_3.
Shen, Tongtong, Longbiao Wang, Xie Chen, Kuntharrgyal Khysru, and Jianwu Dang. "Exploiting the Tibetan Radicals in Recurrent Neural Network for Low-Resource Language Models". In Neural Information Processing, 266–75. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70096-0_28.
Taylor, N. R., and J. G. Taylor. "The Neural Networks for Language in the Brain: Creating LAD". In Computational Models for Neuroscience, 245–65. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0085-0_9.
Conference papers on the topic "Neural language models"
Ragni, Anton, Edgar Dakin, Xie Chen, Mark J. F. Gales, and Kate M. Knill. "Multi-Language Neural Network Language Models". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-371.
Кузнецов, Алексей Валерьевич. "NEURAL LANGUAGE MODELS FOR HISTORICAL RESEARCH". In Высокие технологии и инновации в науке: сборник избранных статей Международной научной конференции (Санкт-Петербург, Май 2022). Crossref, 2022. http://dx.doi.org/10.37539/vt197.2022.25.51.002.
Alexandrescu, Andrei, and Katrin Kirchhoff. "Factored neural language models". In the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1614049.1614050.
Arisoy, Ebru, and Murat Saraclar. "Compositional Neural Network Language Models for Agglutinative Languages". In Interspeech 2016. ISCA, 2016. http://dx.doi.org/10.21437/interspeech.2016-1239.
Gandhe, Ankur, Florian Metze, and Ian Lane. "Neural network language models for low resource languages". In Interspeech 2014. ISCA: ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-560.
Chen, Zihao. "Neural Language Models in Natural Language Processing". In 2023 2nd International Conference on Data Analytics, Computing and Artificial Intelligence (ICDACAI). IEEE, 2023. http://dx.doi.org/10.1109/icdacai59742.2023.00104.
Oba, Miyu, Tatsuki Kuribayashi, Hiroki Ouchi, and Taro Watanabe. "Second Language Acquisition of Neural Language Models". In Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-acl.856.
Liu, X., M. J. F. Gales, and P. C. Woodland. "Paraphrastic language models and combination with neural network language models". In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6639308.
Javier Vazquez Martinez, Hector, Annika Lea Heuser, Charles Yang, and Jordan Kodner. "Evaluating Neural Language Models as Cognitive Models of Language Acquisition". In Proceedings of the 1st GenBench Workshop on (Benchmarking) Generalisation in NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.genbench-1.4.
Huang, Yinghui, Abhinav Sethy, Kartik Audhkhasi, and Bhuvana Ramabhadran. "Whole Sentence Neural Language Models". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461734.
Reports on the topic "Neural language models"
Semerikov, Serhiy O., Illia O. Teplytskyi, Yuliia V. Yechkalo, and Arnold E. Kiv. Computer Simulation of Neural Networks Using Spreadsheets: The Dawn of the Age of Camelot. [n.p.], November 2018. http://dx.doi.org/10.31812/123456789/2648.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 10. Neutral Data Manipulation Language (NDML) Precompiler Control Module Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250451.
Althoff, J. L., M. L. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 5. Neutral Data Definition Language (NDDL) Development Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252450.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250453.
Althoff, J., and M. Apicella. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 9. Neutral Data Manipulation Language (NDML) Precompiler Development Specification. Section 2. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252526.
Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250452.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 23. Neutral Data Manipulation Language (NDML) Precompiler Build Source Code Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250460.
Apicella, M. L., J. Slaton, B. Levi, and A. Pashak. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 24. Neutral Data Manipulation Language (NDML) Precompiler Generator Support Routines Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250461.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 3 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251997.
Althoff, J., M. Apicella, and S. Singh. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. Neutral Data Definition Language (NDDL) Product Specification. Section 4 of 6. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada251998.