A ready-made bibliography on the topic "Data-to-text generation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Data-to-text generation".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant parameters are available in the metadata.
Journal articles on the topic "Data-to-text generation"
Yang, Sen, and Yang Liu. "Data-to-text Generation via Planning". Journal of Physics: Conference Series 1827, no. 1 (March 1, 2021): 012190. http://dx.doi.org/10.1088/1742-6596/1827/1/012190.
Puduppully, Ratish, Yao Fu, and Mirella Lapata. "Data-to-text Generation with Variational Sequential Planning". Transactions of the Association for Computational Linguistics 10 (2022): 697–715. http://dx.doi.org/10.1162/tacl_a_00484.
Gong, Heng, Xiaocheng Feng, and Bing Qin. "DiffuD2T: Empowering Data-to-Text Generation with Diffusion". Electronics 12, no. 9 (May 7, 2023): 2136. http://dx.doi.org/10.3390/electronics12092136.
Puduppully, Ratish, and Mirella Lapata. "Data-to-text Generation with Macro Planning". Transactions of the Association for Computational Linguistics 9 (2021): 510–27. http://dx.doi.org/10.1162/tacl_a_00381.
Zhang, Dell, Jiahao Yuan, Xiaoling Wang, and Adam Foster. "Probabilistic Verb Selection for Data-to-Text Generation". Transactions of the Association for Computational Linguistics 6 (December 2018): 511–27. http://dx.doi.org/10.1162/tacl_a_00038.
Li, Shujie, Liang Li, Ruiying Geng, Min Yang, Binhua Li, Guanghu Yuan, Wanwei He, et al. "Unifying Structured Data as Graph for Data-to-Text Pre-Training". Transactions of the Association for Computational Linguistics 12 (2024): 210–28. http://dx.doi.org/10.1162/tacl_a_00641.
Gong, Heng, Xiaocheng Feng, and Bing Qin. "Quality Control for Distantly-Supervised Data-to-Text Generation via Meta Learning". Applied Sciences 13, no. 9 (April 30, 2023): 5573. http://dx.doi.org/10.3390/app13095573.
Puduppully, Ratish, Li Dong, and Mirella Lapata. "Data-to-Text Generation with Content Selection and Planning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6908–15. http://dx.doi.org/10.1609/aaai.v33i01.33016908.
Gkatzia, Dimitra, Oliver Lemon, and Verena Rieser. "Data-to-Text Generation Improves Decision-Making Under Uncertainty". IEEE Computational Intelligence Magazine 12, no. 3 (August 2017): 10–17. http://dx.doi.org/10.1109/mci.2017.2708998.
Rebuffel, Clement, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. "Controlling hallucinations at word level in data-to-text generation". Data Mining and Knowledge Discovery 36, no. 1 (October 22, 2021): 318–54. http://dx.doi.org/10.1007/s10618-021-00801-4.
Doctoral dissertations on the topic "Data-to-text generation"
Gkatzia, Dimitra. "Data-driven approaches to content selection for data-to-text generation". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/3003.
Hill, Geoffrey. "Sensemaking in Big Data: Conceptual and Empirical Approaches to Actionable Knowledge Generation from Unstructured Text Streams". Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1433597354.
Pełny tekst źródłaPereira, José Casimiro. "Natural language generation in the context of multimodal interaction in Portuguese : Data-to-text based in automatic translation". Doctoral thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/21767.
Pełny tekst źródłaResumo em português não disponivel
To enable interaction by text and/or speech it is essential that we devise systems capable of translating internal data into sentences or texts that can be shown on screen or heard by users. In this context, it is essential that these natural language generation (NLG) systems provide sentences in the native languages of the users (in our case European Portuguese) and enable an easy development and integration process while providing an output that is perceived as natural. The creation of high quality NLG systems is not an easy task, even for a small domain. The main difficulties arise from: classic approaches being very demanding in know-how and development time; a lack of variability in the generated sentences of most generation methods; difficulty in easily accessing complete tools; a shortage of resources, such as large corpora; and support being available in only a limited number of languages. The main goal of this work was to propose, develop and test a method to convert Data-to-Portuguese, which can be developed with the smallest possible amount of time and resources, while being capable of generating utterances with variability and quality. The thesis defended argues that this goal can be achieved by adopting data-driven language generation (more precisely, generation based on language translation) and following an Engineering Research Methodology. In this thesis, two Data2Text NLG systems are presented. They were designed to provide a way to quickly develop an NLG system which can generate sentences with good quality. The proposed systems use tools that are freely available and can be developed by people with limited linguistic skills. One important characteristic is the use of statistical machine translation techniques; this approach requires only a small natural language corpus, resulting in easier and cheaper development when compared to more common approaches.
The main result of this thesis is the demonstration that, by following the proposed approach, it is possible to create systems capable of translating information/data into good quality sentences in Portuguese. This is done without major effort regarding resource creation and with the common knowledge of an experienced application developer. The systems created, particularly the hybrid system, are capable of providing a good solution for problems in data-to-text conversion.
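The translation-based approach summarised above hinges on presenting structured data to a statistical MT system as if it were a source language. A minimal sketch of that idea, assuming an invented record format and marker scheme (the field names and linearisation below are not taken from the thesis):

```python
# Illustrative only: flatten a record of field/value pairs into a token
# sequence that could serve as the source side of an SMT training pair,
# with the Portuguese sentence as the target side.

def linearize(record):
    """Turn a dict into a deterministic token string (sorted by field name)."""
    tokens = []
    for field, value in sorted(record.items()):
        tokens.append(f"__{field}__")       # field marker token
        tokens.extend(str(value).split())   # value tokens
    return " ".join(tokens)

record = {"temperatura": "22 graus", "cidade": "Aveiro"}
print(linearize(record))  # __cidade__ Aveiro __temperatura__ 22 graus
```

An SMT toolkit would then learn alignments between such marker sequences and human-written sentences, which is why only a small parallel corpus is needed.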
Shimorina, Anastasia. "Natural Language Generation : From Data Creation to Evaluation via Modelling". Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0080.
Natural language generation is the process of generating a natural language text from some input. This input can be texts, documents, images, tables, knowledge graphs, databases, dialogue acts, meaning representations, etc. Recent methods in natural language generation, mostly based on neural modelling, have yielded significant improvements in the field. Despite this recent success, numerous issues with generation prevail, such as faithfulness to the source, developing multilingual models, and few-shot generation. This thesis explores several facets of natural language generation, from creating training datasets and developing models to evaluating proposed methods and model outputs. In this thesis, we address the issue of multilinguality and propose possible strategies to semi-automatically translate corpora for data-to-text generation. We show that named entities constitute a major stumbling block in translation, exemplified by the English-Russian translation pair. We proceed to handle rare entities in data-to-text modelling by exploring two mechanisms: copying and delexicalisation. We demonstrate that rare entities strongly impact performance and that the impact of these two mechanisms greatly varies depending on how datasets are constructed. Returning to multilinguality, we also develop a modular approach for shallow surface realisation in several languages. Our approach splits the surface realisation task into three submodules: word ordering, morphological inflection and contraction generation. We show, via delexicalisation, that the word ordering component mainly depends on syntactic information. Along with the modelling, we also propose a framework for error analysis, focused on word order, for the shallow surface realisation task. The framework makes it possible to provide linguistic insights into model performance at the sentence level and to identify patterns where models underperform.
Finally, we also touch upon the subject of evaluation design while assessing automatic and human metrics, highlighting the difference between sentence-level and system-level evaluation.
Faille, Juliette. "Data-Based Natural Language Generation : Evaluation and Explainability". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct which makes them sound natural. Though the semantics of the texts are right in most cases, even the state-of-the-art NLG models still produce texts with partially incorrect meanings. In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graphs verbalization and conversational question generation. First, we focus on the task of RDF verbalization and the omissions and hallucinations of RDF entities, i.e. when an automatically generated text does not mention all the input RDF entities or mentions other entities than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models. We propose a metric based on omission or hallucination counts to quantify the semantic adequacy of the NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct and show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we propose to analyze the encoder of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities.
Finally, we propose a T5-based conversational question generation model that in addition to generating a question based on an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure automatically assessing coherence with the conversation context and the semantic adequacy with the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models
Vaudry, Pierre-Luc. "Narrative generation by associative network extraction from real-life temporal data". Thèse, 2016. http://hdl.handle.net/1866/18473.
Data about events abounds in our technological society. An attractive way of presenting real-life temporal data to facilitate its interpretation is an automatically generated narrative. Narrative comprehension involves the construction of a causal network by the reader. Narrative data-to-text systems seem to acknowledge causal relations as important; however, causal relations play a secondary role in their document planners and their identification relies mostly on domain knowledge. This thesis proposes an assisted temporal data interpretation model by narrative generation in which narratives are structured with the help of a mix of automatically mined and manually defined association rules. The associations suggest causal hypotheses to the reader who can thus construct more easily a causal representation of the events. This model should be applicable to any repetitive temporal data, preferably including actions or activities, such as Activity of Daily Living (ADL) data. Sequential association rules are selected based on the criteria of confidence and statistical significance as measured in training data. World and domain knowledge association rules are based on the similarity of some aspect of a pair of events or on causal patterns difficult to detect statistically. To interpret a specific period to summarize, pairs of events for which an association rule applies are associated. Some extra associations are then derived. Together the events and associations form an associative network. The most important step of the Natural Language Generation (NLG) pipeline is document planning, comprising event selection and document structuring. For event selection, the model relies on the confidence of sequential associations to select the most unusual facts. The assumption is that an event that is implied by another one with a relatively high probability may be left implicit in the text.
The structure of the narrative is called the connecting associative thread because it allows the reader to follow associations from the beginning to the end of the text. It takes the form of a spanning tree over the previously selected associative sub-network. The associations it contains are selected based on association type preferences and relative temporal distance. The connecting associative thread is then segmented into paragraphs, sentences, and phrases and the associations are translated to rhetorical relations. The microplanning step defines lexico-syntactic templates describing each event type. When two event descriptions need to be assembled in the same sentence, a discourse marker expressing the specified rhetorical relation is employed. A main event and a preceding main event are determined for each sentence. When the associative thread parent of the main event is not the preceding main event, an anaphor is added to the sentence front discourse marker. Surface realization can be performed in English or French thanks to bilingual lexico-syntactic specifications and the SimpleNLG-EnFr Java library. The results of a textual quality evaluation show that the texts are understandable and the lexical choices adequate.
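The confidence criterion used above to select sequential association rules can be sketched as follows. The toy ADL-style activity sequences are invented for illustration, and the thesis additionally filters rules by statistical significance, which this sketch omits:

```python
# Sketch of sequential rule confidence: among sequences containing the
# antecedent event, the fraction in which the consequent occurs later.
# High-confidence consequents are "expected" and may be left implicit
# in the generated narrative; low-confidence events are the unusual facts.

def rule_confidence(sequences, antecedent, consequent):
    support_a, support_ab = 0, 0
    for seq in sequences:
        if antecedent in seq:
            support_a += 1
            # consequent must occur strictly after the antecedent
            if consequent in seq[seq.index(antecedent) + 1:]:
                support_ab += 1
    return support_ab / support_a if support_a else 0.0

days = [
    ["wake_up", "breakfast", "leave_home"],
    ["wake_up", "breakfast", "watch_tv"],
    ["wake_up", "leave_home"],
]
print(rule_confidence(days, "wake_up", "breakfast"))  # 0.6666666666666666
```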
Books on the topic "Data-to-text generation"
McKeown, Kathleen R. Text generation: Using discourse strategies and focus constraints to generate natural language text. Cambridge [Cambridgeshire]: Cambridge University Press, 1985.
Bizyuk, Aleksandr. Fundamentals of abnormal psychology. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/974663.
McKeown, Kathleen R. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text (Studies in Natural Language Processing). Cambridge University Press, 1992.
Henderson, Peter A. Southwood's Ecological Methods. 5th ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198862277.001.0001.
Ondercin, Heather L. The Evolution of Women's (and Men's) Partisan Attachments. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190265144.003.0003.
Mackenzie, Simon. Transnational Criminology. Policy Press, 2020. http://dx.doi.org/10.1332/policypress/9781529203783.001.0001.
Brantingham, Patricia L., Paul J. Brantingham, Justin Song, and Valerie Spicer. Advances in Visualization for Theory Testing in Environmental Criminology. Edited by Gerben J. N. Bruinsma and Shane D. Johnson. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190279707.013.37.
Lovasi, Gina S., Ana V. Diez Roux, and Jennifer Kolker, eds. Urban Public Health. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190885304.001.0001.
Ufimtseva, Nataliya V., Iosif A. Sternin, and Elena Yu Myagkova. Russian psycholinguistics: results and prospects (1966–2021): a research monograph. Institute of Linguistics, Russian Academy of Sciences, 2021. http://dx.doi.org/10.30982/978-5-6045633-7-3.
Pełny tekst źródłaCzęści książek na temat "Data-to-text generation"
Gardent, Claire. "Syntax and Data-to-Text Generation". In Statistical Language and Speech Processing, 3–20. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11397-5_1.
Upadhyay, Ashish, Stewart Massie, Ritwik Kumar Singh, Garima Gupta, and Muneendra Ojha. "A Case-Based Approach to Data-to-Text Generation". In Case-Based Reasoning Research and Development, 232–47. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86957-1_16.
Rebuffel, Clément, Laure Soulier, Geoffrey Scoutheeten, and Patrick Gallinari. "A Hierarchical Model for Data-to-Text Generation". In Lecture Notes in Computer Science, 65–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45439-5_5.
Wang, Mengda, Jianjun Cao, Xu Yu, and Zibo Nie. "A Data-to-Text Generation Model with Deduplicated Content Planning". In Big Data, 92–103. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8331-3_6.
Mota, Abelardo Vieira, Ticiana Linhares Coelho da Silva, and José Antônio Fernandes De Macêdo. "Template-Based Multi-solution Approach for Data-to-Text Generation". In Advances in Databases and Information Systems, 157–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54832-2_13.
Pandey, Abhishek Kumar, and Sanjiban Sekhar Roy. "Attention Based Bidirectional LSTM Model for Data-to-text Generation". In Advances in Computational Intelligence and Its Applications, 228–35. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003488682-29.
Upadhyay, Ashish, and Stewart Massie. "CBR Assisted Context-Aware Surface Realisation for Data-to-Text Generation". In Case-Based Reasoning Research and Development, 34–49. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40177-0_3.
Belz, Anja, and Eric Kow. "Assessing the Trade-Off between System Building Cost and Output Quality in Data-to-Text Generation". In Empirical Methods in Natural Language Generation, 180–200. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15573-4_10.
Roberti, Marco, Giovanni Bonetta, Rossella Cancelliere, and Patrick Gallinari. "Copy Mechanism and Tailored Training for Character-Based Data-to-Text Generation". In Machine Learning and Knowledge Discovery in Databases, 648–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46147-8_39.
Upadhyay, Ashish, and Stewart Massie. "A Case-Based Approach for Content Planning in Data-to-Text Generation". In Case-Based Reasoning Research and Development, 380–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14923-8_25.
Pełny tekst źródłaStreszczenia konferencji na temat "Data-to-text generation"
Kale, Mihir, and Abhinav Rastogi. "Text-to-Text Pre-Training for Data-to-Text Tasks". In Proceedings of the 13th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.inlg-1.14.
Kasner, Zdeněk, and Ondřej Dušek. "Data-to-Text Generation with Iterative Text Editing". In Proceedings of the 13th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.inlg-1.9.
Liu, Mengzhu, Zhaonan Mu, Jieping Sun, and Cheng Wang. "Data-to-text Generation with Pointer-Generator Networks". In 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). IEEE, 2020. http://dx.doi.org/10.1109/aeeca49918.2020.9213600.
Perez-Beltrachini, Laura, and Claire Gardent. "Analysing Data-To-Text Generation Benchmarks". In Proceedings of the 10th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-3537.
Puduppully, Ratish, Li Dong, and Mirella Lapata. "Data-to-text Generation with Entity Modeling". In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1195.
Lin, Shuai, Wentao Wang, Zichao Yang, Xiaodan Liang, Frank F. Xu, Eric Xing, and Zhiting Hu. "Data-to-Text Generation with Style Imitation". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.144.
Xu, Xinnuo, Ivan Titov, and Mirella Lapata. "Compositional Generalization for Data-to-Text Generation". In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.623.
Chang, Ernie, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. "Neural Data-to-Text Generation with LM-based Text Augmentation". In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.eacl-main.64.
Burgdorf, Andreas, Micaela Barkmann, André Pomp, and Tobias Meisen. "Domain-independent Data-to-Text Generation for Open Data". In 11th International Conference on Data Science, Technology and Applications. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011272900003269.
GONG, Li, Josep Crego, and Jean Senellart. "Enhanced Transformer Model for Data-to-Text Generation". In Proceedings of the 3rd Workshop on Neural Generation and Translation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-5615.
Pełny tekst źródłaRaporty organizacyjne na temat "Data-to-text generation"
Ma, Yue, i Felix Distel. Learning Formal Definitions for Snomed CT from Text. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.193.
Foundation models such as ChatGPT through the prism of the UNESCO Recommendation on the Ethics of Artificial Intelligence. UNESCO, 2023. http://dx.doi.org/10.54678/bgiv6160.