Selected scientific literature on the topic "Data-to-text generation"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles
Consult the list of current articles, books, theses, conference papers, and other scientific sources relevant to the topic "Data-to-text generation".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.
Journal articles on the topic "Data-to-text generation":
Yang, Sen, and Yang Liu. "Data-to-text Generation via Planning". Journal of Physics: Conference Series 1827, no. 1 (March 1, 2021): 012190. http://dx.doi.org/10.1088/1742-6596/1827/1/012190.
Puduppully, Ratish, Yao Fu, and Mirella Lapata. "Data-to-text Generation with Variational Sequential Planning". Transactions of the Association for Computational Linguistics 10 (2022): 697–715. http://dx.doi.org/10.1162/tacl_a_00484.
Gong, Heng, Xiaocheng Feng, and Bing Qin. "DiffuD2T: Empowering Data-to-Text Generation with Diffusion". Electronics 12, no. 9 (May 7, 2023): 2136. http://dx.doi.org/10.3390/electronics12092136.
Puduppully, Ratish, and Mirella Lapata. "Data-to-text Generation with Macro Planning". Transactions of the Association for Computational Linguistics 9 (2021): 510–27. http://dx.doi.org/10.1162/tacl_a_00381.
Zhang, Dell, Jiahao Yuan, Xiaoling Wang, and Adam Foster. "Probabilistic Verb Selection for Data-to-Text Generation". Transactions of the Association for Computational Linguistics 6 (December 2018): 511–27. http://dx.doi.org/10.1162/tacl_a_00038.
Li, Shujie, Liang Li, Ruiying Geng, Min Yang, Binhua Li, Guanghu Yuan, Wanwei He, et al. "Unifying Structured Data as Graph for Data-to-Text Pre-Training". Transactions of the Association for Computational Linguistics 12 (2024): 210–28. http://dx.doi.org/10.1162/tacl_a_00641.
Gong, Heng, Xiaocheng Feng, and Bing Qin. "Quality Control for Distantly-Supervised Data-to-Text Generation via Meta Learning". Applied Sciences 13, no. 9 (April 30, 2023): 5573. http://dx.doi.org/10.3390/app13095573.
Puduppully, Ratish, Li Dong, and Mirella Lapata. "Data-to-Text Generation with Content Selection and Planning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6908–15. http://dx.doi.org/10.1609/aaai.v33i01.33016908.
Gkatzia, Dimitra, Oliver Lemon, and Verena Rieser. "Data-to-Text Generation Improves Decision-Making Under Uncertainty". IEEE Computational Intelligence Magazine 12, no. 3 (August 2017): 10–17. http://dx.doi.org/10.1109/mci.2017.2708998.
Rebuffel, Clement, Marco Roberti, Laure Soulier, Geoffrey Scoutheeten, Rossella Cancelliere, and Patrick Gallinari. "Controlling hallucinations at word level in data-to-text generation". Data Mining and Knowledge Discovery 36, no. 1 (October 22, 2021): 318–54. http://dx.doi.org/10.1007/s10618-021-00801-4.
Theses on the topic "Data-to-text generation":
Gkatzia, Dimitra. "Data-driven approaches to content selection for data-to-text generation". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/3003.
Hill, Geoffrey. "Sensemaking in Big Data: Conceptual and Empirical Approaches to Actionable Knowledge Generation from Unstructured Text Streams". Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1433597354.
Pereira, José Casimiro. "Natural language generation in the context of multimodal interaction in Portuguese : Data-to-text based in automatic translation". Doctoral thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/21767.
Abstract in Portuguese not available.
To enable interaction by text and/or speech, it is essential that we devise systems capable of translating internal data into sentences or texts that can be shown on screen or heard by users. In this context, it is essential that these natural language generation (NLG) systems provide sentences in the native languages of the users (in our case European Portuguese) and enable an easy development and integration process while providing an output that is perceived as natural. The creation of high-quality NLG systems is not an easy task, even for a small domain. The main difficulties arise from: classic approaches being very demanding in know-how and development time; a lack of variability in the sentences generated by most generation methods; the difficulty of easily accessing complete tools; a shortage of resources, such as large corpora; and support being available in only a limited number of languages. The main goal of this work was to propose, develop, and test a method to convert Data-to-Portuguese, which can be developed with the smallest possible amount of time and resources, while being capable of generating utterances with variability and quality. The thesis argues that this goal can be achieved by adopting data-driven language generation (more precisely, generation based on language translation) and following an Engineering Research Methodology. In this thesis, two Data2Text NLG systems are presented. They were designed to provide a way to quickly develop an NLG system which can generate sentences of good quality. The proposed systems use tools that are freely available and can be developed by people with limited linguistic skills. One important characteristic is the use of statistical machine translation techniques; this approach requires only a small natural language corpus, resulting in easier and cheaper development when compared to more common approaches.
The main result of this thesis is the demonstration that, by following the proposed approach, it is possible to create systems capable of translating information/data into good-quality sentences in Portuguese. This is done without major effort in resource creation and with the common knowledge of an experienced application developer. The systems created, particularly the hybrid system, provide a good solution for problems in data-to-text conversion.
Shimorina, Anastasia. "Natural Language Generation : From Data Creation to Evaluation via Modelling". Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0080.
Natural language generation is a process of generating a natural language text from some input. This input can be texts, documents, images, tables, knowledge graphs, databases, dialogue acts, meaning representations, etc. Recent methods in natural language generation, mostly based on neural modelling, have yielded significant improvements in the field. Despite this recent success, numerous issues with generation prevail, such as faithfulness to the source, developing multilingual models, and few-shot generation. This thesis explores several facets of natural language generation, from creating training datasets and developing models to evaluating proposed methods and model outputs. In this thesis, we address the issue of multilinguality and propose possible strategies to semi-automatically translate corpora for data-to-text generation. We show that named entities constitute a major stumbling block in translation, as exemplified by the English-Russian translation pair. We proceed to handle rare entities in data-to-text modelling, exploring two mechanisms: copying and delexicalisation. We demonstrate that rare entities strongly impact performance and that the impact of these two mechanisms greatly varies depending on how datasets are constructed. Returning to multilinguality, we also develop a modular approach for shallow surface realisation in several languages. Our approach splits the surface realisation task into three submodules: word ordering, morphological inflection, and contraction generation. We show, via delexicalisation, that the word ordering component mainly depends on syntactic information. Along with the modelling, we also propose a framework for error analysis, focused on word order, for the shallow surface realisation task. The framework makes it possible to provide linguistic insights into model performance at the sentence level and to identify patterns where models underperform.
Finally, we also touch upon the subject of evaluation design while assessing automatic and human metrics, highlighting the difference between sentence-level and system-level evaluation.
Faille, Juliette. "Data-Based Natural Language Generation : Evaluation and Explainability". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct which makes them sound natural. Though the semantics of the texts are right in most cases, even the state-of-the-art NLG models still produce texts with partially incorrect meanings. In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graphs verbalization and conversational question generation. First, we focus on the task of RDF verbalization and the omissions and hallucinations of RDF entities, i.e. when an automatically generated text does not mention all the input RDF entities or mentions other entities than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models. We propose a metric based on omissions or hallucination counts to quantify the semantic adequacy of the NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct and show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we propose to analyze the encoder of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities. 
Finally, we propose a T5-based conversational question generation model that, given an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure that automatically assesses coherence with the conversation context and semantic adequacy with respect to the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models.
Vaudry, Pierre-Luc. "Narrative generation by associative network extraction from real-life temporal data". Thesis, 2016. http://hdl.handle.net/1866/18473.
Data about events abounds in our technological society. An attractive way of presenting real-life temporal data to facilitate its interpretation is an automatically generated narrative. Narrative comprehension involves the construction of a causal network by the reader. Narrative data-to-text systems seem to acknowledge causal relations as important. However, they play a secondary role in their document planners and their identification relies mostly on domain knowledge. This thesis proposes an assisted temporal data interpretation model by narrative generation in which narratives are structured with the help of a mix of automatically mined and manually defined association rules. The associations suggest causal hypotheses to the reader who can thus construct more easily a causal representation of the events. This model should be applicable to any repetitive temporal data, preferably including actions or activities, such as Activity of Daily Living (ADL) data. Sequential association rules are selected based on the criteria of confidence and statistical significance as measured in training data. World and domain knowledge association rules are based on the similarity of some aspect of a pair of events or on causal patterns difficult to detect statistically. To interpret a specific period to summarize, pairs of events for which an association rule applies are associated. Some extra associations are then derived. Together the events and associations form an associative network. The most important step of the Natural Language Generation (NLG) pipeline is document planning, comprising event selection and document structuring. For event selection, the model relies on the confidence of sequential associations to select the most unusual facts. The assumption is that an event that is implied by another one with a relatively high probability may be left implicit in the text. 
The structure of the narrative is called the connecting associative thread because it allows the reader to follow associations from the beginning to the end of the text. It takes the form of a spanning tree over the previously selected associative sub-network. The associations it contains are selected based on association type preferences and relative temporal distance. The connecting associative thread is then segmented into paragraphs, sentences, and phrases and the associations are translated to rhetorical relations. The microplanning step defines lexico-syntactic templates describing each event type. When two event descriptions need to be assembled in the same sentence, a discourse marker expressing the specified rhetorical relation is employed. A main event and a preceding main event are determined for each sentence. When the associative thread parent of the main event is not the preceding main event, an anaphor is added to the sentence front discourse marker. Surface realization can be performed in English or French thanks to bilingual lexico-syntactic specifications and the SimpleNLG-EnFr Java library. The results of a textual quality evaluation show that the texts are understandable and the lexical choices adequate.
Books on the topic "Data-to-text generation":
McKeown, Kathleen R. Text generation: Using discourse strategies and focus constraints to generate natural language text. Cambridge [Cambridgeshire]: Cambridge University Press, 1985.
Bizyuk, Aleksandr. Fundamentals of abnormal psychology. INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/974663.
McKeown, Kathleen R. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text (Studies in Natural Language Processing). Cambridge University Press, 1992.
Henderson, Peter A. Southwood's Ecological Methods. 5th ed. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198862277.001.0001.
Ondercin, Heather L. The Evolution of Women’s (and Men’s) Partisan Attachments. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190265144.003.0003.
Mackenzie, Simon. Transnational Criminology. Policy Press, 2020. http://dx.doi.org/10.1332/policypress/9781529203783.001.0001.
Brantingham, Patricia L., Paul J. Brantingham, Justin Song, and Valerie Spicer. Advances in Visualization for Theory Testing in Environmental Criminology. Edited by Gerben J. N. Bruinsma and Shane D. Johnson. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190279707.013.37.
Lovasi, Gina S., Ana V. Diez Roux, and Jennifer Kolker, eds. Urban Public Health. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190885304.001.0001.
Ufimtseva, Nataliya V., Iosif A. Sternin e Elena Yu Myagkova. Russian psycholinguistics: results and prospects (1966–2021): a research monograph. Institute of Linguistics, Russian Academy of Sciences, 2021. http://dx.doi.org/10.30982/978-5-6045633-7-3.
Book chapters on the topic "Data-to-text generation":
Gardent, Claire. "Syntax and Data-to-Text Generation". In Statistical Language and Speech Processing, 3–20. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11397-5_1.
Upadhyay, Ashish, Stewart Massie, Ritwik Kumar Singh, Garima Gupta, and Muneendra Ojha. "A Case-Based Approach to Data-to-Text Generation". In Case-Based Reasoning Research and Development, 232–47. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86957-1_16.
Rebuffel, Clément, Laure Soulier, Geoffrey Scoutheeten, and Patrick Gallinari. "A Hierarchical Model for Data-to-Text Generation". In Lecture Notes in Computer Science, 65–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45439-5_5.
Wang, Mengda, Jianjun Cao, Xu Yu, and Zibo Nie. "A Data-to-Text Generation Model with Deduplicated Content Planning". In Big Data, 92–103. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8331-3_6.
Mota, Abelardo Vieira, Ticiana Linhares Coelho da Silva, and José Antônio Fernandes De Macêdo. "Template-Based Multi-solution Approach for Data-to-Text Generation". In Advances in Databases and Information Systems, 157–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54832-2_13.
Pandey, Abhishek Kumar, and Sanjiban Sekhar Roy. "Attention Based Bidirectional LSTM Model for Data-to-text Generation". In Advances in Computational Intelligence and Its Applications, 228–35. London: CRC Press, 2024. http://dx.doi.org/10.1201/9781003488682-29.
Upadhyay, Ashish, and Stewart Massie. "CBR Assisted Context-Aware Surface Realisation for Data-to-Text Generation". In Case-Based Reasoning Research and Development, 34–49. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40177-0_3.
Belz, Anja, and Eric Kow. "Assessing the Trade-Off between System Building Cost and Output Quality in Data-to-Text Generation". In Empirical Methods in Natural Language Generation, 180–200. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15573-4_10.
Roberti, Marco, Giovanni Bonetta, Rossella Cancelliere, and Patrick Gallinari. "Copy Mechanism and Tailored Training for Character-Based Data-to-Text Generation". In Machine Learning and Knowledge Discovery in Databases, 648–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46147-8_39.
Upadhyay, Ashish, and Stewart Massie. "A Case-Based Approach for Content Planning in Data-to-Text Generation". In Case-Based Reasoning Research and Development, 380–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14923-8_25.
Conference papers on the topic "Data-to-text generation":
Kale, Mihir, and Abhinav Rastogi. "Text-to-Text Pre-Training for Data-to-Text Tasks". In Proceedings of the 13th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.inlg-1.14.
Kasner, Zdeněk, and Ondřej Dušek. "Data-to-Text Generation with Iterative Text Editing". In Proceedings of the 13th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.inlg-1.9.
Liu, Mengzhu, Zhaonan Mu, Jieping Sun, and Cheng Wang. "Data-to-text Generation with Pointer-Generator Networks". In 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). IEEE, 2020. http://dx.doi.org/10.1109/aeeca49918.2020.9213600.
Perez-Beltrachini, Laura, and Claire Gardent. "Analysing Data-To-Text Generation Benchmarks". In Proceedings of the 10th International Conference on Natural Language Generation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-3537.
Puduppully, Ratish, Li Dong, and Mirella Lapata. "Data-to-text Generation with Entity Modeling". In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1195.
Lin, Shuai, Wentao Wang, Zichao Yang, Xiaodan Liang, Frank F. Xu, Eric Xing, and Zhiting Hu. "Data-to-Text Generation with Style Imitation". In Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.144.
Xu, Xinnuo, Ivan Titov, and Mirella Lapata. "Compositional Generalization for Data-to-Text Generation". In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.623.
Chang, Ernie, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. "Neural Data-to-Text Generation with LM-based Text Augmentation". In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.eacl-main.64.
Burgdorf, Andreas, Micaela Barkmann, André Pomp, and Tobias Meisen. "Domain-independent Data-to-Text Generation for Open Data". In 11th International Conference on Data Science, Technology and Applications. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011272900003269.
Gong, Li, Josep Crego, and Jean Senellart. "Enhanced Transformer Model for Data-to-Text Generation". In Proceedings of the 3rd Workshop on Neural Generation and Translation. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-5615.
Organization reports on the topic "Data-to-text generation":
Ma, Yue, and Felix Distel. Learning Formal Definitions for Snomed CT from Text. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.193.
Foundation models such as ChatGPT through the prism of the UNESCO Recommendation on the Ethics of Artificial Intelligence. UNESCO, 2023. http://dx.doi.org/10.54678/bgiv6160.