Selected scientific literature on the topic "Explainable fact checking"

Below is a list of current articles, books, theses, conference papers, and other relevant scholarly sources on the topic "Explainable fact checking".

Journal articles on the topic "Explainable fact checking"

1

Zeng, Fengzhu, and Wei Gao. "JustiLM: Few-shot Justification Generation for Explainable Fact-Checking of Real-world Claims". Transactions of the Association for Computational Linguistics 12 (2024): 334–54. http://dx.doi.org/10.1162/tacl_a_00649.

Abstract:
Justification is an explanation that supports the veracity assigned to a claim in fact-checking. However, the task of justification generation has previously been oversimplified as summarization of a fact-check article authored by fact-checkers. We therefore propose a realistic approach that generates justifications from retrieved evidence. We present a new benchmark dataset called ExClaim (for Explainable fact-checking of real-world Claims) and introduce JustiLM, a novel few-shot justification generation method based on a retrieval-augmented language model that uses fact-check articles as an auxiliary resource during training only. Experiments show that JustiLM achieves promising performance in justification generation compared to strong baselines, and can also enhance veracity classification with a straightforward extension. Code and dataset are released at https://github.com/znhy1024/JustiLM.
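As an illustration of the retrieve-then-generate recipe this abstract describes, here is a minimal sketch of justification generation from retrieved evidence. It is not the authors' code; the `retrieve` and `generate` callables are hypothetical placeholders for an evidence retriever and a language model.

```python
# Minimal sketch of retrieval-augmented justification generation
# (hypothetical placeholders, NOT the JustiLM implementation).
from typing import Callable, List

def generate_justification(
    claim: str,
    corpus: List[str],
    retrieve: Callable[[str, List[str], int], List[str]],  # hypothetical retriever
    generate: Callable[[str], str],                        # hypothetical LM wrapper
    k: int = 5,
) -> str:
    """Retrieve top-k evidence for the claim, then condition the LM on it."""
    evidence = retrieve(claim, corpus, k)
    prompt = (
        f"Claim: {claim}\n"
        + "\n".join(f"Evidence {i + 1}: {e}" for i, e in enumerate(evidence))
        + "\nJustify the verdict for this claim:"
    )
    return generate(prompt)

# Trivial stand-ins so the sketch runs end to end.
toy_retrieve = lambda claim, corpus, k: corpus[:k]
toy_generate = lambda prompt: "Justification: <LM output would go here>"
print(generate_justification("The sky is green.",
                             ["Observation: the sky is blue."],
                             toy_retrieve, toy_generate, k=1))
```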
2

Augenstein, Isabelle. "Habilitation Abstract: Towards Explainable Fact Checking". KI - Künstliche Intelligenz, September 13, 2022. http://dx.doi.org/10.1007/s13218-022-00774-6.

3

Linder, Rhema, Sina Mohseni, Fan Yang, Shiva K. Pentyala, Eric D. Ragan, and Xia Ben Hu. "How level of explanation detail affects human performance in interpretable intelligent systems: A study on explainable fact checking". Applied AI Letters 2, no. 4 (November 26, 2021). http://dx.doi.org/10.1002/ail2.49.


Theses and dissertations on the topic "Explainable fact checking"

1

Ahmadi, Naser. "A framework for the continuous curation of a knowledge base system". PhD diss., Sorbonne Université, 2021. http://www.theses.fr/2021SORUS320.

Abstract:
Entity-centric knowledge graphs (KGs) are becoming increasingly popular for gathering information about entities. The schemas of KGs are semantically rich, with many different types and predicates to define entities and their relationships. These KGs contain domain-specific knowledge, but to get the most out of the data, the KG's structure and patterns must be understood. A significant challenge with KGs is the quality of their data: without high-quality data, applications cannot rely on the KG. However, because KGs are created and updated automatically, they contain a lot of noisy and inconsistent data, and the large number of triples makes manual validation impossible. Since new facts and entities emerge and invalid statements exist, creating and maintaining a KG is a never-ending process. In this thesis, we present different tools that can be utilized in the process of continuous creation and curation of KGs. We first present an approach designed to create a KG in the accounting field by matching entities. We then introduce methods for the continuous curation of KGs. We present an algorithm for conditional rule mining and apply it to large KGs. Next, we describe RuleHub, an extensible corpus of rules for public KGs that provides functionalities for archiving and retrieving rules. We also propose methods for exploiting logical rules in two different applications: teaching soft rules to pre-trained language models (RuleBert) and explainable fact checking (ExpClaim).
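As a concrete (and heavily simplified) picture of the rule-mining step the abstract mentions, the toy sketch below computes support and confidence for one candidate rule over a handful of invented triples; it is not the thesis's conditional rule miner.

```python
# Toy rule-quality statistics over an invented mini-KG
# (illustrative only; not the thesis's algorithm).
triples = {
    ("alice", "spouse", "bob"),
    ("bob", "spouse", "alice"),
    ("carol", "spouse", "dan"),
}

# Candidate rule: spouse(x, y) -> spouse(y, x)
body = [(s, o) for (s, p, o) in triples if p == "spouse"]
support = sum(1 for (s, o) in body if (o, "spouse", s) in triples)
confidence = support / len(body)
print(f"support={support}, confidence={confidence:.2f}")  # support=2, confidence=0.67
```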
2

"Explainable Fact Checking by Combining Automated Rule Discovery with Probabilistic Answer Set Programming". Master's thesis, 2018. http://hdl.handle.net/2286/R.I.50443.

Abstract:
The goal of fact checking is to determine if a given claim holds. A promising approach for this task is to exploit reference information in the form of knowledge graphs (KGs), a structured and formal representation of knowledge with semantic descriptions of entities and relations. KGs are successfully used in multiple applications, but the information stored in a KG is inevitably incomplete. In order to address the incompleteness problem, this thesis proposes a new method built on top of recent results in logical rule discovery in KGs called RuDik and a probabilistic extension of answer set programs called LPMLN. This thesis presents the integration of RuDik, which discovers logical rules over a given KG, and LPMLN to do probabilistic inference to validate a fact. While automatically discovered rules over a KG are for human selection and revision, they can be turned into LPMLN programs with a minor modification. Leveraging the probabilistic inference in LPMLN, it is possible to (i) derive new information which is not explicitly stored in a KG with a probability associated with it, and (ii) provide supporting facts and rules for interpretable explanations of such decisions. This thesis also presents experiments and results showing that this approach can label claims with high precision. The evaluation of the system also sheds light on the role played by the quality of the given rules and the quality of the KG.
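As a drastically simplified illustration of the idea, the sketch below lets weighted rules vote for or against a claim and converts the votes into a probability with a log-linear contrast, echoing (but by no means implementing) LPMLN-style weighted inference; the rules and weights are invented.

```python
# Log-linear contrast of supporting vs. refuting rule weights
# (a toy echo of LPMLN-style weighted inference; invented numbers).
import math

def claim_probability(support_weights, refute_weights):
    """Exponentiate the total weight on each side and normalize."""
    s = math.exp(sum(support_weights))
    r = math.exp(sum(refute_weights))
    return s / (s + r)

# Two discovered rules support the claim, one refutes it; in a real
# system the weights would come from rule-quality statistics.
print(claim_probability([1.5, 0.8], [0.4]))  # ~0.87
```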

Book chapters on the topic "Explainable fact checking"

1

Atanasova, Pepa. "Generating Fact Checking Explanations". In Accountable and Explainable Methods for Complex Reasoning over Text, 83–103. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51518-7_4.

2

Atanasova, Pepa. "Fact Checking with Insufficient Evidence". In Accountable and Explainable Methods for Complex Reasoning over Text, 39–64. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51518-7_2.

3

Atanasova, Pepa. "Multi-Hop Fact Checking of Political Claims". In Accountable and Explainable Methods for Complex Reasoning over Text, 131–51. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51518-7_6.

4

Atanasova, Pepa. "Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing". In Accountable and Explainable Methods for Complex Reasoning over Text, 105–30. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51518-7_5.

5

Althabiti, Saud, Mohammad Ammar Alsalka, and Eric Atwell. "Generative AI for Explainable Automated Fact Checking on the FactEx: A New Benchmark Dataset". In Disinformation in Open Online Media, 1–13. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-47896-3_1.


Conference papers on the topic "Explainable fact checking"

1

Kotonya, Neema, and Francesca Toni. "Explainable Automated Fact-Checking: A Survey". In Proceedings of the 28th International Conference on Computational Linguistics. Stroudsburg, PA, USA: International Committee on Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.coling-main.474.

2

Yang, Jing, Didier Vega-Oliveros, Tais Seibt, and Anderson Rocha. "Explainable Fact-Checking Through Question Answering". In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022. http://dx.doi.org/10.1109/icassp43922.2022.9747214.

3

Samarinas, Chris, Wynne Hsu, and Mong Li Lee. "Improving Evidence Retrieval for Automated Explainable Fact-Checking". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-demos.10.

4

Ahmadi, Naser, Joohyung Lee, Paolo Papotti, and Mohammed Saeed. "Explainable Fact Checking with Probabilistic Answer Set Programming". In Conference for Truth and Trust Online 2019. TTO Conference Ltd., 2019. http://dx.doi.org/10.36370/tto.2019.15.

5

Kotonya, Neema, and Francesca Toni. "Explainable Automated Fact-Checking for Public Health Claims". In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.623.

6

Nikopensius, Gustav, Mohit Mayank, Orchid Chetia Phukan, and Rajesh Sharma. "Reinforcement Learning-based Knowledge Graph Reasoning for Explainable Fact-checking". In ASONAM '23: International Conference on Advances in Social Networks Analysis and Mining. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3625007.3627593.

7

Lourenco, Vitor, and Aline Paes. "A Modality-level Explainable Framework for Misinformation Checking in Social Networks". In LatinX in AI at Neural Information Processing Systems Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202211283.

Abstract:
The spread of false information is a rising concern worldwide with critical social impact, inspiring the emergence of fact-checking organizations to mitigate misinformation dissemination. However, human-driven verification is time-consuming and creates a bottleneck: trustworthy information cannot be checked at the same pace it emerges. Since misinformation relates not only to the content itself but also to other social features, this paper addresses automatic misinformation checking in social networks from a multimodal perspective. Moreover, as simply labeling a piece of news as incorrect may not convince the citizen and, even worse, may strengthen confirmation bias, the proposal is a modality-level, explainability-oriented misinformation classifier framework. Our framework comprises a misinformation classifier assisted by explainable methods to generate modality-oriented explainable inferences. Preliminary findings show that the misinformation classifier benefits from multimodal information encoding and that the modality-oriented explainable mechanism increases both the interpretability and completeness of inferences.
8

Althabiti, Saud, Mohammad Ammar Alsalka, and Eric Atwell. "TA’KEED: The First Generative Fact-Checking System for Arabic Claims". In 11th International Conference on Artificial Intelligence and Applications. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140103.

Abstract:
This paper introduces Ta’keed, an explainable Arabic automatic fact-checking system. While existing research often focuses on classifying claims as "True" or "False", there is limited exploration of generating explanations for claim credibility, particularly in Arabic. Ta’keed addresses this gap by assessing claim truthfulness based on retrieved snippets, utilizing two main components: information retrieval and LLM-based claim verification. We compiled ArFactEx, a gold-labelled test dataset with manually justified references, to evaluate the system. The initial model achieved a promising F1 score of 0.72 on the classification task, and the system's generated explanations were compared with gold-standard explanations both syntactically and semantically. The study recommends evaluation based on semantic similarity, which yielded an average cosine similarity score of 0.76. Additionally, we explored the impact of varying snippet quantities on claim classification accuracy, revealing a potential correlation, with the model using the top seven hits outperforming others with an F1 score of 0.77.
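To show the shape of the semantic comparison step this abstract mentions, here is a small sketch that scores a generated explanation against a gold one with TF-IDF cosine similarity; the paper's 0.76 figure comes from its own semantic measure, so this is only an illustrative stand-in with invented example sentences.

```python
# TF-IDF cosine similarity between a generated and a gold explanation
# (illustrative stand-in for the paper's semantic evaluation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

generated = "The claim is false because the ministry denied the report."
gold = "Official sources at the ministry denied the report, so the claim is false."

tfidf = TfidfVectorizer().fit_transform([generated, gold])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"cosine similarity: {score:.2f}")
```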