Journal articles on the topic 'Extractive Question-Answering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Extractive Question-Answering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Marie-Anne, and Rahul Khanna. "Evaluation of Single-Span Models on Extractive Multi-Span Question-Answering." International Journal of Web & Semantic Technology 12, no. 1 (January 31, 2021): 19–29. http://dx.doi.org/10.5121/ijwest.2021.12102.

Abstract:
Machine Reading Comprehension (MRC), particularly extractive closed-domain question-answering, is a prominent field in Natural Language Processing (NLP). Given a question and a passage or set of passages, a machine must be able to extract the appropriate answer from the passage(s). However, the majority of existing questions have only one answer, and more substantial testing on questions with multiple answers, or multi-span questions, has not yet been carried out. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run BERT-based models pre-trained for question-answering on our constructed dataset to evaluate their reading comprehension abilities. The runtime of the base models on the entire dataset is approximately one day, while the runtime for all models on a third of the dataset is a little over two days. Among the three BERT-based models we ran, RoBERTa exhibits the highest consistent performance, regardless of size. We find that all our models perform similarly on this new, multi-span dataset compared to the single-span source datasets. While the models tested on the source datasets were slightly fine-tuned in order to return multiple answers, performance is similar enough to judge that task formulation does not drastically affect question-answering abilities. Our evaluations indicate that these models are indeed capable of adjusting to answer questions that require multiple answers. We hope that our findings will assist future development in question-answering and improve existing question-answering products and methods.
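
For context, the single-span baseline such evaluations start from is a SQuAD-style span extractor, and the simplest way to coax multiple answers out of one is to request several top-scoring spans. A minimal sketch (the checkpoint name is an assumption, and this is not the authors' code):

```python
# Hedged sketch: run a SQuAD-tuned extractive QA model and request
# several candidate spans via top_k, a simple way to probe a
# single-span model with multi-span questions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
candidates = qa(
    question="Which models were evaluated?",
    context="We evaluate BERT, ALBERT, and RoBERTa on multi-span questions.",
    top_k=3,  # return the three best spans instead of the single best
)
for c in candidates:
    print(c["answer"], round(c["score"], 3))
```
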
2

Guan, Yue, Zhengyi Li, Zhouhan Lin, Yuhao Zhu, Jingwen Leng, and Minyi Guo. "Block-Skim: Efficient Question Answering for Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10710–19. http://dx.doi.org/10.1609/aaai.v36i10.21316.

Abstract:
Transformer models have achieved promising results on natural language processing (NLP) tasks including extractive question answering (QA). Common Transformer encoders used in NLP tasks process the hidden states of all input tokens in the context paragraph throughout all layers. However, different from other tasks such as sequence classification, answering the raised question does not necessarily need all the tokens in the context paragraph. Following this motivation, we propose Block-Skim, which learns to skim unnecessary context in higher hidden layers to improve and accelerate Transformer performance. The key idea of Block-Skim is to identify the context blocks that must be further processed and those that can be safely discarded early on during inference. Critically, we find that such information can be sufficiently derived from the self-attention weights inside the Transformer model. We further prune the hidden states corresponding to the unnecessary positions early in lower layers, achieving significant inference-time speedup. To our surprise, we observe that models pruned in this way outperform their full-size counterparts. Block-Skim improves QA models' accuracy on different datasets and achieves a 3× speedup on the BERT-base model.
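
The core idea can be pictured in a few lines: pool the self-attention mass that each fixed-size block of the context receives, then keep only the top-scoring blocks for later layers. The pooling rule and shapes below are my assumptions; Block-Skim itself learns the skip decision with a trained predictor per layer.

```python
# Toy sketch of attention-based block skimming (not the paper's code).
import torch

def skim_blocks(hidden, attn, block_size=32, keep_ratio=0.5):
    # hidden: (seq_len, dim); attn: (heads, seq_len, seq_len)
    seq_len = hidden.size(0)
    usable = seq_len - seq_len % block_size     # drop any ragged tail
    token_score = attn.mean(dim=0).mean(dim=0)  # attention mass received per key token
    block_score = token_score[:usable].view(-1, block_size).mean(dim=1)
    n_keep = max(1, int(keep_ratio * block_score.numel()))
    keep = torch.topk(block_score, n_keep).indices.sort().values
    idx = torch.cat([torch.arange(int(b) * block_size, (int(b) + 1) * block_size)
                     for b in keep])
    return hidden[idx], idx  # pruned hidden states and surviving positions

hidden = torch.randn(128, 768)
attn = torch.softmax(torch.randn(12, 128, 128), dim=-1)
pruned, kept = skim_blocks(hidden, attn)
print(pruned.shape)  # torch.Size([64, 768])
```
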
3

Hu, Zhongjian, Peng Yang, Bing Li, Yuankang Sun, and Biao Yang. "Biomedical extractive question answering based on dynamic routing and answer voting." Information Processing & Management 60, no. 4 (July 2023): 103367. http://dx.doi.org/10.1016/j.ipm.2023.103367.

4

Ouyang, Jianquan, and Mengen Fu. "Improving Machine Reading Comprehension with Multi-Task Learning and Self-Training." Mathematics 10, no. 3 (January 19, 2022): 310. http://dx.doi.org/10.3390/math10030310.

Abstract:
Machine Reading Comprehension (MRC) is an AI challenge that requires machines to determine the correct answer to a question based on a given passage. Extractive MRC requires extracting an answer span to a question from a given passage, as in the task of span extraction. In contrast, non-extractive MRC infers answers from the content of reference passages, including Yes/No question answering and unanswerable questions. Due to the specificity of the two types of MRC tasks, researchers usually work on one type of task separately, but real-life applications often require models that can handle many different types of tasks in parallel. Therefore, to meet the comprehensive requirements of such applications, we construct a multi-task fusion training reading comprehension model based on the BERT pre-training model. The model uses the BERT pre-training model to obtain contextual representations, which are then shared by three downstream sub-modules for span extraction, Yes/No question answering, and unanswerable questions. Next, we fuse the outputs of the three sub-modules into a new span extraction output and use the fused cross-entropy loss function for global training. In the training phase, since our model requires a large amount of labeled training data, which is often expensive to obtain or unavailable for many tasks, we additionally use self-training to generate pseudo-labeled training data to improve the model's accuracy and generalization performance. We evaluated our model on the SQuAD2.0 and CAIL2019 datasets. The experiments show that our model can efficiently handle different tasks. We achieved 83.2 EM and 86.7 F1 scores on the SQuAD2.0 dataset and 73.0 EM and 85.3 F1 scores on the CAIL2019 dataset.
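
The architecture described here follows a common multi-task pattern: one shared encoder feeding a span head plus two classification heads read from the [CLS] vector. A minimal sketch under assumed shapes (not the authors' implementation); the paper's fusion step then combines these outputs into a single span-extraction output with a fused cross-entropy loss, which the sketch omits:

```python
# Hedged sketch of a shared-encoder, three-head MRC model.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskMRC(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.span_head = nn.Linear(h, 2)        # per-token start/end logits
        self.yesno_head = nn.Linear(h, 2)       # yes vs. no, from [CLS]
        self.answerable_head = nn.Linear(h, 2)  # answerable vs. not, from [CLS]

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        cls = tokens[:, 0]                      # [CLS] summary vector
        start_logits, end_logits = self.span_head(tokens).split(1, dim=-1)
        return (start_logits.squeeze(-1), end_logits.squeeze(-1),
                self.yesno_head(cls), self.answerable_head(cls))
```
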
5

Shinoda, Kazutoshi, Saku Sugawara, and Akiko Aizawa. "Which Shortcut Solution Do Question Answering Models Prefer to Learn?" Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13564–72. http://dx.doi.org/10.1609/aaai.v37i11.26590.

Abstract:
Question answering (QA) models for reading comprehension tend to exploit spurious correlations in training sets and thus learn shortcut solutions rather than the solutions intended by QA datasets. QA models that have learned shortcut solutions can achieve human-level performance in shortcut examples where shortcuts are valid, but these same behaviors degrade generalization potential on anti-shortcut examples where shortcuts are invalid. Various methods have been proposed to mitigate this problem, but they do not fully take the characteristics of shortcuts themselves into account. We assume that the learnability of shortcuts, i.e., how easy it is to learn a shortcut, is useful to mitigate the problem. Thus, we first examine the learnability of the representative shortcuts on extractive and multiple-choice QA datasets. Behavioral tests using biased training sets reveal that shortcuts that exploit answer positions and word-label correlations are preferentially learned for extractive and multiple-choice QA, respectively. We find that the more learnable a shortcut is, the flatter and deeper the loss landscape is around the shortcut solution in the parameter space. We also find that the availability of the preferred shortcuts tends to make the task easier to perform from an information-theoretic viewpoint. Lastly, we experimentally show that the learnability of shortcuts can be utilized to construct an effective QA training set; the more learnable a shortcut is, the smaller the proportion of anti-shortcut examples required to achieve comparable performance on shortcut and anti-shortcut examples. We claim that the learnability of shortcuts should be considered when designing mitigation methods.
6

Longpre, Shayne, Yi Lu, and Joachim Daiber. "MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering." Transactions of the Association for Computational Linguistics 9 (2021): 1389–406. http://dx.doi.org/10.1162/tacl_a_00433.

Abstract:
Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets. We introduce Multilingual Knowledge Questions and Answers (MKQA), an open-domain question answering evaluation set comprising 10k question-answer pairs aligned across 26 typologically diverse languages (260k question-answer pairs in total). Answers are based on a heavily curated, language-independent data representation, making results comparable across languages and independent of language-specific passages. With 26 languages, this dataset supplies the widest range of languages to date for evaluating question answering. We benchmark a variety of state-of-the-art methods and baselines for generative and extractive question answering, trained on Natural Questions, in zero-shot and translation settings. Results indicate this dataset is challenging even in English, but especially in low-resource languages.
7

Gholami, Sia, and Mehdi Noori. "You Don’t Need Labeled Data for Open-Book Question Answering." Applied Sciences 12, no. 1 (December 23, 2021): 111. http://dx.doi.org/10.3390/app12010111.

Abstract:
Open-book question answering is a subset of question answering (QA) tasks where the system aims to find answers in a given set of documents (open-book) and common knowledge about a topic. This article proposes a solution for answering natural language questions from a corpus of Amazon Web Services (AWS) technical documents with no domain-specific labeled data (zero-shot). These questions have a yes–no–none answer and a text answer which can be short (a few words) or long (a few sentences). We present a two-step, retriever–extractor architecture in which a retriever finds the right documents and an extractor finds the answers in the retrieved documents. To test our solution, we introduce a new dataset for open-book QA based on real customer questions on AWS technical documentation. In this paper, we conducted experiments on several information retrieval systems and extractive language models, attempting to find the yes–no–none answers and text answers in the same pass. Our custom-built extractor model is created from a pretrained language model and fine-tuned on the Stanford Question Answering Dataset (SQuAD) and Natural Questions datasets. We achieved 42% F1 and a 39% exact match (EM) score end-to-end with no domain-specific training.
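
The retriever–extractor split is straightforward to prototype: a lexical retriever narrows the corpus, then a SQuAD-tuned reader extracts a span from each retrieved document. A sketch with assumed library choices (rank_bm25 and a default HuggingFace QA pipeline, not the paper's stack):

```python
# Hedged sketch of a two-step retriever-extractor pipeline.
from rank_bm25 import BM25Okapi
from transformers import pipeline

docs = ["Amazon S3 buckets are private by default.",
        "Amazon EC2 instances run inside a VPC."]
bm25 = BM25Okapi([d.lower().split() for d in docs])
reader = pipeline("question-answering")  # defaults to a SQuAD-tuned model

def answer(question, k=1):
    scores = bm25.get_scores(question.lower().split())
    top = sorted(range(len(docs)), key=lambda i: -scores[i])[:k]
    # the reader extracts and scores a span in each retrieved document
    return [reader(question=question, context=docs[i]) for i in top]

print(answer("Are S3 buckets public by default?"))
```
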
8

Li, Shasha, and Zhoujun Li. "Question-Oriented Answer Summarization via Term Hierarchical Structure." International Journal of Software Engineering and Knowledge Engineering 21, no. 06 (September 2011): 877–89. http://dx.doi.org/10.1142/s0218194011005475.

Abstract:
In the research area of community question answering (cQA) services such as Yahoo! Answers, reuse of answers has attracted more and more interest. Most researchers focus on the correctness of answers and pay little attention to their completeness. In this paper, we address the answer completeness problem for "survey questions", for which completeness is crucial. We propose to generate a more complete answer from replies in cQA services through question-oriented extractive summarization based on a term hierarchical structure, which differs from traditional query-based extractive summarization. The experimental results are very promising in terms of recall, precision, and conciseness.
9

Moon, Sungrim, Huan He, Heling Jia, Hongfang Liu, and Jungwei Wilfred Fan. "Extractive Clinical Question-Answering With Multianswer and Multifocus Questions: Data Set Development and Evaluation Study." JMIR AI 2 (June 20, 2023): e41818. http://dx.doi.org/10.2196/41818.

Abstract:
Background Extractive question-answering (EQA) is a useful natural language processing (NLP) application for answering patient-specific questions by locating answers in their clinical notes. Realistic clinical EQA can yield multiple answers to a single question and multiple focus points in 1 question, which are lacking in existing data sets for the development of artificial intelligence solutions. Objective This study aimed to create a data set for developing and evaluating clinical EQA systems that can handle natural multianswer and multifocus questions. Methods We leveraged the annotated relations from the 2018 National NLP Clinical Challenges corpus to generate an EQA data set. Specifically, the 1-to-N, M-to-1, and M-to-N drug-reason relations were included to form the multianswer and multifocus question-answering entries, which represent more complex and natural challenges in addition to the basic 1-drug-1-reason cases. A baseline solution was developed and tested on the data set. Results The derived RxWhyQA data set contains 96,939 QA entries. Among the answerable questions, 25% of them require multiple answers, and 2% of them ask about multiple drugs within 1 question. Frequent cues were observed around the answers in the text, and 90% of the drug and reason terms occurred within the same or an adjacent sentence. The baseline EQA solution achieved a best F1-score of 0.72 on the entire data set, and on specific subsets, it was 0.93 for the unanswerable questions, 0.48 for single-drug questions versus 0.60 for multidrug questions, and 0.54 for the single-answer questions versus 0.43 for multianswer questions. Conclusions The RxWhyQA data set can be used to train and evaluate systems that need to handle multianswer and multifocus questions. Specifically, multianswer EQA appears to be challenging and therefore warrants more investment in research. We created and shared a clinical EQA data set with multianswer and multifocus questions that would channel future research efforts toward more realistic scenarios.
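
The F1 scores quoted here are the token-overlap F1 standard in extractive QA evaluation. For reference, a minimal sketch of that metric:

```python
# SQuAD-style token-level F1 between a predicted and a gold answer span.
from collections import Counter

def token_f1(pred, gold):
    p, g = pred.split(), gold.split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

print(token_f1("severe nausea", "nausea"))  # 0.666...
```
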
10

Siblini, Wissam, Mohamed Challal, and Charlotte Pasqual. "Efficient Open Domain Question Answering With Delayed Attention in Transformer-Based Models." International Journal of Data Warehousing and Mining 18, no. 2 (April 2022): 1–16. http://dx.doi.org/10.4018/ijdwm.298005.

Abstract:
Open Domain Question Answering (ODQA) on a large-scale corpus of documents (e.g., Wikipedia) is a key challenge in computer science. Although Transformer-based language models such as BERT have shown an ability to outperform humans at extracting answers from small pre-selected passages of text, they suffer from high complexity when the search space is much larger. The most common way to deal with this problem is to add a preliminary information retrieval step to strongly filter the corpus and keep only the relevant passages. In this article, the authors consider a more direct and complementary solution, which consists in restricting the attention mechanism in Transformer-based models to allow more efficient management of computations. The resulting variants are competitive with the original models on the extractive task and, in the ODQA setting, allow a significant acceleration of predictions and sometimes even an improvement in the quality of responses.
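
One way to picture the restricted attention described here is a block-diagonal mask in the lower layers, so the question and each passage are encoded independently (and passage encodings become reusable), with full cross-attention enabled only near the top. A toy sketch under that reading; the paper's exact masking scheme may differ:

```python
# Toy additive attention mask: block-diagonal in lower layers
# (no question-passage interaction), full attention in upper layers.
import torch

def delayed_attention_mask(q_len, p_len, cross=False):
    n = q_len + p_len
    mask = torch.zeros(n, n)
    if not cross:  # lower layers: question and passage attend only to themselves
        mask[:q_len, q_len:] = float("-inf")
        mask[q_len:, :q_len] = float("-inf")
    return mask  # added to attention scores before the softmax

print(delayed_attention_mask(2, 3))
```
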
11

Li, Shaobo, Chengjie Sun, Bingquan Liu, Yuanchao Liu, and Zhenzhou Ji. "Modeling Extractive Question Answering Using Encoder-Decoder Models with Constrained Decoding and Evaluation-Based Reinforcement Learning." Mathematics 11, no. 7 (March 27, 2023): 1624. http://dx.doi.org/10.3390/math11071624.

Abstract:
Extractive Question Answering, also known as machine reading comprehension, can be used to evaluate how well a computer comprehends human language. It is a valuable topic with many applications, such as chatbots and personal assistants. End-to-end neural-network-based models have achieved remarkable performance on these tasks. The most frequently used approach to extract answers with neural networks is to predict the answer's start and end positions in the document, independently or jointly. In this paper, we propose another approach that considers all words in an answer jointly. We introduce an encoder-decoder model to learn from all words in the answer. This differs from previous works, which usually focused on the start and end and ignored the words in the middle. To help the encoder-decoder model perform this task better, we employ evaluation-based reinforcement learning with different reward functions. The results of an experiment on the SQuAD dataset show that the proposed method can outperform the baseline in terms of F1 scores, offering another potential approach to the extractive QA task.
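
Constrained decoding in this setting means the decoder may only emit token sequences that actually occur in the document. A toy illustration of the constraint over plain token lists (a real implementation applies the same idea to vocabulary logits at each step):

```python
# Hedged sketch: tokens allowed to extend a partial answer so that it
# still matches some span of the passage.
def allowed_next_tokens(passage_tokens, prefix):
    n = len(prefix)
    starts = [i for i in range(len(passage_tokens) - n + 1)
              if passage_tokens[i:i + n] == prefix]
    return {passage_tokens[i + n] for i in starts if i + n < len(passage_tokens)}

passage = "the model predicts the start and the end positions".split()
print(allowed_next_tokens(passage, ["the"]))  # {'model', 'start', 'end'}
```
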
12

Liu, Shuang, Nannan Tan, Yaqian Ge, and Niko Lukač. "Research on Automatic Question Answering of Generative Knowledge Graph Based on Pointer Network." Information 12, no. 3 (March 21, 2021): 136. http://dx.doi.org/10.3390/info12030136.

Abstract:
Question-answering systems based on knowledge graphs are extremely challenging tasks in the field of natural language processing. Most existing Chinese Knowledge Base Question Answering (KBQA) systems can only return the knowledge stored in the knowledge base by extractive methods. Nevertheless, this processing does not conform to reading habits and cannot solve the out-of-vocabulary (OOV) problem. In this paper, a new generative question answering method based on a knowledge graph is proposed, comprising three parts: knowledge vocabulary construction, data pre-processing, and answer generation. In the vocabulary construction, BiLSTM-CRF is used to identify the entities in the source text, find the triples containing each entity, count word frequencies, and construct the vocabulary. In data pre-processing, the pre-trained language model BERT, combined with word-frequency semantic features, is adopted to obtain word vectors. In answer generation, a combination of the vocabulary constructed from the knowledge graph and a pointer generator network (PGN) is proposed to point to the corresponding entity when generating the answer. The experimental results show that the proposed method achieves superior performance on the WebQA dataset compared with other methods.
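
The pointer-generator step this method builds on mixes a generation distribution over the vocabulary with a copy distribution given by attention over source tokens, which is what lets the model emit out-of-vocabulary entities. A minimal sketch with assumed shapes:

```python
# Hedged sketch of the pointer-generator mixture distribution.
import torch

def pgn_distribution(p_gen, vocab_dist, attn, src_ids):
    # p_gen: (batch, 1); vocab_dist: (batch, V); attn, src_ids: (batch, src_len)
    final = p_gen * vocab_dist
    # scatter copy probabilities onto the vocabulary ids of the source tokens
    return final.scatter_add(1, src_ids, (1.0 - p_gen) * attn)

p = pgn_distribution(torch.tensor([[0.7]]),
                     torch.full((1, 10), 0.1),   # uniform generation distribution
                     torch.tensor([[0.5, 0.5]]), # attention over two source tokens
                     torch.tensor([[3, 3]]))     # both source tokens map to id 3
print(p.sum())  # tensor(1.) -- still a valid distribution
```
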
13

McCarley, Scott, Mihaela Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, and Radu Florian. "GAAMA 2.0: An Integrated System That Answers Boolean and Extractive Questions." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16461–63. http://dx.doi.org/10.1609/aaai.v37i13.27079.

Abstract:
Recent machine reading comprehension datasets include extractive and boolean questions but current approaches do not offer integrated support for answering both question types. We present a front-end demo to a multilingual machine reading comprehension system that handles boolean and extractive questions. It provides a yes/no answer and highlights the supporting evidence for boolean questions. It provides an answer for extractive questions and highlights the answer in the passage. Our system, GAAMA 2.0, achieved first place on the TyDi QA leaderboard at the time of submission. We contrast two different implementations of our approach: including multiple transformer models for easy deployment, and a shared transformer model utilizing adapters to reduce GPU memory footprint for a resource-constrained environment.
14

González, José Ángel, Encarna Segarra, Fernando García-Granada, Emilio Sanchis, and Lluís-F. Hurtado. "Attentional Extractive Summarization." Applied Sciences 13, no. 3 (January 22, 2023): 1458. http://dx.doi.org/10.3390/app13031458.

Abstract:
In this work, a general theoretical framework for extractive summarization is proposed—the Attentional Extractive Summarization framework. Although abstractive approaches are generally used in text summarization today, extractive methods can be especially suitable for some applications, and they can help with other tasks such as Text Classification, Question Answering, and Information Extraction. The proposed approach is based on the interpretation of the attention mechanisms of hierarchical neural networks, which compute document-level representations of documents and summaries from sentence-level representations, which, in turn, are computed from word-level representations. The models proposed under this framework are able to automatically learn relationships among document and summary sentences, without requiring Oracle systems to compute the reference labels for each sentence before the training phase. These relationships are obtained as a result of a binary classification process, the goal of which is to distinguish correct summaries for documents. Two different systems, formalized under the proposed framework, were evaluated on the CNN/DailyMail and the NewsRoom corpora, which are some of the reference corpora in the most relevant works on text summarization. The results obtained during the evaluation support the adequacy of our proposal and suggest that there is still room for the improvement of our attentional framework.
15

Alruqi, Tahani N., and Salha M. Alzahrani. "Evaluation of an Arabic Chatbot Based on Extractive Question-Answering Transfer Learning and Language Transformers." AI 4, no. 3 (August 16, 2023): 667–91. http://dx.doi.org/10.3390/ai4030035.

Abstract:
Chatbots are programs with the ability to understand and respond to natural language in a way that is both informative and engaging. This study explored current trends in applying transformers and transfer learning techniques to Arabic chatbots. The proposed methods used various transformers and semantic embedding models: AraBERT, CAMeLBERT, AraElectra-SQuAD, and AraElectra (Generator/Discriminator). Two datasets were used for the evaluation: one with 398 questions, and the other with 1,395 questions and 365,568 documents sourced from Arabic Wikipedia. Extensive experimental work was conducted, evaluating both manually crafted questions and the entire set of questions using confidence and similarity metrics. Our experimental results demonstrate that combining the power of the transformer architecture with extractive chatbots can provide more accurate and contextually relevant answers to questions in Arabic. Specifically, the AraElectra-SQuAD model consistently outperformed other models. It achieved an average confidence score of 0.6422 and an average similarity score of 0.9773 on the first dataset, and an average confidence score of 0.6658 and similarity score of 0.9660 on the second dataset. The study concludes that AraElectra-SQuAD showed remarkable performance, high confidence, and robustness, which highlights its potential for practical applications in natural language processing tasks for Arabic chatbots. The study suggests that language transformers can be further enhanced and used for various tasks, such as specialized chatbots, virtual assistants, and information retrieval systems for Arabic-speaking users.
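
The confidence/similarity style of evaluation used here can be sketched generically: the reader's own span probability serves as confidence, and an embedding cosine compares the returned answer against a reference. Both model identifiers below are placeholders and assumptions, not the study's exact checkpoints:

```python
# Hedged sketch of confidence + embedding-similarity scoring.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

reader = pipeline("question-answering", model="an-arabic-qa-checkpoint")  # hypothetical id
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def evaluate(question, context, reference_answer):
    pred = reader(question=question, context=context)
    confidence = pred["score"]  # the reader's own span probability
    similarity = util.cos_sim(
        embedder.encode(pred["answer"], convert_to_tensor=True),
        embedder.encode(reference_answer, convert_to_tensor=True),
    ).item()
    return confidence, similarity
```
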
16

Qazvinian, V., D. R. Radev, S. M. Mohammad, B. Dorr, D. Zajic, M. Whidby, and T. Moon. "Generating Extractive Summaries of Scientific Paradigms." Journal of Artificial Intelligence Research 46 (February 20, 2013): 165–201. http://dx.doi.org/10.1613/jair.3732.

Abstract:
Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scientific topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary.
17

Zhu, Wenhao, Xiaoyu Zhang, Qiuhong Zhai, and Chenyun Liu. "A Hybrid Text Generation-Based Query Expansion Method for Open-Domain Question Answering." Future Internet 15, no. 5 (May 12, 2023): 180. http://dx.doi.org/10.3390/fi15050180.

Abstract:
In the two-stage open-domain question answering (OpenQA) systems, the retriever identifies a subset of relevant passages, which the reader then uses to extract or generate answers. However, the performance of OpenQA systems is often hindered by issues such as short and semantically ambiguous queries, making it challenging for the retriever to find relevant passages quickly. This paper introduces Hybrid Text Generation-Based Query Expansion (HTGQE), an effective method to improve retrieval efficiency. HTGQE combines large language models with Pseudo-Relevance Feedback techniques to enhance the input for generative models, improving text generation speed and quality. Building on this foundation, HTGQE employs multiple query expansion generators, each trained to provide query expansion contexts from distinct perspectives. This enables the retriever to explore relevant passages from various angles for complementary retrieval results. As a result, under an extractive and generative QA setup, HTGQE achieves promising results on both Natural Questions (NQ) and TriviaQA (Trivia) datasets for passage retrieval and reading tasks.
18

Parikh, Soham, Quaizar Vohra, and Mitul Tiwari. "Automated Utterance Generation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (April 3, 2020): 13344–49. http://dx.doi.org/10.1609/aaai.v34i08.7047.

Abstract:
Conversational AI assistants are becoming popular, and question-answering is an important part of any conversational assistant. Using relevant utterances as features in question-answering has been shown to improve both the precision and recall of retrieving the right answer by a conversational assistant. Hence, utterance generation has become an important problem, with the goal of generating relevant utterances (sentences or phrases) from a knowledge base article that consists of a title and a description. However, generating good utterances usually requires a lot of manual effort, creating the need for automated utterance generation. In this paper, we propose an utterance generation system which 1) uses extractive summarization to extract important sentences from the description, 2) uses multiple paraphrasing techniques to generate a diverse set of paraphrases of the title and summary sentences, and 3) selects good candidate paraphrases with the help of a novel candidate selection algorithm.
19

Simpson, Edwin, Yang Gao, and Iryna Gurevych. "Interactive Text Ranking with Bayesian Optimization: A Case Study on Community QA and Summarization." Transactions of the Association for Computational Linguistics 8 (December 2020): 759–75. http://dx.doi.org/10.1162/tacl_a_00344.

Abstract:
For many NLP applications, such as question answering and summarization, the goal is to select the best solution from a large space of candidates to meet a particular user’s needs. To address the lack of user or task-specific training data, we propose an interactive text ranking approach that actively selects pairs of candidates, from which the user selects the best. Unlike previous strategies, which attempt to learn a ranking across the whole candidate space, our method uses Bayesian optimization to focus the user’s labeling effort on high quality candidates and integrate prior knowledge to cope better with small data scenarios. We apply our method to community question answering (cQA) and extractive multidocument summarization, finding that it significantly outperforms existing interactive approaches. We also show that the ranking function learned by our method is an effective reward function for reinforcement learning, which improves the state of the art for interactive summarization.
20

Won, Kwanghee, Youngsun Jang, Hyung-do Choi, and Sung Shin. "Design and implementation of information extraction system for scientific literature using fine-tuned deep learning models." ACM SIGAPP Applied Computing Review 22, no. 1 (March 2022): 31–38. http://dx.doi.org/10.1145/3530043.3530047.

Abstract:
This paper presents an overview of a quality scoring system that utilizes pre-trained deep neural network models. Two types of deep learning models, a classification model and an extractive question answering (EQA) model, are used to implement components of the system. The abstracts of the scientific literature are classified into two groups, in-vivo and in-vitro, and a question answering model architecture is constructed for extracting the following types of information: animal type, number of animals, exposure dose, and signal frequency. The Bidirectional Encoder Representations from Transformers (BERT) model, pre-trained with a large text corpus, is used as our baseline model for the classification and EQA tasks. The models are fine-tuned with 455 EMF-related research papers. In our experiments, the fine-tuned model showed improved performance on EQA tasks for the four categories of questions compared to the baseline, and it also showed improvements on similar questions that were not used in training. This suggests the importance of retraining deep learning models in areas requiring domain expertise, such as scientific papers. However, additional research is needed on some implementation issues, such as cases where there are multiple answers or where no answer is given in the context.
21

Gollapalli, Sujatha Das, Mingzhe Du, and See-Kiong Ng. "Generating Reflective Questions for Engaging Gallery Visitors in ArtMuse." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 16434–36. http://dx.doi.org/10.1609/aaai.v37i13.27070.

Abstract:
Human guides in museums and galleries are professionally trained to stimulate informal learning in visitors by asking low-risk, open-ended reflective questions that enable them to focus on specific features of artifacts, relate to prior experiences, and elicit curiosity as well as further thought. We present ArtMuse, our AI-powered chatbot for asking reflective questions in the context of paintings. Our reflective question generation model in ArtMuse was trained by applying a novel combination of existing models for extractive question answering and open-domain chitchat. User evaluation studies indicate that we are able to generate fluent, specific, and highly engaging reflective questions for paintings.
22

Bi, Bin, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. "Generating Well-Formed Answers by Machine Reading with Stochastic Selector Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7424–31. http://dx.doi.org/10.1609/aaai.v34i05.6238.

Abstract:
Question answering (QA) based on machine reading comprehension has seen a recent surge in popularity, yet most work has focused on extractive methods. We instead address a more challenging QA problem of generating a well-formed answer by reading and summarizing the paragraph for a given question. For the generative QA task, we introduce a new neural architecture, LatentQA, in which a novel stochastic selector network composes a well-formed answer with words selected from the question, the paragraph, and the global vocabulary, based on a sequence of discrete latent variables. Bayesian inference for the latent variables is performed to train the LatentQA model. Experiments on public datasets of natural answer generation confirm the effectiveness of LatentQA in generating high-quality well-formed answers.
23

Narayan, Shashi, Shay B. Cohen, and Mirella Lapata. "What is this Article about? Extreme Summarization with Topic-aware Convolutional Neural Networks." Journal of Artificial Intelligence Research 66 (September 19, 2019): 243–78. http://dx.doi.org/10.1613/jair.1.11315.

Abstract:
We introduce "extreme summarization," a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?". We argue that extreme summarization, by nature, is not amenable to extractive strategies and requires an abstractive modeling approach. In the hope of driving research on this task further: (a) we collect a real-world, large scale dataset by harvesting online articles from the British Broadcasting Corporation (BBC); and (b) propose a novel abstractive model which is conditioned on the article's topics and based entirely on convolutional neural networks. We demonstrate experimentally that this architecture captures long-range dependencies in a document and recognizes pertinent content, outperforming an oracle extractive system and state-of-the-art abstractive approaches when evaluated automatically and by humans on the extreme summarization dataset.
24

Khattab, Omar, Christopher Potts, and Matei Zaharia. "Relevance-guided Supervision for OpenQA with ColBERT." Transactions of the Association for Computational Linguistics 9 (2021): 929–44. http://dx.doi.org/10.1162/tacl_a_00405.

Abstract:
Systems for Open-Domain Question Answering (OpenQA) generally depend on a retriever for finding candidate passages in a large corpus and a reader for extracting answers from those passages. In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages. We argue that this modeling choice is insufficiently expressive for dealing with the complexity of natural language questions. To address this, we define ColBERT-QA, which adapts the scalable neural retrieval model ColBERT to OpenQA. ColBERT creates fine-grained interactions between questions and passages. We propose an efficient weak supervision strategy that iteratively uses ColBERT to create its own training data. This greatly improves OpenQA retrieval on Natural Questions, SQuAD, and TriviaQA, and the resulting system attains state-of-the-art extractive OpenQA performance on all three datasets.
25

Yan, Ming, Jiangnan Xia, Chen Wu, Bin Bi, Zhongzhou Zhao, Ji Zhang, Luo Si, Rui Wang, Wei Wang, and Haiqing Chen. "A Deep Cascade Model for Multi-Document Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7354–61. http://dx.doi.org/10.1609/aaai.v33i01.33017354.

Abstract:
A fundamental trade-off between effectiveness and efficiency needs to be balanced when designing an online question answering system. Effectiveness comes from sophisticated functions such as extractive machine reading comprehension (MRC), while efficiency is obtained from improvements in preliminary retrieval components such as candidate document selection and paragraph ranking. Given the complexity of the real-world multi-document MRC scenario, it is difficult to jointly optimize both in an end-to-end system. To address this problem, we develop a novel deep cascade learning model, which progressively evolves from the document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension. Specifically, irrelevant documents and paragraphs are first filtered out with simple functions for efficiency consideration. Then we jointly train three modules on the remaining texts for better tracking the answer: the document extraction, the paragraph extraction and the answer extraction. Experiment results show that the proposed method outperforms the previous state-of-the-art methods on two large-scale multi-document benchmark datasets, i.e., TriviaQA and DuReader. In addition, our online system can stably serve typical scenarios with millions of daily requests in less than 50 ms.
26

Jin, Di, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-tur. "MMM: Multi-Stage Multi-Task Learning for Multi-Choice Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8010–17. http://dx.doi.org/10.1609/aaai.v34i05.6310.

Abstract:
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart where answers are usually spans of text within given passages. Moreover, most existing MCQA datasets are small in size, making the task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: coarse-tuning stage using out-of-domain datasets and multi-task learning stage using a larger in-domain dataset to help model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate MMM significantly advances the state-of-the-art on four representative MCQA datasets.
27

Veisi, Hadi, and Hamed Fakour Shandi. "A Persian Medical Question Answering System." International Journal on Artificial Intelligence Tools 29, no. 06 (September 2020): 2050019. http://dx.doi.org/10.1142/s0218213020500190.

Abstract:
A question answering system is a type of information retrieval that takes a question from a user in natural language as the input and returns the best answer to it as the output. In this paper, a medical question answering system for the Persian language is designed and implemented. During this research, a dataset of diseases and drugs was collected and structured. The proposed system includes three main modules: question processing, document retrieval, and answer extraction. For the question processing module, a sequential architecture is designed which retrieves the main concept of a question by using different components. In these components, rule-based methods, natural language processing, and dictionary-based techniques are used. In the document retrieval module, the documents are indexed and searched using the Lucene library. The retrieved documents are ranked using similarity detection algorithms, and the highest-ranked document is selected to be used by the answer extraction module. This module is responsible for extracting the most relevant section of the text in the retrieved document. During this research, different customized language processing tools, such as a part-of-speech tagger and a lemmatizer, were also developed for Persian. Evaluation results show that this system performs well in answering different questions about diseases and drugs. The accuracy of the system on 500 sample questions is 83.6%.
28

Yang, Lei, Haonan Guo, Yu Dai, and Wanheng Chen. "A Method for Complex Question-Answering over Knowledge Graph." Applied Sciences 13, no. 8 (April 18, 2023): 5055. http://dx.doi.org/10.3390/app13085055.

Abstract:
Knowledge Graph Question-Answering (KGQA) has gained popularity as an effective approach for information retrieval systems. However, answering complex questions involving multiple topic entities and multi-hop relations presents a significant challenge for model training. Moreover, existing KGQA models face difficulties in extracting constraint information from complex questions, leading to reduced accuracy. To overcome these challenges, we propose a three-part pipelined framework comprising question decomposition, constraint extraction, and question reasoning. Our approach employs a novel question decomposition model that uses dual encoders and attention mechanisms to enhance question representation. We define temporal, spatial, and numerical constraint types and propose a constraint extraction model to mitigate the impact of constraint interference on downstream question reasoning. The question reasoning model uses beam search to reduce computational effort and enhance exploration, facilitating the identification of the optimal path. Experimental results on the ComplexWebQuestions dataset demonstrate the efficacy of our proposed model, achieving an F1 score of 72.0% and highlighting the effectiveness of our approach in decomposing complex questions into simple subsets and improving the accuracy of question reasoning.
29

Bach, Ngo Xuan, Phan Duc Thanh, and Tran Thi Oanh. "Question Analysis towards a Vietnamese Question Answering System in the Education Domain." Cybernetics and Information Technologies 20, no. 1 (March 1, 2020): 112–28. http://dx.doi.org/10.2478/cait-2020-0008.

Abstract:
Building a computer system that can automatically answer questions in human language, whether speech or text, is a long-standing goal of the Artificial Intelligence (AI) field. Question analysis, the task of extracting important information from the input question, is the first and crucial step towards a question answering system. In this paper, we focus on the task of Vietnamese question analysis in the education domain. Our goal is to extract important information expressed by named entities in an input question, such as university names, campus names, major names, and teacher names. We present several extraction models that utilize the advantages of both traditional statistical methods with handcrafted features and more recent advanced deep neural networks with automatically learned features. Our best model achieves 88.11% in the F1 score on a corpus consisting of 3,600 Vietnamese questions collected from the fan page of the International School, Vietnam National University, Hanoi.
30

Xianfeng, Yang, and Liu Pengfei. "Question Recommendation and Answer Extraction in Question Answering Community." International Journal of Database Theory and Application 9, no. 1 (January 31, 2016): 35–44. http://dx.doi.org/10.14257/ijdta.2016.9.1.04.

31

Zhang, Zhongfeng, Qiudan Li, Daniel Zeng, and Heng Gao. "Extracting evolutionary communities in community question answering." Journal of the Association for Information Science and Technology 65, no. 6 (January 29, 2014): 1170–86. http://dx.doi.org/10.1002/asi.23003.

32

Nguyen, Van-Tu, Anh-Cuong Le, and Ha-Nam Nguyen. "A Model of Convolutional Neural Network Combined with External Knowledge to Measure the Question Similarity for Community Question Answering Systems." International Journal of Machine Learning and Computing 11, no. 3 (May 2021): 194–201. http://dx.doi.org/10.18178/ijmlc.2021.11.3.1035.

Abstract:
Automatically determining similar questions and ranking the obtained questions according to their similarity to each input question is a very important task for any community Question Answering (cQA) system. Various methods have been applied to this task, including conventional machine learning methods with feature extraction and some recent studies using deep learning methods. This paper addresses the problem of how to combine the advantages of different methods into one unified model. Moreover, deep learning models are usually only effective for large data, while training data sets in cQA problems are often small, so the idea of integrating external knowledge into deep learning models for this cQA problem becomes more important. To this end, we propose a neural network-based model which combines a Convolutional Neural Network (CNN) with features from other methods, so that the deep learning model is enhanced with additional knowledge sources. In our proposed model, the CNN component learns the representations of the two given questions, which are then combined with additional features through a Multilayer Perceptron (MLP) to measure the similarity between the two questions. We tested our proposed model on the SemEval 2016 Task 3 data set and obtained better results in comparison with previous studies on the same task.
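
The combination pattern described, a CNN encoding each question plus handcrafted features fed jointly into an MLP, can be sketched as follows (dimensions and feature count are assumptions, not the authors' exact network):

```python
# Hedged sketch: CNN question encoders + external features -> MLP similarity.
import torch
import torch.nn as nn

class CNNSim(nn.Module):
    def __init__(self, vocab=10000, emb=100, filters=64, n_feats=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(2 * filters + n_feats, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def encode(self, ids):                 # ids: (batch, seq)
        x = self.emb(ids).transpose(1, 2)  # (batch, emb, seq)
        return torch.relu(self.conv(x)).max(dim=2).values  # max-pooled (batch, filters)

    def forward(self, q1, q2, feats):      # feats: handcrafted features (batch, n_feats)
        joint = torch.cat([self.encode(q1), self.encode(q2), feats], dim=1)
        return torch.sigmoid(self.mlp(joint)).squeeze(-1)  # similarity in [0, 1]
```
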
33

Li, Yaliang, Chaochun Liu, Nan Du, Wei Fan, Qi Li, Jing Gao, Chenwei Zhang, and Hao Wu. "Extracting Medical Knowledge from Crowdsourced Question Answering Website." IEEE Transactions on Big Data 6, no. 2 (June 1, 2020): 309–21. http://dx.doi.org/10.1109/tbdata.2016.2612236.

34

Liang, Jianpeng, Tianjiao Xu, Shihong Chen, and Zhuopan Ao. "JGRCAN: A Visual Question Answering Co-Attention Network via Joint Grid-Region Features." Mathematical Problems in Engineering 2022 (October 15, 2022): 1–11. http://dx.doi.org/10.1155/2022/4554074.

Abstract:
In recent years, region features extracted from object detection networks have played an important role in visual question answering. Region features cover only the areas related to detected targets, losing much non-target context information and fine-grained detail. Grid features, in contrast, preserve non-target details but are less suited to counting questions involving multiple small targets in an image. To solve this problem, this paper proposes a visual question answering co-attention network via joint grid-region features (JGRCAN), which consists of a feature extraction layer, a co-attention layer, and a fusion layer. The feature extraction layer extracts grid features and region features from the image and text features from the question; the co-attention layer then outputs attention weights and attended feature representations for the multiple visual feature representations and the question representation. The proposed approach effectively integrates grid features and region features, realizing their complementary advantages, and is able to accurately focus on areas of the image that are relevant to answering the question. The results show that the overall classification accuracy of the algorithm on the test-dev and test-std subsets of VQA-v2 is 70.87% and 71.18%, respectively. Compared with baseline models, our proposed JGRCAN achieves good performance.
35

Harabagiu, Sanda M., Steven J. Maiorano, and Marius A. Paşca. "Open-domain textual question answering techniques." Natural Language Engineering 9, no. 3 (August 14, 2003): 231–67. http://dx.doi.org/10.1017/s1351324903003176.

Abstract:
Textual question answering is a technique of extracting a sentence or text snippet from a document or document collection that responds directly to a query. Open-domain textual question answering presupposes that questions are natural and unrestricted with respect to topic. The question answering (Q/A) techniques, as embodied in today's systems, can be roughly divided into two types: (1) techniques for Information Seeking (IS), which localize the answer in vast document collections; and (2) techniques for Reading Comprehension (RC) that answer a series of questions related to a given document. Although these two types of techniques and systems are different, it is desirable to combine them for enabling more advanced forms of Q/A. This paper discusses an approach that successfully enhanced an existing IS system with RC capabilities. This enhancement is important because advanced Q/A, as exemplified by the ARDA AQUAINT program, is moving towards Q/A systems that incorporate semantic and pragmatic knowledge enabling dialogue-based Q/A. Because today's RC systems involve a short series of questions in context, they represent a rudimentary form of interactive Q/A which constitutes a possible foundation for more advanced forms of dialogue-based Q/A.
36

Uzzaman, Naushad, and James F. Allen. "Event and Temporal Expression Extraction from Raw Text: First Step Towards a Temporally Aware System." International Journal of Semantic Computing 04, no. 04 (December 2010): 487–508. http://dx.doi.org/10.1142/s1793351x10001097.

Abstract:
Extracting temporal information from raw text is fundamental for deep language understanding, and key to many applications like question answering, information extraction, and document summarization. Our long-term goal is to build complete temporal structure of documents and use the temporal structure in other applications like textual entailment, question answering, visualization, or others. In this paper, we present a first step, a system for extracting events, event features, main events, temporal expressions and their normalized values from raw text. Our system is a combination of deep semantic parsing with extraction rules, Markov Logic Network classifiers and Conditional Random Field classifiers. To compare with existing systems, we evaluated our system on the TempEval-1 and TempEval-2 corpus. Our system outperforms or performs competitively with existing systems that evaluate on the TimeBank, TempEval-1 and TempEval-2 corpus and our performance is very close to inter-annotator agreement of the TimeBank annotators.
37

Wang, Jing, and Yang Dong. "Improve Visual Question Answering Based On Text Feature Extraction." Journal of Physics: Conference Series 1856, no. 1 (April 1, 2021): 012025. http://dx.doi.org/10.1088/1742-6596/1856/1/012025.

38

Shelmanov, A. O., M. A. Kamenskaya, M. I. Ananyeva, and I. V. Smirnov. "Semantic-Syntactic Analysis for Question Answering and Definition Extraction." Scientific and Technical Information Processing 44, no. 6 (December 2017): 412–23. http://dx.doi.org/10.3103/s0147688217060089.

39

Sharma, Lokesh Kumar, and Namita Mittal. "Prominent feature extraction for evidence gathering in question answering." Journal of Intelligent & Fuzzy Systems 32, no. 4 (March 29, 2017): 2923–32. http://dx.doi.org/10.3233/jifs-169235.

40

Monz, Christof. "Machine learning for query formulation in question answering." Natural Language Engineering 17, no. 4 (January 5, 2011): 425–54. http://dx.doi.org/10.1017/s1351324910000276.

Abstract:
Research on question answering dates back to the 1960s but has more recently been revisited as part of TREC's evaluation campaigns, where question answering is addressed as a subarea of information retrieval that focuses on specific answers to a user's information need. Whereas document retrieval systems aim to return the documents that are most relevant to a user's query, question answering systems aim to return actual answers to a user's question. Despite this difference, question answering systems rely on information retrieval components to identify documents that contain an answer to a user's question. The computationally more expensive answer extraction methods are then applied only to this subset of documents that are likely to contain an answer. As information retrieval methods are used to filter the documents in the collection, the performance of this component is critical, as documents that are not retrieved are not analyzed by the answer extraction component. The formulation of the queries used for retrieving those documents has a strong impact on the effectiveness of the retrieval component. In this paper, we focus on predicting the importance of terms from the original question. We use model tree machine learning techniques to assign weights to query terms according to their usefulness for identifying documents that contain an answer. Term weights are learned by inspecting a large number of query formulation variations and their respective accuracy in identifying documents containing an answer. Several linguistic features are used for building the models, including part-of-speech tags, degree of connectivity in the dependency parse tree of the question, and ontological information. All of these features are extracted automatically using several natural language processing tools. Incorporating the learned weights into a state-of-the-art retrieval system results in statistically significant improvements in identifying answer-bearing documents.
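
The weighting step can be pictured with a small regressor: each question term is described by features (part of speech, parse-tree connectivity, idf, etc.) and the target is how useful keeping that term proved in retrieval. A plain regression tree below stands in for the paper's model trees; the features and targets are invented for illustration:

```python
# Hedged sketch: learn query-term weights from term features.
from sklearn.tree import DecisionTreeRegressor

# one row per question term: (is_noun, depth_in_dependency_parse, idf)
X = [[1, 2, 3.1], [0, 4, 0.7], [1, 1, 5.2]]
# target: retrieval accuracy gain observed when the term is kept in the query
y = [0.8, 0.1, 0.9]

weigher = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(weigher.predict([[1, 2, 4.0]]))  # predicted weight for a new term
```
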
41

Li, Ming, Lisheng Chen, and Yingcheng Xu. "Extracting core questions in community question answering based on particle swarm optimization." Data Technologies and Applications 53, no. 4 (September 3, 2019): 456–83. http://dx.doi.org/10.1108/dta-02-2019-0025.

Abstract:
Purpose A large number of questions are posted on community question answering (CQA) websites every day. Providing a set of core questions will ease the question overload problem. These core questions should cover the main content of the original question set, with low redundancy within the core questions and a distribution consistent with the original question set. The paper aims to discuss these issues. Design/methodology/approach In this paper, a method named QueExt for extracting core questions is proposed. First, questions are modeled using a biterm topic model. Then, these questions are clustered based on particle swarm optimization (PSO). With the clustering results, the number of core questions to be extracted from each cluster can be determined. Afterwards, a multi-objective PSO algorithm is proposed to extract the core questions. Both PSO algorithms are integrated with operators from genetic algorithms to avoid local optima. Findings Extensive experiments on real data collected from the famous CQA website Zhihu have been conducted, and the experimental results demonstrate superior performance over other benchmark methods. Research limitations/implications The proposed method provides new insight into and enriches research on information overload in CQA. It performs better than other methods in extracting core short text documents, and thus provides a better way to extract core data. PSO is a novel method for selecting core questions, and the research on applications of the PSO model is expanded. The study also contributes to research on PSO-based clustering: with the integration of K-means++, the key parameter, the number of clusters, is optimized. Originality/value A novel core question extraction method for CQA is proposed, which provides a novel and efficient way to alleviate question overload. The PSO model is extended and newly applied to selecting core questions, and is integrated with the K-means++ method to optimize the number of clusters, the key parameter in PSO-based text clustering. This provides a new way to cluster texts.
APA, Harvard, Vancouver, ISO, and other styles
42

Fukumoto, Junichi, Noriaki Aburai, and Ryosuke Yamanishi. "Interactive Document Expansion for Answer Extraction of Question Answering System." Procedia Computer Science 22 (2013): 991–1000. http://dx.doi.org/10.1016/j.procs.2013.09.184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Younes, Yousef, and Ansgar Scherp. "Question Answering Versus Named Entity Recognition for Extracting Unknown Datasets." IEEE Access 11 (2023): 92775–87. http://dx.doi.org/10.1109/access.2023.3309148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ahmed, Waheeb, and Babu Anto. "An Automatic Web-Based Question Answering System for E-Learning." Information Technologies and Learning Tools 58, no. 2 (April 29, 2017): 1. http://dx.doi.org/10.33407/itlt.v58i2.1567.

Full text
Abstract:
An automatic web-based Question Answering (QA) system is a valuable tool for improving e-learning and education. Several approaches employ natural language processing technology to understand questions posed in natural language text, but such understanding is incomplete and error-prone. In addition, instead of extracting the exact answer, many approaches simply return hyperlinks to documents containing the answers, which is inconvenient for students and learners. In this paper we develop a technique to detect the type of a question, based on which the appropriate answer-extraction technique is selected. The system returns only the blocks or phrases of text containing the answer rather than full documents, thereby greatly improving the efficiency of web QA systems for e-learning.
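The type-then-extract routing described here can be sketched in a few lines. The patterns and extractors below are illustrative stand-ins, not the paper's actual rules:

```python
# Hedged sketch of the routing idea: classify the question type first,
# then dispatch to a type-specific answer-extraction strategy.
import re

QUESTION_PATTERNS = [
    (re.compile(r"^(who|whom)\b", re.I), "PERSON"),
    (re.compile(r"^(when|what year|what time)\b", re.I), "TIME"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^(why|how)\b", re.I), "EXPLANATION"),
]

def question_type(question: str) -> str:
    for pattern, qtype in QUESTION_PATTERNS:
        if pattern.search(question):
            return qtype
    return "DEFINITION"

def extract_answer(question: str, passage: str) -> str:
    qtype = question_type(question)
    if qtype == "EXPLANATION":
        # Explanations usually span a clause or sentence, not a phrase.
        return max(passage.split("."), key=len).strip()
    # For factoid types a real system would run NER over the passage;
    # returning the first sentence keeps the sketch self-contained.
    return passage.split(".")[0].strip()

print(question_type("When was the web invented?"))  # TIME
```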
APA, Harvard, Vancouver, ISO, and other styles
45

Karyawati, A. A. I. N. Eka. "Ontology-Based Paragraph Extraction and Causality Detection-Based Similarity for Answering Why-Question." Jurnal Ilmu Komputer 11, no. 1 (May 21, 2018): 9. http://dx.doi.org/10.24843/jik.2018.v11.i01.p02.

Full text
Abstract:
Paragraph extraction is a central part of an automatic question answering system, especially for answering why-questions, because the answer to a why-question is usually contained in a paragraph rather than in one or two sentences. There has been some research on paragraph extraction approaches, but few studies involve a domain ontology as a knowledge base: most paragraph extraction studies use keyword-based methods with only a small semantic component, so the question answering system faces the word-mismatch problem typical of keyword-based methods. The main contribution of this research is a paragraph scoring method that combines TF-IDF-based and causality-detection-based similarity. This research is part of an ontology-based why-question answering method, where the ontology serves as a knowledge base at each step of the method, including indexing, question analysis, document retrieval, and paragraph extraction/selection. To measure performance, the proposed method was evaluated against two baseline methods that do not use causality-detection-based similarity. The proposed method showed improvements over the baselines in MRR (95%, 0.82 vs. 0.42), P@1 (105%, 0.78 vs. 0.38), P@5 (91%, 0.88 vs. 0.46), Precision (95%, 0.80 vs. 0.41), and Recall (66%, 0.88 vs. 0.53).
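The scoring idea — a weighted combination of lexical and causal evidence — can be illustrated directly. In the sketch below, the causal-marker list, the saturation rule, and the weight alpha are assumptions, not the paper's exact formulation:

```python
# Minimal sketch: rank paragraphs by a weighted sum of TF-IDF cosine
# similarity to the question and a crude causality signal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CAUSAL_MARKERS = ("because", "due to", "as a result", "therefore", "leads to")

def causality_score(paragraph: str) -> float:
    hits = sum(marker in paragraph.lower() for marker in CAUSAL_MARKERS)
    return min(hits / 2.0, 1.0)  # crude saturation at two markers

def rank_paragraphs(question: str, paragraphs: list[str], alpha: float = 0.6):
    vec = TfidfVectorizer().fit(paragraphs + [question])
    q = vec.transform([question])
    P = vec.transform(paragraphs)
    tfidf_sim = cosine_similarity(q, P)[0]
    scores = [alpha * t + (1 - alpha) * causality_score(p)
              for t, p in zip(tfidf_sim, paragraphs)]
    return sorted(zip(scores, paragraphs), reverse=True)

paras = ["The bridge failed because the cables corroded over decades.",
         "The bridge was painted grey and opened in 1932."]
print(rank_paragraphs("Why did the bridge fail?", paras)[0][1])
```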
APA, Harvard, Vancouver, ISO, and other styles
46

Cowell, Andrew J., Alan R. Chappell, and David A. Thurman. "Towards an Adaptive Question Answering System for Intelligence Analysts." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 10 (September 2005): 927–31. http://dx.doi.org/10.1177/154193120504901012.

Full text
Abstract:
Battelle is working in partnership with Stanford University's Knowledge Systems Laboratory (KSL) and IBM's T.J. Watson Research Center to develop a suite of technologies for knowledge discovery, knowledge extraction, knowledge representation, automated reasoning, and human information interaction, collectively entitled "Knowledge Associates for Novel Intelligence" (KANI). We have developed an integrated analytic environment composed of a collection of analyst associates, software components that aid the analyst at different stages of the analytical process. In this paper, we discuss our efforts in the research, design, and implementation of the question answering elements of the Information Interaction Associate. Specifically, we focus on the techniques employed to produce an effective user interface to these elements. In addition, we touch upon the methodologies we intend to use to empirically evaluate our approach with active intelligence analysts.
APA, Harvard, Vancouver, ISO, and other styles
47

Christanno, Ivan, Priscilla Priscilla, Jody Johansyah Maulana, Derwin Suhartono, and Rini Wongso. "Eve: An Automated Question Answering System for Events Information." ComTech: Computer, Mathematics and Engineering Applications 8, no. 1 (March 31, 2017): 15. http://dx.doi.org/10.21512/comtech.v8i1.3781.

Full text
Abstract:
The objective of this research was to create a closed-domain automated question answering system for events, called Eve. An automated Question Answering System (QAS) accepts a question in natural language and processes it through several modules to return the most appropriate answer, instead of returning a full document as output. The scope of the events is those organized by the Students Association of Computer Science (HIMTI) at Bina Nusantara University. The system consists of three main modules: query processing, information retrieval, and information extraction. The approaches used include question classification, document indexing, named entity recognition, and others. In the evaluation, the system correctly answered 63 of 94 questions with the word-matching technique and 32 with the word-similarity technique.
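The contrast between the two answer-selection techniques the abstract compares can be sketched briefly. The similarity function below is a simple character-overlap stand-in; the actual system's measure is not specified in the abstract:

```python
# Illustrative contrast: exact word matching vs. a soft word-similarity score.
def word_match_score(question: str, candidate: str) -> float:
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q), 1)

def word_similarity_score(question: str, candidate: str) -> float:
    def sim(a: str, b: str) -> float:
        # Jaccard overlap of character sets, a crude proxy for word similarity.
        return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
    q_words = question.lower().split()
    c_words = candidate.lower().split()
    return sum(max(sim(qw, cw) for cw in c_words) for qw in q_words) / len(q_words)

q = "When does the HIMTI event start?"
answers = ["The HIMTI event starts at 9 am.", "Registration closes tomorrow."]
print(max(answers, key=lambda a: word_match_score(q, a)))
```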
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Pufen, Hong Lan, and Muhammad Asim Khan. "Multiple Context Learning Networks for Visual Question Answering." Scientific Programming 2022 (February 9, 2022): 1–11. http://dx.doi.org/10.1155/2022/4378553.

Full text
Abstract:
A novel Multiple Context Learning Network (MCLN) is proposed to model multiple contexts for visual question answering (VQA), aiming to learn comprehensive contexts. Three kinds of context are discussed, and three corresponding context learning modules are proposed based on a uniform context learning strategy: a visual context learning module (VCL), a textual context learning module (TCL), and a visual-textual context learning module (VTCL). The VCL and TCL, respectively, learn the context of objects in an image and the context of words in a question, allowing object and word features to carry intra-modal context information. The VTCL is applied to the concatenated visual-textual features, endowing the output features with synergistic visual-textual context information. Together these modules form a multiple context learning layer (MCL), which can be stacked in depth for deep context learning. Furthermore, a contextualized text encoder based on the pretrained BERT is introduced and fine-tuned, enhancing textual context learning at the text feature extraction stage. The approach is evaluated on two benchmark datasets, VQA v2.0 and GQA. The MCLN achieves 71.05% and 71.48% overall accuracy on the test-dev and test-std sets of VQA v2.0, respectively, and 57.0% accuracy on the test-standard split of GQA. The MCLN outperforms previous state-of-the-art models, and extensive ablation studies confirm the effectiveness of the proposed method.
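One way to picture a context learning module of this kind is plain residual self-attention over a set of features, so each object (or word) feature absorbs intra-modal context. The sketch below is an assumption about the module's general shape, with illustrative dimensions, not the paper's exact architecture:

```python
# Minimal sketch of an MCLN-style context learning module in PyTorch.
import torch
import torch.nn as nn

class ContextLearningModule(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual self-attention: every feature attends to all others.
        ctx, _ = self.attn(x, x, x)
        return self.norm(x + ctx)

visual = torch.randn(2, 36, 512)    # 36 region features per image
textual = torch.randn(2, 14, 512)   # 14 word features per question

vcl, tcl, vtcl = (ContextLearningModule() for _ in range(3))
v, t = vcl(visual), tcl(textual)            # intra-modal context (VCL, TCL)
joint = vtcl(torch.cat([v, t], dim=1))      # visual-textual context (VTCL)
print(joint.shape)  # torch.Size([2, 50, 512])
```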
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Yunyan, Guangluan Xu, Yang Wang, Daoyu Lin, Feng Li, Chenglong Wu, Jingyuan Zhang, and Tinglei Huang. "A Question Answering-Based Framework for One-Step Event Argument Extraction." IEEE Access 8 (2020): 65420–31. http://dx.doi.org/10.1109/access.2020.2985126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hu, Kai, Zhuoyuan Wu, Zhuoyao Zhong, Weihong Lin, Lei Sun, and Qiang Huo. "A Question-Answering Approach to Key Value Pair Extraction from Form-Like Document Images." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12899–906. http://dx.doi.org/10.1609/aaai.v37i11.26516.

Full text
Abstract:
In this paper, we present a new question-answering (QA) based key-value pair extraction approach, called KVPFormer, to robustly extract key-value relationships between entities in form-like document images. Specifically, KVPFormer first identifies key entities among all entities in an image with a Transformer encoder, then takes these key entities as questions and feeds them into a Transformer decoder to predict their corresponding answers (i.e., value entities) in parallel. To achieve higher answer prediction accuracy, we further propose a coarse-to-fine answer prediction approach, which first extracts multiple answer candidates for each identified question in the coarse stage and then selects the most likely one among these candidates in the fine stage. In this way, the learning difficulty of answer prediction is effectively reduced and the prediction accuracy improved. Moreover, we introduce a spatial compatibility attention bias into the self-attention/cross-attention mechanism so that KVPFormer better models the spatial interactions between entities. With these new techniques, our proposed KVPFormer achieves state-of-the-art results on the FUNSD and XFUND datasets, outperforming the previous best-performing method by 7.2% and 13.2% in F1 score, respectively.
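The spatial compatibility attention bias is the most transferable piece of the recipe: a learned bias derived from relative box positions is added to the attention logits, so spatially plausible key-value pairs attend to each other more strongly. The bucketing scheme, dimensions, and layer layout below are assumptions for illustration, not the paper's exact design:

```python
# Hedged sketch of attention with a learned spatial-compatibility bias.
import torch
import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, buckets: int = 32):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.bias = nn.Embedding(buckets * buckets, heads)
        self.buckets = buckets

    def spatial_bucket(self, boxes: torch.Tensor) -> torch.Tensor:
        # Bucketise relative x/y offsets between entity box centres.
        centers = boxes.mean(dim=-2)                       # (N, 2)
        rel = centers[:, None, :] - centers[None, :, :]    # (N, N, 2)
        idx = ((rel.clamp(-1, 1) + 1) / 2 * (self.buckets - 1)).long()
        return idx[..., 0] * self.buckets + idx[..., 1]    # (N, N)

    def forward(self, x: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(N, self.heads, -1).transpose(0, 1)      # (H, N, d)
        k = k.view(N, self.heads, -1).transpose(0, 1)
        v = v.view(N, self.heads, -1).transpose(0, 1)
        logits = q @ k.transpose(-2, -1) * self.scale      # (H, N, N)
        # Add the learned per-head spatial bias to the attention logits.
        logits = logits + self.bias(self.spatial_bucket(boxes)).permute(2, 0, 1)
        return (logits.softmax(-1) @ v).transpose(0, 1).reshape(N, D)

entities = torch.randn(6, 256)   # 6 entity embeddings
boxes = torch.rand(6, 4, 2)      # 4 corner points per entity box, in [0, 1]
print(SpatialBiasAttention()(entities, boxes).shape)  # torch.Size([6, 256])
```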
APA, Harvard, Vancouver, ISO, and other styles