Journal articles on the topic 'Question-answering systems'

Consult the top 50 journal articles for your research on the topic 'Question-answering systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Visser, Ubbo. "Question/Answering Systems." KI - Künstliche Intelligenz 26, no. 2 (February 23, 2012): 191–95. http://dx.doi.org/10.1007/s13218-012-0172-9.

2

Usbeck, Ricardo, Michael Röder, Michael Hoffmann, Felix Conrads, Jonathan Huthmann, Axel-Cyrille Ngonga Ngomo, Christian Demmler, and Christina Unger. "Benchmarking question answering systems." Semantic Web 10, no. 2 (January 21, 2019): 293–304. http://dx.doi.org/10.3233/sw-180312.

3

Singh, Vaishali, and Sanjay K. Dwivedi. "Question Answering." International Journal of Information Retrieval Research 4, no. 3 (July 2014): 14–33. http://dx.doi.org/10.4018/ijirr.2014070102.

Abstract:
With the huge amount of data available on the web, it has turned out to be a fertile area for Question Answering (QA) research. Question answering, an instance of information retrieval research, is at the crossroads of several research communities such as machine learning, statistical learning, natural language processing and pattern learning. In this paper, the authors survey the research in the area of question answering with respect to different prospects of NLP, machine learning, statistical learning and pattern learning. Then they situate some of the prominent QA systems concerning these prospects and present a comparative study on the basis of question types.
4

Ahmed, Waheeb, and Babu Anto P. "Question Analysis for Arabic Question Answering Systems." International Journal on Natural Language Computing 5, no. 6 (December 30, 2016): 21–30. http://dx.doi.org/10.5121/ijnlc.2016.5603.

5

Ahmed, Waheeb, and Babu Anto P. "Question Focus Recognition in Question Answering Systems." IJARCCE 5, no. 12 (December 30, 2016): 25–28. http://dx.doi.org/10.17148/ijarcce.2016.51205.

6

Lapshin, V. A. "Question-answering systems: Development and prospects." Automatic Documentation and Mathematical Linguistics 46, no. 3 (May 2012): 138–45. http://dx.doi.org/10.3103/s0005105512030053.

7

Biltawi, Mariam M., Sara Tedmori, and Arafat Awajan. "Arabic Question Answering Systems: Gap Analysis." IEEE Access 9 (2021): 63876–904. http://dx.doi.org/10.1109/access.2021.3074950.

8

Gupta, Poonam, Ruchi Garg, and Amandeep Kaur. "Question Answering Systems for Covid-19." Journal of Physics: Conference Series 2062, no. 1 (November 1, 2021): 012027. http://dx.doi.org/10.1088/1742-6596/2062/1/012027.

Abstract:
The COVID-19 pandemic has affected the entire world, motivating researchers to resolve the queries raised by people around the world in an efficient manner. However, the limited resources available for gaining information and knowledge about COVID-19 create a need to evaluate the existing Question Answering (QA) systems on COVID-19. In this paper, we compare the various QA systems available for answering the questions raised by people such as doctors and medical researchers about the coronavirus. QA systems process queries submitted in natural language to find the most relevant answer among all the candidate answers for COVID-19 related questions. These systems use text mining and information retrieval on the COVID-19 literature. This paper surveys the QA systems available for COVID-19: CovidQA, the CAiRE (Center for Artificial Intelligence Research)-COVID system, the CO-Search semantic search engine, COVIDASK, and RECORD (Research Engine for COVID Open Research Dataset). All these QA systems are also compared in terms of the significant parameters on which their efficiency relies, such as Precision at rank 1 (P@1), Recall at rank 3 (R@3), Mean Reciprocal Rank (MRR), F1-score, Exact Match (EM), Mean Average Precision, and the Score metric.
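
The ranking metrics listed in the abstract above (P@1, R@3, MRR) all follow from the rank of the first correct candidate answer per question. A minimal sketch of how they could be computed, with purely illustrative data and no claim about how the surveyed systems implement them:

```python
from statistics import mean

def rank_metrics(ranked_answers, gold, k=3):
    """Return (P@1, R@k, reciprocal rank) for one question.

    ranked_answers: candidate answers ordered best-first.
    gold: the set of acceptable answers for the question.
    """
    first_hit = next((i + 1 for i, a in enumerate(ranked_answers) if a in gold), None)
    p_at_1 = 1.0 if first_hit == 1 else 0.0
    r_at_k = 1.0 if first_hit is not None and first_hit <= k else 0.0
    rr = 1.0 / first_hit if first_hit is not None else 0.0
    return p_at_1, r_at_k, rr

# Hypothetical ranked outputs and gold answers for two questions.
runs = [
    (["Wuhan", "Hubei", "China"], {"Wuhan"}),
    (["fever", "cough", "loss of smell"], {"cough"}),
]
scores = [rank_metrics(answers, gold) for answers, gold in runs]
print("P@1:", mean(s[0] for s in scores))  # 0.5
print("R@3:", mean(s[1] for s in scores))  # 1.0
print("MRR:", mean(s[2] for s in scores))  # 0.75
```
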
9

Nabil Alkholy, Eman Mohamed, Mohamed Hassan Haggag, and Amal Aboutabl. "Question Answering Systems: Analysis and Survey." International Journal of Computer Science & Engineering Survey 09, no. 06 (December 31, 2018): 1–13. http://dx.doi.org/10.5121/ijcses.2018.9601.

10

Kratzwald, Bernhard, and Stefan Feuerriegel. "Putting Question-Answering Systems into Practice." ACM Transactions on Management Information Systems 9, no. 4 (March 12, 2019): 1–20. http://dx.doi.org/10.1145/3309706.

11

Ojokoh, Bolanle, and Emmanuel Adebisi. "A Review of Question Answering Systems." Journal of Web Engineering 17, no. 8 (2019): 717–58. http://dx.doi.org/10.13052/jwe1540-9589.1785.

12

Bouziane, Abdelghani, Djelloul Bouchiha, Noureddine Doumi, and Mimoun Malki. "Question Answering Systems: Survey and Trends." Procedia Computer Science 73 (2015): 366–75. http://dx.doi.org/10.1016/j.procs.2015.12.005.

13

Prager, John. "Open-Domain Question–Answering." Foundations and Trends® in Information Retrieval 1, no. 2 (2006): 91–231. http://dx.doi.org/10.1561/1500000001.

14

Voorhees, Ellen M. "The TREC question answering track." Natural Language Engineering 7, no. 4 (December 2001): 361–78. http://dx.doi.org/10.1017/s1351324901002789.

Abstract:
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case, the goal was to retrieve small snippets of text that contain the actual answer to a question rather than the document lists traditionally returned by text retrieval systems. The best performing systems were able to answer about 70% of the questions in TREC-8 and about 65% of the questions in TREC-9. While the 65% score is a slightly worse result than the TREC-8 scores in absolute terms, it represents a very significant improvement in question answering systems. The TREC-9 task was considerably harder than the TREC-8 task because TREC-9 used actual users’ questions while TREC-8 used questions constructed for the track. Future tracks will continue to challenge the QA community with more difficult, and more realistic, question answering tasks.
15

Caballero, Michael. "A Brief Survey of Question Answering Systems." International Journal of Artificial Intelligence & Applications 12, no. 5 (September 30, 2021): 01–07. http://dx.doi.org/10.5121/ijaia.2021.12501.

Abstract:
Question Answering (QA) is a subfield of Natural Language Processing (NLP) and computer science focused on building systems that automatically answer questions from humans in natural language. This survey summarizes the history and current state of the field and is intended as an introductory overview of QA systems. After discussing QA history, this paper summarizes the different approaches to the architecture of QA systems -- whether they are closed or open-domain and whether they are text-based, knowledge-based, or hybrid systems. Lastly, some common datasets in this field are introduced and different evaluation metrics are discussed.
16

Mittal, Sparsh, and Ankush Mittal. "Versatile question answering systems: seeing in synthesis." International Journal of Intelligent Information and Database Systems 5, no. 2 (2011): 119. http://dx.doi.org/10.1504/ijiids.2011.038968.

17

Atkinson, John, and Alvaro Maurelia. "Redundancy-Based Trust in Question-Answering Systems." Computer 50, no. 1 (January 2017): 58–65. http://dx.doi.org/10.1109/mc.2017.18.

18

Moreda, Paloma, Hector Llorens, Estela Saquete, and Manuel Palomar. "Combining semantic information in question answering systems." Information Processing & Management 47, no. 6 (November 2011): 870–85. http://dx.doi.org/10.1016/j.ipm.2010.03.008.

19

Martinez-Gil, Jorge. "A survey on legal question–answering systems." Computer Science Review 48 (May 2023): 100552. http://dx.doi.org/10.1016/j.cosrev.2023.100552.

20

Li, Haonan, Ehsan Hamzei, Ivan Majic, Hua Hua, Jochen Renz, Martin Tomko, Maria Vasardani, Stephan Winter, and Timothy Baldwin. "Neural factoid geospatial question answering." Journal of Spatial Information Science, no. 23 (December 24, 2021): 65–90. http://dx.doi.org/10.5311/josis.2021.23.159.

Abstract:
Existing question answering systems struggle to answer factoid questions when geospatial information is involved. This is because most systems cannot accurately detect the geospatial semantic elements from the natural language questions, or capture the semantic relationships between those elements. In this paper, we propose a geospatial semantic encoding schema and a semantic graph representation which captures the semantic relations and dependencies in geospatial questions. We demonstrate that our proposed graph representation approach aids in the translation from natural language to a formal, executable expression in a query language. To decrease the need for people to provide explanatory information as part of their question and make the translation fully automatic, we treat the semantic encoding of the question as a sequential tagging task, and the graph generation of the query as a semantic dependency parsing task. We apply neural network approaches to automatically encode the geospatial questions into spatial semantic graph representations. Compared with current template-based approaches, our method generalises to a broader range of questions, including those with complex syntax and semantics. Our proposed approach achieves better results on GeoData201 than existing methods.
21

Harabagiu, Sanda M., Steven J. Maiorano, and Marius A. Paşca. "Open-domain textual question answering techniques." Natural Language Engineering 9, no. 3 (August 14, 2003): 231–67. http://dx.doi.org/10.1017/s1351324903003176.

Abstract:
Textual question answering is a technique of extracting a sentence or text snippet from a document or document collection that responds directly to a query. Open-domain textual question answering presupposes that questions are natural and unrestricted with respect to topic. The question answering (Q/A) techniques, as embodied in today's systems, can be roughly divided into two types: (1) techniques for Information Seeking (IS), which localize the answer in vast document collections; and (2) techniques for Reading Comprehension (RC) that answer a series of questions related to a given document. Although these two types of techniques and systems are different, it is desirable to combine them for enabling more advanced forms of Q/A. This paper discusses an approach that successfully enhanced an existing IS system with RC capabilities. This enhancement is important because advanced Q/A, as exemplified by the ARDA AQUAINT program, is moving towards Q/A systems that incorporate semantic and pragmatic knowledge enabling dialogue-based Q/A. Because today's RC systems involve a short series of questions in context, they represent a rudimentary form of interactive Q/A which constitutes a possible foundation for more advanced forms of dialogue-based Q/A.
22

Nakov, Preslav, Lluís Màrquez, Alessandro Moschitti, and Hamdy Mubarak. "Arabic community question answering." Natural Language Engineering 25, no. 1 (December 19, 2018): 5–41. http://dx.doi.org/10.1017/s1351324918000426.

Abstract:
We analyze resources and models for Arabic community Question Answering (cQA). In particular, we focus on CQA-MD, our cQA corpus for Arabic in the domain of medical forums. We describe the corpus and the main challenges it poses due to its mix of informal and formal language, and of different Arabic dialects, as well as due to its medical nature. We further present a shared task on cQA at SemEval, the International Workshop on Semantic Evaluation, based on this corpus. We discuss the features and the machine learning approaches used by the teams who participated in the task, with focus on the models that exploit syntactic information using convolutional tree kernels and neural word embeddings. We further analyze and extend the outcome of the SemEval challenge by training a meta-classifier combining the output of several systems. This allows us to compare different features and different learning algorithms in an indirect way. Finally, we analyze the most frequent errors common to all approaches, categorizing them into prototypical cases, and zooming into the way syntactic information in tree kernel approaches can help solve some of the most difficult cases. We believe that our analysis and the lessons learned from the process of corpus creation as well as from the shared task analysis will be helpful for future research on Arabic cQA.
23

Azzam, Saliha, and Kevin Humphreys. "New Directions in Question Answering." Information Retrieval 9, no. 3 (June 2006): 383–86. http://dx.doi.org/10.1007/s10791-006-8702-4.

24

Kodra, Lorena. "A Review on Neural Network Question Answering Systems." International Journal of Artificial Intelligence & Applications 8, no. 2 (March 31, 2017): 59–74. http://dx.doi.org/10.5121/ijaia.2017.8206.

25

Andy, Anietie, Mugizi Robert, and Mohamed Chouikha. "Exploiting Synonyms to Improve Question and Answering Systems." International Journal of Computer Applications 108, no. 18 (December 18, 2014): 24–27. http://dx.doi.org/10.5120/19012-0523.

26

Krishnamoorthy, Venkatesh. "Evolution of Reading Comprehension and Question Answering Systems." Procedia Computer Science 185 (2021): 231–38. http://dx.doi.org/10.1016/j.procs.2021.05.024.

27

Mishra, Amit, and Sanjay Kumar Jain. "A survey on question answering systems with classification." Journal of King Saud University - Computer and Information Sciences 28, no. 3 (July 2016): 345–61. http://dx.doi.org/10.1016/j.jksuci.2014.10.007.

28

Androshchuk, Maksym, and Oksana Kyriienko. "Comparison of Services for Creating Question-Answering Systems." NaUKMA Research Papers. Computer Science 3 (December 28, 2020): 132–37. http://dx.doi.org/10.18523/2617-3808.2020.3.132-137.

29

Yager, Ronald R. "Knowledge trees and protoforms in question-answering systems." Journal of the American Society for Information Science and Technology 57, no. 4 (2006): 550–63. http://dx.doi.org/10.1002/asi.20309.

30

Quan, Xiaojun, Yao Lu, Feifei Xu, Jingsheng Lei, and Wenyin Liu. "Mathematical Modeling of Question Popularity in User-Interactive Question Answering Systems." Journal of Advanced Mathematics and Applications 2, no. 1 (June 1, 2013): 24–31. http://dx.doi.org/10.1166/jama.2013.1027.

31

Hirschman, L., and R. Gaizauskas. "Natural language question answering: the view from here." Natural Language Engineering 7, no. 4 (December 2001): 275–300. http://dx.doi.org/10.1017/s1351324901002807.

Abstract:
As users struggle to navigate the wealth of on-line information now available, the need for automated question answering systems becomes more urgent. We need systems that allow a user to ask a question in everyday language and receive an answer quickly and succinctly, with sufficient context to validate the answer. Current search engines can return ranked lists of documents, but they do not deliver answers to the user. Question answering systems address this problem. Recent successes have been reported in a series of question-answering evaluations that started in 1999 as part of the Text Retrieval Conference (TREC). The best systems are now able to answer more than two thirds of factual questions in this evaluation.
32

Zhao, Yiming, Jin Zhang, Xue Xia, and Taowen Le. "Evaluation of Google question-answering quality." Library Hi Tech 37, no. 2 (June 17, 2019): 312–28. http://dx.doi.org/10.1108/lht-10-2017-0218.

Abstract:
Purpose: The purpose of this paper is to evaluate Google question-answering (QA) quality. Design/methodology/approach: Given the large variety and complexity of Google answer boxes in search result pages, existing evaluation criteria for both search engines and QA systems seemed unsuitable. This study developed an evaluation criteria system for the evaluation of Google QA quality by coding and analyzing search results of questions from a representative question set. The study then evaluated Google’s overall QA quality as well as QA quality across four target types and across six question types, using the newly developed criteria system. ANOVA and Tukey tests were used to compare QA quality among different target types and question types. Findings: It was found that Google provided significantly higher-quality answers to person-related questions than to thing-related, event-related and organization-related questions. Google also provided significantly higher-quality answers to where-questions than to who-, what- and how-questions. The more specific a question is, the higher the QA quality would be. Research limitations/implications: Suggestions for both search engine users and designers are presented to help enhance user experience and QA quality. Originality/value: Particularly suitable for search engine QA quality analysis, the newly developed evaluation criteria system expanded and enriched assessment metrics of both search engines and QA systems.
33

Sun, Yibo, Duyu Tang, Nan Duan, Tao Qin, Shujie Liu, Zhao Yan, Ming Zhou, et al. "Joint Learning of Question Answering and Question Generation." IEEE Transactions on Knowledge and Data Engineering 32, no. 5 (May 1, 2020): 971–82. http://dx.doi.org/10.1109/tkde.2019.2897773.

34

Yang, Jonghyeon, Hanme Jang, and Kiyun Yu. "Geographic Knowledge Base Question Answering over OpenStreetMap." ISPRS International Journal of Geo-Information 13, no. 1 (December 26, 2023): 10. http://dx.doi.org/10.3390/ijgi13010010.

Abstract:
In recent years, question answering on knowledge bases (KBQA) has emerged as a promising approach for providing unified, user-friendly access to knowledge bases. Nevertheless, existing KBQA systems struggle to answer spatial-related questions, prompting the introduction of geographic knowledge base question answering (GeoKBQA) to address such challenges. Current GeoKBQA systems face three primary issues: (1) the limited scale of questions, restricting the effective application of neural networks; (2) reliance on rule-based approaches dependent on predefined templates, resulting in coverage and scalability challenges; and (3) the assumption of the availability of a golden entity, limiting the practicality of GeoKBQA systems. In this work, we aim to address these three critical issues to develop a practical GeoKBQA system. We construct a large-scale, high-quality GeoKBQA dataset and link mentions in the questions to entities in OpenStreetMap using an end-to-end entity-linking method. Additionally, we develop a query generator that translates natural language questions, along with the entities predicted by entity linking, into corresponding GeoSPARQL queries. To the best of our knowledge, this work presents the first purely neural-based GeoKBQA system with potential for real-world application.
35

Lu, Jinting, Xiaobing Sun, Bin Li, Lili Bo, and Tao Zhang. "BEAT: Considering question types for bug question answering via templates." Knowledge-Based Systems 225 (August 2021): 107098. http://dx.doi.org/10.1016/j.knosys.2021.107098.

36

Fu, Chaogang. "User intimacy model for question recommendation in community question answering." Knowledge-Based Systems 188 (January 2020): 104844. http://dx.doi.org/10.1016/j.knosys.2019.07.015.

37

Liou, Mei-Ling Teresa, and Chen-Sheng Luther Liu. "An analysis of focus and its role in the answering systems of polar questions in Chinese and English." International Journal of Chinese Linguistics 7, no. 1 (June 30, 2020): 45–70. http://dx.doi.org/10.1075/ijchl.19018.lio.

Abstract:
Instead of classifying natural languages in terms of their answering systems for polar questions, this study investigates how languages construct the answering system for polar questions, with a special concentration on the answering system of the Chinese ma particle question and English polar questions. We argue that the primary mechanism that natural languages adopt to construct an answering system is the focus mechanism, which is based on the relationship between a focus sensitive marker and its association of focus. The different answering patterns to polar questions result from different scopes of focus. In a polar question, what is being focused by the focus sensitive marker or focus operator falls into the question scope (focus association). The respondent answers the polar question based on the proposition in the question scope. Answering with a positive particle expresses agreement with that question proposition, while answering with a negative particle conveys that the question proposition is not true.
38

Ma, Xin. "Intelligent Question and Answer and Dialogue System." Highlights in Science, Engineering and Technology 103 (June 26, 2024): 190–96. http://dx.doi.org/10.54097/bvzjfa70.

Abstract:
This article discusses the development and application of question answering systems in the context of artificial intelligence and natural language processing. It highlights the importance of these systems in resolving user queries and providing accurate and relevant answers. This paper highlights the need to design question answering systems according to established rules and guidelines and use machine learning techniques to improve their performance. It also mentions the emergence of advanced question-answering systems such as IBM Watson, which leverage natural language processing and machine learning to provide more complex answers.
39

Hao, Tianyong, Feifei Xu, Jingsheng Lei, Liu Wenyin, and Qing Li. "Toward Automatic Answers in User-Interactive Question Answering Systems." International Journal of Software Science and Computational Intelligence 3, no. 4 (October 2011): 52–66. http://dx.doi.org/10.4018/jssci.2011100104.

Abstract:
A strategy of automatic answer retrieval for repeated or similar questions in user-interactive systems by employing semantic question patterns is proposed in this paper. The semantic question pattern used is a generalized representation of a group of questions with both similar structure and relevant semantics. Specifically, it consists of semantic annotations (or constraints) for the variable components in the pattern and hence enhances the semantic representation and greatly reduces the ambiguity of a question instance when asked by a user using such a pattern. The proposed method consists of four major steps: structure processing, similar pattern matching and filtering, automatic pattern generation, and question similarity evaluation and answer retrieval. Preliminary experiments in a real question answering system show a precision of more than 90% for the method.
40

Zhao, Xuanzheng. "Research on Methods and Applications Related to Question-and-Answer Dialogue Systems." Highlights in Science, Engineering and Technology 57 (July 11, 2023): 9–14. http://dx.doi.org/10.54097/hset.v57i.9885.

Abstract:
In the face of ever-growing volumes of data on the Internet, search engines have gradually become the main retrieval method for obtaining relevant information and knowledge. However, as online information continues to grow explosively, traditional search engines struggle with semantic understanding and return cluttered answers, which makes question answering systems all the more important. An automatic question answering system generally adopts natural language processing technologies: when users ask questions, the system automatically interprets them and gives answers. The field draws on computational linguistics, machine learning, artificial intelligence and other popular areas of research. According to different classification criteria, automatic question answering systems are roughly divided into open-field automatic question answering systems and stereotyped automatic question answering systems. This paper investigates methods and applications related to question-and-answer dialogue systems. On the methodological side, it introduces commonly used datasets and the principles and techniques of text, speech and visual question answering systems, and analyses ChatGPT in detail as a prominent example. On the application side, it presents the use of Q&A dialogue systems in search engines and smart campuses. The paper offers some reference value for related work.
41

Zhang, Jin, Liye Wang, and Kanliang Wang. "Identifying comparable entities from online question-answering contents." Information & Management 58, no. 3 (April 2021): 103449. http://dx.doi.org/10.1016/j.im.2021.103449.

42

Rosso, Paolo, Lluís-F. Hurtado, Encarna Segarra, and Emilio Sanchis. "On the Voice-Activated Question Answering." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, no. 1 (January 2012): 75–85. http://dx.doi.org/10.1109/tsmcc.2010.2089620.

43

Wang, Jiuniu, Wenjia Xu, Xingyu Fu, Yang Wei, Li Jin, Ziyan Chen, Guangluan Xu, and Yirong Wu. "SRQA: Synthetic Reader for Factoid Question Answering." Knowledge-Based Systems 193 (April 2020): 105415. http://dx.doi.org/10.1016/j.knosys.2019.105415.

44

Zhang, Weifeng, Jing Yu, Yuxia Wang, and Wei Wang. "Multimodal deep fusion for image question answering." Knowledge-Based Systems 212 (January 2021): 106639. http://dx.doi.org/10.1016/j.knosys.2020.106639.

45

Reddy, Siva, Danqi Chen, and Christopher D. Manning. "CoQA: A Conversational Question Answering Challenge." Transactions of the Association for Computational Linguistics 7 (November 2019): 249–66. http://dx.doi.org/10.1162/tacl_a_00266.

Abstract:
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa .
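
The F1 figure quoted in the CoQA abstract is the usual word-overlap F1 between a predicted answer string and a reference answer. A minimal sketch of that computation, leaving out CoQA's extra answer normalisation and multi-reference averaging:

```python
from collections import Counter

def word_overlap_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a reference answer string."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(word_overlap_f1("in the garden", "the garden behind the house"))  # 0.5
```
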
46

Tu, Kaiyi, Mingyue Jiang, and Zuohua Ding. "A Metamorphic Testing Approach for Assessing Question Answering Systems." Mathematics 9, no. 7 (March 28, 2021): 726. http://dx.doi.org/10.3390/math9070726.

Abstract:
Question Answering (QA) enables the machine to understand and answer questions posed in natural language, which has emerged as a powerful tool in various domains. However, QA is a challenging task and there is an increasing concern about its quality. In this paper, we propose to apply the technique of metamorphic testing (MT) to evaluate QA systems from the users’ perspectives, in order to help the users to better understand the capabilities of these systems and then to select appropriate QA systems for their specific needs. Two typical categories of QA systems, namely, the textual QA (TQA) and visual QA (VQA), are studied, and a total number of 17 metamorphic relations (MRs) are identified for them. These MRs respectively focus on some characteristics of different aspects of QA. We further apply MT to four QA systems (including two APIs from the AllenNLP platform, one API from the Transformers platform, and one API from CloudCV) by using all of the MRs. Our experimental results demonstrate the capabilities of the four subject QA systems from various aspects, revealing their strengths and weaknesses. These results further suggest that MT can be an effective method for assessing QA systems.
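
Metamorphic testing, as described in the abstract above, checks that a QA system's answers respect a stated relation between a source input and a follow-up input, without needing a labelled oracle. The sketch below shows one plausible relation of that kind (answer stability under an appended irrelevant sentence); it is illustrative only, not one of the paper's 17 MRs, and the toy_qa function is a stand-in for a real TQA system:

```python
def check_irrelevant_sentence_mr(qa_system, question, context, irrelevant_sentence):
    """Metamorphic check: appending a sentence unrelated to the question
    should not change the answer returned by the QA system."""
    source_answer = qa_system(question, context)
    followup_answer = qa_system(question, context + " " + irrelevant_sentence)
    return source_answer.strip().lower() == followup_answer.strip().lower()

# Stand-in QA function for illustration; a real run would call an actual
# reading-comprehension model instead.
def toy_qa(question, context):
    return "Paris" if "capital of France" in question else "unknown"

print(check_irrelevant_sentence_mr(
    toy_qa,
    "What is the capital of France?",
    "France is a country in Europe. Its capital is Paris.",
    "The weather was mild that spring.",
))  # True
```
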
47

Sung, Cheng-Lung, Cheng-Wei Lee, Hsu-Chun Yen, and Wen-Lian Hsu. "Alignment-based surface patterns for factoid question answering systems." Integrated Computer-Aided Engineering 16, no. 3 (June 22, 2009): 259–69. http://dx.doi.org/10.3233/ica-2009-0313.

48

Suwarningsih, Wiwin, Raka Aditya Pramata, Fadhil Yusuf Rahadika, and Mochamad Havid Albar Purnomo. "RoBERTa: language modelling in building Indonesian question-answering systems." TELKOMNIKA (Telecommunication Computing Electronics and Control) 20, no. 6 (December 1, 2022): 1248. http://dx.doi.org/10.12928/telkomnika.v20i6.24248.

49

Hosseini, Mohammad Mehdi, and Alireza Jalai. "Determining Reliability in Interactive Question Answering Systems by Regression." Iranian Journal of Information Processing and Management 36, no. 3 (April 1, 2021): 817–34. http://dx.doi.org/10.52547/jipm.36.3.817.

50

Kuligowska, Karolina, and Bartłomiej Kowalczuk. "Pseudo-labeling with transformers for improving Question Answering systems." Procedia Computer Science 192 (2021): 1162–69. http://dx.doi.org/10.1016/j.procs.2021.08.119.
