Ready-made bibliography on the topic "Extractive Question-Answering"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Extractive Question-Answering".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "Extractive Question-Answering"

1. Xu, Marie-Anne, and Rahul Khanna. "Evaluation of Single-Span Models on Extractive Multi-Span Question-Answering". International Journal of Web & Semantic Technology 12, no. 1 (January 31, 2021): 19–29. http://dx.doi.org/10.5121/ijwest.2021.12102.

Abstract:
Machine Reading Comprehension (MRC), particularly extractive closed-domain question-answering, is a prominent field in Natural Language Processing (NLP). Given a question and a passage or set of passages, a machine must be able to extract the appropriate answer from the passage(s). However, the majority of existing questions have only one answer, and more substantial testing on questions with multiple answers, or multi-span questions, has not yet been carried out. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run BERT-based models pre-trained for question-answering on our constructed dataset to evaluate their reading comprehension abilities. The runtime of the base models on the entire dataset is approximately one day, while the runtime for all models on a third of the dataset is a little over two days. Among the three BERT-based models we ran, RoBERTa exhibits the highest consistent performance, regardless of size. We find that all our models perform similarly on this new, multi-span dataset compared to the single-span source datasets. While the models tested on the source datasets were slightly fine-tuned in order to return multiple answers, performance is similar enough to judge that task formulation does not drastically affect question-answering abilities. Our evaluations indicate that these models are indeed capable of adjusting to answer questions that require multiple answers. We hope that our findings will assist future development in question-answering and improve existing question-answering products and methods.
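
For readers who want to try the single-span setting that the paper extends, here is a minimal sketch of extractive span QA with an off-the-shelf model, using the Hugging Face `transformers` pipeline. The checkpoint name is one public SQuAD2-tuned model, not one of the models evaluated in the paper, and requesting several candidates via `top_k` only approximates multi-span behaviour (assumes transformers 4.x, where the parameter is spelled `top_k`):

```python
# A minimal sketch of extractive span QA, assuming the Hugging Face
# `transformers` library and one public SQuAD2-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "The Amazon rainforest spans nine countries. Brazil holds about 60% "
    "of it, followed by Peru with 13% and Colombia with 10%."
)

# top_k > 1 surfaces several single-span candidates; the paper's models
# are instead fine-tuned to return multiple answer spans directly.
candidates = qa(
    question="Which countries hold shares of the Amazon rainforest?",
    context=context,
    top_k=3,
)
for cand in candidates:
    print(f"{cand['score']:.3f}  {cand['answer']}")
```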

2. Guan, Yue, Zhengyi Li, Zhouhan Lin, Yuhao Zhu, Jingwen Leng, and Minyi Guo. "Block-Skim: Efficient Question Answering for Transformer". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 10710–19. http://dx.doi.org/10.1609/aaai.v36i10.21316.

Abstract:
Transformer models have achieved promising results on natural language processing (NLP) tasks including extractive question answering (QA). Common Transformer encoders used in NLP tasks process the hidden states of all input tokens in the context paragraph throughout all layers. However, different from other tasks such as sequence classification, answering the raised question does not necessarily need all the tokens in the context paragraph. Following this motivation, we propose Block-Skim, which learns to skim unnecessary context in higher hidden layers to improve and accelerate Transformer performance. The key idea of Block-Skim is to identify the context that must be further processed and the context that can be safely discarded early on during inference. Critically, we find that such information can be sufficiently derived from the self-attention weights inside the Transformer model. We further prune the hidden states corresponding to the unnecessary positions early, in lower layers, achieving significant inference-time speedup. To our surprise, we observe that models pruned in this way outperform their full-size counterparts. Block-Skim improves QA models' accuracy on different datasets and achieves a 3× speedup on the BERT-base model.
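
The sketch below is an illustrative reconstruction of the core Block-Skim step, not the authors' code: it scores fixed-size blocks of the sequence by the attention mass their tokens receive and keeps only the hidden states of the highest-scoring blocks. The block size, keep ratio, and head/query averaging are assumptions made for the example.

```python
import torch

def block_skim(hidden, attn, block_size=32, keep_ratio=0.5):
    """Prune token states block-wise from attention weights.

    hidden: (seq_len, dim) token states from one layer
    attn:   (num_heads, seq_len, seq_len) softmaxed attention of that layer
    """
    seq_len = hidden.size(0)
    n_blocks = (seq_len + block_size - 1) // block_size
    # Attention mass received by each token, averaged over heads and queries.
    received = attn.mean(dim=0).mean(dim=0)                  # (seq_len,)
    pad = n_blocks * block_size - seq_len
    received = torch.cat([received, received.new_zeros(pad)])
    block_score = received.view(n_blocks, block_size).mean(dim=1)
    keep = block_score.topk(max(1, int(n_blocks * keep_ratio))).indices
    block_mask = torch.zeros(n_blocks, dtype=torch.bool)
    block_mask[keep] = True
    token_mask = block_mask.repeat_interleave(block_size)[:seq_len]
    return hidden[token_mask], token_mask

states = torch.randn(100, 768)
attn = torch.softmax(torch.randn(12, 100, 100), dim=-1)
pruned, kept = block_skim(states, attn)
print(pruned.shape, int(kept.sum()))  # fewer tokens flow into the next layer
```

In the paper the skim decision is learned jointly with the QA objective; this sketch only shows the pruning mechanics.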

3. Hu, Zhongjian, Peng Yang, Bing Li, Yuankang Sun, and Biao Yang. "Biomedical extractive question answering based on dynamic routing and answer voting". Information Processing & Management 60, no. 4 (July 2023): 103367. http://dx.doi.org/10.1016/j.ipm.2023.103367.

4. Ouyang, Jianquan, and Mengen Fu. "Improving Machine Reading Comprehension with Multi-Task Learning and Self-Training". Mathematics 10, no. 3 (January 19, 2022): 310. http://dx.doi.org/10.3390/math10030310.

Abstract:
Machine Reading Comprehension (MRC) is an AI challenge that requires machines to determine the correct answer to a question based on a given passage. Extractive MRC requires extracting an answer span to a question from a given passage, as in the span-extraction task, whereas non-extractive MRC infers answers from the content of reference passages, covering Yes/No question answering and unanswerable questions. Due to the specificity of the two types of MRC tasks, researchers usually work on one type of task separately, but real-life applications often require models that can handle many different types of tasks in parallel. Therefore, to meet the comprehensive requirements of such applications, we construct a multi-task fusion training reading comprehension model based on the BERT pre-training model. The model uses the BERT pre-training model to obtain contextual representations, which are then shared by three downstream sub-modules for span extraction, Yes/No question answering, and unanswerable questions. Next, we fuse the outputs of the three sub-modules into a new span-extraction output and use the fused cross-entropy loss function for global training. In the training phase, since our model requires a large amount of labeled training data, which is often expensive to obtain or unavailable for many tasks, we additionally use self-training to generate pseudo-labeled training data to improve the model's accuracy and generalization performance. We evaluated our model on the SQuAD2.0 and CAIL2019 datasets. The experiments show that it can efficiently handle different tasks, achieving 83.2 EM and 86.7 F1 scores on the SQuAD2.0 dataset and 73.0 EM and 85.3 F1 scores on the CAIL2019 dataset.
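
Below is a hedged sketch of the shared-encoder, multi-head design described here: one BERT encoder feeds a span-extraction head and two sentence-level classification heads, whose losses can be summed for joint training. Head names and class layouts are illustrative assumptions, not the paper's exact architecture or its fused loss.

```python
import torch.nn as nn
from transformers import BertModel

class MultiTaskMRC(nn.Module):
    """Shared BERT encoder with three task-specific heads (illustrative)."""

    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.span_head = nn.Linear(hidden, 2)        # start/end logits per token
        self.yesno_head = nn.Linear(hidden, 3)       # yes / no / not-a-yes-no
        self.answerable_head = nn.Linear(hidden, 2)  # answerable / unanswerable

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        tokens, pooled = out.last_hidden_state, out.pooler_output
        start_logits, end_logits = self.span_head(tokens).split(1, dim=-1)
        # Joint training would sum cross-entropy losses over the three heads;
        # the paper instead fuses the sub-module outputs into one span output.
        return (start_logits.squeeze(-1), end_logits.squeeze(-1),
                self.yesno_head(pooled), self.answerable_head(pooled))
```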

5. Shinoda, Kazutoshi, Saku Sugawara, and Akiko Aizawa. "Which Shortcut Solution Do Question Answering Models Prefer to Learn?" Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13564–72. http://dx.doi.org/10.1609/aaai.v37i11.26590.

Abstract:
Question answering (QA) models for reading comprehension tend to exploit spurious correlations in training sets and thus learn shortcut solutions rather than the solutions intended by QA datasets. QA models that have learned shortcut solutions can achieve human-level performance in shortcut examples where shortcuts are valid, but these same behaviors degrade generalization potential on anti-shortcut examples where shortcuts are invalid. Various methods have been proposed to mitigate this problem, but they do not fully take the characteristics of shortcuts themselves into account. We assume that the learnability of shortcuts, i.e., how easy it is to learn a shortcut, is useful to mitigate the problem. Thus, we first examine the learnability of the representative shortcuts on extractive and multiple-choice QA datasets. Behavioral tests using biased training sets reveal that shortcuts that exploit answer positions and word-label correlations are preferentially learned for extractive and multiple-choice QA, respectively. We find that the more learnable a shortcut is, the flatter and deeper the loss landscape is around the shortcut solution in the parameter space. We also find that the availability of the preferred shortcuts tends to make the task easier to perform from an information-theoretic viewpoint. Lastly, we experimentally show that the learnability of shortcuts can be utilized to construct an effective QA training set; the more learnable a shortcut is, the smaller the proportion of anti-shortcut examples required to achieve comparable performance on shortcut and anti-shortcut examples. We claim that the learnability of shortcuts should be considered when designing mitigation methods.
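
As a toy illustration of the answer-position shortcut studied above, the snippet below partitions QA examples into shortcut examples, where the answer string occurs in the first sentence, and anti-shortcut examples, where it does not. The first-sentence heuristic and helper names are assumptions made for the example, not the authors' construction.

```python
def first_sentence(passage):
    return passage.split(". ")[0] + "."

def is_shortcut_example(example):
    # Position shortcut: the answer can be found by location alone.
    return example["answer"] in first_sentence(example["context"])

data = [
    {"context": "Paris is the capital of France. It lies on the Seine.",
     "answer": "Paris"},
    {"context": "The Seine crosses the city. Paris is its largest port.",
     "answer": "Paris"},
]
shortcut = [ex for ex in data if is_shortcut_example(ex)]
anti_shortcut = [ex for ex in data if not is_shortcut_example(ex)]
print(len(shortcut), len(anti_shortcut))  # 1 1
```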

6. Longpre, Shayne, Yi Lu, and Joachim Daiber. "MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering". Transactions of the Association for Computational Linguistics 9 (2021): 1389–406. http://dx.doi.org/10.1162/tacl_a_00433.

Abstract:
Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets. We introduce Multilingual Knowledge Questions and Answers (MKQA), an open-domain question answering evaluation set comprising 10k question-answer pairs aligned across 26 typologically diverse languages (260k question-answer pairs in total). Answers are based on a heavily curated, language-independent data representation, making results comparable across languages and independent of language-specific passages. With 26 languages, this dataset supplies the widest range of languages to date for evaluating question answering. We benchmark a variety of state-of-the-art methods and baselines for generative and extractive question answering, trained on Natural Questions, in zero-shot and translation settings. Results indicate this dataset is challenging even in English, but especially in low-resource languages.

7. Gholami, Sia, and Mehdi Noori. "You Don't Need Labeled Data for Open-Book Question Answering". Applied Sciences 12, no. 1 (December 23, 2021): 111. http://dx.doi.org/10.3390/app12010111.

Abstract:
Open-book question answering is a subset of question answering (QA) tasks where the system aims to find answers in a given set of documents (open-book) and common knowledge about a topic. This article proposes a solution for answering natural language questions from a corpus of Amazon Web Services (AWS) technical documents with no domain-specific labeled data (zero-shot). These questions have a yes–no–none answer and a text answer which can be short (a few words) or long (a few sentences). We present a two-step, retriever–extractor architecture in which a retriever finds the right documents and an extractor finds the answers in the retrieved documents. To test our solution, we introduce a new dataset for open-book QA based on real customer questions on AWS technical documentation. In this paper, we conducted experiments on several information retrieval systems and extractive language models, attempting to find the yes–no–none answers and text answers in the same pass. Our custom-built extractor model is created from a pretrained language model and fine-tuned on the Stanford Question Answering Dataset (SQuAD) and Natural Questions datasets. We were able to achieve 42% F1 and a 39% exact match score (EM) end-to-end with no domain-specific training.
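
A minimal sketch of such a two-step retriever–extractor pipeline follows, with TF-IDF retrieval standing in for the paper's information retrieval systems and a public SQuAD-tuned checkpoint standing in for their custom extractor. The documents, question, and model name are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

docs = [
    "Amazon S3 buckets are private by default; access is granted via policies.",
    "EC2 instances can be stopped and restarted without losing EBS volumes.",
    "CloudWatch collects metrics and logs from most AWS services.",
]
question = "Are S3 buckets public by default?"

# Step 1 (retriever): rank documents by TF-IDF cosine similarity.
vectorizer = TfidfVectorizer().fit(docs + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(docs))[0]
best_doc = docs[scores.argmax()]

# Step 2 (extractor): pull an answer span out of the retrieved document.
extractor = pipeline("question-answering", model="deepset/roberta-base-squad2")
print(extractor(question=question, context=best_doc))
```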

8. Li, Shasha, and Zhoujun Li. "Question-Oriented Answer Summarization via Term Hierarchical Structure". International Journal of Software Engineering and Knowledge Engineering 21, no. 06 (September 2011): 877–89. http://dx.doi.org/10.1142/s0218194011005475.

Abstract:
In the research area of community-based question answering (cQA) services such as Yahoo! Answers, reuse of the answers has attracted more and more interest. Most researchers focus on the correctness of answers and pay little attention to their completeness. In this paper, we try to address the answer completeness problem for "survey questions", for which answer completeness is crucial. We propose to generate a more complete answer from replies in cQA services through question-oriented extractive summarization based on a term hierarchical structure, which differs from traditional query-based extractive summarization. The experimental results are very promising in terms of recall, precision and conciseness.

9. Moon, Sungrim, Huan He, Heling Jia, Hongfang Liu, and Jungwei Wilfred Fan. "Extractive Clinical Question-Answering With Multianswer and Multifocus Questions: Data Set Development and Evaluation Study". JMIR AI 2 (June 20, 2023): e41818. http://dx.doi.org/10.2196/41818.

Abstract:
Background: Extractive question-answering (EQA) is a useful natural language processing (NLP) application for answering patient-specific questions by locating answers in their clinical notes. Realistic clinical EQA can yield multiple answers to a single question and multiple focus points in one question, which are lacking in existing data sets for the development of artificial intelligence solutions. Objective: This study aimed to create a data set for developing and evaluating clinical EQA systems that can handle natural multianswer and multifocus questions. Methods: We leveraged the annotated relations from the 2018 National NLP Clinical Challenges corpus to generate an EQA data set. Specifically, the 1-to-N, M-to-1, and M-to-N drug-reason relations were included to form the multianswer and multifocus question-answering entries, which represent more complex and natural challenges in addition to the basic 1-drug-1-reason cases. A baseline solution was developed and tested on the data set. Results: The derived RxWhyQA data set contains 96,939 QA entries. Among the answerable questions, 25% require multiple answers, and 2% ask about multiple drugs within one question. Frequent cues were observed around the answers in the text, and 90% of the drug and reason terms occurred within the same or an adjacent sentence. The baseline EQA solution achieved a best F1-score of 0.72 on the entire data set; on specific subsets, it achieved 0.93 on the unanswerable questions, 0.48 on single-drug versus 0.60 on multidrug questions, and 0.54 on single-answer versus 0.43 on multianswer questions. Conclusions: The RxWhyQA data set can be used to train and evaluate systems that need to handle multianswer and multifocus questions. Specifically, multianswer EQA appears to be challenging and therefore warrants more investment in research. We created and shared a clinical EQA data set with multianswer and multifocus questions that would channel future research efforts toward more realistic scenarios.

10. Siblini, Wissam, Mohamed Challal, and Charlotte Pasqual. "Efficient Open Domain Question Answering With Delayed Attention in Transformer-Based Models". International Journal of Data Warehousing and Mining 18, no. 2 (April 2022): 1–16. http://dx.doi.org/10.4018/ijdwm.298005.

Abstract:
Open Domain Question Answering (ODQA) on a large-scale corpus of documents (e.g., Wikipedia) is a key challenge in computer science. Although Transformer-based language models such as BERT have shown an ability to outperform humans at extracting answers from small pre-selected passages of text, they suffer from high complexity when the search space is much larger. The most common way to deal with this problem is to add a preliminary information retrieval step to strongly filter the corpus and keep only the relevant passages. In this article, the authors consider a more direct and complementary solution which consists of restricting the attention mechanism in Transformer-based models to allow more efficient management of computations. The resulting variants are competitive with the original models on the extractive task and, in the ODQA setting, allow a significant acceleration of predictions and sometimes even an improvement in answer quality.
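
To make the idea concrete, here is an illustrative sketch, not the authors' implementation, of a delayed attention mask: below a chosen layer depth, question and passage tokens attend only within their own segment, which keeps passage encoding independent of the question (and thus reusable across queries); full attention is enabled afterwards. The function name and depth threshold are assumptions.

```python
import torch

def delayed_attention_mask(q_len, p_len, layer, delay_layers):
    """Boolean (seq, seq) mask: True where attention is allowed."""
    n = q_len + p_len
    if layer >= delay_layers:
        return torch.ones(n, n, dtype=torch.bool)  # full cross-attention
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:q_len, :q_len] = True  # question attends within the question
    mask[q_len:, q_len:] = True  # passage attends within the passage
    return mask

# Layers 0-5: block-diagonal (no question/passage interaction); layer 6+: full.
print(delayed_attention_mask(4, 8, layer=2, delay_layers=6).int())
```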

Doctoral dissertations on the topic "Extractive Question-Answering"

1. Bergkvist, Alexander, Nils Hedberg, Sebastian Rollino, and Markus Sagen. "Surmize: An Online NLP System for Close-Domain Question-Answering and Summarization". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412247.

Abstract:
The amount of data available and consumed by people globally is growing. To reduce mental fatigue and increase the general ability to gain insight into complex texts or documents, we have developed an application to aid in this task. The application allows users to upload documents and ask domain-specific questions about them using our web application. A summarized version of each document is presented to the user, which could further facilitate their understanding of the document and guide them towards the types of questions that could be relevant to ask. Our application allows users flexibility with the types of documents that can be processed, it is publicly available, stores no user data, and uses state-of-the-art models for its summaries and answers. The result is an application that yields near human-level intuition for answering questions in certain isolated cases, such as Wikipedia and news articles, as well as some scientific texts. The application's reliability and prediction quality decline as the complexity of the subject, the number of words in the document, and grammatical inconsistency in the questions increase. These are all aspects that can be improved further if used in production.

2. Usbeck, Ricardo. "Knowledge Extraction for Hybrid Question Answering". Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-225097.

Abstract:
Since the proposal of hypertext by Tim Berners-Lee to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and still grows. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (which is the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow a holistic and unified access to information about a particular topic, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information like news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge bases. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans adapted their search behavior to current Web data by access paradigms such as keyword search so as to retrieve high-quality results. Hence, most Web users only expect Web documents in return. However, humans think and most commonly express their information needs in their natural language rather than using keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing to query data via natural language, thus reducing (1) a possible loss of precision and (2) potential loss of time while reformulating the search intention to transform it into a machine-readable way. Furthermore, QA systems enable answering natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to and the combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the Semantic Web requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as being intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure a maximal interoperability to overcome the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as depicted by current mobile applications, requires systems to deeply understand the underlying user information need. In conclusion, the natural language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full-texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.

3. Glinos, Demetrios. "Syntax-Based Concept Extraction for Question Answering". Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3565.

Abstract:
Question answering (QA) stands squarely along the path from document retrieval to text understanding. As an area of research interest, it serves as a proving ground where strategies for document processing, knowledge representation, question analysis, and answer extraction may be evaluated in real world information extraction contexts. The task is to go beyond the representation of text documents as "bags of words" or data blobs that can be scanned for keyword combinations and word collocations in the manner of internet search engines. Instead, the goal is to recognize and extract the semantic content of the text, and to organize it in a manner that supports reasoning about the concepts represented. The issue presented is how to obtain and query such a structure without either a predefined set of concepts or a predefined set of relationships among concepts. This research investigates a means for acquiring from text documents both the underlying concepts and their interrelationships. Specifically, a syntax-based formalism for representing atomic propositions that are extracted from text documents is presented, together with a method for constructing a network of concept nodes for indexing such logical forms based on the discourse entities they contain. It is shown that meaningful questions can be decomposed into Boolean combinations of question patterns using the same formalism, with free variables representing the desired answers. It is further shown that this formalism can be used for robust question answering using the concept network and WordNet synonym, hypernym, hyponym, and antonym relationships. This formalism was implemented in the Semantic Extractor (SEMEX) research tool and was tested against the factoid questions from the 2005 Text Retrieval Conference (TREC), which operated upon the AQUAINT corpus of newswire documents. After adjusting for the limitations of the tool and the document set, correct answers were found for approximately fifty percent of the questions analyzed, which compares favorably with other question answering systems.

4. Mur, Jori. "Off-line answer extraction for question answering". [S.l.: [Groningen: s.n.]; University Library Groningen] [Host], 2008. http://irs.ub.rug.nl/ppn/.

5. Konstantinova, Natalia. "Knowledge acquisition from user reviews for interactive question answering". Thesis, University of Wolverhampton, 2013. http://hdl.handle.net/2436/297401.

Abstract:
Nowadays, the effective management of information is extremely important for all spheres of our lives, and applications such as search engines and question answering systems help users to find the information that they need. However, even when assisted by these various applications, people sometimes struggle to find what they want. For example, when choosing a product, customers can be confused by the need to consider many features before they can reach a decision. Interactive question answering (IQA) systems can help customers in this process by answering questions about products and initiating a dialogue with the customers when their needs are not clearly defined. The focus of this thesis is how to design an interactive question answering system that will assist users in choosing a product they are looking for, in an optimal way, when a large number of similar products are available. Such an IQA system will be based on selecting a set of characteristics (also referred to as product features in this thesis) that describe the relevant product, and narrowing the search space. We believe that the order in which these characteristics are presented in such IQA sessions is of high importance; therefore, they need to be ranked in order to have a dialogue which selects the product in an efficient manner. The research question investigated in this thesis is whether product characteristics mentioned in user reviews are important for a person who is likely to purchase a product and can therefore be used when designing an IQA system. We focus our attention on products such as mobile phones; however, the proposed techniques can be adapted for other types of products if the data is available. Methods from natural language processing (NLP) fields such as coreference resolution, relation extraction and opinion mining are combined to produce various rankings of phone features. The research presented in this thesis employs two corpora which contain texts related to mobile phones, collected specifically for this thesis: a corpus of Wikipedia articles about mobile phones and a corpus of mobile phone reviews published on the Epinions.com website. Parts of these corpora were manually annotated with coreference relations, mobile phone features and relations between mentions of the phone and its features. The annotation is used to develop a coreference resolution module as well as a machine learning-based relation extractor. Rule-based methods for identification of coreference chains describing the phone are designed and thoroughly evaluated against the annotated gold standard. Machine learning is used to find links between mentions of the phone (identified by coreference resolution) and phone features: it determines whether a phone feature belongs to the phone mentioned in the same sentence or not. In order to find the best rankings, this thesis investigates several settings. One of the hypotheses tested here is that the relatively low results of the proposed baseline are caused by noise introduced by sentences which are not directly related to the phone and its features. To test this hypothesis, only sentences which contained mentions of the mobile phone and a phone feature linked to it were processed to produce rankings of the phone features. Selection of the relevant sentences is based on the results of coreference resolution and relation extraction. Another hypothesis is that opinionated sentences are a good source for ranking the phone features. In order to investigate this, a sentiment classification system is also employed to distinguish between features mentioned in positive and negative contexts. The detailed evaluation and error analysis of the methods proposed form an important part of this research and ensure that the results provided in this thesis are reliable.

6. Almansa, Luciana Farina. "Uma arquitetura de question-answering instanciada no domínio de doenças crônicas". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/95/95131/tde-10102016-121606/.

Abstract:
The medical record describes the health conditions of patients, helping experts to make decisions about their treatment. Biomedical scientific knowledge can improve the prevention and treatment of diseases. However, searching for relevant knowledge can be hard: it takes time, and healthcare research is constantly being updated. Many healthcare professionals also have a stressful routine, working in different hospitals or medical offices and caring for many patients per day. The goal of this project is to design a Question Answering (QA) framework to support faster and more precise searches for information on epigenetics, chronic diseases, and thyroid images. To develop the proposal, we reuse two frameworks developed previously by the research group, SisViDAS and FREDS, which are exploited to compose a document processing module; the remaining question and answer processing modules are being developed from scratch. The QASF was evaluated using a reference collection and statistical performance measures, showing a precision of around 0.7 at a recall of 0.3 when 200 articles were retrieved and analyzed. Considering that the questions submitted to the QASF are long (70 terms per question on average) and complex, these results are satisfactory. This project intends to reduce the time healthcare professionals spend searching for information of interest, since QA systems provide direct and precise answers to a question posed by the user.

7. Usbeck, Ricardo [Verfasser], Klaus-Peter [Gutachter] Fähnrich, Philipp [Gutachter] Cimiano, Axel-Cyrille [Akademischer Betreuer] Ngonga Ngomo, and Klaus-Peter [Akademischer Betreuer] Fähnrich. "Knowledge Extraction for Hybrid Question Answering / Ricardo Usbeck ; Gutachter: Klaus-Peter Fähnrich, Philipp Cimiano ; Akademische Betreuer: Axel-Cyrille Ngonga Ngomo, Klaus-Peter Fähnrich". Leipzig: Universitätsbibliothek Leipzig, 2017. http://d-nb.info/1173734775/34.

8. Krč, Martin. "Znalec encyklopedie". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236707.

Abstract:
This project focuses on a system that answers questions formulated in natural language. Firstly, the report discusses problems associated with question answering systems and some commonly employed approaches. Emphasis is laid on shallow methods, which do not require many linguistic resources. The second part describes our work on a system that answers factoid questions, utilizing Czech Wikipedia as a source of information. Answer extraction is partly based on specific features of Wikipedia and partly on pre-defined patterns. Results show that for answering simple questions, the system provides significant improvements in comparison with a standard search engine.

9. Deyab, Rodwan Bakkar. "Ontology-based information extraction from learning management systems". Master's thesis, Universidade de Évora, 2017. http://hdl.handle.net/10174/20996.

Abstract:
In this work we present a system for information extraction from Learning Management Systems. The system is ontology-based: it retrieves information according to the structure of the ontology in order to populate the ontology, and it graphically presents statistics about the ontology data. These statistics reveal latent knowledge that is difficult to see in a traditional Learning Management System. To answer questions about the ontology data, a question answering system was developed that uses Natural Language Processing to convert natural language questions into an ontology query language.

10. Ben Abacha, Asma. "Recherche de réponses précises à des questions médicales : le système de questions-réponses MEANS". PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00735612.

Abstract:
The search for precise answers to questions formulated in natural language is renewing the field of information retrieval. Much work has addressed answering factual questions in open domains, while less work has focused on answering questions in specialized domains, particularly the medical or biomedical domain. Specialized domains present several distinct conditions, such as specialized lexicons and terminologies, particular question types, domain-specific entities and relations, and the characteristics of the targeted documents. In the first part, we study methods for semantically analyzing both the questions asked by the user and the texts used to find the answers. To do so, we use hybrid methods for two main tasks: (i) medical entity recognition and (ii) semantic relation extraction. These methods combine manually constructed rules and patterns, domain knowledge, and statistical machine learning techniques using different classifiers. Experimented on different corpora, these hybrid methods mitigate the drawbacks of the two types of information extraction methods, namely the potentially limited coverage of rule-based methods and the dependence of statistical methods on annotated data. In the second part, we study the contribution of Semantic Web technologies to the portability and expressiveness of question answering systems. In our approach, we exploit Semantic Web technologies first to annotate the extracted information and then to query these annotations semantically. Finally, we present our question answering system, called MEANS, which uses NLP techniques, domain knowledge, and Semantic Web technologies to answer medical questions automatically.

Books on the topic "Extractive Question-Answering"

1. Harabagiu, Sanda, and Dan Moldovan. Question Answering. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0031.

Abstract:
Textual Question Answering (QA) identifies the answer to a question in large collections of on-line documents. By providing a small set of exact answers to questions, QA takes a step closer to information retrieval rather than document retrieval. A QA system comprises three modules: a question-processing module, a document-processing module, and an answer extraction and formulation module. Questions may be asked about any topic, in contrast with Information Extraction (IE), which identifies textual information relevant only to a predefined set of events and entities. The natural language processing (NLP) techniques used in open-domain QA systems may range from simple lexical and semantic disambiguation of question stems to complex processing that combines syntactic and semantic features of the questions with pragmatic information derived from the context of candidate answers. This article reviews current research in integrating knowledge-based NLP methods with shallow processing techniques for QA.
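
As a schematic of the three-module decomposition the chapter describes, the toy skeleton below wires question processing, document processing, and answer extraction together. Every function is a deliberately trivial placeholder, not a method from the chapter.

```python
def process_question(question):
    words = question.lower().rstrip("?").split()
    return {"keywords": [w for w in words if len(w) > 3]}

def process_documents(documents, analysis):
    # Keep only documents that mention at least one question keyword.
    return [d for d in documents
            if any(k in d.lower() for k in analysis["keywords"])]

def extract_answer(passages):
    # A real system would rank and trim candidate answer spans.
    return passages[0] if passages else None

documents = ["Mount Everest is the highest mountain above sea level.",
             "The Nile is often considered the longest river."]
analysis = process_question("What is the highest mountain?")
print(extract_answer(process_documents(documents, analysis)))
```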

2. Pazienza, Maria Teresa. Information Extraction in the Web Era: Natural Language Communication for Knowledge Acquisition and Intelligent Information Agents. Springer London, Limited, 2006.

3. Information Extraction in the Web Era: Natural Language Communication for Knowledge Acquisition and Intelligent Information Agents (Lecture Notes in Computer Science). Springer, 2003.


Book chapters on the topic "Extractive Question-Answering"

1. Jha, Raj, and V. Susheela Devi. "Extractive Question Answering Using Transformer-Based LM". In Communications in Computer and Information Science, 373–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1642-9_32.

2. Ferreira, Bruno Carlos Luís, Hugo Gonçalo Oliveira, Hugo Amaro, Ângela Laranjeiro, and Catarina Silva. "Evaluating the Extraction of Toxicological Properties with Extractive Question Answering". In Engineering Applications of Neural Networks, 599–606. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-34204-2_48.

3. Kabber, Anusha, V. M. Dhruthi, Raghav Pandit, and S. Natarajan. "Extractive Long-Form Question Answering for Annual Reports Using BERT". In Proceedings of Emerging Trends and Technologies on Intelligent Systems, 295–304. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4182-5_23.

4. Du, Mingzhe, Mouad Hakam, See-Kiong Ng, and Stéphane Bressan. "Constituency-Informed and Constituency-Constrained Extractive Question Answering with Heterogeneous Graph Transformer". In Transactions on Large-Scale Data- and Knowledge-Centered Systems LIII, 90–106. Berlin, Heidelberg: Springer Berlin Heidelberg, 2023. http://dx.doi.org/10.1007/978-3-662-66863-4_4.

5. Sang, Erik Tjong Kim, Katja Hofmann, and Maarten de Rijke. "Extraction of Hypernymy Information from Text". In Interactive Multi-modal Question-Answering, 223–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17525-1_10.

6. Bouma, Gosse, Ismail Fahmi, and Jori Mur. "Relation Extraction for Open and Closed Domain Question Answering". In Interactive Multi-modal Question-Answering, 171–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17525-1_8.

7. van der Plas, Lonneke, Jörg Tiedemann, and Ismail Fahmi. "Automatic Extraction of Medical Term Variants from Multilingual Parallel Translations". In Interactive Multi-modal Question-Answering, 149–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17525-1_7.

8. Srihari, Rohini K., Wei Li, and Xiaoge Li. "Question Answering Supported by Multiple Levels of Information Extraction". In Advances in Open Domain Question Answering, 349–82. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-4746-6_11.

9. Lehmann, Jens, Tim Furche, Giovanni Grasso, Axel-Cyrille Ngonga Ngomo, Christian Schallhart, Andrew Sellers, Christina Unger, et al. "deqa: Deep Web Extraction for Question Answering". In The Semantic Web – ISWC 2012, 131–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35173-0_9.

10. Kontos, J., and I. Malagardi. "Question Answering and Information Extraction from Texts". In Advances in Intelligent Systems, 121–30. Dordrecht: Springer Netherlands, 1999. http://dx.doi.org/10.1007/978-94-011-4840-5_11.


Conference papers on the topic "Extractive Question-Answering"

1. Wang, Luqi, Kaiwen Zheng, Liyin Qian, and Sheng Li. "A Survey of Extractive Question Answering". In 2022 International Conference on High Performance Big Data and Intelligent Systems (HDIS). IEEE, 2022. http://dx.doi.org/10.1109/hdis56859.2022.9991478.

2. Shymbayev, Magzhan, and Yermek Alimzhanov. "Extractive Question Answering for Kazakh Language". In 2023 IEEE International Conference on Smart Information Systems and Technologies (SIST). IEEE, 2023. http://dx.doi.org/10.1109/sist58284.2023.10223508.

3. Fajcik, Martin, Josef Jon, and Pavel Smrz. "Rethinking the Objectives of Extractive Question Answering". In Proceedings of the 3rd Workshop on Machine Reading for Question Answering. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.mrqa-1.2.

4. Arumae, Kristjan, and Fei Liu. "Guiding Extractive Summarization with Question-Answering Rewards". In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/n19-1264.

5. Lewis, Patrick, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. "MLQA: Evaluating Cross-lingual Extractive Question Answering". In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.653.

6. Prasad, Archiki, Trung Bui, Seunghyun Yoon, Hanieh Deilamsalehy, Franck Dernoncourt, and Mohit Bansal. "MeetingQA: Extractive Question-Answering on Meeting Transcripts". In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-long.837.

7. Rakotoson, Loïc, Charles Letaillieur, Sylvain Massip, and Fréjus A. A. Laleye. "Extractive-Boolean Question Answering for Scientific Fact Checking". In ICMR '22: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3512732.3533580.

8. Varanasi, Stalin, Saadullah Amin, and Guenter Neumann. "AutoEQA: Auto-Encoding Questions for Extractive Question Answering". In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.403.

9. Frermann, Lea. "Extractive NarrativeQA with Heuristic Pre-Training". In Proceedings of the 2nd Workshop on Machine Reading for Question Answering. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-5823.

10. Demner-Fushman, Dina, and Jimmy Lin. "Answer extraction, semantic clustering, and extractive summarization for clinical question answering". In Proceedings of the 21st International Conference on Computational Linguistics. Morristown, NJ, USA: Association for Computational Linguistics, 2006. http://dx.doi.org/10.3115/1220175.1220281.


Reports on the topic "Extractive Question-Answering"

1. Srihari, Rohini, and Wei Li. Information Extraction Supported Question Answering. Fort Belvoir, VA: Defense Technical Information Center, October 1999. http://dx.doi.org/10.21236/ada460042.
