Journal articles on the topic 'Natural language processing, question answering, software engineering'

To see the other types of publications on this topic, follow the link: Natural language processing, question answering, software engineering.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Natural language processing, question answering, software engineering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Nabavi, Armin, Issa Ramaji, Naimeh Sadeghi, and Anne Anderson. "Leveraging Natural Language Processing for Automated Information Inquiry from Building Information Models." Journal of Information Technology in Construction 28 (April 4, 2023): 266–85. http://dx.doi.org/10.36680/j.itcon.2023.013.

Abstract:
Building Information Modeling (BIM) is a trending technology in the building industry that can increase efficiency throughout construction. Various practical information can be obtained from BIM models during the project life cycle. However, accessing this information could be tedious and time-consuming for non-technical users, who might have limited or no knowledge of working with BIM software. Automating the information inquiry process can potentially address this need. This research proposes an Artificial Intelligence-based framework to facilitate accessing information in BIM models. First, the framework uses a support vector machine (SVM) algorithm to determine the user's question type. Simultaneously, it employs natural language processing (NLP) for syntactic analysis to find the main keywords of the user's question. Then it utilizes an ontology database such as IfcOWL and an NLP method (latent semantic analysis (LSA)) for a semantic understanding of the question. The keywords are expanded through the semantic relationship in the ontologies, and eventually, a final query is formed based on keywords and their expanded concepts. A Navisworks API is developed that employs the identified question type and its parameters to extract the results from BIM and display them to the users. The proposed platform also includes a speech recognition module for a more user-friendly interface. The results show that the speed of answering the questions on the platform is up to 5 times faster than the manual use by experts while maintaining high accuracy.
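As an illustration of the question-type classification stage described in this abstract, here is a minimal sketch using scikit-learn; the example BIM questions, labels, and pipeline settings are invented for illustration and are not the authors' data or configuration.

```python
# Minimal sketch of an SVM question-type classifier (the framework's first stage).
# Questions, labels, and settings are illustrative assumptions, not the paper's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical BIM-related questions paired with coarse question types.
questions = [
    "What is the fire rating of the walls on level 2?",
    "How many doors are on the third floor?",
    "Where is the main electrical panel located?",
    "List the materials used for the roof slab.",
]
labels = ["property", "count", "location", "property"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)

# With enough training data this should favour the 'count' class.
print(clf.predict(["How many windows are on the south facade?"]))
```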
2

Goar, Vishal, Nagendra Singh Yadav, and Pallavi Singh Yadav. "Conversational AI for Natural Language Processing: An Review of ChatGPT." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3s (March 11, 2023): 109–17. http://dx.doi.org/10.17762/ijritcc.v11i3s.6161.

Abstract:
ChatGPT is a conversational artificial intelligence model developed by OpenAI and released in late 2022. It employs a transformer-based neural network to produce human-like responses in real time, allowing for natural language conversations with a machine. ChatGPT is trained on huge quantities of data gathered from the internet, making it knowledgeable across an extensive span of topics, from news and entertainment to politics and sports. This allows it to generate contextually relevant responses to questions and statements, making the conversation seem more lifelike. The model can be used in various applications, including customer service, personal assistants, and virtual assistants. ChatGPT has also shown promising results in generating creative content, such as jokes and poetry, showcasing its versatility and potential for future applications. This paper provides a comprehensive review of the existing literature on ChatGPT, highlighting its key advantages, such as improved accuracy and flexibility compared to traditional NLP tools, as well as its limitations and the need for further research to address potential ethical concerns. The review also discusses the potential for ChatGPT to be used in NLP applications, including question answering and dialogue generation, and the need for further research and development in these areas.
3

Mima, Hideki, Susumu Ota, and Koji Nagatsuna. "Ontology-based query processing for understanding intentions of indirect speech acts in natural-language question answering." International Journal of Computer Applications in Technology 35, no. 2/3/4 (2009): 271. http://dx.doi.org/10.1504/ijcat.2009.026603.

4

Moholkar, Kavita, and S. H. Patil. "Lioness Adapted GWO-Based Deep Belief Network Enabled with Multiple Features for a Novel Question Answering System." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 30, no. 01 (February 2022): 93–114. http://dx.doi.org/10.1142/s0218488522500052.

Abstract:
Recently, research on Question Answering (QA) systems has attracted increasing attention with the growth of data and advances in machine learning. Answer selection is a significant task for enhancing automatic QA systems; however, the major complexity lies in the design of contextual factors and semantic matching. Motivation: Question Answering is a specialized form of Information Retrieval that seeks knowledge. We are interested not merely in retrieving relevant pages but in obtaining specific answers to queries. Question Answering is itself the intersection of Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation, Logic and Inference, and Semantic Search. Contribution: Feature extraction plays a major role in accurate classification, where learned features are extracted to enhance the capability of sequence learning. An optimized Deep Belief Network model is adopted for a precise question answering system that can handle both objective and subjective questions. A new hybrid optimization algorithm, Lioness Adapted GWO (LA-GWO), is introduced, which focuses on high reliability and convergence rate. This paper formulates a novel QA system whose process starts with word embedding. From the embedded results, features are extracted, and classification is then carried out using the hybrid-optimization-enabled Deep Belief Network (DBN). Specifically, the hidden neurons in the DBN are optimally tuned using the new LA-GWO algorithm, a hybridization of the Lion Algorithm (LA) and Grey Wolf Optimization (GWO) models. Finally, the performance of the proposed work is compared with other conventional methods with respect to accuracy, sensitivity, specificity, and precision.
5

Church, Kenneth Ward, Zeyu Chen, and Yanjun Ma. "Emerging trends: A gentle introduction to fine-tuning." Natural Language Engineering 27, no. 6 (October 26, 2021): 763–78. http://dx.doi.org/10.1017/s1351324921000322.

Abstract:
The previous Emerging Trends article (Church et al., 2021. Natural Language Engineering 27(5), 631–645) introduced deep nets to poets. Poets is an imperfect metaphor, intended as a gesture toward inclusion. The future for deep nets will benefit by reaching out to a broad audience of potential users, including people with little or no programming skills, and little interest in training models. That paper focused on inference, the use of pre-trained models, as is, without fine-tuning. The goal of this paper is to make fine-tuning more accessible to a broader audience. Since fine-tuning is more challenging than inference, the examples in this paper will require modest programming skills, as well as access to a GPU. Fine-tuning starts with a general purpose base (foundation) model and uses a small training set of labeled data to produce a model for a specific downstream application. There are many examples of fine-tuning in natural language processing (question answering (SQuAD) and the GLUE benchmark), as well as vision and speech.
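Since the article is explicitly about making fine-tuning accessible, a minimal fine-tuning sketch follows, assuming the Hugging Face transformers and datasets libraries; the checkpoint, GLUE task, subset size, and hyperparameters are placeholder choices rather than the article's own examples.

```python
# Minimal fine-tuning sketch: adapt a pre-trained base model to a downstream task
# (GLUE/SST-2) with a small labelled set. All hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="sst2-finetune", num_train_epochs=1,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"].select(range(2000)),  # small labelled subset
                  eval_dataset=encoded["validation"])
trainer.train()
```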
6

Deshmukh, Prof Anushree, Smit Shah, Heena Puthran, and Naisargi Shah. "Virtual Shopping Assistant for Online Fashion Store." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 110–17. http://dx.doi.org/10.22214/ijraset.2022.42099.

Abstract:
Chatbots and conversational interfaces offer a new way for individuals to interact with computer systems. Historically, getting a question answered by a software package involved using a search engine or filling out a form. The technology at the core of the rise of the chatbot is NLP, i.e., Natural Language Processing. Sequence-to-sequence (often abbreviated to seq2seq) models are a specific type of Recurrent Neural Network architecture commonly used (but not restricted) to solve complex language problems such as machine translation, question answering, building chatbots, and text summarization. Recent advances in machine learning have greatly improved the accuracy and effectiveness of NLP, making chatbots a viable choice for many organizations in areas such as e-commerce, customer service, conversational apps, social media, sales/marketing/branding, voice modules, the travel industry, medicine, hospitality, and human resources. An NLP-based chatbot is a computer program or artificial intelligence that communicates with a customer through text or voice. This improvement in NLP is driving a great deal of additional research, which should lead to continued growth in the effectiveness of chatbots in the years to come. Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g., differentiable or subdifferentiable). Chatbots can also prove beneficial by economically offering 24/7 service, improving customer satisfaction, reaching a younger demographic, reducing costs, increasing revenue, and much more. Keywords: Chatbots, Natural Language Processing (NLP), Stochastic Gradient Descent (SGD), Sequential Model, Machine Learning
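As the abstract defines stochastic gradient descent, a tiny illustrative sketch of the update rule on a toy least-squares problem follows; the data and learning rate are assumptions for illustration only.

```python
# Minimal illustration of SGD: iteratively update parameters against the gradient
# of a differentiable objective, one randomly ordered example at a time.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)
lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):          # one example at a time = "stochastic"
        grad = 2 * (X[i] @ w - y[i]) * X[i]    # gradient of the squared error on sample i
        w -= lr * grad

print(np.round(w, 2))   # should land close to [ 1.5, -2.0, 0.5]
```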
7

Nguyen, Bao-An, and Don-Lin Yang. "A semi-automatic approach to construct Vietnamese ontology from online text." International Review of Research in Open and Distributed Learning 13, no. 5 (November 15, 2012): 148. http://dx.doi.org/10.19173/irrodl.v13i5.1250.

Abstract:
An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with much needed resources. In such systems, ontology construction is one of the most important phases. Since there are abundant documents on the Internet, useful learning materials can be acquired openly with the use of an ontology. However, due to the lack of system support for ontology construction, it is difficult to construct self-instructional materials for Vietnamese people. In general, the cost of manual acquisition of ontologies from domain documents and expert knowledge is too high. Therefore, we present a support system for Vietnamese ontology construction using pattern-based mechanisms to discover Vietnamese concepts and conceptual relations from Vietnamese text documents. In this system, we use the combination of statistics-based, data mining, and Vietnamese natural language processing methods to develop concept and conceptual relation extraction algorithms to discover knowledge from Vietnamese text documents. From the experiments, we show that our approach provides a feasible solution to build Vietnamese ontologies used for supporting systems in education.
8

A, Hlybovets, and Tsaruk A. "Software architecture of the question-answering subsystem with elements of self-learning." Artificial Intelligence 26, jai2021.26(2) (December 1, 2021): 88–95. http://dx.doi.org/10.15407/jai2021.02.088.

Abstract:
Within the framework of this paper, software systems of the question-answering type and their basic architectures are analyzed. With the development of machine learning technologies, the creation of natural language processing (NLP) engines, and the rising popularity of virtual personal assistant programs that use speech synthesis (text-to-speech), there is a growing need for question-answering systems that can provide personalized answers to users' questions. All modern cloud providers have proposed frameworks for organizing question-answering systems, but personalized dialogue remains a problem. Personalization is very important: it places additional demands on a question-answering system's ability to take this information into account while processing users' questions. Traditionally, a question-answering system (QAS) is developed in the form of an application that contains a knowledge base and a user interface, which provides a user with answers to questions, and a means of interaction with an expert. In this article, we analyze modern approaches to architecture development and try to build a system from building blocks that already exist on the market. The main criteria for the NLP modules were support of the Ukrainian language, natural language understanding, automatic detection of entities (attributes), the ability to construct a dialogue flow, quality and completeness of documentation, API capabilities and integration with external systems, and the possibility of integrating external knowledge bases. Based on this analysis, the article proposes a detailed architecture for a question-answering subsystem with elements of self-learning for the Ukrainian language, together with a detailed description of the system's main semantic components (architecture components).
9

Ali, Miss Aliya Anam Shoukat. "AI-Natural Language Processing (NLP)." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 10, 2021): 135–40. http://dx.doi.org/10.22214/ijraset.2021.37293.

Abstract:
Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that allows machines to understand human language. Its goal is to build systems that can make sense of text and automatically perform tasks such as translation, spell checking, or topic classification. NLP has recently gained much attention for representing and analysing human language computationally. Its applications span various fields such as computational linguistics, email spam detection, information extraction, summarization, medicine, and question answering. The goal of Natural Language Processing is to design and build software systems that can analyze, understand, and generate the languages that humans use naturally, so that eventually you can address your computer as if you were addressing another person. As one of the oldest areas of research in machine learning, it is employed in major fields such as artificial intelligence, speech recognition, and text processing. Natural language processing has brought major breakthroughs in the field of computation and AI.
10

Dong, Wei Jun, and Guo Hua Geng. "Research and Implementation of Intelligent Question Answering System in MOOC." Applied Mechanics and Materials 678 (October 2014): 639–43. http://dx.doi.org/10.4028/www.scientific.net/amm.678.639.

Abstract:
Massive Open Online Courses (MOOCs), which are based on Open Educational Resources, may be the most effective route to large-scale quality education, turning passive learning into active learning. After analyzing the status and shortcomings of existing intelligent answering systems, we propose and design an intelligent question answering system based on an agent model. The system uses software agents to implement and improve the MOOC system's intelligent answering performance, has natural language processing capability, and offers good versatility. It can provide an efficient online question-answering environment for thousands of learners and can effectively promote students' autonomous learning and self-development.
11

Dong, Wei Jun, and Guo Hua Geng. "Research and Implementation of Intelligent Question Answering System in MOOC." Applied Mechanics and Materials 678 (October 2014): 684–88. http://dx.doi.org/10.4028/www.scientific.net/amm.678.684.

Abstract:
Massive Open Online Courses (MOOCs), which are based on Open Educational Resources, may be the most effective route to large-scale quality education, turning passive learning into active learning. After analyzing the status and shortcomings of existing intelligent answering systems, we propose and design an intelligent question answering system based on an agent model. The system uses software agents to implement and improve the MOOC system's intelligent answering performance, has natural language processing capability, and offers good versatility. It can provide an efficient online question-answering environment for thousands of learners and can effectively promote students' autonomous learning and self-development.
12

Vasuki, M. "The Role of Machine Learning in Natural Language Understanding." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (July 31, 2023): 280–84. http://dx.doi.org/10.22214/ijraset.2023.54611.

Abstract:
This paper examines in depth the algorithms used in Natural Language Understanding (NLU) with Machine Learning (ML) in order to develop natural language applications such as sentiment analysis, text classification, and question answering. The paper thoroughly investigates the diverse applications, inherent challenges, and promising future prospects of machine learning in NLU, providing valuable insights into its revolutionary influence on language processing and comprehension.
13

Zheng, Weiguo, Hong Cheng, Jeffrey Xu Yu, Lei Zou, and Kangfei Zhao. "Interactive natural language question answering over knowledge graphs." Information Sciences 481 (May 2019): 141–59. http://dx.doi.org/10.1016/j.ins.2018.12.032.

14

Guarasci, Raffaele, Giuseppe De Pietro, and Massimo Esposito. "Quantum Natural Language Processing: Challenges and Opportunities." Applied Sciences 12, no. 11 (June 2, 2022): 5651. http://dx.doi.org/10.3390/app12115651.

Abstract:
The meeting between Natural Language Processing (NLP) and Quantum Computing has been very successful in recent years, leading to the development of several approaches of the so-called Quantum Natural Language Processing (QNLP). This is a hybrid field in which the potential of quantum mechanics is exploited and applied to critical aspects of language processing, involving different NLP tasks. Approaches developed so far span from those that demonstrate the quantum advantage only at the theoretical level to the ones implementing algorithms on quantum hardware. This paper aims to list the approaches developed so far, categorizing them by type, i.e., theoretical work and those implemented on classical or quantum hardware; by task, i.e., general purpose such as syntax-semantic representation or specific NLP tasks, like sentiment analysis or question answering; and by the resource used in the evaluation phase, i.e., whether a benchmark dataset or a custom one has been used. The advantages offered by QNLP are discussed, both in terms of performance and methodology, and some considerations are given about the possible use of QNLP approaches in place of state-of-the-art deep learning-based ones.
15

MONZ, CHRISTOF. "Machine learning for query formulation in question answering." Natural Language Engineering 17, no. 4 (January 5, 2011): 425–54. http://dx.doi.org/10.1017/s1351324910000276.

Abstract:
Research on question answering dates back to the 1960s but has more recently been revisited as part of TREC's evaluation campaigns, where question answering is addressed as a subarea of information retrieval that focuses on specific answers to a user's information need. Whereas document retrieval systems aim to return the documents that are most relevant to a user's query, question answering systems aim to return actual answers to a user's question. Despite this difference, question answering systems rely on information retrieval components to identify documents that contain an answer to a user's question. The computationally more expensive answer extraction methods are then applied only to this subset of documents that are likely to contain an answer. As information retrieval methods are used to filter the documents in the collection, the performance of this component is critical, as documents that are not retrieved are not analyzed by the answer extraction component. The formulation of queries that are used for retrieving those documents has a strong impact on the effectiveness of the retrieval component. In this paper, we focus on predicting the importance of terms from the original question. We use model tree machine learning techniques in order to assign weights to query terms according to their usefulness for identifying documents that contain an answer. Term weights are learned by inspecting a large number of query formulation variations and their respective accuracy in identifying documents containing an answer. Several linguistic features are used for building the models, including part-of-speech tags, degree of connectivity in the dependency parse tree of the question, and ontological information. All of these features are extracted automatically by using several natural language processing tools. Incorporating the learned weights into a state-of-the-art retrieval system results in statistically significant improvements in identifying answer-bearing documents.
16

Gaidamavičius, Dainius, and Tomas Iešmantas. "Deep learning method for visual question answering in the digital radiology domain." Mathematical Models in Engineering 8, no. 2 (June 26, 2022): 58–71. http://dx.doi.org/10.21595/mme.2022.22737.

Abstract:
Computer vision applications in the medical field are widespread, and language processing models have gained more and more interest as well. However, these two different tasks often go separately: disease or pathology detection is often based purely on image models, while for example patient notes are treated only from the natural language processing perspective. However, there is an important task between: given a medical image, describe what is inside it – organs, modality, pathology, location, and stage of the pathology, etc. This type of area falls into the so-called VQA area – Visual Question Answering. In this work, we concentrate on blending deep features extracted from image and language models into a single representation. A new method of feature fusion is proposed and shown to be superior in terms of accuracy compared to summation and concatenation methods. For the radiology image dataset VQA-2019 Med [1], the new method achieves 84.8 % compared to 82.2 % for other considered feature fusion methods. In addition to increased accuracy, the proposed model does not become more difficult to train as the number of unknown parameters does not increase, as compared with the simple addition operation for fusing features.
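A minimal sketch of the two baseline fusion strategies the article compares against (summation and concatenation) is shown below; the dimensions and classifier head are assumptions, and the paper's own fusion operator is not reproduced.

```python
# Sketch of baseline feature fusion for VQA: blend image and text features either
# by element-wise summation or by concatenation. Dimensions are illustrative.
import torch
import torch.nn as nn

class SimpleFusionVQA(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, hidden=512, n_answers=100, mode="concat"):
        super().__init__()
        self.mode = mode
        self.img_proj = nn.Linear(img_dim, hidden)   # map both modalities to a shared size
        self.txt_proj = nn.Linear(txt_dim, hidden)
        fused_dim = 2 * hidden if mode == "concat" else hidden
        self.classifier = nn.Linear(fused_dim, n_answers)

    def forward(self, img_feat, txt_feat):
        v = torch.relu(self.img_proj(img_feat))
        q = torch.relu(self.txt_proj(txt_feat))
        fused = torch.cat([v, q], dim=-1) if self.mode == "concat" else v + q
        return self.classifier(fused)

model = SimpleFusionVQA(mode="sum")
logits = model(torch.randn(4, 512), torch.randn(4, 512))   # batch of 4 image/question pairs
print(logits.shape)   # torch.Size([4, 100])
```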
17

Zhou, Shuohua, and Yanping Zhang. "DATLMedQA: A Data Augmentation and Transfer Learning Based Solution for Medical Question Answering." Applied Sciences 11, no. 23 (November 26, 2021): 11251. http://dx.doi.org/10.3390/app112311251.

Abstract:
With the outbreak of COVID-19 that has prompted an increased focus on self-care, more and more people hope to obtain disease knowledge from the Internet. In response to this demand, medical question answering and question generation tasks have become an important part of natural language processing (NLP). However, there are limited samples of medical questions and answers, and the question generation systems cannot fully meet the needs of non-professionals for medical questions. In this research, we propose a BERT medical pretraining model, using GPT-2 for question augmentation and T5-Small for topic extraction, calculating the cosine similarity of the extracted topic and using XGBoost for prediction. With augmentation using GPT-2, the prediction accuracy of our model outperforms the state-of-the-art (SOTA) model performance. Our experiment results demonstrate the outstanding performance of our model in medical question answering and question generation tasks, and its great potential to solve other biomedical question answering challenges.
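A small sketch of the cosine-similarity step described above follows; the toy vectors stand in for whatever encoder produces the topic embeddings, and the downstream XGBoost classifier mentioned in the abstract is only referenced in a comment.

```python
# Minimal sketch of the topic-similarity step: compare an extracted topic against
# candidate topics in embedding space. The vectors are placeholders for illustration.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question_topic = np.array([0.2, 0.7, 0.1])           # embedding of the extracted topic
candidate_topics = {
    "covid symptoms": np.array([0.25, 0.65, 0.05]),
    "drug dosage":    np.array([0.9, 0.1, 0.3]),
}
scores = {name: cosine_similarity(question_topic, vec) for name, vec in candidate_topics.items()}
# Similarity scores like these could feed a downstream classifier (e.g., XGBoost), as in the paper.
print(max(scores, key=scores.get), scores)
```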
18

Su, Lei, Jiazhi Guo, Liping Wu, and Han Deng. "BamnetTL: Bidirectional Attention Memory Network with Transfer Learning for Question Answering Matching." International Journal of Intelligent Systems 2023 (August 3, 2023): 1–11. http://dx.doi.org/10.1155/2023/7434058.

Abstract:
In KBQA (knowledge base question answering), questions are processed using NLP (natural language processing), and knowledge base technology is used to generate the corresponding answers. KBQA is one of the most challenging tasks in the field of NLP. Q&A (question and answer) matching is an important part of knowledge base QA (question answering), in which the correct answer is selected from candidate answers. At present, Q&A matching task faces the problem of lacking training data in new fields, which leads to poor performance and low efficiency of the question answering system. The paper puts forward a KBQA Q&A matching model for deep feature transfer based on a bidirectional attention memory network, BamnetTL. It uses biattention to collect information from the knowledge base and question sentences in both directions in order to improve the accuracy of Q&A matching and transfers knowledge from different fields through a deep dynamic adaptation network. BamnetTL improves the accuracy of Q&A matching in the target domain by transferring the knowledge in the source domain with more training resources to the target domain with fewer training resources. The experimental results show that the proposed method is effective.
19

BELZ, A., T. L. BERG, and L. YU. "From image to language and back again." Natural Language Engineering 24, no. 3 (April 23, 2018): 325–62. http://dx.doi.org/10.1017/s1351324918000086.

Abstract:
Work in computer vision and natural language processing involving images and text has been experiencing explosive growth over the past decade, with a particular boost coming from the neural network revolution. The present volume brings together five research articles from several different corners of the area: multilingual multimodal image description (Frank et al.), multimodal machine translation (Madhyastha et al., Frank et al.), image caption generation (Madhyastha et al., Tanti et al.), visual scene understanding (Silberer et al.), and multimodal learning of high-level attributes (Sorodoc et al.). In this article, we touch upon all of these topics as we review work involving images and text under the three main headings of image description (Section 2), visually grounded referring expression generation (REG) and comprehension (Section 3), and visual question answering (VQA) (Section 4).
20

Sathish Dhanasegar, Sathish. "QUESTION ANSWERING SYSTEM FOR HOSPITALITY DOMAIN USING TRANSFORMER-BASED LANGUAGE MODELS." International Research Journal of Computer Science 9, no. 5 (May 31, 2022): 110–34. http://dx.doi.org/10.26562/irjcs.2022.v0905.003.

Abstract:
Recent research demonstrates significant success on a wide range of Natural Language Processing (NLP) tasks by utilizing Transformer architectures. Question answering (QA) is an important NLP task. QA systems enable users to ask a question in natural language and receive an answer accordingly. Most questions in the hospitality industry are content-based, with the expected response being accurate data rather than "yes" or "no". Therefore, the system must understand the semantics of the questions and return relevant answers. Despite several advancements in transformer-based models for QA, we are interested in evaluating how they perform on unlabeled data using a pre-trained model, which can also be fine-tuned. This project aims to develop a question-answering system for the hospitality domain, in which the text contains hospitality content and the user can ask questions about it. We use an attention mechanism to train a span-based model that predicts the positions of the start and end tokens in a paragraph. Using the model, users can type their questions directly into an interactive user interface and receive the response. The dataset for this study is created from response templates of an existing dialogue system and is formatted using the Stanford Question Answering Dataset (SQuAD 2.0) structure, which is widely used for QA models. In Phase 1, we evaluate the pre-trained QA models BERT, RoBERTa, and DistilBERT to predict answers and measure the results using Exact Match (EM) and ROUGE-L F1-Score. In Phase 2 of the project, we fine-tune the QA models and their hyper-parameters by training on hospitality datasets and compare the results. The fine-tuned RoBERTa model achieved a maximum ROUGE-L F1-Score of 71.39 and an EM of 52.17, a relative increase of 4% in F1-Score and 8.7% in EM compared to the pre-trained model. The results of this project will be used to improve the efficiency of the dialogue system in the hospitality industry.
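A minimal sketch of extractive span prediction with a pre-trained QA model, in the spirit of the project's Phase 1 evaluation, is shown below; the checkpoint and the hospitality passage are illustrative assumptions, and the Exact Match helper mirrors the metric named in the abstract.

```python
# Minimal extractive QA sketch with a pre-trained SQuAD model (illustrative checkpoint).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("Check-in starts at 3 pm and check-out is at 11 am. "
           "Breakfast is served daily from 7 am to 10 am in the lobby restaurant.")
result = qa(question="When is breakfast served?", context=context)
print(result["answer"], result["score"])

# Exact Match (EM): 1 if the normalised prediction equals the gold answer, else 0.
def exact_match(prediction, gold):
    norm = lambda s: " ".join(s.lower().strip().split())
    return int(norm(prediction) == norm(gold))

print(exact_match(result["answer"], "from 7 am to 10 am"))
```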
21

Boukhers, Zeyd, Timo Hartmann, and Jan Jürjens. "COIN: Counterfactual Image Generation for Visual Question Answering Interpretation." Sensors 22, no. 6 (March 14, 2022): 2245. http://dx.doi.org/10.3390/s22062245.

Abstract:
Due to the significant advancement of Natural Language Processing and Computer Vision-based models, Visual Question Answering (VQA) systems are becoming more intelligent and advanced. However, they are still error-prone when dealing with relatively complex questions. Therefore, it is important to understand the behaviour of VQA models before adopting their results. In this paper, we introduce an interpretability approach for VQA models by generating counterfactual images. Specifically, the generated image is supposed to have the minimal possible change to the original image while leading the VQA model to give a different answer. In addition, our approach ensures that the generated image is realistic. Since quantitative metrics cannot be employed to evaluate the interpretability of the model, we carried out a user study to assess different aspects of our approach. In addition to interpreting the results of VQA models on single images, the obtained results and the discussion provide an extensive explanation of VQA models' behaviour.
22

Yeh, Jui-Feng, Yu-Jui Huang, and Kao-Pin Huang. "Ontology based Baysian network for clinical specialty supporting in interactive question answering systems." Engineering Computations 34, no. 7 (October 2, 2017): 2435–47. http://dx.doi.org/10.1108/ec-03-2017-0073.

Abstract:
Purpose: This study aims to provide an ontology-based Bayesian network for clinical specialty support. As a knowledge base, an ontology plays an essential role in domain applications, especially in expert systems. Interactive question answering systems are suitable for personal domain consulting and are recommended for real-time usage. Clinical specialty support for dispatching patients can help hospitals locate the desired treatment departments for individuals, relevant to their syndromes and diseases, efficiently and effectively. By referring to interactive question answering systems, individuals can learn how to avoid wasting time and medical resources according to recommendations from medical ontology-based systems. Design/methodology/approach: This work presents ontology-based clinical specialty support using an interactive question answering system to achieve this aim. The ontology incorporates close temporal associations between words in the input query to represent word co-occurrence relationships in concept space. The patterns defined in the lexicon chain mechanism are further extracted from the query words to infer related concepts for treatment departments to retrieve information. Findings: The precision and recall rates are considered as the criteria for model optimization. Finally, the inference-based interactive question answering system using a natural language interface is adopted for clinical specialty support and shows its superiority in information retrieval over traditional approaches. Originality/value: From the observed experimental results, we find the proposed method useful in practice, especially for treatment department decision support as measured by precision and recall rates. The interactive interface using natural language dialogue attracts users' attention and obtains a good score on the mean opinion score measure.
23

Guo, Jia. "Deep learning approach to text analysis for human emotion detection from big data." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 113–26. http://dx.doi.org/10.1515/jisys-2022-0001.

Abstract:
Emotion recognition has arisen as an essential field of study that can expose a variety of valuable inputs. Emotion can be expressed in several observable ways, such as speech, facial expressions, written text, and gestures. Emotion recognition in a text document is fundamentally a content-based classification issue, combining notions from natural language processing (NLP) and deep learning. Hence, in this study, deep learning assisted semantic text analysis (DLSTA) has been proposed for human emotion detection using big data. Emotion detection from textual sources can be done utilizing notions of Natural Language Processing. Word embeddings are extensively utilized for several NLP tasks, such as machine translation, sentiment analysis, and question answering. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The numerical outcomes demonstrate that the suggested method achieves a markedly superior human emotion detection rate of 97.22% and a classification accuracy of 98.02% compared with different state-of-the-art methods, and can be enhanced by other emotional word embeddings.
24

Xiao, Yuliang, Lijuan Zhang, Jie Huang, Lei Zhang, and Jian Wan. "An Information Retrieval-Based Joint System for Complex Chinese Knowledge Graph Question Answering." Electronics 11, no. 19 (October 7, 2022): 3214. http://dx.doi.org/10.3390/electronics11193214.

Abstract:
Knowledge graph-based question answering is an intelligent approach to deducing the answer to a natural language question from structured knowledge graph information. As one of the mainstream knowledge graph-based question answering approaches, information retrieval-based methods infer the correct answer by constructing and ranking candidate paths, which achieve excellent performance in simple questions but struggle to handle complex questions due to rich entity information and diverse relations. In this paper, we construct a joint system with three subsystems based on the information retrieval methods, where candidate paths can be efficiently generated and ranked, and a new text-matching method is introduced to capture the semantic correlation between questions and candidate paths. Results of the experiment conducted on the China Conference on Knowledge Graph and Semantic Computing 2019 Chinese Knowledge Base Question Answering dataset verify the superiority and efficiency of our approach.
25

Zhu, He, Ren Togo, Takahiro Ogawa, and Miki Haseyama. "Multimodal Natural Language Explanation Generation for Visual Question Answering Based on Multiple Reference Data." Electronics 12, no. 10 (May 10, 2023): 2183. http://dx.doi.org/10.3390/electronics12102183.

Abstract:
As deep learning research continues to advance, interpretability is becoming as important as model performance. Conducting interpretability studies to understand the decision-making processes of deep learning models can improve performance and provide valuable insights for humans. The interpretability of visual question answering (VQA), a crucial task for human–computer interaction, has garnered the attention of researchers due to its wide range of applications. The generation of natural language explanations for VQA that humans can better understand has gradually supplanted heatmap representations as the mainstream focus in the field. Humans typically answer questions by first identifying the primary objects in an image and then referring to various information sources, both within and beyond the image, including prior knowledge. However, previous studies have only considered input images, resulting in insufficient information that can lead to incorrect answers and implausible explanations. To address this issue, we introduce multiple references in addition to the input image. Specifically, we propose a multimodal model that generates natural language explanations for VQA. We introduce outside knowledge using the input image and question and incorporate object information into the model through an object detection module. By increasing the information available during the model generation process, we significantly improve VQA accuracy and the reliability of the generated explanations. Moreover, we employ a simple and effective feature fusion joint vector to combine information from multiple modalities while maximizing information preservation. Qualitative and quantitative evaluation experiments demonstrate that the proposed method can generate more reliable explanations than state-of-the-art methods while maintaining answering accuracy.
26

Asgari-Bidhendi, Majid, Mehrdad Nasser, Behrooz Janfada, and Behrouz Minaei-Bidgoli. "PERLEX: A Bilingual Persian-English Gold Dataset for Relation Extraction." Scientific Programming 2021 (March 16, 2021): 1–8. http://dx.doi.org/10.1155/2021/8893270.

Abstract:
Relation extraction is the task of extracting semantic relations between entities in a sentence. It is an essential part of some natural language processing tasks such as information extraction, knowledge extraction, question answering, and knowledge base population. The main motivations of this research stem from a lack of a dataset for relation extraction in the Persian language as well as the necessity of extracting knowledge from the growing big data in the Persian language for different applications. In this paper, we present “PERLEX” as the first Persian dataset for relation extraction, which is an expert-translated version of the “SemEval-2010-Task-8” dataset. Moreover, this paper addresses Persian relation extraction utilizing state-of-the-art language-agnostic algorithms. We employ six different models for relation extraction on the proposed bilingual dataset, including a non-neural model (as the baseline), three neural models, and two deep learning models fed by multilingual BERT contextual word representations. The experiments result in the maximum F1-score of 77.66% (provided by BERTEM-MTB method) as the state of the art of relation extraction in the Persian language.
27

Jawale, Sakshi, Pranit Londhe, Prajwali Kadam, Sarika Jadhav, and Rushikesh Kolekar. "Automatic Text Summarization." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 1842–46. http://dx.doi.org/10.22214/ijraset.2023.51815.

Abstract:
Text Summarization is a Natural Language Processing (NLP) method that extracts and collects data from the source and summarizes it. Text summarization has become a requirement for many applications since manually summarizing vast amounts of information is difficult, especially with the expanding magnitude of data. Financial research, search engine optimization, media monitoring, question-answering bots, and document analysis all benefit from text summarization. This paper extensively addresses several summarizing strategies depending on intent, volume of data, and outcome. Our aim is to evaluate and convey an overall view of current research work on text summarization.
28

Zhen, Lihua, and Xiaoqi Sun. "The Research of Convolutional Neural Network Based on Integrated Classification in Question Classification." Scientific Programming 2021 (October 12, 2021): 1–8. http://dx.doi.org/10.1155/2021/4176059.

Abstract:
As a new generation of search engine, the automatic question answering system (QAS) is becoming more and more important and has become one of the hotspots of computer application research and natural language processing (NLP). As an indispensable part of the QAS, the importance of question classification in the system is well understood. To further improve the performance of question classification, both feature extraction and the classification model were explored. Building on existing CNN research, an improved CNN model based on Bagging integrated classification ("W2V + B-CNN" for short) is proposed and applied to question classification. Firstly, considering the characteristics of short texts, we use the Word2Vec tool to map word features to a fixed dimension and organize the question sentences into a two-dimensional matrix similar to an image. Then, the trained word vectors are used as the input of the CNN for feature extraction. Finally, the Bagging integrated classification algorithm replaces the Softmax classification of the traditional CNN. In other words, the advantage of the W2V + B-CNN model is that it draws on the strengths of both the CNN and Bagging integrated classification: it uses the powerful feature extraction capabilities of the CNN to extract the potential features of natural language questions while using the strong data classification capabilities of the integrated classification algorithm for feature classification, which helps improve the accuracy of W2V + B-CNN in question classification. The comparative experimental results show that the effect of W2V + B-CNN is significantly better than that of the CNN and other classification algorithms in question classification.
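To illustrate the W2V + Bagging combination, a minimal sketch follows; note that, for brevity, question features are obtained by averaging Word2Vec vectors rather than by the paper's CNN feature extractor, and the toy questions and labels are invented.

```python
# Minimal sketch: Word2Vec question features + a Bagging ensemble classifier.
# Averaged word vectors stand in for the paper's CNN-extracted features.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import BaggingClassifier

questions = [["who", "wrote", "hamlet"],
             ["where", "is", "the", "eiffel", "tower"],
             ["who", "painted", "the", "mona", "lisa"],
             ["where", "is", "mount", "everest"]]
labels = ["person", "location", "person", "location"]

w2v = Word2Vec(sentences=questions, vector_size=50, min_count=1, epochs=50, seed=1)

def question_vector(tokens):
    # Average the word vectors of a question (stand-in for CNN features).
    return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

X = np.vstack([question_vector(q) for q in questions])
clf = BaggingClassifier(n_estimators=10, random_state=0)   # default base estimator: decision tree
clf.fit(X, labels)
print(clf.predict([question_vector(["who", "discovered", "penicillin"])]))
```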
29

Phakmongkol, Puri, and Peerapon Vateekul. "Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering." Applied Sciences 11, no. 21 (November 1, 2021): 10267. http://dx.doi.org/10.3390/app112110267.

Abstract:
Question Answering (QA) is a natural language processing task that enables the machine to understand a given context and answer a given question. There have been several QA research efforts with abundant resources for the English language; however, Thai is one of the languages with low availability of labeled corpora in QA studies. According to previous studies, while English QA models can achieve more than 90% F1, Thai QA models obtained only 70% in our baseline. In this study, we aim to improve the performance of Thai QA models by generating more question-answer pairs with the Multilingual Text-to-Text Transfer Transformer (mT5) along with data preprocessing methods for Thai. With this method, more than 100 thousand question-answer pairs can be synthesized from the provided Thai Wikipedia articles. Utilizing our synthesized data, many fine-tuning strategies were investigated to achieve the highest model performance. Furthermore, we show that syllable-level F1 is a more suitable evaluation measure than Exact Match (EM) and word-level F1 for Thai QA corpora. The experiment was conducted on two Thai QA corpora: Thai Wiki QA and iApp Wiki QA. The results show that our augmented model is the winner on both datasets compared to other modern transformer models: RoBERTa and mT5.
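A short sketch of the two evaluation measures discussed above (Exact Match and token-level F1) follows; the whitespace tokenizer is a placeholder, since the paper argues for syllable-level tokenization for Thai.

```python
# Exact Match and token-level F1, the QA metrics contrasted in the abstract.
from collections import Counter

def exact_match(pred, gold):
    return int(pred.strip() == gold.strip())

def f1_score(pred, gold, tokenize=str.split):
    pred_toks, gold_toks = tokenize(pred), tokenize(gold)
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Chiang Mai province", "Chiang Mai"))        # 0 despite a near-miss
print(round(f1_score("Chiang Mai province", "Chiang Mai"), 2)) # 0.8, rewards partial overlap
```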
30

Mars, Mourad. "From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough." Applied Sciences 12, no. 17 (September 1, 2022): 8805. http://dx.doi.org/10.3390/app12178805.

Abstract:
With the recent advances in deep learning, different approaches to improving pre-trained language models (PLMs) have been proposed. PLMs have advanced state-of-the-art (SOTA) performance on various natural language processing (NLP) tasks such as machine translation, text classification, question answering, text summarization, information retrieval, recommendation systems, named entity recognition, etc. In this paper, we provide a comprehensive review of prior embedding models as well as current breakthroughs in the field of PLMs. Then, we analyse and contrast the various models and provide an analysis of the way they have been built (number of parameters, compression techniques, etc.). Finally, we discuss the major issues and future directions for each of the main points.
31

Kuwana, Ayato, Atsushi Oba, Ranto Sawai, and Incheon Paik. "Automatic Taxonomy Classification by Pretrained Language Model." Electronics 10, no. 21 (October 29, 2021): 2656. http://dx.doi.org/10.3390/electronics10212656.

Abstract:
In recent years, automatic ontology generation has received significant attention in information science as a means of systemizing vast amounts of online data. As our initial attempt of ontology generation with a neural network, we proposed a recurrent neural network-based method. However, updating the architecture is possible because of the development in natural language processing (NLP). By contrast, the transfer learning of language models trained by a large, unlabeled corpus has yielded a breakthrough in NLP. Inspired by these achievements, we propose a novel workflow for ontology generation comprising two-stage learning. Our results showed that our best method improved accuracy by over 12.5%. As an application example, we applied our model to the Stanford Question Answering Dataset to show ontology generation in a real field. The results showed that our model can generate a good ontology, with some exceptions in the real field, indicating future research directions to improve the quality.
32

Ait-Mlouk, Addi, Sadi A. Alawadi, Salman Toor, and Andreas Hellander. "FedQAS: Privacy-Aware Machine Reading Comprehension with Federated Learning." Applied Sciences 12, no. 6 (March 18, 2022): 3130. http://dx.doi.org/10.3390/app12063130.

Abstract:
Machine reading comprehension (MRC) of text data is a challenging task in Natural Language Processing (NLP), with a lot of ongoing research fueled by the release of the Stanford Question Answering Dataset (SQuAD) and Conversational Question Answering (CoQA). It is considered to be an effort to teach computers how to “understand” a text, and then to be able to answer questions about it using deep learning. However, until now, large-scale training on private text data and knowledge sharing has been missing for this NLP task. Hence, we present FedQAS, a privacy-preserving machine reading system capable of leveraging large-scale private data without the need to pool those datasets in a central location. The proposed approach combines transformer models and federated learning technologies. The system is developed using the FEDn framework and deployed as a proof-of-concept alliance initiative. FedQAS is flexible, language-agnostic, and allows intuitive participation and execution of local model training. In addition, we present the architecture and implementation of the system, as well as provide a reference evaluation based on the SQuAD dataset, to showcase how it overcomes data privacy issues and enables knowledge sharing between alliance members in a Federated learning setting.
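A conceptual sketch of the federated step that FedQAS relies on (local training on private data, central aggregation of parameters only) is shown below as plain federated averaging; FEDn's actual API and the transformer reader model are not reproduced, and the dict-of-arrays "model" is a stand-in.

```python
# Conceptual federated averaging: private data never leaves the clients;
# only (placeholder) parameter updates are aggregated centrally.
import numpy as np

def local_update(weights, private_data, lr=0.01):
    # Placeholder for local training on a member's private QA corpus.
    return {k: v - lr * np.random.randn(*v.shape) * len(private_data) for k, v in weights.items()}

def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return {k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in client_weights[0]}

global_model = {"layer1": np.zeros((4, 4)), "bias": np.zeros(4)}
clients = [list(range(100)), list(range(250)), list(range(50))]   # three members' private datasets

for _ in range(3):                                                # a few federation rounds
    updates = [local_update(global_model, data) for data in clients]
    global_model = federated_average(updates, [len(d) for d in clients])
```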
33

Wu, Hanqian, Mumu Liu, Shangbin Zhang, Zhike Wang, and Siliang Cheng. "Big Data Management and Analytics in Scientific Programming: A Deep Learning-Based Method for Aspect Category Classification of Question-Answering-Style Reviews." Scientific Programming 2020 (June 8, 2020): 1–10. http://dx.doi.org/10.1155/2020/4690974.

Abstract:
Online product reviews are proliferating on e-commerce platforms, and mining the aspect-level product information contained in those reviews has great economic benefit. The aspect category classification task is a basic task for aspect-level sentiment analysis, which has become a hot research topic in the natural language processing (NLP) field during the last decades. On various e-commerce platforms, user-generated question-answering (QA) reviews have emerged that generally contain much aspect-related product information. Although some researchers have devoted their efforts to aspect category classification for traditional product reviews, the existing deep learning-based approaches cannot be well applied to represent QA-style reviews. Thus, we propose a 4-dimension (4D) textual representation model based on QA interaction-level and hyperinteraction-level representations, modeling the text at different levels, i.e., word-level, sentence-level, QA interaction-level, and hyperinteraction-level. In our experiments, the empirical studies on datasets from three domains demonstrate that our proposals perform better than traditional sentence-level representation approaches, especially in the Digit domain.
34

Guo, Zihan, and Dezhi Han. "Multi-Modal Explicit Sparse Attention Networks for Visual Question Answering." Sensors 20, no. 23 (November 26, 2020): 6758. http://dx.doi.org/10.3390/s20236758.

Abstract:
Visual question answering (VQA) is a multi-modal task involving natural language processing (NLP) and computer vision (CV). It requires models to understand both visual and textual information simultaneously in order to predict the correct answer for the input image and question, and it has been widely used in smart and intelligent transport systems, smart cities, and other fields. Today, advanced VQA approaches model dense interactions between image regions and question words by designing co-attention mechanisms to achieve better accuracy. However, modeling interactions between each image region and each question word forces the model to calculate irrelevant information, thus causing the model's attention to be distracted. In this paper, to solve this problem, we propose a novel model called Multi-modal Explicit Sparse Attention Networks (MESAN), which concentrates the model's attention by explicitly selecting the parts of the input features that are the most relevant to answering the input question. We consider that this method based on top-k selection can reduce the interference caused by irrelevant information and ultimately help the model achieve better performance. The experimental results on the benchmark dataset VQA v2 demonstrate the effectiveness of our model. Our best single model delivers 70.71% and 71.08% overall accuracy on the test-dev and test-std sets, respectively. In addition, we demonstrate that our model can obtain better attended features than other advanced models through attention visualization. Our work proves that models with sparse attention mechanisms can also achieve competitive results on VQA datasets. We hope that it can promote the development of VQA models and the application of artificial intelligence (AI) technology related to VQA in various aspects.
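A minimal sketch of the explicit top-k sparse attention idea follows; the shapes and k are illustrative, and this is not the full MESAN architecture.

```python
# Top-k sparse attention: keep only the k largest attention scores per query
# and mask the rest before the softmax, so each query attends to few keys.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=3):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5               # (batch, n_queries, n_keys)
    kth = scores.topk(top_k, dim=-1).values[..., -1:]         # k-th largest score per query
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop everything below it
    attn = F.softmax(scores, dim=-1)                          # attention is now sparse
    return attn @ v, attn

q = torch.randn(2, 5, 64)        # 5 question-word queries
keys = values = torch.randn(2, 36, 64)   # 36 image-region keys/values
out, attn = topk_sparse_attention(q, keys, values, top_k=3)
print(out.shape, (attn > 0).sum(dim=-1)[0])   # each query attends to only 3 regions
```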
35

Deepthi, Godavarthi, and A. Mary Sowjanya. "Query-Based Retrieval Using Universal Sentence Encoder." Revue d'Intelligence Artificielle 35, no. 4 (August 31, 2021): 301–6. http://dx.doi.org/10.18280/ria.350404.

Abstract:
In natural language processing, various tasks can be implemented with the features provided by word embeddings. However, for obtaining embeddings of larger chunks such as sentences, word embeddings alone are not sufficient. To resolve such issues, sentence embeddings can be used. In sentence embeddings, complete sentences along with their semantic information are represented as vectors so that the machine finds it easy to understand the context. In this paper, we propose a Question Answering System (QAS) based on sentence embeddings. Our goal is to obtain the text from the provided context for a user query by extracting the sentence in which the correct answer is present. Traditionally, InferSent models have been used on SQuAD for building QAS. In recent times, the Universal Sentence Encoder with USE-CNN and USE-Trans has been developed. In this paper, we use another variant of the Universal Sentence Encoder, i.e., the Deep Averaging Network (DAN), to obtain pre-trained sentence embeddings. The results on the SQuAD 2.0 dataset indicate that our approach (USE with DAN) performs well compared to Facebook's InferSent embeddings.
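A minimal sketch of the retrieval step described above follows, assuming the TensorFlow Hub release of the Universal Sentence Encoder (the DAN-based variant); the example passage and query are invented.

```python
# Retrieve the context sentence most similar to the query in USE embedding space.
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")  # DAN-based USE

context_sentences = [
    "The Amazon is the largest rainforest in the world.",
    "It spans nine countries in South America.",
    "Deforestation has accelerated over the last decade.",
]
query = "How many countries does the Amazon cover?"

vectors = np.asarray(embed(context_sentences + [query]))
sent_vecs, query_vec = vectors[:-1], vectors[-1]
scores = sent_vecs @ query_vec / (
    np.linalg.norm(sent_vecs, axis=1) * np.linalg.norm(query_vec))
print(context_sentences[int(np.argmax(scores))])   # sentence most likely to hold the answer
```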
36

Lee, Minhyeok. "A Mathematical Interpretation of Autoregressive Generative Pre-Trained Transformer and Self-Supervised Learning." Mathematics 11, no. 11 (May 25, 2023): 2451. http://dx.doi.org/10.3390/math11112451.

Abstract:
In this paper, we present a rigorous mathematical examination of generative pre-trained transformer (GPT) models and their autoregressive self-supervised learning mechanisms. We begin by defining natural language space and knowledge space, which are two key concepts for understanding the dimensionality reduction process in GPT-based large language models (LLMs). By exploring projection functions and their inverses, we establish a framework for analyzing the language generation capabilities of these models. We then investigate the GPT representation space, examining its implications for the models’ approximation properties. Finally, we discuss the limitations and challenges of GPT models and their learning mechanisms, considering trade-offs between complexity and generalization, as well as the implications of incomplete inverse projection functions. Our findings demonstrate that GPT models possess the capability to encode knowledge into low-dimensional vectors through their autoregressive self-supervised learning mechanism. This comprehensive analysis provides a solid mathematical foundation for future advancements in GPT-based LLMs, promising advancements in natural language processing tasks such as language translation, text summarization, and question answering due to improved understanding and optimization of model training and performance.
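For reference, the standard autoregressive factorization that GPT-style self-supervised learning maximizes can be written as follows; the notation is generic and not taken from the paper itself.

```latex
% Standard autoregressive factorization and training objective of GPT-style models;
% generic notation, not the paper's own formalism.
p_{\theta}(x_1, \dots, x_T) = \prod_{t=1}^{T} p_{\theta}(x_t \mid x_{<t}),
\qquad
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}(x_t \mid x_{<t}).
```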
APA, Harvard, Vancouver, ISO, and other styles
37

Rokkam, Krishna Vamsi. "An Intelligent TLDR Software for Summarization." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2852–55. http://dx.doi.org/10.22214/ijraset.2022.44508.

Full text
Abstract:
The amount of textual data available from diverse resources is increasing dramatically in the big-data age. This textual volume holds a wealth of information and expertise that must be skilfully summarised to be useful. Because billions of articles are published every day, it takes a long time to look through and keep up with all the available information. Much of this text must be reduced to shorter, focused summaries that capture the most important aspects, both so that we can explore it more efficiently and to ensure that the larger documents contain the information we need. Because manual text summarisation is a time-consuming and typically difficult activity, automating it is growing in popularity and thus provides an ideal impetus for academic study. The growing availability of documents has prompted much research in natural language processing (NLP) on automatic text summarisation. The real question is: "Is there any software that can help us digest the facts more efficiently and in less time?" The major goal of a summarisation system is therefore to extract the most important information from the data and deliver it to users. In NLP, summarisation is the act of condensing the information in large texts to make it easier to understand and consume. We propose a solution by developing a text summarisation programme that uses natural language processing and accepts an input (plain text or text scraped from a website); the output is the summarised text. Natural language processing, along with machine learning, makes it easier to condense large quantities of information into a coherent and fluent summary that incorporates only the article's most important points.
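A minimal frequency-based extractive summarizer conveys the general idea of condensing text to its most informative sentences. The scoring scheme, stopword list, and function name below are assumptions and do not reflect the specific NLP pipeline of the tool described in the abstract.

```python
import re
from collections import Counter

def extractive_summary(text, max_sentences=3):
    """Score sentences by the frequency of their content words and keep the top few."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    stopwords = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "that", "it", "for"}
    freq = Counter(w for w in words if w not in stopwords)

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) + 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in ranked)
```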
APA, Harvard, Vancouver, ISO, and other styles
38

Khan, Arijit. "Knowledge Graphs Querying." ACM SIGMOD Record 52, no. 2 (August 10, 2023): 18–29. http://dx.doi.org/10.1145/3615952.3615956.

Full text
Abstract:
Knowledge graphs (KGs) such as DBpedia, Freebase, YAGO, Wikidata, and NELL were constructed to store large-scale, real-world facts as (subject, predicate, object) triples, which can also be modeled as a graph where a node (a subject or an object) represents an entity with attributes, and a directed edge (a predicate) is a relationship between two entities. Querying KGs is critical in web search, question answering (QA), semantic search, personal assistants, fact checking, and recommendation. While significant progress has been made on KG construction and curation, thanks to deep learning we have recently seen a surge of research on KG querying and QA. The objectives of our survey are two-fold. First, research on KG querying has been conducted by several communities, such as databases, data mining, semantic web, machine learning, information retrieval, and natural language processing (NLP), with different focus and terminologies, and on diverse topics ranging from graph databases, query languages, join algorithms, and graph pattern matching to more sophisticated KG embedding and natural language questions (NLQs). We aim at uniting the different interdisciplinary topics and concepts that have been developed for KG querying. Second, many recent advances on KG and query embedding, multimodal KGs, and KG-QA come from the deep learning, IR, NLP, and computer vision domains. We identify important challenges of KG querying that have received less attention from graph databases, and from the DB community in general, e.g., incomplete KGs, semantic matching, multimodal data, and NLQs. We conclude by discussing interesting opportunities for the data management community, for instance, KGs as a unified data model and vector-based query processing.
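The (subject, predicate, object) triple model and pattern-style querying mentioned above can be illustrated with a toy in-memory triple store. The entities and the wildcard-matching helper below are made up for illustration; real KGs are queried with SPARQL or graph query languages.

```python
# A toy set of (subject, predicate, object) triples; entity names are invented.
triples = {
    ("Ada_Lovelace", "occupation", "Mathematician"),
    ("Ada_Lovelace", "birthPlace", "London"),
    ("London", "country", "United_Kingdom"),
}

def match(pattern):
    """Match a triple pattern in which None acts as a wildcard variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Where was Ada Lovelace born?" -> [("Ada_Lovelace", "birthPlace", "London")]
print(match(("Ada_Lovelace", "birthPlace", None)))
```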
APA, Harvard, Vancouver, ISO, and other styles
39

Guan, Xiaohan, Jianhui Han, Zhi Liu, and Mengmeng Zhang. "Sentence Similarity Algorithm Based on Fused Bi-Channel Dependency Matching Feature." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 07 (October 18, 2019): 2050019. http://dx.doi.org/10.1142/s0218001420500196.

Full text
Abstract:
Many tasks in natural language processing, such as information retrieval, intelligent question answering, and machine translation, require the calculation of sentence similarity. Traditional calculation methods cannot solve semantic understanding problems well. First, model structures based on Siamese networks lack interaction between sentences; second, matching-based models suffer from missing position information and use only partial matching factors. In this paper, a combination of words and their dependencies is proposed to calculate sentence similarity. This combination can extract both word features and dependency features. To extract more matching features, a bi-directional multi-interaction matching sequence model is proposed using word2vec and dependency2vec. This model obtains matching features by convolving and pooling the word-granularity (word vector, dependency vector) interaction sequences in two directions. Next, the model aggregates the bi-directional matching features. The paper evaluates the model on two tasks: paraphrase identification and natural language inference. The experimental results show that the combination of words and their dependencies can enhance the ability to extract matching features between two sentences. The results also show that the model with dependency information can achieve higher accuracy than models without it.
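The word-granularity interaction idea can be sketched as a cosine interaction matrix between two sentences' word (or dependency) vectors, pooled in both directions. The pooling choices and function signature below are assumptions and omit the paper's convolutional layers.

```python
import numpy as np

def interaction_matching_features(sent_a_vecs, sent_b_vecs):
    """Build a cosine interaction matrix between two sentences and pool it both ways.

    sent_a_vecs: (len_a, dim) word or dependency vectors for sentence A;
    sent_b_vecs: (len_b, dim) for sentence B. Illustrative sketch only.
    """
    a = sent_a_vecs / (np.linalg.norm(sent_a_vecs, axis=1, keepdims=True) + 1e-9)
    b = sent_b_vecs / (np.linalg.norm(sent_b_vecs, axis=1, keepdims=True) + 1e-9)
    interaction = a @ b.T                  # (len_a, len_b) cosine similarities
    a_to_b = interaction.max(axis=1)       # best match in B for each word of A
    b_to_a = interaction.max(axis=0)       # best match in A for each word of B
    # Aggregate both directions into a small fixed-size matching feature vector.
    return np.array([a_to_b.mean(), a_to_b.max(), b_to_a.mean(), b_to_a.max()])
```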
APA, Harvard, Vancouver, ISO, and other styles
40

Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. "What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams." Applied Sciences 11, no. 14 (July 12, 2021): 6421. http://dx.doi.org/10.3390/app11146421.

Full text
Abstract:
Open domain question answering (OpenQA) tasks have recently been attracting more and more attention from the natural language processing (NLP) community. In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA, collected from professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. We implement both rule-based and popular neural methods by sequentially combining a document retriever and a machine comprehension model. Through experiments, we find that even the current best method only achieves test accuracies of 36.7%, 42.0%, and 70.1% on the English, traditional Chinese, and simplified Chinese questions, respectively. We expect MedQA to present great challenges to existing OpenQA systems and hope that it can serve as a platform to promote much stronger OpenQA models from the NLP community in the future.
APA, Harvard, Vancouver, ISO, and other styles
41

Vaghasia, Rishil. "An Improvised Approach of Deep Learning Neural Networks in NLP Applications." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 1599–603. http://dx.doi.org/10.22214/ijraset.2023.48884.

Full text
Abstract:
In recent years, natural language processing (NLP) has drawn a lot of interest for its ability to computationally represent and analyze human language. Its uses have expanded to include machine translation, email spam detection, information extraction, summarization, medical diagnosis, and question answering, among other areas. The purpose of this research is to investigate how deep learning and neural networks can be used to analyze the syntax of natural language. The research first investigates a feed-forward neural network classifier for a transition-based dependency syntax analyzer, and then presents a dependency syntactic analysis paradigm based on a long short-term memory (LSTM) neural network. The feed-forward model serves as a feature extractor; once it is learned, we train a recurrent neural network classifier, optimized at the sentence level, that uses an LSTM network to classify the transition actions, taking the features retrieved by the syntactic analyzer as its input. Syntactic analysis thus replaces the modeling of independent, local decisions with modeling the analysis of the entire sentence as a whole. The experimental findings demonstrate that the model outperforms the benchmark techniques.
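A minimal feed-forward transition classifier of the kind used as the feature extractor might look like the following PyTorch sketch. The feature count, dimensions, and three-action inventory are assumptions, and the sentence-level LSTM classifier described in the abstract is not reproduced.

```python
import torch
import torch.nn as nn

class TransitionClassifier(nn.Module):
    """Feed-forward classifier over parser-state features for transition-based
    dependency parsing (e.g., SHIFT / LEFT-ARC / RIGHT-ARC). Illustrative only."""
    def __init__(self, vocab_size, embed_dim=50, n_features=18, hidden=200, n_actions=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.ff = nn.Sequential(
            nn.Linear(n_features * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, feature_ids):           # (batch, n_features) token ids from the parser state
        x = self.embed(feature_ids)           # (batch, n_features, embed_dim)
        return self.ff(x.flatten(1))          # (batch, n_actions) transition scores
```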
APA, Harvard, Vancouver, ISO, and other styles
42

Fung, Yin-Chun, Lap-Kei Lee, and Kwok Tai Chui. "An Automatic Question Generator for Chinese Comprehension." Inventions 8, no. 1 (January 28, 2023): 31. http://dx.doi.org/10.3390/inventions8010031.

Full text
Abstract:
Question generation (QG) is a natural language processing (NLP) problem that aims to generate natural questions from a given sentence or paragraph. QG has many applications, especially in education. For example, QG can complement teachers’ efforts in creating assessment materials by automatically generating many related questions. QG can also be used to generate frequently asked question (FAQ) sets for business. Question answering (QA) can benefit from QG, where the training dataset of QA can be enriched using QG to improve the learning and performance of QA algorithms. However, most existing works and tools in QG are designed for English text. This paper presents the design of a web-based question generator for Chinese comprehension. The generator provides a user-friendly web interface for users to generate a set of wh-questions (i.e., what, who, when, where, why, and how) from a Chinese text, conditioned on a corresponding set of answer phrases. The web interface allows users to easily refine the answer phrases that are automatically generated by the web generator. The underlying question generation is based on the transformer approach, trained on a dataset combined from three publicly available Chinese reading comprehension datasets, namely DRUD, CMRC2017, and CMRC2018. Linguistic features such as parts of speech (POS) and named-entity recognition (NER) are extracted from the text, which, together with the original text and the answer phrases, are then fed into a machine learning algorithm based on a pre-trained mT5 model. The generated questions with answers are displayed in a user-friendly format, supplemented with the source sentences used for generating each question. We expect the design of this web tool to provide insight into how Chinese question generation can be made easily accessible to users with low computer literacy.
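Answer-aware question generation with a pre-trained mT5 checkpoint can be sketched with the Hugging Face transformers library as follows. The checkpoint name, the "answer: ... context: ..." prompt format, and the decoding settings are assumptions; a model fine-tuned on the Chinese reading-comprehension data described above would be needed for useful output.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Checkpoint name is a placeholder; the paper fine-tunes mT5 with extra POS/NER features.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

def generate_question(context, answer_phrase):
    """Condition the model on an answer phrase and its context to produce a question."""
    prompt = f"answer: {answer_phrase} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```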
APA, Harvard, Vancouver, ISO, and other styles
43

Avisyah, Gisnaya Faridatul, Ivandi Julatha Putra, and Sidiq Syamsul Hidayat. "Open Artificial Intelligence Analysis using ChatGPT Integrated with Telegram Bot." Jurnal ELTIKOM 7, no. 1 (June 30, 2023): 60–66. http://dx.doi.org/10.31961/eltikom.v7i1.724.

Full text
Abstract:
Chatbot technology uses natural language processing with artificial intelligence to interact quickly, answering questions and producing relevant answers. ChatGPT is the latest chatbot platform developed by OpenAI, which allows users to interact with a text-based engine. The platform uses the GPT-3 (Generative Pre-trained Transformer) algorithm to understand the response humans want and to generate relevant responses. Using the platform, users can find answers to their questions quickly and relevantly. This research integrates ChatGPT with a Telegram chatbot following a waterfall method that utilizes open API tokens from Telegram. We develop an OpenAI application connected to a Telegram bot. The application can help provide a wide range of information, especially information related to the Semarang State Polytechnic. With the Telegram chatbot, users can easily ask questions because the bot is integrated with OpenAI through the API. Telegram's chat feature allows easy communication between users and the chatbot, and it may reduce system errors in the bot.
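A minimal integration of the two services, calling the Telegram Bot API and the OpenAI chat completions endpoint directly over HTTP, might look like the sketch below. The tokens are placeholders, the model name is an assumption, and this is not the paper's actual waterfall-developed application.

```python
import requests

TELEGRAM_TOKEN = "YOUR_TELEGRAM_BOT_TOKEN"   # placeholder, not a real credential
OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"       # placeholder, not a real credential
TELEGRAM_API = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}"

def ask_openai(question):
    """Send the user's question to the OpenAI chat completions endpoint."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={"model": "gpt-3.5-turbo",      # model name assumed for illustration
              "messages": [{"role": "user", "content": question}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

def poll_and_reply(offset=None):
    """Long-poll Telegram for new messages and reply with the model's answer."""
    updates = requests.get(f"{TELEGRAM_API}/getUpdates",
                           params={"timeout": 30, "offset": offset}).json()
    for update in updates.get("result", []):
        offset = update["update_id"] + 1
        message = update.get("message", {})
        if "text" in message:
            answer = ask_openai(message["text"])
            requests.post(f"{TELEGRAM_API}/sendMessage",
                          json={"chat_id": message["chat"]["id"], "text": answer})
    return offset
```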
APA, Harvard, Vancouver, ISO, and other styles
44

Ahmed, Muzamil, Hikmat Khan, Tassawar Iqbal, Fawaz Khaled Alarfaj, Abdullah Alomair, and Naif Almusallam. "On solving textual ambiguities and semantic vagueness in MRC based question answering using generative pre-trained transformers." PeerJ Computer Science 9 (July 24, 2023): e1422. http://dx.doi.org/10.7717/peerj-cs.1422.

Full text
Abstract:
Machine reading comprehension (MRC) is one of the most challenging tasks and active fields in natural language processing (NLP). MRC systems aim to enable a machine to understand a given context in natural language and to answer a series of questions about it. With the advent of bi-directional deep learning algorithms and large-scale datasets, MRC achieved improved results. However, these models still suffer from two research issues: textual ambiguities and semantic vagueness when comprehending long passages and generating answers in abstractive MRC systems. To address these issues, this paper proposes a novel Extended Generative Pre-trained Transformer-based Question Answering (ExtGPT-QA) model to generate precise and relevant answers to questions about a given context. The proposed architecture comprises modified forms of the encoder and decoder compared to GPT. The encoder uses a positional encoder to assign a unique representation to each word in the sentence, addressing textual ambiguities. Subsequently, the decoder module involves a multi-head attention mechanism along with affine and aggregation layers to mitigate semantic vagueness in MRC systems. Additionally, we applied syntactic and semantic feature engineering techniques to enhance the effectiveness of the proposed model. To validate the proposed model’s effectiveness, a comprehensive empirical analysis is carried out using three benchmark datasets: SQuAD, Wiki-QA, and News-QA. The proposed ExtGPT-QA outperformed state-of-the-art MRC techniques and obtained an F1-score and exact match of 93.25% and 90.52%, respectively. The results confirm the effectiveness of the ExtGPT-QA model in addressing textual ambiguity and semantic vagueness issues in MRC systems.
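The positional encoding used to give each word a unique position-dependent representation is typically the standard sinusoidal scheme; the sketch below shows that standard formulation and is not the ExtGPT-QA encoder itself.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sine/cosine positional encoding: each position gets a unique vector."""
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])          # even dimensions use sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])          # odd dimensions use cosine
    return encoding
```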
APA, Harvard, Vancouver, ISO, and other styles
45

Yin, Didi, Siyuan Cheng, Boxu Pan, Yuanyuan Qiao, Wei Zhao, and Dongyu Wang. "Chinese Named Entity Recognition Based on Knowledge Based Question Answering System." Applied Sciences 12, no. 11 (May 26, 2022): 5373. http://dx.doi.org/10.3390/app12115373.

Full text
Abstract:
The KBQA (Knowledge-Based Question Answering) system is an essential part of smart customer service systems. KBQA is a type of QA (Question Answering) system based on a KB (Knowledge Base). It aims to automatically answer natural language questions by retrieving structured data stored in the knowledge base. Generally, when a KBQA system receives a user’s query, it first needs to recognize the topic entities of the query, such as names, locations, organizations, etc. This process is NER (Named Entity Recognition). In this paper, we use the Bidirectional Long Short-Term Memory-Conditional Random Field (Bi-LSTM-CRF) model and introduce the SoftLexicon method for a Chinese NER task. At the same time, based on an analysis of the characteristics of the application scenario, we propose a fuzzy matching module that combines multiple methods. This module can efficiently correct erroneous recognition results, further improving the performance of entity recognition. We combine the NER model and the fuzzy matching module into an NER system. To explore the applicability of the system in specific fields, such as the power grid field, we utilize power grid-related data collected by the Hebei Electric Power Company and improve our system according to the characteristics of data in this field. We construct a dataset and a high-frequency word lexicon for the power grid field, which makes our proposed NER system perform better at recognizing entities in this domain. We used cross-validation for evaluation. The experimental results show that the F1-score of the improved NER model on the power grid dataset reaches 92.43%. After processing the recognition results with the fuzzy matching module, about 99% of the entities in the test set can be correctly recognized. This proves that the proposed NER system can achieve excellent performance in the power grid application scenario. The results of this work will also fill a gap in the research on intelligent customer service technologies for the power grid field in China.
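A fuzzy matching step that snaps a mis-recognized entity mention to the closest entry in a domain lexicon can be sketched with Python's standard difflib. The lexicon entries and cutoff below are made-up assumptions; the paper's module combines several matching strategies.

```python
import difflib

# Invented lexicon entries standing in for the paper's high-frequency power grid lexicon.
POWER_GRID_LEXICON = ["transformer substation", "distribution network",
                      "relay protection", "power outage report"]

def fuzzy_correct(entity, lexicon=POWER_GRID_LEXICON, cutoff=0.6):
    """Replace a recognized entity with the closest lexicon entry, if any is close enough."""
    matches = difflib.get_close_matches(entity, lexicon, n=1, cutoff=cutoff)
    return matches[0] if matches else entity

print(fuzzy_correct("relay protecton"))   # -> "relay protection"
```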
APA, Harvard, Vancouver, ISO, and other styles
46

Yogish, Deepa, T. N. Manjunath, and Ravindra S. Hegadi. "Analysis of Vector Space Method in Information Retrieval for Smart Answering System." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4468–72. http://dx.doi.org/10.1166/jctn.2020.9099.

Full text
Abstract:
In the world of the internet, searching plays a vital role in retrieving relevant answers for user-specific queries. One of the most promising applications of natural language processing and information retrieval is the question answering system, which directly provides an accurate answer instead of a set of documents. The main objective of information retrieval is to retrieve relevant documents from the huge volume of data on the internet using an appropriate model. Many models have been proposed for the retrieval process, such as the Boolean, vector space, and probabilistic methods. The vector space model is one of the best methods in information retrieval for document ranking, with an efficient document representation that combines simplicity and clarity. VSM adopts a similarity function to measure the match between documents and the user's intent and assigns scores from largest to smallest. Documents and queries are assigned weights using the term frequency and inverse document frequency (TF-IDF) method. To retrieve the documents most relevant to the user's query, the cosine similarity ranking function is applied to every document and the user query. Documents with higher similarity scores are considered relevant to the query and are ranked by these scores. This paper emphasizes different techniques of information retrieval, and the vector space model offers a realistic compromise in IR processing: it allows a weighting scheme that ranks the set of documents in order of relevance to the user query.
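The TF-IDF weighting and cosine-similarity ranking described above can be sketched with scikit-learn as follows; the example documents and query are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus and query, weighted with TF-IDF and ranked by cosine similarity.
documents = [
    "Question answering systems return direct answers instead of documents.",
    "The vector space model represents documents and queries as weighted vectors.",
    "Cosine similarity measures the angle between two vectors.",
]
query = "rank documents with cosine similarity in the vector space model"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), documents[idx])
```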
APA, Harvard, Vancouver, ISO, and other styles
47

Barash, Guy, Mauricio Castillo-Effen, Niyati Chhaya, Peter Clark, Huáscar Espinoza, Eitan Farchi, Christopher Geib, et al. "Reports of the Workshops Held at the 2019 AAAI Conference on Artificial Intelligence." AI Magazine 40, no. 3 (September 30, 2019): 67–78. http://dx.doi.org/10.1609/aimag.v40i3.4981.

Full text
Abstract:
The workshop program of the Association for the Advancement of Artificial Intelligence’s 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action, Agile Robotics for Industrial Automation Competition, Artificial Intelligence for Cyber Security, Artificial Intelligence Safety, Dialog System Technology Challenge, Engineering Dependable and Secure Machine Learning Systems, Games and Simulations for Artificial Intelligence, Health Intelligence, Knowledge Extraction from Games, Network Interpretability for Deep Learning, Plan, Activity, and Intent Recognition, Reasoning and Learning for Human-Machine Dialogues, Reasoning for Complex Question Answering, Recommender Systems Meet Natural Language Processing, Reinforcement Learning in Games, and Reproducible AI. This report contains brief summaries of all the workshops that were held.
APA, Harvard, Vancouver, ISO, and other styles
48

Kodubets, A. A., and I. L. Artemieva. "Requirements Engineering for Software Systems: A Systematic Literature Review." Programmnaya Ingeneria 12, no. 7 (October 11, 2021): 339–49. http://dx.doi.org/10.17587/prin.12.339-349.

Full text
Abstract:
This article contains a systematic literature review of requirements engineering for software systems. Literature published within the last five years was included in the review. The research question concerns the requirements development process for large-scale software systems (with thousands of requirements) and the interaction problems arising during this process (communication, coordination, and control). The problem is caused by the fact that the requirements process for a large-scale software system is a cross-disciplinary task involving multiple parties (stakeholders, domain experts, and suppliers, each with their own goals and constraints), and thus the interaction between them slows down the overall requirements development process more than writing the requirements specification itself. The research papers were classified into several research directions: Natural Language Processing for Requirements Engineering (NLP4RE), Requirements Prioritization, Requirements Traceability, Quality of Software Requirements, Non-functional Requirements, and Requirements Elicitation. The motivation and intensity of each direction were described, and each direction was structured and represented with key references. The contribution of each research direction to the research question was analyzed and summarized, including potential further steps. It was identified that some researchers had encountered parts of the described problem in different forms in their studies. Finally, other related studies were briefly described, and potential directions for further addressing the research question were outlined.
APA, Harvard, Vancouver, ISO, and other styles
49

Hou, Xia, Jintao Luo, Junzhe Li, Liangguo Wang, and Hongbo Yang. "A Novel Knowledge Base Question Answering Method Based on Graph Convolutional Network and Optimized Search Space." Electronics 11, no. 23 (November 25, 2022): 3897. http://dx.doi.org/10.3390/electronics11233897.

Full text
Abstract:
Knowledge base question answering (KBQA) aims to provide answers to natural language questions from information in the knowledge base. Although many methods perform well when dealing with simple questions, there are still two challenges for complex questions: huge search space and information missing from the query graphs’ structure. To solve these problems, we propose a novel KBQA method based on a graph convolutional network and optimized search space. When generating the query graph, we rank the query graphs by both their semantic and structural similarities with the question. Then, we just use the top k for the next step. In this process, we specifically extract the structure information of the query graphs by a graph convolutional network while extracting semantic information by a pre-trained model. Thus, we can enhance the method’s ability to understand complex questions. We also introduce a constraint function to optimize the search space. Furthermore, we use the beam search algorithm to reduce the search space further. Experiments on the WebQuestionsSP dataset demonstrate that our method outperforms some baseline methods, showing that the structural information of the query graph has a significant impact on the KBQA task.
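The beam search used to keep the search space small can be sketched generically as follows. The expand and score callables stand in for the paper's query-graph expansion and combined GCN/semantic scoring, which are abstracted away here as assumptions.

```python
def beam_search(initial_candidates, expand, score, beam_width=5, max_steps=3):
    """Keep only the beam_width best-scoring candidate query graphs at each expansion step.

    expand(candidate) -> list of extended candidates; score(candidate) -> float.
    Both are placeholders for the paper's structural + semantic scoring.
    """
    beam = sorted(initial_candidates, key=score, reverse=True)[:beam_width]
    for _ in range(max_steps):
        expanded = [g for candidate in beam for g in expand(candidate)]
        if not expanded:
            break
        beam = sorted(expanded, key=score, reverse=True)[:beam_width]
    return beam
```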
APA, Harvard, Vancouver, ISO, and other styles
50

Haisa, Gulizada, and Gulila Altenbek. "Multi-Task Learning Model for Kazakh Query Understanding." Sensors 22, no. 24 (December 14, 2022): 9810. http://dx.doi.org/10.3390/s22249810.

Full text
Abstract:
Query understanding (QU) plays a vital role in natural language processing, particularly in question answering and dialogue systems. QU identifies the named entities and the query intent in users’ questions. Traditional pipeline approaches handle the two tasks, named entity recognition (NER) and question classification (QC), separately. NER is treated as a sequence labeling task to predict keywords, while QC is a semantic classification task to predict the user’s intent. Considering the correlation between these two tasks, training them together could benefit both. Kazakh is a low-resource language with rich lexical and agglutinative characteristics. We argue that current QU techniques underuse the word-level and sentence-level features of agglutinative languages, especially stems, suffixes, POS tags, and gazetteers. This paper proposes a new multi-task learning model for query understanding (MTQU). The MTQU model is designed to establish direct connections between the QC and NER tasks so that they promote each other mutually, and we also designed a multi-feature input layer that significantly influenced the model’s performance during training. In addition, we constructed new corpora for the Kazakh query understanding task, namely, KQU. As a result, the MTQU model is simple and effective and obtains competitive results on KQU.
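A joint model with a shared encoder and two task heads, a per-token head for NER and a per-sentence head for intent classification, can be sketched in PyTorch as follows. The BiLSTM encoder, dimensions, and class names are assumptions, and the multi-feature input layer described above is omitted; training would simply sum the two heads' cross-entropy losses.

```python
import torch
import torch.nn as nn

class MultiTaskQU(nn.Module):
    """Shared encoder with two heads: token-level NER tags and sentence-level intent."""
    def __init__(self, vocab_size, n_ner_tags, n_intents, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden, n_ner_tags)      # per-token NER tags
        self.intent_head = nn.Linear(2 * hidden, n_intents)    # sentence-level intent

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))         # (batch, seq, 2*hidden)
        ner_logits = self.ner_head(states)                      # (batch, seq, n_ner_tags)
        intent_logits = self.intent_head(states.mean(dim=1))    # (batch, n_intents)
        return ner_logits, intent_logits
```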
APA, Harvard, Vancouver, ISO, and other styles