To see the other types of publications on this topic, follow the link: Conversational systems.

Journal articles on the topic 'Conversational systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Conversational systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Qin, Zhen (Luther). "Conversational Breakdown Detector for a Motivational Interviewing Conversational Agent." IJournal: Student Journal of the Faculty of Information 9, no. 1 (December 19, 2023): 60–77. http://dx.doi.org/10.33137/ijournal.v9i1.42237.

Full text
Abstract:
A conversational breakdown in human-chatbot interaction refers to a disruption or failure in the communicative flow between the human user and the chatbot. To recover a disrupted conversation, the first step is to detect the breakdown. Researchers have proposed methods using supervised learning and semi-supervised learning in dialogue systems to achieve the goal of detecting conversational breakdown. However, few studies have focused on detecting breakdowns in automated therapeutic conversations, especially conversations led by motivational interviewing chatbots. The presence of conversational breakdowns has negative impacts on the human-chatbot interaction, such as frustration, dissatisfaction, or loss of trust. This gap suggests a need to build a robust and efficient conversational breakdown detector that recognizes interruptions during the conversation. Conversational breakdown detection paves the way for further action to recover conversations. In this paper, I develop a novel, unifying framework called “CIMIC” for characterizing the conversational breakdowns of “MIBot,” a motivational interviewing conversational agent for smoking cessation. I collect 200 pieces of conversational data through Prolific and annotate them using the CIMIC framework with a group of four trained annotators. The annotated dataset is then used as the training set to fine-tune GPT-3 models to build a conversational breakdown detector for the MIBot.
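To make the final step concrete, here is a minimal sketch of how annotated turns could be serialized into prompt/completion pairs for a GPT-3 fine-tune of this kind. The field names, label strings, and file path are hypothetical illustrations, not taken from the paper or the CIMIC framework.

    import json

    # Hypothetical annotated exchanges: each record pairs a chatbot turn and the
    # user's reply with the breakdown label assigned by the annotators.
    annotated_turns = [
        {"bot": "How do you feel about cutting down on smoking?",
         "user": "asdf idk what you mean",
         "label": "breakdown"},
        {"bot": "What would quitting make possible for you?",
         "user": "I could save money and breathe easier.",
         "label": "no_breakdown"},
    ]

    # The legacy GPT-3 fine-tuning endpoint expected JSONL prompt/completion pairs.
    with open("breakdown_detector_train.jsonl", "w") as f:
        for turn in annotated_turns:
            record = {
                "prompt": f"Bot: {turn['bot']}\nUser: {turn['user']}\nLabel:",
                "completion": " " + turn["label"],
            }
            f.write(json.dumps(record) + "\n")

The resulting file would then be submitted to the fine-tuning API (for example, via the legacy openai fine-tuning CLI), and the fine-tuned model queried with the same prompt format at detection time.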
APA, Harvard, Vancouver, ISO, and other styles
2

Watkinson, Neftali, Fedor Zaitsev, Aniket Shivam, Michael Demirev, Mike Heddes, Tony Givargis, Alexandru Nicolau, and Alexander Veidenbaum. "EdgeAvatar: An Edge Computing System for Building Virtual Beings." Electronics 10, no. 3 (January 20, 2021): 229. http://dx.doi.org/10.3390/electronics10030229.

Full text
Abstract:
Dialogue systems, also known as conversational agents, are computing systems that use algorithms for speech and language processing to engage in conversation with humans or other conversation-capable systems. A chatbot is a conversational agent whose primary goal is to maximize the length of the conversation without any specific targeted task. When a chatbot is embellished with an artistic approach that is meant to evoke an emotional response, then it is called a virtual being. On the other hand, conversational agents that interact with the physical world require the use of specialized hardware to sense and process captured information. In this article we describe EdgeAvatar, a system based on Edge Computing principles for the creation of virtual beings. The objective of the EdgeAvatar system is to provide a streamlined and modular framework for virtual being applications that are to be deployed in public settings. We also present two implementations that use EdgeAvatar and are inspired by historical figures to interact with visitors of the Venice Biennale 2019. EdgeAvatar can adapt to fit different approaches for AI-powered conversations.
APA, Harvard, Vancouver, ISO, and other styles
3

Kiesel, Johannes, Lars Meyer, Martin Potthast, and Benno Stein. "Meta-Information in Conversational Search." ACM Transactions on Information Systems 39, no. 4 (October 31, 2021): 1–44. http://dx.doi.org/10.1145/3468868.

Full text
Abstract:
The exchange of meta-information has always formed part of information behavior. In this article, we show that this rule also extends to conversational search. Information about the user’s information need, their preferences, and the quality of search results are only some of the most salient examples of meta-information that are exchanged as a matter of course in a search conversation. To understand the importance of meta-information for conversational search, we revisit its definition and survey how meta-information has been taken into account in the past in information retrieval. Meta-information has gone by many names, about which a concise overview is provided. An in-depth analysis of the role of meta-information in search and conversation theories reveals that they provide significant support for the importance of meta-information in conversational search. We further identify conversational search datasets that are suitable for a deeper inspection with regard to meta-information, namely, Spoken Conversational Search and Microsoft Information-Seeking Conversations. A quantitative data analysis demonstrates the practical significance of meta-information in information-seeking conversations, whereas a qualitative analysis shows the effects of exchanging different types of meta-information. Finally, we discuss practical applications and challenges of meta-information in conversational search, including a case study of VERSE, an existing search system for the visually impaired.
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, Dongding, Jian Wang, and Wenjie Li. "COLA: Improving Conversational Recommender Systems by Collaborative Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4462–70. http://dx.doi.org/10.1609/aaai.v37i4.25567.

Full text
Abstract:
Conversational recommender systems (CRS) aim to employ natural language conversations to suggest suitable products to users. Understanding user preferences for prospective items and learning efficient item representations are crucial for CRS. Despite various attempts, earlier studies mostly learned item representations based on individual conversations, ignoring item popularity embodied among all others. Besides, they still need support in efficiently capturing user preferences since the information reflected in a single conversation is limited. Inspired by collaborative filtering, we propose a collaborative augmentation (COLA) method to simultaneously improve both item representation learning and user preference modeling to address these issues. We construct an interactive user-item graph from all conversations, which augments item representations with user-aware information, i.e., item popularity. To improve user preference modeling, we retrieve similar conversations from the training corpus, where the involved items and attributes that reflect the user's potential interests are used to augment the user representation through gate control. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our method. Our code and data are available at https://github.com/DongdingLin/COLA.
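The gate-control step described above can be pictured with a short PyTorch sketch: a learned gate mixes the conversation-based user representation with the representation built from retrieved similar conversations. This is an illustrative reading of the idea under assumed tensor shapes, not the authors' released code (which is linked above).

    import torch
    import torch.nn as nn

    class GatedAugmentation(nn.Module):
        """Fuse a conversation-based user vector with a retrieval-augmented one."""
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Linear(2 * dim, dim)

        def forward(self, conv_repr, retrieved_repr):
            # g decides, per dimension, how much to trust the current conversation
            # versus the evidence retrieved from similar training conversations.
            g = torch.sigmoid(self.gate(torch.cat([conv_repr, retrieved_repr], dim=-1)))
            return g * conv_repr + (1 - g) * retrieved_repr

    fuse = GatedAugmentation(dim=128)
    user_repr = fuse(torch.randn(4, 128), torch.randn(4, 128))  # batch of 4 users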
APA, Harvard, Vancouver, ISO, and other styles
5

Kum, Junyeong, and Myungho Lee. "Can Gestural Filler Reduce User-Perceived Latency in Conversation with Digital Humans?" Applied Sciences 12, no. 21 (October 29, 2022): 10972. http://dx.doi.org/10.3390/app122110972.

Full text
Abstract:
The demand for a conversational system with digital humans has increased with the development of artificial intelligence. Latency can occur in such conversational systems because of natural language processing and network issues, which can deteriorate the user’s performance and the availability of the systems. There have been attempts to mitigate user-perceived latency by using conversational fillers in human–agent interaction and human–robot interaction. However, non-verbal cues, such as gestures, have received less attention in such attempts, despite their essential roles in communication. Therefore, we designed gestural fillers for the digital humans. This study examined the effects of whether the conversation type and gesture filler matched or not. We also compared the effects of the gestural fillers with conversational fillers. The results showed that the gestural fillers mitigate user-perceived latency and affect the willingness, impression, competence, and discomfort in conversations with digital humans.
APA, Harvard, Vancouver, ISO, and other styles
6

Elvir, Miguel, Avelino J. Gonzalez, Christopher Walls, and Bryan Wilder. "Remembering a Conversation – A Conversational Memory Architecture for Embodied Conversational Agents." Journal of Intelligent Systems 26, no. 1 (January 1, 2017): 1–21. http://dx.doi.org/10.1515/jisys-2015-0094.

Full text
Abstract:
This paper addresses the role of conversational memory in Embodied Conversational Agents (ECAs). It describes an investigation into developing such a memory architecture and integrating it into an ECA. ECAs are virtual agents whose purpose is to engage in conversations with human users, typically through natural language speech. While several works in the literature seek to produce viable ECA dialog architectures, only a few authors have addressed the episodic memory architectures in conversational agents and their role in enhancing their intelligence. In this work, we propose, implement, and test a unified episodic memory architecture for ECAs. We describe a process that determines the prevalent contexts in the conversations obtained from the interactions. The process presented demonstrates the use of multiple techniques to extract and store relevant snippets from long conversations, most of whose contents are unremarkable and need not be remembered. The mechanisms used to store, retrieve, and recall episodes from previous conversations are presented and discussed. Finally, we test our episodic memory architecture to assess its effectiveness. The results indicate moderate success in some aspects of the memory-enhanced ECAs, as well as some work still to be done in other aspects.
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Ting-Hao, Walter Lasecki, Amos Azaria, and Jeffrey Bigham. ""Is There Anything Else I Can Help You With?" Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 79–88. http://dx.doi.org/10.1609/hcomp.v4i1.13292.

Full text
Abstract:
Intelligent conversational assistants, such as Apple's Siri, Microsoft's Cortana, and Amazon's Echo, have quickly become a part of our digital life. However, these assistants have major limitations, which prevent users from conversing with them as they would with human dialog partners. This limits our ability to observe how users really want to interact with the underlying system. To address this problem, we developed a crowd-powered conversational assistant, Chorus, and deployed it to see how users and workers would interact together when mediated by the system. Chorus sophisticatedly converses with end users over time by recruiting workers on demand, who in turn decide what might be the best response for each user sentence. Over the first month of our deployment, 59 users held conversations with Chorus during 320 conversational sessions. In this paper, we present an account of Chorus' deployment, with a focus on four challenges: (i) identifying when conversations are over, (ii) malicious users and workers, (iii) on-demand recruiting, and (iv) settings in which consensus is not enough. Our observations could assist the deployment of crowd-powered conversation systems and crowd-powered systems in general.
APA, Harvard, Vancouver, ISO, and other styles
8

Ford, Nigel. "“Conversational” information systems." Journal of Documentation 61, no. 3 (June 2005): 362–84. http://dx.doi.org/10.1108/00220410510598535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ren, Pengjie, Zhumin Chen, Zhaochun Ren, Evangelos Kanoulas, Christof Monz, and Maarten De Rijke. "Conversations with Search Engines: SERP-based Conversational Response Generation." ACM Transactions on Information Systems 39, no. 4 (October 31, 2021): 1–29. http://dx.doi.org/10.1145/3432726.

Full text
Abstract:
In this article, we address the problem of answering complex information needs by conducting conversations with search engines, in the sense that users can express their queries in natural language and directly receive the information they need from a short system response in a conversational manner. Recently, there have been some attempts towards a similar goal, e.g., studies on Conversational Agents (CAs) and Conversational Search (CS). However, they either do not address complex information needs in search scenarios or they are limited to the development of conceptual frameworks and/or laboratory-based user studies. We pursue two goals in this article: (1) the creation of a suitable dataset, the Search as a Conversation (SaaC) dataset, for the development of pipelines for conversations with search engines, and (2) the development of a state-of-the-art pipeline for conversations with search engines, Conversations with Search Engines (CaSE), using this dataset. SaaC is built based on a multi-turn conversational search dataset, where we further employ workers from a crowdsourcing platform to summarize each relevant passage into a short, conversational response. CaSE enhances the state-of-the-art by introducing a supporting token identification module and a prior-aware pointer generator, which enables us to generate more accurate responses. We carry out experiments to show that CaSE is able to outperform strong baselines. We also conduct extensive analyses on the SaaC dataset to show where there is room for further improvement beyond CaSE. Finally, we release the SaaC dataset and the code for CaSE and all models used for comparison to facilitate future research on this topic.
APA, Harvard, Vancouver, ISO, and other styles
10

Yan, Rui, Weiheng Liao, Dongyan Zhao, and Ji-Rong Wen. "Multi-Response Awareness for Retrieval-Based Conversations: Respond with Diversity via Dynamic Representation Learning." ACM Transactions on Information Systems 39, no. 4 (October 31, 2021): 1–29. http://dx.doi.org/10.1145/3470450.

Full text
Abstract:
Conversational systems now attract great attention due to their promising potential and commercial values. To build a conversational system with moderate intelligence is challenging and requires big (conversational) data, as well as interdisciplinary techniques. Thanks to the prosperity of the Web, the massive data available greatly facilitate data-driven methods such as deep learning for human-computer conversational systems. In general, retrieval-based conversational systems apply various matching schema between query utterances and responses, but the classic retrieval paradigm suffers from prominent weakness for conversations: the system finds similar responses given a particular query. For real human-to-human conversations, on the contrary, responses can be greatly different yet all are possibly appropriate. The observation reveals the diversity phenomenon in conversations. In this article, we ascribe the lack of conversational diversity to the reason that the query utterances are statically modeled regardless of candidate responses through traditional methods. To this end, we propose a dynamic representation learning strategy that models the query utterances and different response candidates in an interactive way. To be more specific, we propose a Respond-with-Diversity model augmented by the memory module interacting with both the query utterances and multiple candidate responses. Hence, we obtain dynamic representations for the input queries conditioned on different response candidates. We frame the model as an end-to-end learnable neural network. In the experiments, we demonstrate the effectiveness of the proposed model by achieving a good appropriateness score and much better diversity in retrieval-based conversations between humans and computers.
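To illustrate what a dynamic, response-conditioned query representation means in practice, the following PyTorch sketch scores a (query, candidate) pair by letting the query tokens attend over the candidate tokens before pooling, so the query vector differs for each candidate. It is a simplification under assumed shapes and omits the memory module the paper actually proposes.

    import torch
    import torch.nn as nn

    class CandidateConditionedScorer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.out = nn.Linear(dim, 1)

        def forward(self, query_tokens, cand_tokens):
            # query_tokens: (Lq, dim), cand_tokens: (Lc, dim)
            attn = torch.softmax(query_tokens @ cand_tokens.T, dim=-1)  # (Lq, Lc)
            conditioned = attn @ cand_tokens          # query's view of this candidate
            pooled = (query_tokens + conditioned).mean(dim=0)
            return self.out(pooled)                   # matching score for this candidate

    scorer = CandidateConditionedScorer(dim=64)
    score = scorer(torch.randn(10, 64), torch.randn(12, 64))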
APA, Harvard, Vancouver, ISO, and other styles
11

Lipani, Aldo, Ben Carterette, and Emine Yilmaz. "How Am I Doing?: Evaluating Conversational Search Systems Offline." ACM Transactions on Information Systems 39, no. 4 (October 31, 2021): 1–22. http://dx.doi.org/10.1145/3451160.

Full text
Abstract:
As conversational agents like Siri and Alexa gain in popularity and use, conversation is becoming a more and more important mode of interaction for search. Conversational search shares some features with traditional search, but differs in some important respects: conversational search systems are less likely to return ranked lists of results (a SERP), more likely to involve iterated interactions, and more likely to feature longer, well-formed user queries in the form of natural language questions. Because of these differences, traditional methods for search evaluation (such as the Cranfield paradigm) do not translate easily to conversational search. In this work, we propose a framework for offline evaluation of conversational search, which includes a methodology for creating test collections with relevance judgments, an evaluation measure based on a user interaction model, and an approach to collecting user interaction data to train the model. The framework is based on the idea of “subtopics”, often used to model novelty and diversity in search and recommendation, and the user model is similar to the geometric browsing model introduced by RBP and used in ERR. As far as we know, this is the first work to combine these ideas into a comprehensive framework for offline evaluation of conversational search.
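The user model referred to here is geometric: after each system turn the user continues with a fixed probability, as in RBP. A minimal sketch of such an expected-utility measure over per-turn gains (for instance, the fraction of new subtopics covered at each turn) is shown below; the paper's actual measure and parameters may differ.

    def geometric_turn_utility(gains, p=0.8):
        """Expected utility of a conversation under an RBP-like user model.

        gains: per-turn gain values, e.g. the share of new subtopics covered.
        p: probability that the user continues to the next turn.
        """
        return (1 - p) * sum(g * (p ** k) for k, g in enumerate(gains))

    print(geometric_turn_utility([1.0, 0.5, 0.25]))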
APA, Harvard, Vancouver, ISO, and other styles
12

McTear, Michael. "Conversational AI: Dialogue Systems, Conversational Agents, and Chatbots." Synthesis Lectures on Human Language Technologies 13, no. 3 (October 30, 2020): 1–251. http://dx.doi.org/10.2200/s01060ed1v01y202010hlt048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Arabshahi, Forough, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, and Tom Mitchell. "Conversational Neuro-Symbolic Commonsense Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (May 18, 2021): 4902–11. http://dx.doi.org/10.1609/aaai.v35i6.16623.

Full text
Abstract:
In order for conversational AI systems to hold more natural and broad-ranging conversations, they will require much more commonsense, including the ability to identify unstated presumptions of their conversational partners. For example, in the command "If it snows at night then wake me up early because I don't want to be late for work" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that they wish to be woken only if it snows enough to cause traffic slowdowns. We consider here the problem of understanding such imprecisely stated natural language commands given in the form of if-(state), then-(action), because-(goal) statements. More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state (perhaps elaborated by making the implicit presumptions explicit). We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We present a neuro-symbolic theorem prover that extracts multi-hop reasoning chains, and apply it to this problem. Furthermore, to accommodate the reality that current AI commonsense systems lack full coverage, we also present an interactive conversational framework built on our neuro-symbolic system, that conversationally evokes commonsense knowledge from humans to complete its reasoning chains.
APA, Harvard, Vancouver, ISO, and other styles
14

Kandoi, Raunak, Deepali Dixit, Mihul Tyagi, and Raghuraj Singh Yadav. "Conversational AI." International Journal for Research in Applied Science and Engineering Technology 12, no. 3 (March 31, 2024): 769–75. http://dx.doi.org/10.22214/ijraset.2024.58787.

Full text
Abstract:
Conversational AI systems are becoming increasingly popular across many industries and are transforming the way people interact with technology. For a more authentic, human-like connection and a smooth user experience, these systems should combine text-based interactions with multimodal capabilities. The authors of this work suggest a new approach to improving conversational AI systems' usability by combining speech and visual analysis. By combining visual and auditory processing capabilities, AI systems can better understand human inquiries and instructions. Both visual data and speech can be better understood with the use of computer vision algorithms and natural language processing techniques, respectively. Conversational AI systems can provide more accurate and tailored replies by integrating many modalities to better grasp human intent and context. The development of multimodal conversational AI presents a significant difficulty in ensuring the smooth integration of voice and visual processing units. A strong architectural design and advanced algorithms are necessary for the simultaneous synchronization and comprehension of data from several modalities in real-time. The system needs to keep track of the conversation's context even when it switches between different forms of communication so it can keep providing fair and relevant responses all through the engagement. Customization is key to making multimodal conversational AI better for users. Based on user data and preferences, the system may tailor interactions to offer more relevant ideas and support. Users are more invested in the AI system over time, and they have a better experience overall because of customization. Ensuring the privacy and security of important audiovisual data is of the utmost importance while building multimodal conversational AI. Strong encryption, anonymization technologies, and compliance with data protection regulations are vital for user privacy and system confidence. Continuous improvement is key to the success of multimodal conversational AI systems. The feedback from users can help the developers improve the system and add new features. Thanks to this iterative technique, the AI system stays flexible and can adjust to changing consumer preferences. By combining voice and picture processing, conversational AI systems have a great deal of promise for improving the user experience. Through the integration of visual and auditory signals, these systems have the ability to comprehend user intent more accurately, provide customized experiences, and completely transform the way humans engage with technology.
APA, Harvard, Vancouver, ISO, and other styles
15

Elbert, Mary, Daniel A. Dinnsen, Paula Swartzlander, and Steven B. Chin. "Generalization to Conversational Speech." Journal of Speech and Hearing Disorders 55, no. 4 (November 1990): 694–99. http://dx.doi.org/10.1044/jshd.5504.694.

Full text
Abstract:
Although changes in children's phonological systems due to treatment have been documented in single-word testing, changes in conversational speech are less well known. Single-word and conversation samples were analyzed for 10 phonologically disordered children, before and after treatment and 3 months later. Results suggest that for most of the children, there were system-wide changes in both single words and in conversational speech. It appears that many phonologically disordered children are able to extend their correct production to conversation without direct treatment on spontaneous speech.
APA, Harvard, Vancouver, ISO, and other styles
16

Thomas, Paul, Mary Czerwinski, Daniel McDuff, and Nick Craswell. "Theories of Conversation for Conversational IR." ACM Transactions on Information Systems 39, no. 4 (October 31, 2021): 1–23. http://dx.doi.org/10.1145/3439869.

Full text
Abstract:
Conversational information retrieval is a relatively new and fast-developing research area, but conversation itself has been well studied for decades. Researchers have analysed linguistic phenomena such as structure and semantics but also paralinguistic features such as tone, body language, and even the physiological states of interlocutors. We tend to treat computers as social agents—especially if they have some humanlike features in their design—and so work from human-to-human conversation is highly relevant to how we think about the design of human-to-computer applications. In this article, we summarise some salient past work, focusing on social norms; structures; and affect, prosody, and style. We examine social communication theories briefly as a review to see what we have learned about how humans interact with each other and how that might pertain to agents and robots. We also discuss some implications for research and design of conversational IR systems.
APA, Harvard, Vancouver, ISO, and other styles
17

Vuong, Tung, Salvatore Andolina, Giulio Jacucci, and Tuukka Ruotsalo. "Spoken Conversational Context Improves Query Auto-completion in Web Search." ACM Transactions on Information Systems 39, no. 3 (May 6, 2021): 1–32. http://dx.doi.org/10.1145/3447875.

Full text
Abstract:
Web searches often originate from conversations in which people engage before they perform a search. Therefore, conversations can be a valuable source of context with which to support the search process. We investigate whether spoken input from conversations can be used as a context to improve query auto-completion. We model the temporal dynamics of the spoken conversational context preceding queries and use these models to re-rank the query auto-completion suggestions. Data were collected from a controlled experiment and comprised conversations among 12 participant pairs conversing about movies or traveling. Search query logs during the conversations were recorded and temporally associated with the conversations. We compared the effects of spoken conversational input in four conditions: a control condition without contextualization; an experimental condition with the model using search query logs; an experimental condition with the model using spoken conversational input; and an experimental condition with the model using both search query logs and spoken conversational input. We show the advantage of combining the spoken conversational context with the Web-search context for improved retrieval performance. Our results suggest that spoken conversations provide a rich context for supporting information searches beyond current user-modeling approaches.
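One way to picture the contextualization is as a re-ranking step that interpolates the auto-completion engine's base score with a similarity between the candidate completion and the recent spoken transcript. The interpolation weight and the simple term-overlap similarity below are illustrative assumptions, not the temporal model used in the paper.

    def rerank_completions(candidates, spoken_context, alpha=0.7):
        """candidates: list of (completion, base_score) pairs.
        spoken_context: recent transcript of the surrounding conversation."""
        context_terms = set(spoken_context.lower().split())

        def contextual_score(completion, base_score):
            terms = set(completion.lower().split())
            overlap = len(terms & context_terms) / max(len(terms), 1)
            return alpha * base_score + (1 - alpha) * overlap

        return sorted(candidates, key=lambda c: contextual_score(*c), reverse=True)

    ranked = rerank_completions([("star wars cast", 0.6), ("star trek movies", 0.5)],
                                "we were talking about the new trek movies")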
APA, Harvard, Vancouver, ISO, and other styles
18

Hans, Sikander, Balwinder Kumar, Vivek Parihar, and Sukhpreet Singh. "Human-AI Collaboration: Understanding User Trust in ChatGPT Conversations." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 01 (January 8, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem27929.

Full text
Abstract:
This research paper delves into the critical dimension of Human-AI Collaboration, with a specific focus on unraveling the intricacies of user trust in ChatGPT conversations. In an era marked by increasing AI integration into various aspects of human life, understanding and fostering user trust in conversational AI systems like ChatGPT is essential for effective collaboration. The study employs a comprehensive approach, investigating metrics for trust measurement, analyzing user experiences, and exploring the factors that influence trust. By examining the evolving impact of trust on collaboration and conducting comparative analyses with other conversational AI models, the research aims to provide valuable insights. Ultimately, the paper not only contributes to a nuanced understanding of user trust in ChatGPT conversations but also offers practical recommendations for developers and stakeholders to enhance the collaborative potential of AI systems in real-world applications. Keywords: Human-AI Collaboration, ChatGPT Conversations, Conversational AI, Trust Metrics, User Trust.
APA, Harvard, Vancouver, ISO, and other styles
19

Vanderveken, Daniel. "Towards a Formal Pragmatics of Discourse." International Review of Pragmatics 5, no. 1 (2013): 34–69. http://dx.doi.org/10.1163/18773109-13050102.

Full text
Abstract:
Could we enrich speech-act theory to deal with discourse? Wittgenstein and Searle pointed out difficulties. Most conversations lack a conversational purpose, they require collective intentionality, their background is indefinitely open, irrelevant and infelicitous utterances do not prevent conversations from continuing, etc. Like Wittgenstein and Searle I am sceptical about the possibility of a general theory of all kinds of language-games. In my view, the single primary purpose of discourse pragmatics is to analyse the structure and dynamics of language-games whose type is provided with an internal conversational goal. Such games are indispensable to any kind of discourse. They have a descriptive, deliberative, declaratory or expressive conversational goal corresponding to a possible direction of fit between words and things. Logic can analyse felicity-conditions of such language-games because they are conducted according to systems of constitutive rules. Speakers often speak non-literally or non-seriously. The real units of conversation are therefore attempted illocutions whether literal, serious or not. I will show how to construct speaker-meaning from sentence-meaning, conversational background and conversational maxims. I agree with Montague that we need the resources of formalisms (proof, model- and game-theories) and of mathematical and philosophical logic in pragmatics. I will explain how to further develop propositional and illocutionary logics, the logic of attitudes and of action in order to characterize our ability to converse. I will also compare my approach to others (Austin, Belnap, Grice, Montague, Searle, Sperber and Wilson, Kamp, Wittgenstein) as regards hypotheses, methodology and other issues.
APA, Harvard, Vancouver, ISO, and other styles
20

Adewumi, Tosin, Foteini Liwicki, and Marcus Liwicki. "Vector Representations of Idioms in Conversational Systems." Sci 4, no. 4 (September 29, 2022): 37. http://dx.doi.org/10.3390/sci4040037.

Full text
Abstract:
In this study, we demonstrate that an open-domain conversational system trained on idioms or figurative language generates more fitting responses to prompts containing idioms. Idioms are a part of everyday speech in many languages and across many cultures, but they pose a great challenge for many natural language processing (NLP) systems that involve tasks such as information retrieval (IR), machine translation (MT), and conversational artificial intelligence (AI). We utilized the Potential Idiomatic Expression (PIE)-English idiom corpus for the two tasks that we investigated: classification and conversation generation. We achieved a state-of-the-art (SoTA) result of a 98% macro F1 score on the classification task by using the SoTA T5 model. We experimented with three instances of the SoTA dialogue model—the Dialogue Generative Pre-trained Transformer (DialoGPT)—for conversation generation. Their performances were evaluated by using the automatic metric, perplexity, and a human evaluation. The results showed that the model trained on the idiom corpus generated more fitting responses to prompts containing idioms 71.9% of the time in comparison with a similar model that was not trained on the idiom corpus. We have contributed the model checkpoint/demo/code to the HuggingFace hub for public access.
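Perplexity, the automatic metric mentioned here, can be computed for a DialoGPT-style model roughly as follows. This sketch uses the stock Hugging Face checkpoint rather than the authors' idiom-trained models (whose checkpoints they contributed to the HuggingFace hub).

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
    model.eval()

    def perplexity(text: str) -> float:
        # Perplexity is the exponential of the mean token-level negative log-likelihood.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    print(perplexity("A stitch in time saves nine, so do not put off the repair."))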
APA, Harvard, Vancouver, ISO, and other styles
21

Thompson, C. A., M. H. Goker, and P. Langley. "A Personalized System for Conversational Recommendations." Journal of Artificial Intelligence Research 21 (March 1, 2004): 393–428. http://dx.doi.org/10.1613/jair.1318.

Full text
Abstract:
Searching for and making decisions about information is becoming increasingly difficult as the amount of information and number of choices increases. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system.
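The interactive narrowing that the Adaptive Place Advisor performs can be sketched as a loop that asks about one item attribute at a time and filters the candidate set accordingly. The ask callback below stands in for the dialogue manager and the long-term user model, which are the paper's actual contributions; preference-based attribute ordering is omitted.

    def conversational_filter(items, attributes, ask):
        """Narrow a candidate set by asking about one attribute per turn.

        items: list of dicts, e.g. {"cuisine": "thai", "price": "cheap", ...}
        attributes: attribute names to ask about, in order
        ask: callback ask(attribute, options) -> chosen option
        """
        candidates = list(items)
        for attr in attributes:
            if len(candidates) <= 1:
                break  # a single satisfactory item has been found
            options = sorted({item[attr] for item in candidates})
            if len(options) == 1:
                continue  # no point asking when all remaining items agree
            choice = ask(attr, options)
            candidates = [item for item in candidates if item[attr] == choice]
        return candidates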
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Yongqi, Wenjie Li, and Liqiang Nie. "Dynamic Graph Reasoning for Conversational Open-Domain Question Answering." ACM Transactions on Information Systems 40, no. 4 (October 31, 2022): 1–24. http://dx.doi.org/10.1145/3498557.

Full text
Abstract:
In recent years, conversational agents have provided a natural and convenient access to useful information in people’s daily life, along with a broad and new research topic, conversational question answering (QA). On the shoulders of conversational QA, we study the conversational open-domain QA problem, where users’ information needs are presented in a conversation and exact answers are required to extract from the Web. Despite its significance and value, building an effective conversational open-domain QA system is non-trivial due to the following challenges: (1) precisely understand conversational questions based on the conversation context; (2) extract exact answers by capturing the answer dependency and transition flow in a conversation; and (3) deeply integrate question understanding and answer extraction. To address the aforementioned issues, we propose an end-to-end Dynamic Graph Reasoning approach to Conversational open-domain QA (DGRCoQA for short). DGRCoQA comprises three components, i.e., a dynamic question interpreter (DQI), a graph reasoning enhanced retriever (GRR), and a typical Reader, where the first one is developed to understand and formulate conversational questions while the other two are responsible for extracting an exact answer from the Web. In particular, DQI understands conversational questions by utilizing the QA context, sourcing from predicted answers returned by the Reader, to dynamically attend to the most relevant information in the conversation context. Afterwards, GRR attempts to capture the answer flow and select the most possible passage that contains the answer by reasoning answer paths over a dynamically constructed context graph. Finally, the Reader, a reading comprehension model, predicts a text span from the selected passage as the answer. DGRCoQA demonstrates its strength in the extensive experiments conducted on a benchmark dataset. It significantly outperforms the existing methods and achieves the state-of-the-art performance.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Keyu, and Shiliang Sun. "CP-Rec: Contextual Prompting for Conversational Recommender Systems." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12635–43. http://dx.doi.org/10.1609/aaai.v37i11.26487.

Full text
Abstract:
The conversational recommender system (CRS) aims to provide high-quality recommendations through interactive dialogues. However, previous CRS models have no effective mechanisms for task planning and topic elaboration, and thus they hardly maintain coherence in multi-task recommendation dialogues. Inspired by recent advances in prompt-based learning, we propose a novel contextual prompting framework for dialogue management, which optimizes prompts based on context, topics, and user profiles. Specifically, we develop a topic controller to sequentially plan the subtasks, and a prompt search module to construct context-aware prompts. We further adopt external knowledge to enrich user profiles and make knowledge-aware recommendations. Incorporating these techniques, we propose a conversational recommender system with contextual prompting, namely CP-Rec. Experimental results demonstrate that it achieves state-of-the-art recommendation accuracy and generates more coherent and informative conversations.
APA, Harvard, Vancouver, ISO, and other styles
24

B, Mr DHANUSH. "CHATBOT USING LARGE LANGUAGE MODEL." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 14, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34001.

Full text
Abstract:
The concept of Natural Language Processing has seen a remarkable advancement in the recent years. This remarkable advancement was particularly with the development of Large Language Models (LLM). Large Language Models are used to develop a human like conversations. This LLM is a part of Natural Language Processing which focuses on enabling computers to understand, interpret, and generate human language. The existing system of chatbots does not generate human like responses. The proposed system of chatbots uses the power of Large Language Models to generate more human like responses, providing the conversation in a natural way. By genereating human like respones, it will be in a natural way for the user. To enhance user experience, the chatbot uses a dynamic learning mechanism, by which it continuously adapt to user preferences and evolving conversational patterns. This system uses feedbacks from the users to refine its responses everytime.Moreover, the chatbot is designed with a multi-turn conversational context awareness, allowing it to maintain coherence and relevance throughout extended dialogues.The effectiveness of the proposed chatbot is evaluated through user testing, comparing its performance against traditional rule-based chatbots and existing conversational agents. This report explains about the usage of Large Language Models in the design and implementation of conversational chatbots. The outcomes of this research contribute to the advancement of intelligent chatbot systems, demonstrating the potential of large language models to significantly enhance conversational AI applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Coppola, Riccardo, and Luca Ardito. "Quality Assessment Methods for Textual Conversational Interfaces: A Multivocal Literature Review." Information 12, no. 11 (October 21, 2021): 437. http://dx.doi.org/10.3390/info12110437.

Full text
Abstract:
The evaluation and assessment of conversational interfaces is a complex task since such software products are challenging to validate through traditional testing approaches. We conducted a systematic Multivocal Literature Review (MLR), on five different literature sources, to provide a view on quality attributes, evaluation frameworks, and evaluation datasets proposed to provide aid to the researchers and practitioners of the field. We came up with a final pool of 118 contributions, including grey (35) and white literature (83). We categorized 123 different quality attributes and metrics under ten different categories and four macro-categories: Relational, Conversational, User-Centered and Quantitative attributes. While Relational and Conversational attributes are most commonly explored by the scientific literature, we observed a predominance of User-Centered Attributes in industrial literature. We also identified five different academic frameworks/tools to automatically compute sets of metrics, and 28 datasets (subdivided into seven different categories based on the type of data contained) that can produce conversations for the evaluation of conversational interfaces. Our analysis of the literature highlights that a high number of qualitative and quantitative attributes are available in the literature to evaluate the performance of conversational interfaces. Our categorization can serve as a valid entry point for researchers and practitioners to select the proper functional and non-functional aspects to be evaluated for their products.
APA, Harvard, Vancouver, ISO, and other styles
26

Pieraccini, Roberto, Krishna Dayanidhi, Jonathan Bloom, Jean-Gui Dahan, Michael Phillips, Bryan R. Goodman, and K. Venkatesh Prasad. "Multimodal conversational systems for automobiles." Communications of the ACM 47, no. 1 (January 1, 2004): 47. http://dx.doi.org/10.1145/962081.962104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sugiyama, Hiroaki, Ryuichiro Higashinaka, and Toyomi Meguro. "Towards User-friendly Conversational Systems." NTT Technical Review 14, no. 11 (November 2016): 25–29. http://dx.doi.org/10.53829/ntr201611fa4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Goh, Ong Sing, Chun Che Fung, Kok Wai Wong, and Arnold Depickere. "Embodied Conversational Agents for H5N1 Pandemic Crisis." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 3 (March 20, 2007): 282–88. http://dx.doi.org/10.20965/jaciii.2007.p0282.

Full text
Abstract:
This paper presents a novel framework for modeling an embodied conversational agent for crisis communication, focusing on the H5N1 pandemic crisis. Our system aims to cope with the most challenging issue: maintaining an engaging yet convincing conversation. What primarily distinguishes our system from other conversational agent systems is that the human-computer conversation takes place within the context of the H5N1 pandemic crisis. A Crisis Communication Network, called CCNet, is established based on a novel algorithm incorporating a natural language query and an embodied conversational agent simultaneously. Another significant contribution of our work is the development of an Automated Knowledge Extraction Agent (AKEA) to capitalize on the tremendous amount of data that is now available online to support our experiments. What makes our system differ from typical conversational agents is the attempt to move away from strictly task-oriented dialogue.
APA, Harvard, Vancouver, ISO, and other styles
29

Trippas, Johanne R. "Spoken conversational search." ACM SIGIR Forum 53, no. 2 (December 2019): 106–7. http://dx.doi.org/10.1145/3458553.3458570.

Full text
Abstract:
Speech-based web search where no keyboard or screens are available to present search engine results is becoming ubiquitous, mainly through the use of mobile devices and intelligent assistants such as Apple's HomePod, Google Home, or Amazon Alexa. Currently, these intelligent assistants do not maintain a lengthy information exchange. They do not track context or present information suitable for an audio-only channel, and do not interact with the user in a multi-turn conversation. Understanding how users would interact with such an audio-only interaction system in multi-turn information seeking dialogues, and what users expect from these new systems, are unexplored in search settings. In particular, the knowledge on how to present search results over an audio-only channel and which interactions take place in this new search paradigm is crucial to incorporate while producing usable systems [9, 2, 8]. Thus, constructing insight into the conversational structure of information seeking processes provides researchers and developers opportunities to build better systems while creating a research agenda and directions for future advancements in Spoken Conversational Search (SCS). Such insight has been identified as crucial in the growing SCS area. At the moment, limited understanding has been acquired for SCS, for example, how the components interact, how information should be presented, or how task complexity impacts the interactivity or discourse behaviours. We aim to address these knowledge gaps. This thesis outlines the breadth of SCS and forms a manifesto advancing this highly interactive search paradigm with new research directions including prescriptive notions for implementing identified challenges [3]. We investigate SCS through quantitative and qualitative designs: (i) log and crowdsourcing experiments investigating different interaction and results presentation styles [1, 6], and (ii) the creation and analysis of the first SCS dataset and annotation schema through designing and conducting an observational study of information seeking dialogues [11, 5, 7]. We propose new research directions and design recommendations based on the triangulation of three different datasets and methods: the log analysis to identify practical challenges and limitations of existing systems while informing our future observational study; the crowdsourcing experiment to validate a new experimental setup for future search engine results presentation investigations; and the observational study to establish the SCS dataset (SCSdata), form the first Spoken Conversational Search Annotation Schema (SCoSAS), and study interaction behaviours for different task complexities. Our principle contributions are based on our observational study for which we developed a novel methodology utilising a qualitative design [10]. We show that existing information seeking models may be insufficient for the new SCS search paradigm because they inadequately capture meta-discourse functions and the system's role as an active agent. Thus, the results indicate that SCS systems have to support the user through discourse functions and be actively involved in the users' search process. This suggests that interactivity between the user and system is necessary to overcome the increased complexity which has been imposed upon the user and system by the constraints of the audio-only communication channel [4]. We then present the first schematic model for SCS which is derived from the SCoSAS through the qualitative analysis of the SCSdata. 
In addition, we demonstrate the applicability of our dataset by investigating the effect of task complexity on interaction and discourse behaviour. Lastly, we present SCS design recommendations and outline new research directions for SCS. The implications of our work are practical, conceptual, and methodological. The practical implications include the development of the SCSdata, the SCoSAS, and SCS design recommendations. The conceptual implications include the development of a schematic SCS model which identifies the need for increased interactivity and pro-activity to overcome the audio-imposed complexity in SCS. The methodological implications include the development of the crowdsourcing framework, and techniques for developing and analysing SCS datasets. In summary, we believe that our findings can guide researchers and developers to help improve existing interactive systems which are less constrained, such as mobile search, as well as more constrained systems such as SCS systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Lubis, Nurul, Michael Heck, Carel Van Niekerk, and Milica Gasic. "Adaptable Conversational Machines." AI Magazine 41, no. 3 (September 14, 2020): 28–44. http://dx.doi.org/10.1609/aimag.v41i3.5322.

Full text
Abstract:
In recent years we have witnessed a surge in machine learning methods that provide machines with conversational abilities. Most notably, neural-network–based systems have set the state of the art for difficult tasks such as speech recognition, semantic understanding, dialogue management, language generation, and speech synthesis. Still, unlike for the ancient game of Go for instance, we are far from achieving human-level performance in dialogue. The reasons for this are numerous. One property of human–human dialogue that stands out is the infinite number of possibilities of expressing oneself during the conversation, even when the topic of the conversation is restricted. A typical solution to this problem was scaling-up the data. The most prominent mantra in speech and language technology has been “There is no data like more data.” However, the researchers now are focused on building smarter algorithms — algorithms that can learn efficiently from just a few examples. This is an intrinsic property of human behavior: an average human sees during their lifetime a fraction of data that we nowadays present to machines. A human can even have an intuition about a solution before ever experiencing an example solution. The human-inspired ability to adapt may just be one of the keys in pushing dialogue systems toward human performance. This article reviews advancements in dialogue systems research with a focus on the adaptation methods for dialogue modeling, and ventures to have a glance at the future of research on adaptable conversational machines.
APA, Harvard, Vancouver, ISO, and other styles
31

Reddy, Siva, Danqi Chen, and Christopher D. Manning. "CoQA: A Conversational Question Answering Challenge." Transactions of the Association for Computational Linguistics 7 (November 2019): 249–66. http://dx.doi.org/10.1162/tacl_a_00266.

Full text
Abstract:
Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa .
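The F1 score reported for CoQA is the standard token-overlap F1 between a predicted answer and the reference answer. A simplified version, omitting CoQA's answer normalization and multi-reference handling, looks like this:

    from collections import Counter

    def token_f1(prediction: str, reference: str) -> float:
        pred_tokens = prediction.lower().split()
        ref_tokens = reference.lower().split()
        common = Counter(pred_tokens) & Counter(ref_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    print(token_f1("in the garden", "the garden behind the house"))  # partial credit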
APA, Harvard, Vancouver, ISO, and other styles
32

Young, Tom, Frank Xing, Vlad Pandelea, Jinjie Ni, and Erik Cambria. "Fusing Task-Oriented and Open-Domain Dialogues in Conversational Agents." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11622–29. http://dx.doi.org/10.1609/aaai.v36i10.21416.

Full text
Abstract:
The goal of building intelligent dialogue systems has largely been separately pursued under two paradigms: task-oriented dialogue (TOD) systems, which perform task-specific functions, and open-domain dialogue (ODD) systems, which focus on non-goal-oriented chitchat. The two dialogue modes can potentially be intertwined together seamlessly in the same conversation, as easily done by a friendly human assistant. Such ability is desirable in conversational agents, as the integration makes them more accessible and useful. Our paper addresses this problem of fusing TODs and ODDs in multi-turn dialogues. Based on the popular TOD dataset MultiWOZ, we build a new dataset FusedChat, by rewriting the existing TOD turns and adding new ODD turns. This procedure constructs conversation sessions containing exchanges from both dialogue modes. It features inter-mode contextual dependency, i.e., the dialogue turns from the two modes depend on each other. Rich dependency patterns such as co-reference and ellipsis are included. The new dataset, with 60k new human-written ODD turns and 5k re-written TOD turns, offers a benchmark to test a dialogue model's ability to perform inter-mode conversations. This is a more challenging task since the model has to determine the appropriate dialogue mode and generate the response based on the inter-mode context. However, such models would better mimic human-level conversation capabilities. We evaluate two baseline models on this task, including the classification-based two-stage models and the two-in-one fused models. We publicly release FusedChat and the baselines to propel future work on inter-mode dialogue systems.
APA, Harvard, Vancouver, ISO, and other styles
33

Tian, Yingzhong, Yafei Jia, Long Li, Zongnan Huang, and Wenbin Wang. "Research on Modeling and Analysis of Generative Conversational System Based on Optimal Joint Structural and Linguistic Model." Sensors 19, no. 7 (April 8, 2019): 1675. http://dx.doi.org/10.3390/s19071675.

Full text
Abstract:
Generative conversational systems consisting of a neural network-based structural model and a linguistic model have always been considered to be an attractive area. However, conversational systems tend to generate single-turn responses with a lack of diversity and informativeness. For this reason, the conversational system method is further developed by modeling and analyzing the joint structural and linguistic model, as presented in the paper. Firstly, we establish a novel dual-encoder structural model based on the new Convolutional Neural Network architecture and strengthened attention with intention. It is able to effectively extract the features of variable-length sequences and then mine their deep semantic information. Secondly, a linguistic model combining the maximum mutual information with the foolish punishment mechanism is proposed. Thirdly, the conversational system for the joint structural and linguistic model is observed and discussed. Then, to validate the effectiveness of the proposed method, some different models are tested, evaluated and compared with respect to Response Coherence, Response Diversity, Length of Conversation and Human Evaluation. As these comparative results show, the proposed method is able to effectively improve the response quality of the generative conversational system.
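The maximum mutual information criterion mentioned here is usually applied as a re-ranking objective of the form score(T | S) = log p(T | S) − λ log p(T), which penalizes bland, generic responses that any context would receive. A small sketch of that re-ranking step is below; the paper's exact combination with its punishment mechanism is not reproduced here.

    def mmi_rerank(candidates, lam=0.5):
        """candidates: list of (response, logp_given_source, logp_language_model).

        The conditional likelihood rewards relevance to the source utterance;
        subtracting lam * unconditional likelihood demotes generic responses.
        """
        return sorted(candidates, key=lambda c: c[1] - lam * c[2], reverse=True)

    best = mmi_rerank([("I don't know.", -4.0, -2.0),
                       ("The sensor sampling rate is 50 Hz.", -5.0, -9.0)])[0][0]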
APA, Harvard, Vancouver, ISO, and other styles
34

Shawar, Bayan Abu, and Eric Steven Atwell. "Using corpora in machine-learning chatbot systems." International Journal of Corpus Linguistics 10, no. 4 (November 7, 2005): 489–516. http://dx.doi.org/10.1075/ijcl.10.4.06sha.

Full text
Abstract:
A chatbot is a machine conversation system which interacts with human users via natural conversational language. Software to machine-learn conversational patterns from a transcribed dialogue corpus has been used to generate a range of chatbots speaking various languages and sublanguages including varieties of English, as well as French, Arabic and Afrikaans. This paper presents a program to learn from spoken transcripts of the Dialogue Diversity Corpus of English, the Minnesota French Corpus, the Corpus of Spoken Afrikaans, the Qur'an Arabic-English parallel corpus, and the British National Corpus of English; we discuss the problems which arose during learning and testing. Two main goals were achieved from the automation process. One was the ability to generate different versions of the chatbot in different languages, bringing chatbot technology to languages with few if any NLP resources: the corpus-based learning techniques transferred straightforwardly to develop chatbots for Afrikaans and Qur'anic Arabic. The second achievement was the ability to learn a very large number of categories within a short time, saving effort and errors in doing such work manually: we generated more than one million AIML categories or conversation-rules from the BNC corpus, 20 times the size of existing AIML rule-sets, and probably the biggest AI Knowledge-Base ever.
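In its simplest form, machine-learning AIML categories from a transcribed dialogue means turning each adjacent pair of utterances into a category whose pattern is the first utterance and whose template is the reply. The toy sketch below shows only that core step; the actual system adds normalization, wildcard generalization, and handling of the corpus formats listed above.

    from xml.sax.saxutils import escape

    def transcript_to_aiml(utterances):
        """utterances: alternating speaker turns from a transcribed dialogue."""
        categories = []
        for prompt, reply in zip(utterances, utterances[1:]):
            categories.append(
                "<category>\n"
                f"  <pattern>{escape(prompt.upper())}</pattern>\n"
                f"  <template>{escape(reply)}</template>\n"
                "</category>"
            )
        return '<aiml version="1.0">\n' + "\n".join(categories) + "\n</aiml>"

    print(transcript_to_aiml(["How are you today", "Quite well, thanks for asking", "Good"]))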
APA, Harvard, Vancouver, ISO, and other styles
35

Jamil, Muhammad Bilal Ahmad, and Duryab Shahzadi. "A systematic review: A Conversational interface agent for the export business acceleration." Lahore Garrison University Research Journal of Computer Science and Information Technology 7, no. 02 (August 21, 2023): 37–49. http://dx.doi.org/10.54692/lgurjcsit.2023.0702430.

Full text
Abstract:
Conversational agents, which understand, respond to, and learn from each interaction using Automatic Speech Recognition (ASR), Natural Language Processing (NLP), Advanced Dialog Management, and Machine Learning (ML), have become more common in recent years. Conversational agents, also referred to as chatbots, are used to have real-time conversations with individuals. As a result, conversational agents are now being used in a variety of sectors, including education, healthcare, marketing, customer assistance, and entertainment. Conversational agents, which are frequently used as chatbots and virtual or AI helpers, show how computational linguistics is used in everyday life. It can be challenging to pinpoint the variables that affect the use of conversational agents for business acceleration and to defend their utility in order to enhance an export business. This paper provides a summary of the evolution of conversational agents from a straightforward model to a sophisticated intelligent system, as well as how they are applied in various practical contexts. This study contributes to the body of literature on information systems by contrasting the different conversational agent types based on the export business acceleration interface. This paper also identifies the challenges conversational applications experience today and makes recommendations for further research.
APA, Harvard, Vancouver, ISO, and other styles
36

Ganguly, Debasis, Gareth J. F. Jones, Procheta Sen, Manisha Verma, and Dipasree Pal. "Report on supporting and understanding of conversational dialogues workshop (SUD 2021) at WSDM 2021." ACM SIGIR Forum 55, no. 1 (June 2021): 1–7. http://dx.doi.org/10.1145/3476415.3476420.

Full text
Abstract:
This report describes the workshop on Supporting and Understanding of (multi-party) conversational Dialogues (SUD) organized as part of the Web Search and Data Mining conference (WSDM) 2021. The aim of the SUD workshop was to encourage researchers to investigate automated methods to analyze and understand conversations. We also discuss the release of a dataset that would be useful in IR research on conversations. The dataset was constructed to support the data challenge in the SUD workshop and its precursor event, the Retrieval from Conversational Dialogues (RCD) track at the Forum of Information Retrieval and Evaluation (FIRE) 2020.
APA, Harvard, Vancouver, ISO, and other styles
37

Asfoura, Evan, Gamal Kassem, Belal Alhuthaifi, and Fozi Belhaj. "Developing Chatbot Conversational Systems & the Future Generation Enterprise Systems." International Journal of Interactive Mobile Technologies (iJIM) 17, no. 10 (May 22, 2023): 155–75. http://dx.doi.org/10.3991/ijim.v17i10.37851.

Full text
Abstract:
Conversational technology has recently emerged as an effective way for people to communicate with smart devices such as smartphones using human language. Since their emergence, conversational systems have enabled users to perform various functions such as gathering information, conducting transactions, holding general conversations, and easily navigating web services and entertainment. They not only improve customer service by providing answers to inquiries, but also make navigation easier for people with disabilities, who can interact with a system through voice or other forms of human language. More recently, they have started to play a significant role in enterprise systems by supporting employees as they learn a newly implemented system. As virtual assistants, they contribute to better accessibility and acceptance, and also reduce the costs associated with customer service. This paper presents a chatbot system that helps employees learn how to work with a newly installed ERP system flexibly and easily, addressing one of the common problems that occur when switching to an ERP system.
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Chien-Chang, Anna Y. Q. Huang, and Stephen J. H. Yang. "A Review of AI-Driven Conversational Chatbots Implementation Methodologies and Challenges (1999–2022)." Sustainability 15, no. 5 (February 22, 2023): 4012. http://dx.doi.org/10.3390/su15054012.

Full text
Abstract:
A conversational chatbot or dialogue system is a computer program designed to simulate conversation with human users, especially over the Internet. These chatbots can be integrated into messaging apps, mobile apps, or websites, and are designed to engage in natural language conversations with users. There are also many applications in which chatbots are used for educational support to improve students’ performance during the learning cycle. The recent success of ChatGPT also encourages researchers to explore more possibilities in the field of chatbot applications. One of the main benefits of conversational chatbots is their ability to provide an instant and automated response, which can be leveraged in many application areas. Chatbots can handle a wide range of inquiries and tasks, such as answering frequently asked questions, booking appointments, or making recommendations. Modern conversational chatbots use artificial intelligence (AI) techniques, such as natural language processing (NLP) and artificial neural networks, to understand and respond to users’ input. In this study, we will explore the objectives of why chatbot systems were built and what key methodologies and datasets were leveraged to build a chatbot. Finally, the achievement of the objectives will be discussed, as well as the associated challenges and future chatbot development trends.
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Zeqiu, Ryu Parish, Hao Cheng, Sewon Min, Prithviraj Ammanabrolu, Mari Ostendorf, and Hannaneh Hajishirzi. "InSCIt: Information-Seeking Conversations with Mixed-Initiative Interactions." Transactions of the Association for Computational Linguistics 11 (May 18, 2023): 453–68. http://dx.doi.org/10.1162/tacl_a_00559.

Full text
Abstract:
In an information-seeking conversation, a user may ask questions that are under-specified or unanswerable. An ideal agent would interact by initiating different response types according to the available knowledge sources. However, most current studies either fail to or artificially incorporate such agent-side initiative. This work presents InSCIt, a dataset for Information-Seeking Conversations with mixed-initiative Interactions. It contains 4.7K user-agent turns from 805 human-human conversations where the agent searches over Wikipedia and either directly answers, asks for clarification, or provides relevant information to address user queries. The data supports two subtasks, evidence passage identification and response generation, as well as a human evaluation protocol to assess model performance. We report results of two systems based on state-of-the-art models of conversational knowledge identification and open-domain question answering. Both systems significantly underperform humans, suggesting ample room for improvement in future studies.
APA, Harvard, Vancouver, ISO, and other styles
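A hypothetical sketch of how one mixed-initiative agent turn of the kind described above might be represented in code. The field and label names are assumptions for illustration and are not the released InSCIt schema.

from dataclasses import dataclass, field
from enum import Enum

class ResponseType(Enum):
    DIRECT_ANSWER = "direct_answer"
    CLARIFICATION = "ask_clarification"
    RELEVANT_INFO = "relevant_information"

@dataclass
class AgentTurn:
    user_query: str
    evidence_passages: list[str] = field(default_factory=list)  # subtask 1: evidence identification
    response_type: ResponseType = ResponseType.DIRECT_ANSWER
    response_text: str = ""                                     # subtask 2: response generation

turn = AgentTurn(
    user_query="Which album came first?",
    evidence_passages=["Passage about the band's discography ..."],
    response_type=ResponseType.CLARIFICATION,
    response_text="Do you mean their studio albums or their live albums?",
)
print(turn.response_type.value)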
40

Chuang, Hsiu-Min, and Ding-Wei Cheng. "Conversational AI over Military Scenarios Using Intent Detection and Response Generation." Applied Sciences 12, no. 5 (February 27, 2022): 2494. http://dx.doi.org/10.3390/app12052494.

Full text
Abstract:
With the rise of artificial intelligence, conversational agents (CA) have found use in various applications in the commerce and service industries. In recent years, many conversational datasets have become publicly available, most relating to open-domain social conversations. However, it is difficult to obtain domain-specific or language-specific conversational datasets. This work focused on developing conversational systems based on a Chinese corpus over military scenarios. Soldiers need information about their surroundings and their orders to carry out missions in unfamiliar environments. Additionally, a conversational military agent helps soldiers obtain immediate and relevant responses while reducing the labor and cost of performing repetitive tasks. This paper proposes a system architecture for conversational military agents based on natural language understanding (NLU) and natural language generation (NLG). The NLU phase comprises two tasks, intent detection and slot filling, which predict the user's intent and extract related entities. The goal of the NLG phase, in contrast, is to provide answers or to ask questions that clarify the user's needs. In this study, the military training task involved soldiers seeking information via a conversational agent during a mission. In summary, we provide a practical approach to enabling conversational agents over military scenarios. Additionally, the proposed conversational system can be trained on other datasets for future application domains.
APA, Harvard, Vancouver, ISO, and other styles
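The two NLU tasks named above, intent detection and slot filling, can be pictured with a deliberately simple rule-based sketch. The intents, slot types, and keyword rules below are invented for illustration; the paper's system is trained on a Chinese military corpus and does not use these rules.

import re

# Toy intent rules; a real system would use a trained classifier.
INTENT_KEYWORDS = {
    "request_location": ["where", "location", "position"],
    "request_order": ["order", "mission", "instruction"],
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

def fill_slots(utterance: str) -> dict:
    """Extract toy entities (here, a unit name and a grid reference)."""
    slots = {}
    unit = re.search(r"\b(alpha|bravo|charlie) (?:squad|team)\b", utterance, re.I)
    grid = re.search(r"\bgrid \d{4,6}\b", utterance, re.I)
    if unit:
        slots["unit"] = unit.group(0)
    if grid:
        slots["grid"] = grid.group(0)
    return slots

utterance = "Where is Bravo team? Move to grid 1234 after contact."
print(detect_intent(utterance), fill_slots(utterance))

A trained classifier and sequence tagger would replace the keyword and regular-expression rules, but the input/output structure stays the same: one intent label plus a dictionary of slots per utterance.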
41

Agarwal, Sanchit, Jan Jezabek, Arijit Biswas, Emre Barut, Bill Gao, and Tagyoung Chung. "Building Goal-Oriented Dialogue Systems with Situated Visual Context." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13149–51. http://dx.doi.org/10.1609/aaai.v36i11.21710.

Full text
Abstract:
Goal-oriented dialogue agents can comfortably utilize the conversational context and understand their users' goals. However, in visually driven user experiences, these conversational agents are also required to make sense of the screen context in order to provide a proper interactive experience. In this paper, we propose a novel multimodal conversational framework where the dialogue agent's next action and its arguments are derived jointly, conditioned on both the conversational and the visual context. We demonstrate the proposed approach via a prototypical furniture shopping experience for a multimodal virtual assistant.
APA, Harvard, Vancouver, ISO, and other styles
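The joint conditioning idea above can be pictured as a prediction head that takes both a dialogue encoding and a screen-context encoding. The tiny PyTorch sketch below, in which the dimensions, action inventory, and random stand-in encodings are all assumptions, is meant only to show the shape of that computation, not the framework from the paper.

import torch
import torch.nn as nn

class JointActionPredictor(nn.Module):
    def __init__(self, dialog_dim=128, visual_dim=64, num_actions=5):
        super().__init__()
        # Action head conditioned on both the conversational and the visual context.
        self.head = nn.Sequential(
            nn.Linear(dialog_dim + visual_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, dialog_vec, visual_vec):
        joint = torch.cat([dialog_vec, visual_vec], dim=-1)
        return self.head(joint)  # unnormalized scores over next actions

model = JointActionPredictor()
dialog_vec = torch.randn(1, 128)   # stand-in for an utterance/history encoder
visual_vec = torch.randn(1, 64)    # stand-in for a screen-context encoder
print(model(dialog_vec, visual_vec).shape)  # torch.Size([1, 5])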
42

Jannach, Dietmar, Ahtsham Manzoor, Wanling Cai, and Li Chen. "A Survey on Conversational Recommender Systems." ACM Computing Surveys 54, no. 5 (June 2021): 1–36. http://dx.doi.org/10.1145/3453154.

Full text
Abstract:
Recommender systems are software applications that help users to find items of interest in situations of information overload. Current research often assumes a one-shot interaction paradigm, where the users’ preferences are estimated based on past observed behavior and where the presentation of a ranked list of suggestions is the main, one-directional form of user interaction. Conversational recommender systems (CRS) take a different approach and support a richer set of interactions. These interactions can, for example, help to improve the preference elicitation process or allow the user to ask questions about the recommendations and to give feedback. The interest in CRS has significantly increased in the past few years. This development is mainly due to the significant progress in the area of natural language processing, the emergence of new voice-controlled home assistants, and the increased use of chatbot technology. With this article, we provide a detailed survey of existing approaches to conversational recommendation. We categorize these approaches in various dimensions, e.g., in terms of the supported user intents or the knowledge they use in the background. Moreover, we discuss technological approaches, review how CRS are evaluated, and finally identify a number of gaps that deserve more research in the future.
APA, Harvard, Vancouver, ISO, and other styles
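The richer interaction loop that the survey contrasts with one-shot recommendation, namely preference elicitation followed by recommendation, can be sketched in a few lines. The item catalogue, attribute questions, and simulated user replies below are toy assumptions, not a technique taken from the survey.

# Toy catalogue; a real CRS would query a recommender model or a database.
ITEMS = [
    {"title": "Laptop A", "price": "low", "use": "office"},
    {"title": "Laptop B", "price": "high", "use": "gaming"},
    {"title": "Laptop C", "price": "low", "use": "gaming"},
]

def elicit_and_recommend(answers: dict) -> list:
    """Filter the catalogue by whatever preferences have been elicited so far."""
    results = ITEMS
    for attribute, value in answers.items():
        results = [item for item in results if item.get(attribute) == value]
    return results

# One simulated elicitation dialogue: the agent asks, the user answers.
preferences = {}
for attribute, question in [("use", "What will you use it for?"),
                            ("price", "Do you prefer a low or a high price?")]:
    print("Agent:", question)
    preferences[attribute] = "gaming" if attribute == "use" else "low"  # simulated user reply
    print("User:", preferences[attribute])

print("Recommendation:", elicit_and_recommend(preferences))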
43

Iovine, Andrea, Fedelucio Narducci, and Giovanni Semeraro. "Conversational Recommender Systems and natural language:." Decision Support Systems 131 (April 2020): 113250. http://dx.doi.org/10.1016/j.dss.2020.113250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mahmood, Tariq, Ghulam Mujtaba, and Adriano Venturini. "Dynamic personalization in conversational recommender systems." Information Systems and e-Business Management 12, no. 2 (April 30, 2013): 213–38. http://dx.doi.org/10.1007/s10257-013-0222-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Khalifa, AlBara, Tsuneo Kato, and Seiichi Yamamoto. "Learning Effect of Implicit Learning in Joining-in-type Robot-assisted Language Learning System." International Journal of Emerging Technologies in Learning (iJET) 14, no. 02 (January 30, 2019): 105. http://dx.doi.org/10.3991/ijet.v14i02.9212.

Full text
Abstract:
The introduction of robots into language learning systems has been highly useful, especially in motivating learners to engage in the learning process and in letting human learners converse in more realistic conversational situations. This paper describes a novel robot-assisted language learning system that draws the human learner into a triad conversation with two robots, through which he or she improves practical communication skills in various conversational situations. The system applies implicit learning as the main learning style for conveying linguistic knowledge, in an indirect way, through conversations on several topics. A series of experiments was conducted with 80 recruited participants to evaluate the effect of implicit learning and the retention effect in a joining-in-type robot-assisted language learning system. The experimental results show positive effects of implicit learning and repetitive learning in general. Based on these experimental results, we propose an improved method, integrating implicit learning and tutoring with corrective feedback in an adaptive way, to increase performance in practical communication skills even for L2 learners with a wide range of proficiency levels.
APA, Harvard, Vancouver, ISO, and other styles
46

Abbas, Tahir, Ujwal Gadiraju, Vassilis-Javed Khan, and Panos Markopoulos. "Understanding User Perceptions of Response Delays in Crowd-Powered Conversational Systems." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–42. http://dx.doi.org/10.1145/3555765.

Full text
Abstract:
Crowd-powered conversational systems (CPCS) are gaining considerable attention for their potential utility in a variety of application domains for which automated conversational interfaces are still too limited. CPCS currently suffer from long response delays, which hampers their potential as conversational partners. The majority of prior work in this area has focused on demonstrating the feasibility of the approach and improving performance, while evaluation studies have primarily focused on response latency and ways to reduce it. Relatively little is currently known about how response delays in a CPCS can affect user experience. While the importance of reducing response latency is widely recognized in the broader field of human-computer interaction, little attention has been paid to how response quality, response delay, conversational context, and the complexity of the task affect how users experience the conversation, and how they perceive waiting for responses in particular. We conducted a between-subjects experiment (N = 478) to examine the influence of these four factors on the overall waiting experience of users. Results show that users 1) evaluated the waiting experience more negatively when the response delay was longer than 8 seconds, 2) underestimated the elapsed time but experienced more frustration in tasks with high complexity, 3) underestimated the elapsed time and experienced less frustration with high-quality bot utterances, and 4) judged response delays to be slightly longer, and experienced more frustration, in an emotion-centric CPCS compared to a task-centric CPCS. Our insights can inform the design of future CPCSs with regard to defining performance requirements and anticipating their potential impact on the user experience they can facilitate.
APA, Harvard, Vancouver, ISO, and other styles
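One design implication of the findings above (waits beyond roughly 8 seconds were rated negatively) is to acknowledge the delay while the crowd answer is still pending. The asyncio sketch below is an assumption about how such a filler message could be scheduled; it is not part of the study, and the 8-second threshold is used only as an example value.

import asyncio

async def crowd_answer() -> str:
    await asyncio.sleep(10)          # stand-in for waiting on crowd workers
    return "Here is the answer from the crowd."

async def respond_with_filler(threshold: float = 8.0) -> str:
    task = asyncio.create_task(crowd_answer())
    try:
        # shield() keeps the crowd task running even if the wait times out
        return await asyncio.wait_for(asyncio.shield(task), timeout=threshold)
    except asyncio.TimeoutError:
        print("Bot: Still working on it, thanks for your patience...")
        return await task            # deliver the real answer once it is ready

print(asyncio.run(respond_with_filler()))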
47

Yadav, Sargam, and Abhishek Kaushik. "Do You Ever Get Off Track in a Conversation? The Conversational System’s Anatomy and Evaluation Metrics." Knowledge 2, no. 1 (January 14, 2022): 55–87. http://dx.doi.org/10.3390/knowledge2010004.

Full text
Abstract:
Conversational systems are now applicable to almost every business domain. Evaluation is an important step in the creation of dialog systems so that they may be readily tested and prototyped. There is no universally agreed upon metric for evaluating all dialog systems. Human evaluation, which is not computerized, is now the most effective and complete evaluation approach. Data gathering and analysis are evaluation activities that need human intervention. In this work, we address the many types of dialog systems and the assessment methods that may be used with them. The benefits and drawbacks of each sort of evaluation approach are also explored, which could better help us understand the expectations associated with developing an automated evaluation system. The objective of this study is to investigate conversational agents, their design approaches and evaluation metrics. This approach can help us to better understand the overall process of dialog system development, and future possibilities to enhance user experience. Because human assessment is costly and time consuming, we emphasize the need of having a generally recognized and automated evaluation model for conversational systems, which may significantly minimize the amount of time required for analysis.
APA, Harvard, Vancouver, ISO, and other styles
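Among the automated measures that evaluation surveys such as the one above typically cover, response diversity is one of the simplest to compute. The distinct-n score below (unique n-grams divided by total n-grams over a set of responses) is a standard formulation, written here as a small self-contained example rather than code from the paper.

def distinct_n(responses: list[str], n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams across all responses."""
    ngrams = []
    for response in responses:
        tokens = response.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

responses = ["i do not know", "i do not know", "that sounds like a great plan"]
print(round(distinct_n(responses, n=1), 3), round(distinct_n(responses, n=2), 3))

Low distinct-n values flag systems that keep repeating generic replies; human evaluation remains necessary for qualities such as coherence and helpfulness that n-gram statistics cannot capture.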
48

Chiang, Ting-Rui, Hao-Tong Ye, and Yun-Nung Chen. "An Empirical Study of Content Understanding in Conversational Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7578–85. http://dx.doi.org/10.1609/aaai.v34i05.6257.

Full text
Abstract:
Following the large body of work on context-free question answering systems, conversational question answering models are an emerging trend in the natural language processing field. Thanks to recently collected datasets, including QuAC and CoQA, there has been more work on conversational question answering, and recent work has achieved competitive performance on both datasets. However, to the best of our knowledge, two important questions for conversational comprehension research have not been well studied: 1) How well can the benchmark datasets reflect models' content understanding? 2) Do the models make good use of the conversation content when answering questions? To investigate these questions, we design different training settings, testing settings, as well as an attack to verify the models' capability of content understanding on QuAC and CoQA. The experimental results indicate some potential hazards in the benchmark datasets, QuAC and CoQA, for conversational comprehension research. Our analysis also sheds light on both what models may learn and how datasets may bias the models. Through this deep investigation of the task, it is believed that this work can benefit the future progress of conversation comprehension. The source code is available at https://github.com/MiuLab/CQA-Study.
APA, Harvard, Vancouver, ISO, and other styles
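The kind of probe the study describes, checking whether a conversational QA model actually relies on the dialogue history, can be illustrated by comparing predictions with and without that history. The model below is a trivial stand-in and the ablation function is an assumption for illustration, not the attack or the experimental settings used in the paper.

def toy_model(passage: str, history: list[str], question: str) -> str:
    """Stand-in model: answers 'Paris' only if the history mentions France."""
    if any("france" in turn.lower() for turn in history):
        return "Paris"
    return "unknown"

def context_ablation_gap(examples: list[dict]) -> float:
    """Fraction of examples whose prediction changes when the history is removed.
    A gap near zero would suggest the model ignores the conversation context."""
    changed = 0
    for example in examples:
        with_history = toy_model(example["passage"], example["history"], example["question"])
        without_history = toy_model(example["passage"], [], example["question"])
        changed += int(with_history != without_history)
    return changed / len(examples)

examples = [
    {"passage": "Geography passage ...",
     "history": ["Let's talk about France."],
     "question": "What is its capital?"},
]
print(context_ablation_gap(examples))  # 1.0 for this toy model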
49

Oliver, Rhonda. "Negative Feedback in Child NS-NNS Conversation." Studies in Second Language Acquisition 17, no. 4 (December 1995): 459–81. http://dx.doi.org/10.1017/s0272263100014418.

Full text
Abstract:
This paper reports on a study that examines the pattern of interaction in child native speaker (NS)–nonnative speaker (NNS) conversation to determine if the NSs provide negative feedback to their NNS conversational partners. It appears that just as children are able to modify their input for their less linguistically proficient conversational partners in first language acquisition (Snow, 1977), so too are children able to modify their interactions for NNS peers in the second language acquisition process and, in doing so, provide negative feedback. Two forms of NS modification were identified in this study as providing reactive and implicit negative feedback to the NNS. These were (a) negotiation strategies, including repetition, clarification requests, and comprehension checks, and (b) recasts. The results indicated that NSs respond differentially to the grammaticality and ambiguity of their NNS peers' conversational contributions. Furthermore, NS responses (negotiate, recast, or ignore) appeared to be triggered by the type and complexity of NNS errors, although it was more likely overall that negative feedback would be used rather than the error ignored. Additionally, evidence suggested that negative feedback was incorporated by the NNSs into their interlanguage systems. This indicates that not only does negative evidence exist for child second language learners in these types of conversations, but that it is also usable and used by them in the language acquisition process.
APA, Harvard, Vancouver, ISO, and other styles
50

Noble, E. J. Menasse, and J. Adler. "Facilitating Location Independence with Computerized Conversation Systems." Environment and Planning A: Economy and Space 28, no. 2 (February 1996): 223–35. http://dx.doi.org/10.1068/a280223.

Full text
Abstract:
Location independence for organizations is desirable if they wish to achieve a given spatial distribution in a regional development plan. An organization's interaction with its environment forms the basis of its daily work and takes the form of ‘information links’ composed of fundamental indivisible blocks called ‘conversations’. To achieve location independence it is necessary for organizations to develop and maintain environment interactions independent of their location. Information technology systems are able to reduce location restrictions by providing distant parties with the conversational structure present in face-to-face interpersonal interactions.
APA, Harvard, Vancouver, ISO, and other styles