Journal articles on the topic 'Conversational Assistants'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Conversational Assistants.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Huang, Ting-Hao, Walter Lasecki, Amos Azaria, and Jeffrey Bigham. ""Is There Anything Else I Can Help You With?" Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 79–88. http://dx.doi.org/10.1609/hcomp.v4i1.13292.

Abstract:
Intelligent conversational assistants, such as Apple's Siri, Microsoft's Cortana, and Amazon's Echo, have quickly become a part of our digital life. However, these assistants have major limitations, which prevent users from conversing with them as they would with human dialog partners. This limits our ability to observe how users really want to interact with the underlying system. To address this problem, we developed a crowd-powered conversational assistant, Chorus, and deployed it to see how users and workers would interact together when mediated by the system. Chorus holds sophisticated conversations with end users over time by recruiting workers on demand, who in turn decide what the best response to each user sentence might be. During the first month of our deployment, 59 users held conversations with Chorus across 320 conversational sessions. In this paper, we present an account of Chorus' deployment, focusing on four challenges: (i) identifying when conversations are over, (ii) malicious users and workers, (iii) on-demand recruiting, and (iv) settings in which consensus is not enough. Our observations could assist the deployment of crowd-powered conversation systems, and of crowd-powered systems in general.
2

Ortiz, Charles L. "Holistic Conversational Assistants." AI Magazine 39, no. 1 (March 27, 2018): 88–90. http://dx.doi.org/10.1609/aimag.v39i1.2771.

Abstract:
This column describes work being done at Nuance Communications on developing virtual personal assistants (VPAs) that can engage in extended task-centered dialogues involving the coordination of many complex modules, along with conversational and collaborative support for such VPAs.
3

Bickmore, Timothy W., Stefán Ólafsson, and Teresa K. O'Leary. "Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment." Journal of Medical Internet Research 23, no. 11 (November 9, 2021): e30704. http://dx.doi.org/10.2196/30704.

Abstract:
Background: Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants such as Apple's Siri and Amazon's Alexa to obtain medical information.
Objective: The aim of this study is to evaluate two approaches to reducing the likelihood that patients or consumers will act on the potentially harmful medical information they receive from conversational assistants.
Methods: Participants were given medical problems to pose to conversational assistants that had previously been demonstrated to result in potentially harmful recommendations. Each conversational assistant's response was randomly varied to include either a correct or an incorrect paraphrase of the query, and either to include or to omit a disclaimer message telling the participants that they should not act on the advice without first talking to a physician. The participants were then asked what actions they would take based on their interaction, along with the likelihood of taking the action. The reported actions were recorded and analyzed, and the participants were interviewed at the end of each interaction.
Results: A total of 32 participants completed the study, each interacting with 4 conversational assistants. The participants were on average aged 42.44 (SD 14.08) years, 53% (17/32) were women, and 66% (21/32) were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice provided by the conversational assistant (χ²₁=3.1; P=.04). Participants who heard a disclaimer message were significantly more likely to say that they would contact a physician or health professional before acting on the medical advice received (χ²₁=43.5; P=.001).
Conclusions: Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice.
4

Geetha, Dr V., Dr C. K. Gomathy*, Mr Kottamasu Manasa Sri Vardhan, and Mr Nukala Pavan Kumar. "The Voice Enabled Personal Assistant for Pc using Python." International Journal of Engineering and Advanced Technology 10, no. 4 (April 30, 2021): 162–65. http://dx.doi.org/10.35940/ijeat.d2425.0410421.

Abstract:
Personal assistants, also called conversational interfaces or chatbots, reinvent the way individuals interact with computers. A personal virtual assistant allows a user to simply ask questions in the same manner that they would address a human, and is even capable of performing basic tasks such as opening apps, reading out news, or taking notes with just a voice command. Personal assistants like Google Assistant, Alexa, and Siri…
5

Beaver, Ian, and Abdullah Mueen. "On the Care and Feeding of Virtual Assistants: Automating Conversation Review with AI." AI Magazine 42, no. 4 (January 12, 2022): 29–42. http://dx.doi.org/10.1609/aimag.v42i4.15101.

Abstract:
With the rise of intelligent virtual assistants (IVAs), there is a necessary rise in human effort to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize improvements to the IVA. As human analysis is time consuming and expensive, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds IVA improvement. In addition, fewer conversations reviewed by humans means less user data is exposed, increasing privacy. We describe Trace AI, a scalable system for automated conversation review based on the detection of conversational features that can identify potential miscommunications. Trace AI provides IVA designers with suggested actions to correct understanding errors, prioritizes areas of language model repair, and can automate the review of conversations. We discuss the system design and report its performance at identifying errors in IVA understanding compared to that of human reviewers. Trace AI has been commercially deployed for over 4 years and is responsible for significant savings in human annotation costs, as well as for accelerating the refinement cycle of deployed enterprise IVAs.
6

Hwang, Inseok, Youngki Lee, Chungkuk Yoo, Chulhong Min, Dongsun Yim, and John Kim. "Towards Interpersonal Assistants: Next-Generation Conversational Agents." IEEE Pervasive Computing 18, no. 2 (April 1, 2019): 21–31. http://dx.doi.org/10.1109/mprv.2019.2922907.

7

Almousa, Omar Saad, Hazem Migdady, and Mohammad Al-Talib. "Conversational Frames." International Journal of Embedded and Real-Time Communication Systems 11, no. 4 (October 2020): 104–33. http://dx.doi.org/10.4018/ijertcs.2020100106.

Abstract:
This paper extends the previous work of Almousa and Migdady by proposing and implementing three models to improve the correctness and naturalness of the answers given by smart personal assistants (SPAs). The motivation behind the three proposed models is the failure of some well-known existing SPAs (Siri and Salma) to extract contextual information from a conversation; the authors call this kind of information the conversational frame. To evaluate the suggested models, the authors also implement an abstraction of existing SPAs that they call the non-framed model. To evaluate the four implementations, 36 respondents answered an online questionnaire after watching four videos that the authors recorded per implementation. The authors statistically show that their three suggested models outperform the non-framed model, and attribute this to the fact that their models overcome the shortcoming present in the non-framed model. Moreover, the model that implements the conversational frame outperforms all other models.
8

Kraus, Matthias, Nicolas Wagner, Zoraida Callejas, and Wolfgang Minker. "The Role of Trust in Proactive Conversational Assistants." IEEE Access 9 (2021): 112821–36. http://dx.doi.org/10.1109/access.2021.3103893.

9

Krommyda, Maria, and Verena Kantere. "Semantic Analysis for Conversational Datasets: Improving Their Quality Using Semantic Relationships." International Journal of Semantic Computing 14, no. 03 (September 2020): 395–422. http://dx.doi.org/10.1142/s1793351x2050004x.

Abstract:
As more and more datasets become available, their use in different applications grows in popularity. Their volume and production rate, however, mean that their quality and content control is in most cases non-existent, resulting in many datasets that contain inaccurate information of low quality. Especially in the field of conversational assistants, where datasets come from many heterogeneous sources with no quality assurance, the problem is aggravated. We present here an integrated platform that creates task- and topic-specific conversational datasets to be used for training conversational agents. The platform explores available conversational datasets, extracts information based on semantic similarity and relatedness, and applies a weight-based score function to rank the information based on its value for the specific task and topic. The finalized dataset can then be used for training an automated conversational assistant over accurate data of high quality.
10

Parikh, Soham, Quaizar Vohra, and Mitul Tiwari. "Automated Utterance Generation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (April 3, 2020): 13344–49. http://dx.doi.org/10.1609/aaai.v34i08.7047.

Abstract:
Conversational AI assistants are becoming popular, and question answering is an important part of any conversational assistant. Using relevant utterances as features in question answering has been shown to improve both the precision and the recall of retrieving the right answer by a conversational assistant. Hence, utterance generation has become an important problem, with the goal of generating relevant utterances (sentences or phrases) from a knowledge base article that consists of a title and a description. However, generating good utterances usually requires a lot of manual effort, creating the need for automated utterance generation. In this paper, we propose an utterance generation system which (1) uses extractive summarization to extract important sentences from the description, (2) uses multiple paraphrasing techniques to generate a diverse set of paraphrases of the title and summary sentences, and (3) selects good candidate paraphrases with the help of a novel candidate selection algorithm.
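The three stages named in this abstract can be sketched in a few lines. This is only an illustrative skeleton: every function body below is a toy stand-in (longest-sentence "summarization", trivial surface "paraphrases", a length-based selection heuristic), since the abstract does not specify the authors' actual models.

```python
# Hypothetical sketch of the three-stage utterance-generation pipeline.
# All heuristics here are illustrative stand-ins, not the paper's method.

def extract_summary(description: str, max_sentences: int = 2) -> list:
    """Stage 1 stand-in: keep the longest sentences as the 'summary'."""
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    return sorted(sentences, key=len, reverse=True)[:max_sentences]

def paraphrase(sentence: str) -> list:
    """Stage 2 stand-in: trivial surface variants instead of real paraphrase models."""
    return [sentence, sentence.lower()]

def select_candidates(candidates: list, k: int = 5) -> list:
    """Stage 3 stand-in: deduplicate (order-preserving), then prefer shorter utterances."""
    unique = list(dict.fromkeys(candidates))
    return sorted(unique, key=len)[:k]

def generate_utterances(title: str, description: str, k: int = 5) -> list:
    """Run the pipeline over a knowledge base article (title + description)."""
    candidates = []
    for sentence in [title] + extract_summary(description):
        candidates.extend(paraphrase(sentence))
    return select_candidates(candidates, k)
```

For example, `generate_utterances("Reset your password", "Open the settings page. Click forgot password and follow the emailed link.")` yields up to five short utterance candidates derived from the title and the extracted sentences.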
11

Adaimi, Rebecca, Howard Yong, and Edison Thomaz. "Ok Google, What Am I Doing?" Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 1 (March 19, 2021): 1–24. http://dx.doi.org/10.1145/3448090.

Abstract:
Conversational assistants in the form of stand-alone devices such as Amazon Echo and Google Home have become popular and embraced by millions of people. By serving as a natural interface to services ranging from home automation to media players, conversational assistants help people perform many tasks with ease, such as setting timers, playing music and managing to-do lists. While these systems offer useful capabilities, they are largely passive and unaware of the human behavioral context in which they are used. In this work, we explore how off-the-shelf conversational assistants can be enhanced with acoustic-based human activity recognition by leveraging the short interval after a voice command is given to the device. Since always-on audio recording can pose privacy concerns, our method is unique in that it does not require capturing and analyzing any audio other than the speech-based interactions between people and their conversational assistants. In particular, we leverage background environmental sounds present in these short duration voice-based interactions to recognize activities of daily living. We conducted a study with 14 participants in 3 different locations in their own homes. We showed that our method can recognize 19 different activities of daily living with average precision of 84.85% and average recall of 85.67% in a leave-one-participant-out performance evaluation with 30-second audio clips bound by the voice interactions.
12

Huang, Ting-Hao K., Amos Azaria, Oscar J. Romero, and Jeffrey P. Bigham. "InstructableCrowd: Creating IF-THEN Rules for Smartphones via Conversations with the Crowd." Human Computation 6 (September 10, 2019): 113–46. http://dx.doi.org/10.15346/hc.v6i1.104.

Abstract:
Natural language interfaces have become a common part of modern digital life. Chatbots utilize text-based conversations to communicate with users; personal assistants on smartphones such as Google Assistant take direct speech commands from their users; and speech-controlled devices such as Amazon Echo use voice as their only input mode. In this paper, we introduce InstructableCrowd, a crowd-powered system that allows users to program their devices via conversation. The user verbally expresses a problem to the system, to which a group of crowd workers collectively responds by programming relevant multi-part IF-THEN rules to help the user. The IF-THEN rules generated by InstructableCrowd connect relevant sensor combinations (e.g., location, weather, device acceleration) to useful effectors (e.g., text messages, device alarms). Our study showed that non-programmers can use the conversational interface of InstructableCrowd to create IF-THEN rules of quality similar to that of manually created rules. InstructableCrowd broadly illustrates how users may converse with their devices, not only to trigger simple voice commands, but also to personalize their increasingly powerful and complicated devices.
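A multi-part IF-THEN rule of the kind this abstract describes, connecting a combination of sensors to one or more effectors, can be modeled with a small data structure. The class and field names below are assumptions for illustration, not InstructableCrowd's actual schema.

```python
# Illustrative data model for multi-part IF-THEN rules: several sensor
# conditions (the multi-part IF) gate one or more effector actions (THEN).
# Names are hypothetical, not taken from InstructableCrowd.
from dataclasses import dataclass, field

@dataclass
class Condition:
    sensor: str    # e.g. "location", "weather", "acceleration"
    value: object  # reading that satisfies the condition

@dataclass
class Action:
    effector: str                               # e.g. "text_message", "alarm"
    params: dict = field(default_factory=dict)  # effector arguments

@dataclass
class Rule:
    conditions: list  # multi-part IF: all must hold
    actions: list     # THEN: effects to trigger

    def fire(self, readings: dict) -> list:
        """Return the actions to run if every condition matches the sensor readings."""
        if all(readings.get(c.sensor) == c.value for c in self.conditions):
            return self.actions
        return []
```

A crowd-authored rule like "IF I am at home AND it will rain, THEN text me a reminder" then becomes `Rule([Condition("location", "home"), Condition("weather", "rain")], [Action("text_message", {"body": "Take an umbrella"})])`, and `fire()` triggers only when both readings match.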
13

Beaver, Ian, and Abdullah Mueen. "Automated Conversation Review to Surface Virtual Assistant Misunderstandings: Reducing Cost and Increasing Privacy." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (April 3, 2020): 13140–47. http://dx.doi.org/10.1609/aaai.v34i08.7017.

Abstract:
With the rise of Intelligent Virtual Assistants (IVAs), there is a necessary rise in human effort to identify conversations containing misunderstood user inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. In addition, fewer conversations reviewed by humans means less user data is exposed, increasing privacy. We present a scalable system for automated conversation review that can identify potential miscommunications. Our system provides IVA designers with suggested actions to fix errors in IVA understanding, prioritizes areas of language model repair, and automates the review of conversations where desired. Verint Next IT builds IVAs on behalf of other companies and organizations, and therefore analyzes large volumes of conversational data. Our review system has been in production for over three years, saves our company roughly $1.5 million in annotation costs yearly, and has shortened the refinement cycle of production IVAs. In this paper, the system design is discussed, and its performance in identifying errors in IVA understanding is compared to that of human reviewers.
14

Lasecki, Walter, and Jeffrey Bigham. "Automated Support for Collective Memory of Conversational Interactions." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 40–41. http://dx.doi.org/10.1609/hcomp.v1i1.13104.

Abstract:
Maintaining consistency is a difficult challenge in crowd-powered systems in which constituent crowd workers may change over time. We discuss an initial outline for Chorus:Mnemonic, a system that augments the crowd's collective memory of a conversation by automatically recovering past knowledge based on topic, allowing the system to support consistent multi-session interactions. We present the design of the system itself, and discuss methods for testing its effectiveness. Our goal is to provide consistency between long interactions with crowd-powered conversational assistants by using AI to augment crowd workers.
15

Rastogi, Abhinav, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. "Towards Scalable Multi-Domain Conversational Agents: The Schema-Guided Dialogue Dataset." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8689–96. http://dx.doi.org/10.1609/aaai.v34i05.6394.

Abstract:
Virtual assistants such as Google Assistant, Alexa and Siri provide a conversational interface to a large number of services and APIs spanning multiple domains. Such systems need to support an ever-increasing number of services with possibly overlapping functionality. Furthermore, some of these services have little to no training data available. Existing public datasets for task-oriented dialogue do not sufficiently capture these challenges since they cover few domains and assume a single static ontology per domain. In this work, we introduce the Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation. Along the same lines, we present a schema-guided paradigm for task-oriented dialogue, in which predictions are made over a dynamic set of intents and slots, provided as input, using their natural language descriptions. This allows a single dialogue system to easily support a large number of services and facilitates simple integration of new services without requiring additional training data. Building upon the proposed paradigm, we release a model for dialogue state tracking capable of zero-shot generalization to new APIs, while remaining competitive in the regular setting.
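The schema-guided idea above can be made concrete with a small example: each service declares its intents and slots together with natural-language descriptions, and the dialogue system conditions on those descriptions rather than on a fixed ontology. The service below is an invented illustration, not an entry from the SGD dataset.

```python
# A minimal, hypothetical service schema in the spirit of the
# schema-guided paradigm: natural-language descriptions let a model
# handle a new service zero-shot, without retraining.
flights_schema = {
    "service_name": "Flights",
    "description": "Search for flights between cities",
    "slots": [
        {"name": "origin", "description": "City where the trip starts"},
        {"name": "destination", "description": "City where the trip ends"},
        {"name": "date", "description": "Date of travel"},
    ],
    "intents": [
        {
            "name": "SearchFlight",
            "description": "Find flights between two cities on a date",
            "required_slots": ["origin", "destination", "date"],
        }
    ],
}

def required_slots(schema: dict, intent_name: str) -> list:
    """Look up the slots an intent needs, driven purely by the schema."""
    for intent in schema["intents"]:
        if intent["name"] == intent_name:
            return intent["required_slots"]
    return []
```

Because the tracker's behavior is driven by the schema supplied at input time, adding a new service amounts to writing such a schema; no new training data is required, which is the point the abstract makes.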
16

Schreuter, Donna, Peter van der Putten, and Maarten H. Lamers. "Trust Me on This One: Conforming to Conversational Assistants." Minds and Machines 31, no. 4 (November 24, 2021): 535–62. http://dx.doi.org/10.1007/s11023-021-09581-8.

17

Cebrián, Javier, Ramón Martínez-Jiménez, Natalia Rodriguez, and Luis Fernando D’Haro. "Considerations on Creating Conversational Agents For Multiple Environments and Users." AI Magazine 42, no. 2 (October 20, 2021): 71–86. http://dx.doi.org/10.1609/aimag.v42i2.7484.

Abstract:
Advances in artificial intelligence algorithms and the expansion of straightforward cloud-based platforms have enabled the adoption of conversational assistants by both medium and large companies to facilitate interaction between clients and employees. The interactions are possible through the use of ubiquitous devices (e.g., Amazon Echo, Apple HomePod, Google Nest), virtual assistants (e.g., Apple Siri, Google Assistant, Samsung Bixby, or Microsoft Cortana), chat windows on the corporate website, or social network applications (e.g., Facebook Messenger, Telegram, Slack, WeChat).

Creating a useful, personalized conversational agent that is also robust and popular is nonetheless challenging work. It requires picking the right algorithm, framework, and/or communication channel, but perhaps more importantly, consideration of the specific task, user needs, environment, available training data, budget, and a thoughtful design.

In this paper, we consider the elements necessary to create a conversational agent for different types of users, environments, and tasks. The elements account for the limited amount of data available for specific tasks within a company and for non-English languages. We are confident that we can provide a useful resource for the new practitioner developing an agent: we point out novice problems and traps to avoid, show that the development of the technology is achievable despite significant challenges, and raise awareness of the ethical issues that may be associated with it. We have compiled our experience with deploying conversational systems for daily use in multicultural, multilingual, and intergenerational settings. Additionally, we give insight into how to scale the proposed solutions.
18

Seymour, William, and Max Van Kleek. "Exploring Interactions Between Trust, Anthropomorphism, and Relationship Development in Voice Assistants." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–16. http://dx.doi.org/10.1145/3479515.

Abstract:
Modern conversational agents such as Alexa and Google Assistant represent significant progress in speech recognition, natural language processing, and speech synthesis. But as these agents have grown more realistic, concerns have been raised over how their social nature might unconsciously shape our interactions with them. Through a survey of 500 voice assistant users, we explore whether users' relationships with their voice assistants can be quantified using the same metrics as social, interpersonal relationships; as well as if this correlates with how much they trust their devices and the extent to which they anthropomorphise them. Using Knapp's staircase model of human relationships, we find that not only can human-device interactions be modelled in this way, but also that relationship development with voice assistants correlates with increased trust and anthropomorphism.
19

de Cock, Caroline, Madison Milne-Ives, Michelle Helena van Velthoven, Abrar Alturkistani, Ching Lam, and Edward Meinert. "Effectiveness of Conversational Agents (Virtual Assistants) in Health Care: Protocol for a Systematic Review." JMIR Research Protocols 9, no. 3 (March 9, 2020): e16934. http://dx.doi.org/10.2196/16934.

Abstract:
Background: Conversational agents (also known as chatbots) have evolved in recent decades to become multimodal, multifunctional platforms with potential to automate a diverse range of health-related activities supporting the general public, patients, and physicians. Multiple studies have reported the development of these agents, and recent systematic reviews have described the scope of use of conversational agents in health care. However, there is scarce research on the effectiveness of these systems; thus, their viability and applicability are unclear.
Objective: The objective of this systematic review is to assess the effectiveness of conversational agents in health care and to identify limitations, adverse events, and areas for future investigation of these agents.
Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols will be used to structure this protocol. The focus of the systematic review is guided by a population, intervention, comparator, and outcome framework. A systematic search of the PubMed (Medline), EMBASE, CINAHL, and Web of Science databases will be conducted. Two authors will independently screen the titles and abstracts of the identified references and select studies according to the eligibility criteria. Any discrepancies will then be discussed and resolved. Two reviewers will independently extract and validate data from the included studies into a standardized form and conduct quality appraisal.
Results: As of January 2020, we have begun a preliminary literature search and piloting of the study selection process.
Conclusions: This systematic review aims to clarify the effectiveness, limitations, and future applications of conversational agents in health care. Our findings may be useful to inform the future development of conversational agents and promote the personalization of patient care.
International Registered Report Identifier (IRRID): PRR1-10.2196/16934
20

Kowald, Cäcilie, and Beate Bruns. "Chat Me Out of Here!" International Journal of Advanced Corporate Learning (iJAC) 14, no. 2 (December 8, 2021): 50–57. http://dx.doi.org/10.3991/ijac.v14i2.25917.

Abstract:
Conversational user interfaces, aka chatbots, offer new ways of interaction that can be used not only for task-led applications but also for learning itself. From drill-and-practice assistants to digital tutors and coaches, conversational learning offers a variety of new and extensive options to support individuals through the learning process and to push the boundaries of classroom-based learning. However, conversational learning applications that go beyond simple question-and-answer dialogs are still rare. "Pit in the Warehouse" takes a new approach to conversational learning: by combining a dialogical escape-room challenge with an interactive-fiction approach and compelling storytelling, it creates an engaging and low-threshold type of game-based learning.
21

Pérez, Anxo, Paula Lopez-Otero, and Javier Parapar. "Designing an Open Source Virtual Assistant." Proceedings 54, no. 1 (August 21, 2020): 30. http://dx.doi.org/10.3390/proceedings2020054030.

Abstract:
A chatbot is a type of agent that allows people to interact with an information repository using natural language. Nowadays, chatbots have been incorporated in the form of conversational assistants on the most important mobile and desktop platforms. In this article, we present our design of an assistant developed with open-source and widely used components. Our proposal covers the process end-to-end, from information gathering and processing to visual and speech-based interaction. We have deployed a proof of concept over the website of our Computer Science Faculty.
22

Allen, James, Lucian Galescu, Choh Man Teng, and Ian Perera. "Conversational Agents for Complex Collaborative Tasks." AI Magazine 41, no. 4 (December 28, 2020): 54–78. http://dx.doi.org/10.1609/aimag.v41i4.7384.

Abstract:
Dialogue is a very active area of research currently, both in developing new computational techniques for robust dialogue systems and in the active fielding of commercial conversational assistants such as Siri and Alexa. This paper argues that, while current techniques can be used to design effective dialogue-based systems for very simple tasks, they are unlikely to generalize to conversational interfaces that enhance human ability to solve complex tasks by interacting with AI reasoning and modeling systems. We explore some of the challenges of tackling such complex tasks and describe a dialogue model designed to meet these challenges. We illustrate our approach with examples of several implemented systems that use this framework.
23

Fontecha, Jesús, Iván González, and Alberto Salas-Seguín. "Using Conversational Assistants and Connected Devices to Promote a Responsible Energy Consumption at Home." Proceedings 31, no. 1 (November 20, 2019): 32. http://dx.doi.org/10.3390/proceedings2019031032.

Abstract:
Today, households worldwide are increasingly connected. Mobile devices and embedded systems carry out many tasks, supported by applications based on artificial intelligence algorithms, with the aim of making homes smarter. One purpose of these systems is to connect appliances to the power network, as well as to the internet, to monitor consumption data, among other things. In addition, new ways of interacting with all these systems are emerging, for example conversational assistants, which allow us to interact by voice with devices at home. In this work, we present GreenMoCA, a system that monitors energy consumption data from connected devices at home, supported by a conversational assistant, with the aim of improving sustainability and reducing energy consumption. The system interacts with the user in a natural way, providing information on current energy use and feedback based on previous consumption measures in a smart home environment. Finally, we assessed GreenMoCA's usability and user experience with a group of users, with positive results.
24

Dingler, Tilman, Dominika Kwasnicka, Jing Wei, Enying Gong, and Brian Oldenburg. "The Use and Promise of Conversational Agents in Digital Health." Yearbook of Medical Informatics 30, no. 01 (August 2021): 191–99. http://dx.doi.org/10.1055/s-0041-1726510.

Abstract:
Objectives: To describe the use and promise of conversational agents in digital health, including health promotion and prevention, and how they can be combined with other new technologies to provide healthcare at home.
Method: A narrative review of recent advances in the technologies underpinning conversational agents and their use and potential for healthcare and improving health outcomes.
Results: By responding to written and spoken language, conversational agents present a versatile, natural user interface and have the potential to make their services and applications more widely accessible. Historically, conversational interfaces for health applications have focused mainly on mental health, but with an increase in affordable devices and the modernization of health services, conversational agents are becoming more widely deployed across the health system. We present our work on context-aware voice assistants capable of proactively engaging users and delivering health information and services. The proactive voice agents we deploy allow us to conduct experience sampling in people's homes and to collect information about the contexts in which users interact with them.
Conclusion: In this article, we describe the state of the art of these and other enabling technologies for speech and conversation and discuss ongoing research efforts to develop conversational agents that "live" with patients and customize their service offerings around their needs. These agents can function as "digital companions" that send reminders about medications and appointments, proactively check in to gather self-assessments, and follow up with patients on their treatment plans. Together with unobtrusive and continuous collection of other health data, conversational agents can provide novel and deeply personalized access to digital healthcare, and they will continue to become an increasingly important part of the ecosystem for future healthcare delivery.
APA, Harvard, Vancouver, ISO, and other styles
25

Diederich, Stephan, Alfred Benedikt Brendel, and Lutz M. Kolbe. "Designing Anthropomorphic Enterprise Conversational Agents." Business & Information Systems Engineering 62, no. 3 (March 10, 2020): 193–209. http://dx.doi.org/10.1007/s12599-020-00639-y.

Full text
Abstract:
Abstract The increasing capabilities of conversational agents (CAs) offer manifold opportunities to assist users in a variety of tasks. In an organizational context, particularly their potential to simulate a human-like interaction via natural language currently attracts attention both at the customer interface as well as for internal purposes, often in the form of chatbots. Emerging experimental studies on CAs look into the impact of anthropomorphic design elements, so-called social cues, on user perception. However, while these studies provide valuable prescriptive knowledge of selected social cues, they neglect the potential detrimental influence of the limited responsiveness of present-day conversational agents. In practice, many CAs fail to continuously provide meaningful responses in a conversation due to the open nature of natural language interaction, which negatively influences user perception and has often led to CAs being discontinued in the past. Thus, designing a CA that provides a human-like interaction experience while minimizing the risks associated with limited conversational capabilities represents a substantial design problem. This study addresses the aforementioned problem by proposing and evaluating a design for a CA that offers a human-like interaction experience while mitigating negative effects due to limited responsiveness. Through the presentation of the artifact and the synthesis of prescriptive knowledge in the form of a nascent design theory for anthropomorphic enterprise CAs, this research adds to the growing knowledge base for designing human-like assistants and supports practitioners seeking to introduce them into their organizations.
APA, Harvard, Vancouver, ISO, and other styles
26

Bajaj, Divij, and Dhanya Pramod. "Conversational System, Intelligent Virtual Assistant (IVA) Named DIVA Using Raspberry Pi." International Journal of Security and Privacy in Pervasive Computing 12, no. 4 (October 2020): 38–52. http://dx.doi.org/10.4018/ijsppc.2020100104.

Full text
Abstract:
Humans are living in an era where they interact with machines day in and day out. In this new era of the 21st century, an intelligent virtual assistant (IVA) is a boon for everyone. It has opened the way for a new world where devices can interact on their own. The human voice is integrated with every device, making it intelligent. These IVAs can also be integrated with business intelligence software such as Tableau and PowerBI to give dashboards the power of voice and text insights using NLG (natural language generation). This new technology has attracted almost the entire world in many ways, appearing in smartphones, laptops, computers, smart meeting rooms, car infotainment systems, TVs, etc. Popular voice assistants include Mibot, Siri, Google Assistant, Cortana, Bixby, and Amazon Alexa. Voice recognition, contextual understanding, and human interaction are issues that are continuously improving in these IVAs, shifting this paradigm towards AI research. This research aims to process natural human speech and give a meaningful response to the user. The questions that it is not able to answer are stored in a database for further investigation.
APA, Harvard, Vancouver, ISO, and other styles
27

Yan, Rui, and Wei Wu. "Empowering Conversational AI is a Trip to Mars: Progress and Future of Open Domain Human-Computer Dialogues." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 15078–86. http://dx.doi.org/10.1609/aaai.v35i17.17771.

Full text
Abstract:
Dialogue systems powered by conversational artificial intelligence (AI) have never been so popular. Interacting with computers through language offers a more natural interface for giving orders and acquiring information---just like human communication. Due to their promising potential as virtual assistants and/or social bots, major NLP, AI, and even Search & Mining communities are explicitly calling out for contributions of conversational studies. Learning towards real conversational intelligence is a trip to Mars; perhaps we are yet on Earth. We have achieved substantial progress from recent research outputs. Still, we have major obstacles to overcome. In this paper, we present an overview of progress and look forward to future trends so as to shed light on possible directions towards success.
APA, Harvard, Vancouver, ISO, and other styles
28

Trippas, Johanne R. "Spoken conversational search." ACM SIGIR Forum 53, no. 2 (December 2019): 106–7. http://dx.doi.org/10.1145/3458553.3458570.

Full text
Abstract:
Speech-based web search where no keyboard or screens are available to present search engine results is becoming ubiquitous, mainly through the use of mobile devices and intelligent assistants such as Apple's HomePod, Google Home, or Amazon Alexa. Currently, these intelligent assistants do not maintain a lengthy information exchange. They do not track context or present information suitable for an audio-only channel, and do not interact with the user in a multi-turn conversation. Understanding how users would interact with such an audio-only interaction system in multi-turn information seeking dialogues, and what users expect from these new systems, are unexplored in search settings. In particular, the knowledge on how to present search results over an audio-only channel and which interactions take place in this new search paradigm is crucial to incorporate while producing usable systems [9, 2, 8]. Thus, constructing insight into the conversational structure of information seeking processes provides researchers and developers opportunities to build better systems while creating a research agenda and directions for future advancements in Spoken Conversational Search (SCS). Such insight has been identified as crucial in the growing SCS area. At the moment, limited understanding has been acquired for SCS, for example, how the components interact, how information should be presented, or how task complexity impacts the interactivity or discourse behaviours. We aim to address these knowledge gaps. This thesis outlines the breadth of SCS and forms a manifesto advancing this highly interactive search paradigm with new research directions including prescriptive notions for implementing identified challenges [3]. 
We investigate SCS through quantitative and qualitative designs: (i) log and crowdsourcing experiments investigating different interaction and results presentation styles [1, 6], and (ii) the creation and analysis of the first SCS dataset and annotation schema through designing and conducting an observational study of information seeking dialogues [11, 5, 7]. We propose new research directions and design recommendations based on the triangulation of three different datasets and methods: the log analysis to identify practical challenges and limitations of existing systems while informing our future observational study; the crowdsourcing experiment to validate a new experimental setup for future search engine results presentation investigations; and the observational study to establish the SCS dataset (SCSdata), form the first Spoken Conversational Search Annotation Schema (SCoSAS), and study interaction behaviours for different task complexities. Our principal contributions are based on our observational study for which we developed a novel methodology utilising a qualitative design [10]. We show that existing information seeking models may be insufficient for the new SCS search paradigm because they inadequately capture meta-discourse functions and the system's role as an active agent. Thus, the results indicate that SCS systems have to support the user through discourse functions and be actively involved in the users' search process. This suggests that interactivity between the user and system is necessary to overcome the increased complexity which has been imposed upon the user and system by the constraints of the audio-only communication channel [4]. We then present the first schematic model for SCS which is derived from the SCoSAS through the qualitative analysis of the SCSdata. In addition, we demonstrate the applicability of our dataset by investigating the effect of task complexity on interaction and discourse behaviour.
Lastly, we present SCS design recommendations and outline new research directions for SCS. The implications of our work are practical, conceptual, and methodological. The practical implications include the development of the SCSdata, the SCoSAS, and SCS design recommendations. The conceptual implications include the development of a schematic SCS model which identifies the need for increased interactivity and pro-activity to overcome the audio-imposed complexity in SCS. The methodological implications include the development of the crowdsourcing framework, and techniques for developing and analysing SCS datasets. In summary, we believe that our findings can guide researchers and developers to help improve existing interactive systems which are less constrained, such as mobile search, as well as more constrained systems such as SCS systems.
APA, Harvard, Vancouver, ISO, and other styles
29

Moldt, Julia-Astrid, Teresa Festl-Wietek, Amir Madany Mamlouk, and Anne Herrmann-Werner. "Assessing medical students’ perceived stress levels by comparing a chatbot-based approach to the Perceived Stress Questionnaire (PSQ20) in a mixed-methods study." DIGITAL HEALTH 8 (January 2022): 205520762211390. http://dx.doi.org/10.1177/20552076221139092.

Full text
Abstract:
Objective Digital transformation in higher education has presented medical students with new challenges, which has increased the difficulty of organising their own studies. The main objective of this study is to evaluate the effectiveness of a chatbot in assessing the stress levels of medical students in everyday conversations and to identify the main condition for accepting a chatbot as a conversational partner based on validated stress instruments, such as the Perceived Stress Questionnaire (PSQ20). Methods In this mixed-methods research design, medical-student stress level was assessed using a quantitative (digital- and paper-based versions of PSQ20) and qualitative (chatbot conversation) study design. PSQ20 items were also shortened to investigate whether medical students’ stress levels can be measured in everyday conversations. Therefore, items were integrated into the chat between medical students and a chatbot named Melinda. Results PSQ20 revealed increased stress levels in 43.4% of medical students who participated (N = 136). The integrated PSQ20 items in the conversations with Melinda obtained similar subjective stress degree results in the statistical analysis of both PSQ20 versions. Qualitative analysis revealed that certain functional and technical requirements have a significant impact on the expected use and success of the chatbot. Conclusion The results suggest that chatbots are promising as personal digital assistants for medical students; they can detect students’ stress factors during the conversation. Increasing the chatbot's technical and social capabilities could have a positive impact on user acceptance.
APA, Harvard, Vancouver, ISO, and other styles
30

Richer, Robert, Nan Zhao, Bjoern M. Eskofier, and Joseph A. Paradiso. "Exploring Smart Agents for the Interaction with Multimodal Mediated Environments." Multimodal Technologies and Interaction 4, no. 2 (June 6, 2020): 27. http://dx.doi.org/10.3390/mti4020027.

Full text
Abstract:
After conversational agents have been made available to the broader public, we speculate that applying them as a mediator for adaptive environments reduces control complexity and increases user experience by providing a more natural interaction. We implemented and tested four agents, each of them differing in their system intelligence and input modality, as personal assistants for Mediated Atmospheres, an adaptive smart office prototype. They were evaluated in a user study (N = 33) to collect subjective and objective measures. Results showed that a smartphone application was the most favorable system, followed by conversational text and voice agents that were perceived as being more engaging and intelligent than a non-conversational voice agent. Significant differences were observed between native and non-native speakers in both subjective and objective measures. Our findings reveal the potential of conversational agents for the interaction with adaptive environments to reduce work and information overload.
APA, Harvard, Vancouver, ISO, and other styles
31

Brinkschulte, Luisa, Stephan Schlögl, Alexander Monz, Pascal Schöttle, and Matthias Janetschek. "Perspectives on Socially Intelligent Conversational Agents." Multimodal Technologies and Interaction 6, no. 8 (July 25, 2022): 62. http://dx.doi.org/10.3390/mti6080062.

Full text
Abstract:
The propagation of digital assistants is consistently progressing. Manifested by an uptake of ever more human-like conversational abilities, respective technologies are moving increasingly away from their role as voice-operated task enablers and becoming rather companion-like artifacts whose interaction style is rooted in anthropomorphic behavior. One of the required characteristics in this shift from a utilitarian tool to an emotional character is the adoption of social intelligence. Although past research has recognized this need, more multi-disciplinary investigations should be devoted to the exploration of relevant traits and their potential embedding in future agent technology. Aiming to lay a foundation for further developments, we report on the results of a Delphi study highlighting the respective opinions of 21 multi-disciplinary domain experts. Results exhibit 14 distinctive characteristics of social intelligence, grouped into different levels of consensus, maturity, and abstraction, which may be considered a relevant basis, assisting the definition and consequent development of socially intelligent conversational agents.
APA, Harvard, Vancouver, ISO, and other styles
32

Harrington, Christina, and Amanda Woodward. "I Wouldn’t Search That With My Mobile Phone: Credibility and Trust in OHIRs Among Lower-Income Black Older Adults." Innovation in Aging 5, Supplement_1 (December 1, 2021): 507. http://dx.doi.org/10.1093/geroni/igab046.1960.

Full text
Abstract:
Abstract Online health information resources (OHIRs) such as conversational assistants and smart devices that provide access to consumer health information in the home are promoted as viable options for older adults to independently manage health. However, there is a question as to how well these devices are perceived to meet the needs of marginalized populations such as lower-income Black older adults, who often experience lower digital literacy or technology proficiency. We examined the experiences of 34 lower-income Black older adults aged 65–83 from Chicago and Detroit with various OHIRs and explored whether conversational resources were perceived to better support health information seeking compared to traditional online web searching. In a three-phase study, participants tracked their experiences with various OHIRs and documented health-related questions in a health diary. Participants were then interviewed about their diaries in focus groups and semi-structured interviews, followed by a technology critique and co-design session to re-envision a more usable and engaging conversational device. We present preliminary results of the themes that emerged from our analysis: cultural variables in health information seeking practices, perceptions of credibility, likelihood of use, and system accessibility. Participants indicated that their trust of different resources depended on the type of information sought, and that conversational assistants would be a useful resource that requires less technology proficiency, even among those with lower e-health literacy. Although our findings indicate that familiarity and trust were salient constructs associated with perceptions of OHIRs, these devices may address digital literacy and technology familiarity with certain design considerations.
APA, Harvard, Vancouver, ISO, and other styles
33

Davis, Courtney R., Karen J. Murphy, Rachel G. Curtis, and Carol A. Maher. "A Process Evaluation Examining the Performance, Adherence, and Acceptability of a Physical Activity and Diet Artificial Intelligence Virtual Health Assistant." International Journal of Environmental Research and Public Health 17, no. 23 (December 7, 2020): 9137. http://dx.doi.org/10.3390/ijerph17239137.

Full text
Abstract:
Artificial intelligence virtual health assistants are a promising emerging technology. This study is a process evaluation of a 12-week pilot physical activity and diet program delivered by virtual assistant “Paola”. This single-arm repeated measures study (n = 28, aged 45–75 years) was evaluated on technical performance (accuracy of conversational exchanges), engagement (number of weekly check-ins completed), adherence (percentage of step goal and recommended food servings), and user feedback. Paola correctly asked scripted questions and responded to participants during the check-ins 97% and 96% of the time, respectively, but correctly responded to spontaneous exchanges only 21% of the time. Participants completed 63% of weekly check-ins and conducted a total of 3648 exchanges. Mean dietary adherence was 91% and was lowest for discretionary foods, grains, red meat, and vegetables. Participants met their step goal 59% of the time. Participants enjoyed the program and found Paola useful during check-ins but not for spontaneous exchanges. More in-depth knowledge, personalized advice and spontaneity were identified as important improvements. Virtual health assistants should ensure an adequate knowledge base and ability to recognize intents and entities, include personality and spontaneity, and provide ongoing technical troubleshooting of the virtual assistant to ensure the assistant remains effective.
APA, Harvard, Vancouver, ISO, and other styles
34

Curtis, Rachel G., Bethany Bartel, Ty Ferguson, Henry T. Blake, Celine Northcott, Rosa Virgara, and Carol A. Maher. "Improving User Experience of Virtual Health Assistants: Scoping Review." Journal of Medical Internet Research 23, no. 12 (December 21, 2021): e31737. http://dx.doi.org/10.2196/31737.

Full text
Abstract:
Background Virtual assistants can be used to deliver innovative health programs that provide appealing, personalized, and convenient health advice and support at scale and low cost. Design characteristics that influence the look and feel of the virtual assistant, such as visual appearance or language features, may significantly influence users’ experience and engagement with the assistant. Objective This scoping review aims to provide an overview of the experimental research examining how design characteristics of virtual health assistants affect user experience, summarize the findings of that research, and provide recommendations for the design of virtual health assistants if sufficient evidence exists. Methods We searched 5 electronic databases (Web of Science, MEDLINE, Embase, PsycINFO, and ACM Digital Library) to identify the studies that used an experimental design to compare the effects of design characteristics between 2 or more versions of an interactive virtual health assistant on user experience among adults. Data were synthesized descriptively. Health domains, design characteristics, and outcomes were categorized, and descriptive statistics were used to summarize the body of research. Results for each study were categorized as positive, negative, or no effect, and a matrix of the design characteristics and outcome categories was constructed to summarize the findings. Results The database searches identified 6879 articles after the removal of duplicates. We included 48 articles representing 45 unique studies in the review. The most common health domains were mental health and physical activity. Studies most commonly examined design characteristics in the categories of visual design or conversational style and relational behavior and assessed outcomes in the categories of personality, satisfaction, relationship, or use intention.
Over half of the design characteristics were examined by only 1 study. Results suggest that empathy and relational behavior and self-disclosure are related to more positive user experience. Results also suggest that if a human-like avatar is used, realistic rendering and medical attire may potentially be related to more positive user experience; however, more research is needed to confirm this. Conclusions There is a growing body of scientific evidence examining the impact of virtual health assistants’ design characteristics on user experience. Taken together, data suggest that the look and feel of a virtual health assistant does affect user experience. Virtual health assistants that show empathy, display nonverbal relational behaviors, and disclose personal information about themselves achieve better user experience. At present, the evidence base is broad, and the studies are typically small in scale and highly heterogeneous. Further research, particularly using longitudinal research designs with repeated user interactions, is needed to inform the optimal design of virtual health assistants.
APA, Harvard, Vancouver, ISO, and other styles
35

Rabassa, Valérie, Ouidade Sabri, and Claire Spaletta. "Conversational commerce: Do biased choices offered by voice assistants’ technology constrain its appropriation?" Technological Forecasting and Social Change 174 (January 2022): 121292. http://dx.doi.org/10.1016/j.techfore.2021.121292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Cho, Janghee, and Emilee Rader. "The Role of Conversational Grounding in Supporting Symbiosis Between People and Digital Assistants." Proceedings of the ACM on Human-Computer Interaction 4, CSCW1 (May 28, 2020): 1–28. http://dx.doi.org/10.1145/3392838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jannach, Dietmar, Ahtsham Manzoor, Wanling Cai, and Li Chen. "A Survey on Conversational Recommender Systems." ACM Computing Surveys 54, no. 5 (June 2021): 1–36. http://dx.doi.org/10.1145/3453154.

Full text
Abstract:
Recommender systems are software applications that help users to find items of interest in situations of information overload. Current research often assumes a one-shot interaction paradigm, where the users’ preferences are estimated based on past observed behavior and where the presentation of a ranked list of suggestions is the main, one-directional form of user interaction. Conversational recommender systems (CRS) take a different approach and support a richer set of interactions. These interactions can, for example, help to improve the preference elicitation process or allow the user to ask questions about the recommendations and to give feedback. The interest in CRS has significantly increased in the past few years. This development is mainly due to the significant progress in the area of natural language processing, the emergence of new voice-controlled home assistants, and the increased use of chatbot technology. With this article, we provide a detailed survey of existing approaches to conversational recommendation. We categorize these approaches in various dimensions, e.g., in terms of the supported user intents or the knowledge they use in the background. Moreover, we discuss technological approaches, review how CRS are evaluated, and finally identify a number of gaps that deserve more research in the future.
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Chang, and Hideyoshi Yanagisawa. "Intrinsic motivation in virtual assistant interaction for fostering spontaneous interactions." PLOS ONE 16, no. 4 (April 23, 2021): e0250326. http://dx.doi.org/10.1371/journal.pone.0250326.

Full text
Abstract:
With the growing utility of today’s conversational virtual assistants, the importance of user motivation in human–artificial intelligence interactions is becoming more obvious. However, previous studies in this and related fields, such as human–computer interaction, scarcely discussed intrinsic motivation (the motivation to interact with the assistants for fun). Previous studies either treated motivation as an inseparable concept or focused on non-intrinsic motivation (the motivation to interact with the assistant for utilitarian purposes). The current study aims to cover intrinsic motivation by taking an affective engineering approach. A novel motivation model is proposed, in which intrinsic motivation is affected by two factors that derive from user interactions with virtual assistants: expectation of capability and uncertainty. Experiments in which these two factors are manipulated by making participants believe they are interacting with the smart speaker “Amazon Echo” are conducted. Intrinsic motivation is measured both by using questionnaires and by covertly monitoring a five-minute free-choice period in the experimenter’s absence, during which the participants could decide for themselves whether to interact with the virtual assistants. Results of the first experiment showed that high expectation engenders more intrinsically motivated interaction compared with low expectation. However, the results did not support our hypothesis that expectation and uncertainty have an interaction effect on intrinsic motivation. We then revised our hypothetical model of action selection accordingly and conducted a verification experiment of the effects of uncertainty. Results of the verification experiment showed that reducing uncertainty encourages more interactions and causes the motivation behind these interactions to shift from non-intrinsic to intrinsic.
APA, Harvard, Vancouver, ISO, and other styles
39

Agarwal, Vineet, and Anjali Shukla. "Chatbot for Interview." International Journal of Recent Technology and Engineering (IJRTE) 11, no. 2 (July 30, 2022): 46–49. http://dx.doi.org/10.35940/ijrte.b7092.0711222.

Full text
Abstract:
The advent of virtual assistants has made communicating with computers a reality. Chatbots are virtual assistant tools designed to simplify communication between humans and computers. A chatbot will answer your queries and execute a certain computation if required. Chatbots can be developed using natural language processing (NLP) and deep learning; NLP techniques such as Naïve Bayes can be used. Chatbots can be implemented for fun purposes like chit-chat; these are called conversational chatbots. Chatbots designed to answer any question are known as horizontal chatbots, and specific task-oriented chatbots are known as vertical chatbots (also known as closed-domain chatbots). In this paper, we will be discussing a task-oriented chatbot to help the recruitment team in the technical round of the interview process.
APA, Harvard, Vancouver, ISO, and other styles
40

Robe, Peter, and Sandeep Kaur Kuttal. "Designing PairBuddy—A Conversational Agent for Pair Programming." ACM Transactions on Computer-Human Interaction 29, no. 4 (August 31, 2022): 1–44. http://dx.doi.org/10.1145/3498326.

Full text
Abstract:
From automated customer support to virtual assistants, conversational agents have transformed everyday interactions, yet despite phenomenal progress, no agent exists for programming tasks. To understand the design space of such an agent, we prototyped PairBuddy—an interactive pair programming partner—based on research from conversational agents, software engineering, education, human-robot interactions, psychology, and artificial intelligence. We iterated PairBuddy’s design using a series of Wizard-of-Oz studies. Our pilot study of six programmers showed promising results and provided insights toward PairBuddy’s interface design. In our second study of 14 programmers, PairBuddy was positively praised across all skill levels. PairBuddy’s active application of soft skills—adaptability, motivation, and social presence—as a navigator increased participants’ confidence and trust, while its technical skills—code contributions, just-in-time feedback, and creativity support—as a driver helped participants realize their own solutions. PairBuddy takes the first step towards an Alexa-like programming partner.
APA, Harvard, Vancouver, ISO, and other styles
41

Merdivan, Erinc, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger, and Matthieu Geist. "Human Annotated Dialogues Dataset for Natural Conversational Agents." Applied Sciences 10, no. 3 (January 21, 2020): 762. http://dx.doi.org/10.3390/app10030762.

Full text
Abstract:
Conversational agents are gaining huge popularity in industrial applications such as digital assistants, chatbots, and particularly systems for natural language understanding (NLU). However, a major drawback is the unavailability of a common metric to evaluate replies against human judgement for conversational agents. In this paper, we develop a benchmark dataset with human annotations and diverse replies that can be used to develop such a metric for conversational agents. The paper introduces a high-quality human-annotated movie dialogue dataset, HUMOD, that is developed from the Cornell movie dialogues dataset. This new dataset comprises 28,500 human responses from 9500 multi-turn dialogue history-reply pairs. Human responses include: (i) ratings of the dialogue reply in relevance to the dialogue history; and (ii) unique dialogue replies for each dialogue history from the users. Such unique dialogue replies enable researchers to evaluate their models against six unique human responses for each given history. A detailed analysis of how dialogues are structured and of human perception of dialogue scores in comparison with existing models is also presented.
APA, Harvard, Vancouver, ISO, and other styles
42

Bickmore, Timothy W., Ha Trinh, Stefan Olafsson, Teresa K. O'Leary, Reza Asadi, Nathaniel M. Rickles, and Ricardo Cruz. "Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant." Journal of Medical Internet Research 20, no. 9 (September 4, 2018): e11510. http://dx.doi.org/10.2196/11510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kolenik, Tine, and Matjaž Gams. "Intelligent Cognitive Assistants for Attitude and Behavior Change Support in Mental Health: State-of-the-Art Technical Review." Electronics 10, no. 11 (May 24, 2021): 1250. http://dx.doi.org/10.3390/electronics10111250.

Full text
Abstract:
Intelligent cognitive assistant (ICA) technology is used in various domains to emulate human behavior expressed through synchronous communication, especially written conversation. Due to their ability to use individually tailored natural language, they present a powerful vessel to support attitude and behavior change. Behavior change support systems are emerging as a crucial tool in digital mental health services, and ICAs excel at providing effective support, especially for stress, anxiety and depression (SAD), where ICAs guide people’s thought processes and actions by analyzing their affective and cognitive phenomena. Currently, there is no comprehensive review of such ICAs from a technical standpoint, and existing work is conducted exclusively from a psychological or medical perspective. This technical state-of-the-art review tried to discern and systematize current technological approaches and trends as well as detail the highly interdisciplinary landscape of intersections between ICAs, attitude and behavior change, and mental health, focusing on text-based ICAs for SAD. Ten papers with systems fitting our criteria were selected. The systems varied significantly in their approaches, with the most successful opting for comprehensive user models, classification-based assessment, personalized intervention, and dialogue tree conversational models.
APA, Harvard, Vancouver, ISO, and other styles
44

Karyotaki, Maria, Athanasios Drigas, and Charalabos Skianis. "Chatbots as Cognitive, Educational, Advisory & Coaching Systems." Technium Social Sciences Journal 30 (April 9, 2022): 109–26. http://dx.doi.org/10.47577/tssj.v30i1.6277.

Full text
Abstract:
Chatbots are software applications that simulate human communication with the aim of raising adherence and engagement in human–system interaction. Text-messaging-based conversational agents (CAs) make use of natural language processing and improve by learning, allowing coherent two-way communication with humans, whether spoken or written, as well as real-time decision-making. Chatbots serve as means of learning and teaching, as virtual assistants, and as social companions. Machine learning algorithms embedded in chatbots simulate human cognition, including cognitive learning, decision making, and adaptation to the environment. Thus, the future of artificial intelligence in chatbots lies in the development of a global, reliable, and sustainable ecosystem of knowledge, skills, and values, built by bringing together all interested stakeholders, such as scientists, consumers, businesses, and the state. Such effective customized/personalized learning frameworks depend on sophisticated conversational flows based on user models, which cluster user preferences and attributes and combine them with learning analytics to infer end users' personal skills, knowledge mastery, learning ability, and professional development.
APA, Harvard, Vancouver, ISO, and other styles
45

Rehman, Ubaid Ur, Dong Jin Chang, Younhea Jung, Usman Akhtar, Muhammad Asif Razzaq, and Sungyoung Lee. "Medical Instructed Real-Time Assistant for Patient with Glaucoma and Diabetic Conditions." Applied Sciences 10, no. 7 (March 25, 2020): 2216. http://dx.doi.org/10.3390/app10072216.

Full text
Abstract:
Virtual assistants are involved in the daily activities of humans, such as managing calendars, making appointments, and providing wake-up calls. They provide a conversational service to customers around the clock and make their daily lives manageable. With this emerging trend, many well-known companies have launched their own virtual assistants that manage customers' daily routine activities. In the healthcare sector, virtual medical assistants also provide a list of relevant diseases linked to a specific symptom. Due to low accuracy and high uncertainty, these recommendations cannot be trusted and may lead to hypochondriasis. In this study, we propose a Medical Instructed Real-time Assistant (MIRA) that listens to the user's chief complaint and predicts a specific disease. Instead of being informed about the medical condition, the user is referred to a nearby appropriate medical specialist. We designed an architecture for MIRA that addresses the limitations of existing virtual medical assistants, such as weak authentication, an inability to understand multiple-intent statements about a specific medical condition, and uncertain diagnosis recommendations. To implement the designed architecture, we collected the chief complaints along with the dialogue corpora of real patients and manually validated these data under the supervision of medical specialists. We then used the data for natural language understanding, disease identification, and appropriate response generation. For the prototype version of MIRA, we considered only the cases of glaucoma (an eye disease) and diabetes (an autoimmune disease). MIRA's performance was evaluated in terms of accuracy (89%), precision (90%), sensitivity (89.8%), specificity (94.9%), and F-measure (89.8%). Task completion was calculated using Cohen's Kappa (k = 0.848), which categorizes MIRA as 'Almost Perfect'. Furthermore, voice-based authentication identifies the user effectively and prevents masquerading attacks. The user experience also shows relatively good results in all respects according to the User Experience Questionnaire (UEQ) benchmark data. The experimental results show that MIRA efficiently predicts a disease based on chief complaints and supports the user in decision making.
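The evaluation metrics reported for MIRA (accuracy, precision, sensitivity, specificity, F-measure) are the standard confusion-matrix quantities, which can be sketched as below. The counts in the example are made up for illustration; the paper reports only the final scores.

```python
# Standard binary-classification metrics from a confusion matrix:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives.

def metrics(tp, fp, fn, tn):
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                     # a.k.a. recall
    specificity = tn / (tn + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f_measure

# Illustrative counts only (not from the paper):
acc, prec, sens, spec, f1 = metrics(tp=90, fp=10, fn=10, tn=190)
print(f"accuracy={acc:.3f} precision={prec:.3f} F1={f1:.3f}")
```

Note that the F-measure is the harmonic mean of precision and sensitivity, which is why MIRA's reported F-measure (89.8%) sits between its precision (90%) and sensitivity (89.8%).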
APA, Harvard, Vancouver, ISO, and other styles
46

Lago, André Sousa, João Pedro Dias, and Hugo Sereno Ferreira. "Managing non-trivial internet-of-things systems with conversational assistants: A prototype and a feasibility experiment." Journal of Computational Science 51 (April 2021): 101324. http://dx.doi.org/10.1016/j.jocs.2021.101324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bragg, Danielle, Katharina Reinecke, and Richard E. Ladner. "Expanding a Large Inclusive Study of Human Listening Rates." ACM Transactions on Accessible Computing 14, no. 3 (September 30, 2021): 1–26. http://dx.doi.org/10.1145/3461700.

Full text
Abstract:
As conversational agents and digital assistants become increasingly pervasive, understanding their synthetic speech becomes increasingly important. Simultaneously, speech synthesis is becoming more sophisticated and manipulable, providing the opportunity to optimize speech rate to save users time. However, little is known about people’s abilities to understand fast speech. In this work, we provide an extension of the first large-scale study on human listening rates, enlarging the prior study run with 453 participants to 1,409 participants and adding new analyses on this larger group. Run on LabintheWild, it used volunteer participants, was screen reader accessible, and measured listening rate by accuracy at answering questions spoken by a screen reader at various rates. Our results show that people who are visually impaired, who often rely on audio cues and access text aurally, generally have higher listening rates than sighted people. The findings also suggest a need to expand the range of rates available on personal devices. These results demonstrate the potential for users to learn to listen to faster rates, expanding the possibilities for human-conversational agent interaction.
APA, Harvard, Vancouver, ISO, and other styles
48

Seaborn, Katie, Norihisa P. Miyake, Peter Pennefather, and Mihoko Otake-Matsuura. "Voice in Human–Agent Interaction." ACM Computing Surveys 54, no. 4 (May 2021): 1–43. http://dx.doi.org/10.1145/3386867.

Full text
Abstract:
Social robots, conversational agents, voice assistants, and other embodied AI are increasingly a feature of everyday life. What connects these various types of intelligent agents is their ability to interact with people through voice. Voice is becoming an essential modality of embodiment, communication, and interaction between computer-based agents and end-users. This survey presents a meta-synthesis on agent voice in the design and experience of agents from a human-centered perspective: voice-based human–agent interaction (vHAI). Findings emphasize the social role of voice in HAI as well as circumscribe a relationship between agent voice and body, corresponding to human models of social psychology and cognition. Additionally, changes in perceptions of and reactions to agent voice over time reveal a generational shift coinciding with the commercial proliferation of mobile voice assistants. The main contributions of this work are a vHAI classification framework for voice across various agent forms, contexts, and user groups, a critical analysis grounded in key theories, and an identification of future directions for the oncoming wave of vocal machines.
APA, Harvard, Vancouver, ISO, and other styles
49

Toader, Diana-Cezara, Grațiela Boca, Rita Toader, Mara Măcelaru, Cezar Toader, Diana Ighian, and Adrian T. Rădulescu. "The Effect of Social Presence and Chatbot Errors on Trust." Sustainability 12, no. 1 (December 27, 2019): 256. http://dx.doi.org/10.3390/su12010256.

Full text
Abstract:
This article explores the potential of Artificial Intelligence (AI) chatbots for creating positive change by supporting customers in the digital realm. Our study, which focuses on the customer and his/her declarative psychological responses to an interaction with a virtual assistant, fills a gap in digital marketing research, where little attention has been paid to the impact of Error and Gender, as well as the extent to which Social Presence and Perceived Competence mediate the relationships between Anthropomorphic design cues and Trust. We provide consistent evidence of the significant negative effect of erroneous conversational interfaces on several constructs in our conceptual model, such as perceived competence, trust, and positive consumer responses. We also support previous research findings and confirm that people apply biased thinking across gender and that this categorization influences their acceptance of chatbots taking on social roles. The results of an empirical study demonstrate that highly anthropomorphized female chatbots that engage in social behaviors significantly shape positive consumer responses, even in the error condition. Moreover, female virtual assistants are much more readily forgiven for errors than male chatbots.
APA, Harvard, Vancouver, ISO, and other styles
50

Marín, Diana Pérez. "A Procedure to Engage Children in the Morphological and Syntax Analysis of Pedagogic Conversational Agent-Generated Sentences to Study Language." International Journal of Online Pedagogy and Course Design 5, no. 2 (April 2015): 23–42. http://dx.doi.org/10.4018/ijopcd.2015040103.

Full text
Abstract:
Pedagogic Conversational Agents are computer applications that interact with students in natural language, usually focusing the dialogue on a certain topic under study. In this paper, the authors propose that children study morphology and syntax using a Pedagogic Conversational Agent. The main benefit is that the agent can generate an unbounded number of sentences, and it automatically generates the morphological and syntactic analysis from a given grammar. That way, students can practise with all the sentences they need, receive immediate feedback through automatic evaluation at their own pace, and the level of difficulty can be adapted to their particular competence in analysis. Given the originality of this new computer-assisted learning initiative, the authors devised a procedure to engage students in dialogue with the agent to carry out the morphological and syntax analysis at five different levels of difficulty, and tested the validity of the approach with a limited number of users according to the principles of User-Centered Design. The results gathered provide evidence of the soundness of the procedure and encourage us to keep working in this promising field of using pedagogic agents as computer-based language teaching assistants.
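The core mechanism described in the abstract, generating sentences from a grammar so that each derivation doubles as its own syntax analysis, can be sketched as follows. The grammar and vocabulary here are illustrative assumptions, not the paper's Spanish grammar.

```python
# Sketch of grammar-driven sentence generation: a context-free grammar
# yields an effectively unbounded set of sentences, and the derivation
# itself is the bracketed syntactic analysis the student must produce.

import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["teacher"], ["ball"]],
    "V":   [["sees"], ["kicks"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol; return (sentence_words, bracketed_parse)."""
    if symbol not in GRAMMAR:              # terminal symbol: a word
        return [symbol], symbol
    production = rng.choice(GRAMMAR[symbol])
    words, parts = [], []
    for sym in production:
        w, p = generate(sym, rng)
        words += w
        parts.append(p)
    return words, f"[{symbol} {' '.join(parts)}]"

words, parse = generate()
print(" ".join(words))
print(parse)  # e.g. [S [NP [Det the] [N cat]] [VP [V sees] [NP ...]]]
```

Adapting difficulty then amounts to enlarging the grammar (adding adjectives, relative clauses, and so on) so that deeper derivations appear.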
APA, Harvard, Vancouver, ISO, and other styles