Journal articles on the topic 'Voice-user interface (VUI)'


Consult the top 22 journal articles for your research on the topic 'Voice-user interface (VUI).'


1

Stigall, Brodrick, and Kelly Caine. "Towards Self Expression Through Voice User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (September 2022): 1800–1804. http://dx.doi.org/10.1177/1071181322661048.

Abstract:
Self-expression through expressive writing yields positive health outcomes. However, people who have difficulty writing by traditional means may have difficulty accessing these benefits. This work proposes a study to investigate whether the positive effects of self-expression can be achieved through a voice user interface (VUI). The study compares traditional expression (writing) to expression using a VUI (voice). This work will extend the realm of expressive writing research to include voice user interfaces (VUIs) as a medium of expression. We expect expression through VUIs to yield results similar to traditional methods of expression such as writing. This finding would indicate that we may be able to make the benefits of expressive writing such as positive health outcomes available to people who cannot write by traditional means.
2

Wagner, Amber, and Jeff Gray. "An Empirical Evaluation of a Vocal User Interface for Programming by Voice." International Journal of Information Technologies and Systems Approach 8, no. 2 (July 2015): 47–63. http://dx.doi.org/10.4018/ijitsa.2015070104.

Abstract:
Although Graphical User Interfaces (GUIs) often improve usability, individuals with physical disabilities may be unable to use a mouse and keyboard to navigate through a GUI-based application. In such situations, a Vocal User Interface (VUI) may be a viable alternative. Existing vocal tools (e.g., Vocal Joystick) can be integrated into software applications; however, integrating an assistive technology into a legacy application may require tedious and manual adaptation. Furthermore, the challenges are deeper for an application whose GUI changes dynamically (e.g., based on the context of the program) and evolves with each new application release. This paper provides a discussion of challenges observed while mapping a GUI to a VUI. The context of the authors' examples and evaluation are taken from Myna, which is the VUI that is mapped to the Scratch programming environment. Initial user studies on the effectiveness of Myna are also presented in the paper.
3

Lee, Byeong Ki, and Jae Young Yun. "User Experience on Korean Honorific Expressions and Voice Age of Voice User Interface." Journal of the HCI Society of Korea 14, no. 4 (December 31, 2019): 49–57. http://dx.doi.org/10.17210/jhsk.2019.12.14.4.49.

4

George, Jomin, Aju Abraham, and Elizabeth Ndakukamo. "Futuristic applications of voice user interference on child language development." Future Technology 2, no. 3 (August 15, 2023): 5–11. http://dx.doi.org/10.55670/fpll.futech.2.3.2.

Abstract:
Voice User Interface (VUI) is an Artificial Intelligence tool that enables children to access a computing device and complete tasks through speech instead of traditional learning methods. VUI, a form of AI (Artificial Intelligence), takes the sounds that children articulate in a spoken statement and uses intent recognition to understand the action required to fulfill the child’s spoken request. The design and features of VUI have been developed to increase the interpersonal level of communication with users and, to some degree, make voice assistants behave like humans. These features have been shaped in such a way as to improve learning efficacy and ease of use for early childhood learning development. The VUIs currently available on the market are geared towards providing children with a simpler way to access and interact with educational technology learning tools. The research posits that there are two primary uses of VUI in childhood learning development exploration: children use VUI as a form of entertainment and information seeking, and children use VUI to develop various knowledge facets. For children in the early language stages who are already using language to communicate, VUI language stimulation can help them engage in continuous communication processes, use and understand various words, and successfully complete more complex sentences. The research sets out the problems associated with VUI and the standard opinions in the research literature on those problems. Moreover, the study articulates the hypothesis that VUI is an effective tool for early childhood language learning, using peer-reviewed evidence and examples to generate new and innovative perspectives.
5

Langer, Dorothea, Franziska Legler, Philipp Kotsch, André Dettmann, and Angelika C. Bullinger. "I Let Go Now! Towards a Voice-User Interface for Handovers between Robots and Users with Full and Impaired Sight." Robotics 11, no. 5 (October 15, 2022): 112. http://dx.doi.org/10.3390/robotics11050112.

Abstract:
Handing over objects is a collaborative task that requires participants to synchronize their actions in terms of space and time, as well as their adherence to social standards. If one participant is a social robot and the other a visually impaired human, actions should favorably be coordinated by voice. User requirements for such a Voice-User Interface (VUI), as well as its required structure and content, are unknown so far. In our study, we applied the user-centered design process to develop a VUI for visually impaired humans and humans with full sight. Iterative development was conducted with interviews, workshops, and user tests to derive VUI requirements, dialog structure, and content. A final VUI prototype was evaluated in a standardized experiment with 60 subjects who were visually impaired or fully sighted. Results show that the VUI enabled all subjects to successfully receive objects with an error rate of only 1.8%. Likeability and accuracy were evaluated best, while habitability and speed of interaction were shown to need improvement. Qualitative feedback supported and detailed results, e.g., how to shorten some dialogs. To conclude, we recommend that inclusive VUI design for social robots should give precise information for handover processes and pay attention to social manners.
6

Song, Yao, Yanpu Yang, and Peiyao Cheng. "The Investigation of Adoption of Voice-User Interface (VUI) in Smart Home Systems among Chinese Older Adults." Sensors 22, no. 4 (February 18, 2022): 1614. http://dx.doi.org/10.3390/s22041614.

Abstract:
Driven by advanced voice interaction technology, the voice-user interface (VUI) has gained popularity in recent years. VUI has been integrated into various devices in the context of the smart home system. In comparison with traditional interaction methods, VUI provides multiple benefits. VUI allows for hands-free and eyes-free interaction. It also enables users to perform multiple tasks while interacting. Moreover, as VUI is highly similar to a natural conversation in daily lives, it is intuitive to learn. The advantages provided by VUI are particularly beneficial to older adults, who suffer from decreases in physical and cognitive abilities, which hinder their interaction with electronic devices through traditional methods. However, the factors that influence older adults’ adoption of VUI remain unknown. This study addresses this research gap by proposing a conceptual model. On the basis of the technology adoption model (TAM) and the senior technology adoption model (STAM), this study considers the characteristic of VUI and the characteristic of older adults through incorporating the construct of trust and aging-related characteristics (i.e., perceived physical conditions, mobile self-efficacy, technology anxiety, self-actualization). A survey was designed and conducted. A total of 420 Chinese older adults participated in this survey, and they were current or potential users of VUI. Through structural equation modeling, data were analyzed. Results showed a good fit with the proposed conceptual model. Path analysis revealed that three factors determine Chinese older adults’ adoption of VUI: perceived usefulness, perceived ease of use, and trust. Aging-related characteristics also influence older adults’ adoption of VUI, but they are mediated by perceived usefulness, perceived ease of use, and trust. Specifically, mobile self-efficacy is demonstrated to positively influence trust and perceived ease of use but negatively influence perceived usefulness. Self-actualization exhibits positive influences on perceived usefulness and perceived ease of use. Technology anxiety only exerts influence on perceived ease of use in a marginal way. No significant influences of perceived physical conditions were found. This study extends the TAM and STAM by incorporating additional variables to explain Chinese older adults’ adoption of VUI. These results also provide valuable implications for developing suitable VUI for older adults as well as planning actionable communication strategies for promoting VUI among Chinese older adults.
7

Kim, Minjung, Jieun Han, Hyo-Jin Kang, and Gyu Hyun Kwon. "Extracting usability dimensions of the voice user interface - Focusing on AI assistants-." Journal of the HCI Society of Korea 15, no. 1 (March 31, 2020): 53–64. http://dx.doi.org/10.17210/jhsk.2020.03.15.1.53.

8

Subhash S., Siddesh S., Prajwal N. Srivatsa, Ullas A., and Santhosh B. "Developing a Graphical User Interface for an Artificial Intelligence-Based Voice Assistant." International Journal of Organizational and Collective Intelligence 11, no. 3 (July 2021): 49–67. http://dx.doi.org/10.4018/ijoci.2021070104.

Abstract:
Artificial intelligence machinery has become extensively active in human life in recent times. Self-governing devices are enhancing the way they interact with both humans and other devices. Contemporary vision in this area can pave the way for a new mode of human-machine interaction in which machines understand human language and adapt and communicate through it. One such tool is the voice assistant, which can be incorporated into many other smart devices. In this article, the voice assistant receives audio from the microphone and converts it into text; the text response is then converted into an audio file with the help of ‘pyttsx3’, and the audio file is played. The audio is processed using the voice user interface (VUI). This article develops a functional intelligent personal assistant (IPA) and integrates it with a graphical user interface that can perform tasks such as switching smart applications ON/OFF based on user commands.
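To make the pipeline described in this abstract concrete, the following is a minimal Python sketch of a listen-recognize-respond loop, assuming the SpeechRecognition and pyttsx3 packages; the specific commands and responses are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the listen -> recognize -> respond loop described above.
# Assumes the SpeechRecognition and pyttsx3 packages; the "turn on/off"
# command handling is purely illustrative.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts_engine = pyttsx3.init()

def speak(text: str) -> None:
    """Convert a text response to audio and play it."""
    tts_engine.say(text)
    tts_engine.runAndWait()

def listen() -> str:
    """Capture audio from the microphone and convert it to text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""

if __name__ == "__main__":
    command = listen()
    if "turn on" in command:
        speak("Turning the application on.")   # hypothetical smart-app hook
    elif "turn off" in command:
        speak("Turning the application off.")
    else:
        speak("Sorry, I did not understand that.")
```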
9

Austerjost, Jonas, Marc Porr, Noah Riedel, Dominik Geier, Thomas Becker, Thomas Scheper, Daniel Marquard, Patrick Lindner, and Sascha Beutel. "Introducing a Virtual Assistant to the Lab: A Voice User Interface for the Intuitive Control of Laboratory Instruments." SLAS TECHNOLOGY: Translating Life Sciences Innovation 23, no. 5 (July 18, 2018): 476–82. http://dx.doi.org/10.1177/2472630318788040.

Abstract:
The introduction of smart virtual assistants (VAs) and corresponding smart devices brought a new degree of freedom to our everyday lives. Voice-controlled and Internet-connected devices allow intuitive device controlling and monitoring from all around the globe and define a new era of human–machine interaction. Although VAs are especially successful in home automation, they also show great potential as artificial intelligence-driven laboratory assistants. Possible applications include stepwise reading of standard operating procedures (SOPs) and recipes, recitation of chemical substance or reaction parameters to a control, and readout of laboratory devices and sensors. In this study, we present a retrofitting approach to make standard laboratory instruments part of the Internet of Things (IoT). We established a voice user interface (VUI) for controlling those devices and reading out specific device data. A benchmark of the established infrastructure showed a high mean accuracy (95% ± 3.62) of speech command recognition and reveals high potential for future applications of a VUI within the laboratory. Our approach shows the general applicability of commercially available VAs as laboratory assistants and might be of special interest to researchers with physical impairments or low vision. The developed solution enables a hands-free device control, which is a crucial advantage within the daily laboratory routine.
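The retrofitting idea sketched in this abstract, relaying a recognized voice command to an IoT-connected instrument, could look roughly like the Python sketch below; the MQTT topics, device names, and the paho-mqtt transport are assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch (not the authors' code) of how a recognized voice command
# might be relayed to a retrofitted, IoT-connected lab instrument over MQTT.
# Assumes the paho-mqtt package; topic names and commands are hypothetical.
import paho.mqtt.publish as publish

COMMAND_MAP = {
    "start the magnetic stirrer": ("lab/stirrer/cmd", "START"),
    "stop the magnetic stirrer": ("lab/stirrer/cmd", "STOP"),
    "read the ph sensor": ("lab/ph_sensor/cmd", "READ"),
}

def dispatch(recognized_text: str, broker: str = "localhost") -> bool:
    """Map a recognized utterance to a device topic and publish the command."""
    entry = COMMAND_MAP.get(recognized_text.strip().lower())
    if entry is None:
        return False  # unknown command; a real VUI would ask for clarification
    topic, payload = entry
    publish.single(topic, payload, hostname=broker)
    return True
```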
10

Aiman Alias, Muhammad Zharif, Wan Norsyafizan W. Muhamad, Darmawaty Mohd Ali, and Azlina Idris. "Voice User Interface (VUI) Smart Office Door Application in the Context of Covid-19 Pandemic." Proceedings of International Conference on Artificial Life and Robotics 27 (January 20, 2022): 981–89. http://dx.doi.org/10.5954/icarob.2022.os32-4.

11

Ochoa-Orihuel, Javier, Raúl Marticorena-Sánchez, and María Consuelo Sáiz-Manzanares. "Moodle LMS Integration with Amazon Alexa: A Practical Experience." Applied Sciences 10, no. 19 (September 29, 2020): 6859. http://dx.doi.org/10.3390/app10196859.

Abstract:
The frequency of interaction between teachers and students through Learning Management Systems (LMSs) is continuously rising. However, recent studies highlight the challenges current LMSs face in meeting the specific needs of students regarding usability and learnability. With the motivation to support research on the effectiveness of using a Voice User Interface (VUI) for education, this paper presents the work done (RQ1) to build the basic architecture for an Alexa skill for educational purposes, including its integration with Moodle, and (RQ2) to establish whether Moodle currently provides the necessary tools for voice-content creation to develop voice-first applications, aiming to provide new scientific insight to help researchers in future work of similar characteristics. As a result of this work, we provide guidelines for the architecture of an Alexa skill application integrated with Moodle through safe protocols, such as Alexa’s Account Linking Web Service, while our findings confirm the need for additional tooling within the Moodle platform for voice-content creation in order to create an appealing voice experience, with the capability to process Moodle data structures and produce sensible sentences that can be understood by users when spoken by a voice device.
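As a rough illustration of the architecture this abstract outlines, the sketch below shows a minimal Alexa skill backend that answers a request by calling Moodle's standard REST web service; the site URL, token handling, and intent are simplified assumptions rather than the authors' implementation.

```python
# Simplified sketch (assumptions, not the authors' implementation) of an Alexa
# skill backend that answers a "list my courses" request by querying Moodle's
# standard REST web service. Real deployments would use Account Linking to
# obtain a per-user token; here MOODLE_URL and the token handling are
# placeholders.
import json
import urllib.parse
import urllib.request

MOODLE_URL = "https://moodle.example.edu"  # hypothetical site

def get_courses(ws_token: str) -> list:
    """Call Moodle's core_course_get_courses web-service function."""
    params = urllib.parse.urlencode({
        "wstoken": ws_token,
        "wsfunction": "core_course_get_courses",
        "moodlewsrestformat": "json",
    })
    with urllib.request.urlopen(f"{MOODLE_URL}/webservice/rest/server.php?{params}") as resp:
        return json.load(resp)

def lambda_handler(event, context):
    """Minimal Alexa request handler: speak the names of the user's courses."""
    token = event.get("session", {}).get("user", {}).get("accessToken", "")
    names = [c["fullname"] for c in get_courses(token)] if token else []
    speech = ", ".join(names) if names else "Please link your Moodle account first."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```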
12

Sharma, Ashok, Ravindra Parshuram Bachate, Parveen Singh, Vinod Kumar, Ravi Kant Kumar, Amar Singh, and Madan Kadariya. "Parallel Big Bang-Big Crunch-LSTM Approach for Developing a Marathi Speech Recognition System." Mobile Information Systems 2022 (September 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/8708380.

Abstract:
The Voice User Interface (VUI) for human-computer interaction has received wide acceptance, due to which systems for speech recognition in regional languages are now being developed, taking into account all of the dialects. Because of the limited availability of the speech corpus (SC) of regional languages for research, designing a speech recognition system is challenging. This contribution provides a Parallel Big Bang-Big Crunch (PB3C)-based mechanism to automatically evolve the optimal architecture of an LSTM (Long Short-Term Memory) network. To decide the optimal architecture, we evolved the number of neurons and hidden layers of the LSTM model. We validated the proposed approach on a Marathi speech recognition system. In this research work, the performance of the proposed method is compared with a BBBC-based LSTM and a manually configured LSTM. The results indicate that the proposed approach is better than the two other approaches.
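The search described in this abstract evaluates LSTM architectures whose depth and width are the evolved hyperparameters. The Keras sketch below shows what such a candidate model might look like; the input shape, class count, and chosen values are placeholders, and the PB3C search loop itself is not shown.

```python
# Sketch of the kind of candidate the PB3C search would evaluate: a stacked
# LSTM acoustic model whose depth and width are the evolved hyperparameters.
# Assumes Keras/TensorFlow; input/output shapes are placeholders, and the
# Big Bang-Big Crunch search loop itself is not shown.
from tensorflow.keras import layers, models

def build_lstm(num_layers: int, num_neurons: int,
               num_frames: int = 100, num_features: int = 13,
               num_classes: int = 40) -> models.Model:
    """Build a stacked-LSTM classifier for a (num_layers, num_neurons) candidate."""
    model = models.Sequential()
    model.add(layers.Input(shape=(num_frames, num_features)))
    for i in range(num_layers):
        # All but the last LSTM layer return full sequences so layers can stack.
        model.add(layers.LSTM(num_neurons, return_sequences=(i < num_layers - 1)))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# A candidate such as (layers=2, neurons=128) would be trained and scored by
# validation accuracy, which the PB3C search then uses as its fitness value.
candidate = build_lstm(num_layers=2, num_neurons=128)
```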
13

Jang, Soonkyu, and Seulgi Kim. "Research on Service Availability Basis on VUI (Voice User Interface) for Preservation of Endangered Languages - Focused on Translation Service of Jeju Island Dialect as an Endangered Language -." Journal of the HCI Society of Korea 15, no. 2 (June 30, 2020): 47–55. http://dx.doi.org/10.17210/jhsk.2020.06.15.2.47.

14

Kim, Min Kyung, and Nam Choon Park. "A Study on the Error Handling Guidelines for the Design of Voice User Interface (VUI) - Focusing on a multidisciplinary approach to error type and feedback delivery systems -." Korean Society of Science & Art 37, no. 5 (December 31, 2019): 47–60. http://dx.doi.org/10.17548/ksaf.2019.12.30.47.

15

Chan, Sam W. T., Tamil Selvan Gunasekaran, Yun Suen Pai, Haimo Zhang, and Suranga Nanayakkara. "KinVoices: Using Voices of Friends and Family in Voice Interfaces." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–25. http://dx.doi.org/10.1145/3479590.

Abstract:
With voice user interfaces (VUIs) becoming ubiquitous and speech synthesis technology maturing, it is possible to synthesise voices to resemble our friends and relatives (which we will collectively call 'kin') and use them on VUIs. However, designing such interfaces and investigating how the familiarity of kin voices affect user perceptions remain under-explored. Our surveys and interviews with 25 users revealed that VUIs using kin voices were perceived as more engaging, persuasive and safer yet eerier than VUIs using common virtual assistant voices. We then developed a technology probe, KinVoice, an Alexa-based VUI that was deployed in three households over two weeks. Users set reminders using KinVoice, which in turn, gave the reminders in synthesised kin voices. This was to explore users' needs, uncover challenges involved and inspire new applications. We discuss design guidelines for integrating familiar kin voices into VUIs, applications that benefit from its usage, and implications for balancing voice realism and usability with security and diversification.
16

Stigall, Brodrick, and Kelly Caine. "A Systematic Review of Human Factors Literature About Voice User Interfaces and Older Adults." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 13–17. http://dx.doi.org/10.1177/1071181320641004.

Abstract:
We conducted a systematic literature review of the human factors literature at the intersection of voice user interfaces (VUIs) and older adults among Human Factors publications. Our review was limited to research published in the past 50 years (1970–2020) in either the journal Human Factors or the Proceedings of the Human Factors and Ergonomics Society. While we included a broad array of search terms related to VUIs, we found very few articles about VUIs that were specifically focused on designing for older adults or used older adults as participants in studies. Of the 26 human factors publications we did find that were related to this topic, most found that older adults took more time to operate VUIs and/or made more errors than younger adults, whereas a minority of publications found no age-related differences. We concluded that age-related differences in the use of VUIs are likely task specific.
17

Habscheid, Stephan, Tim Moritz Hector, Christine Hrncal, and David Waldecker. "Intelligente Persönliche Assistenten (IPA) mit Voice User Interfaces (VUI) als ‚Beteiligte‘ in häuslicher Alltagsinteraktion. Welchen Aufschluss geben die Protokolldaten der Assistenzsysteme?" [Intelligent personal assistants (IPA) with voice user interfaces (VUI) as 'participants' in everyday domestic interaction: What insights do the assistance systems' log data provide?] Journal für Medienlinguistik 4, no. 1 (September 4, 2021): 16–53. http://dx.doi.org/10.21248/jfml.2021.44.

Abstract:
The paper presents research results emerging from the analysis of Intelligent Personal Assistants (IPA) log data. Based on the assumption that media and data, as part of practice, are produced and used cooperatively, the paper discusses how IPA log data can be used to analyze (1) how the IPA systems operate through their connection to platforms and infrastructures, (2) how the dialog systems are designed today and (3) how users integrate them into their everyday social interaction. It also asks in which everyday practical contexts the IPA are placed on the system side and on the user side, and how privacy issues in particular are negotiated. It is argued that, in order to be able to investigate these questions, the technical-institutional and the cultural-theoretical perspective on media, which is common in German media linguistics, has to be complemented by a more fundamental, i.e. social-theoretical and interactionist perspective.
18

Arisandy, Desi, and Rudi Rudi. "Perancangan Voice User Interface (VUI) Aplikasi Presensi Karyawan Dengan Speech Recognition" [Design of a Voice User Interface (VUI) Employee Attendance Application with Speech Recognition]. Jurnal SIFO Mikroskil 21, no. 2 (February 5, 2021). http://dx.doi.org/10.55601/jsm.v21i2.740.

Abstract:
As we all know, the easy and fast spread of COVID-19 can threaten human life. The virus is easily spread via inanimate objects touched by sufferers. Because fingerprint attendance machines in the workplace can be a source of COVID-19 transmission, alternatives for recording employee attendance need to be considered, for example speech recognition technology, also known as Automatic Speech Recognition (ASR). This technology is an important component for building a Voice User Interface (VUI). ASR is a technology for translating a user's speech into text. In addition, the use of ASR can improve human-computer interaction, that is, users can interact with computers using their voice. This research is aimed at designing a prototype employee attendance application based on the VUI concept and speech recognition technology. To speed up the design and development process, the researchers used the RAD (Rapid Application Development) method. The results show that, using the RAD method, the application can be designed quickly through prototype modeling together with the user.
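As an illustration of the attendance-recording step such a prototype would need once ASR has produced a spoken name, the Python sketch below logs a timestamped record against a roster; the names, file path, and matching rule are hypothetical, not the authors' design.

```python
# Sketch of the attendance-recording step of such a prototype: an ASR result
# (the spoken employee name) is matched against a roster and logged with a
# timestamp, so no shared fingerprint scanner needs to be touched.
# The roster, file path, and matching rule are illustrative assumptions.
import csv
from datetime import datetime

ROSTER = {"alice example", "bob example"}  # hypothetical employee list

def record_attendance(recognized_name: str, log_path: str = "attendance.csv") -> bool:
    """Append a timestamped attendance row if the spoken name is on the roster."""
    name = recognized_name.strip().lower()
    if name not in ROSTER:
        return False  # ask the user to repeat; the ASR may have misheard
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([name, datetime.now().isoformat(timespec="seconds")])
    return True
```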
19

Sarwar, Noman, Bilal Arif, Muhammad Azam, Naseer Ahmad, and Fahad Sabah. "Enhancement of Input Error Correction Using Efficient Search in Voice User Interface (VUI)." SSRN Electronic Journal, 2022. http://dx.doi.org/10.2139/ssrn.4181214.

20

Zahabi, Liese. "Thinking Out Loud: An Invitation for Designers to Consider the Voice User Interface (VUI)." Dialectic 4, no. 1 (November 8, 2022). http://dx.doi.org/10.3998/dialectic.14932326.0004.105.

21

Majrashi, Khalid. "Voice Versus Keyboard and Mouse for Text Creation on Arabic User Interfaces." International Arab Journal of Information Technology, January 1, 2022. http://dx.doi.org/10.34028/iajit/19/1/15.

Abstract:
Voice User Interfaces (VUIs) are increasingly popular owing to improvements in automatic speech recognition. However, the understanding of user interaction with VUIs, particularly Arabic VUIs, remains limited. Hence, this research compared user performance, learnability, and satisfaction when using voice and keyboard-and-mouse input modalities for text creation on Arabic user interfaces. A Voice-enabled Email Interface (VEI) and a Traditional Email Interface (TEI) were developed. Forty participants attempted pre-prepared and self-generated message creation tasks using voice on the VEI, and the keyboard-and-mouse modality on the TEI. The results showed that participants were faster (by 1.76 to 2.67 minutes) in pre-prepared message creation using voice than using the keyboard and mouse. Participants were also faster (by 1.72 to 2.49 minutes) in self-generated message creation using voice than using the keyboard and mouse. Although the learning curves were more efficient with the VEI, more participants were satisfied with the TEI. With the VEI, participants reported problems such as misrecognitions and misspellings, but were satisfied with the visibility of possible executable commands and with the overall accuracy of voice recognition.
22

Ostrowski, Anastasia K., Jenny Fu, Vasiliki Zygouras, Hae Won Park, and Cynthia Breazeal. "Speed Dating with Voice User Interfaces: Understanding How Families Interact and Perceive Voice User Interfaces in a Group Setting." Frontiers in Robotics and AI 8 (January 14, 2022). http://dx.doi.org/10.3389/frobt.2021.730992.

Abstract:
As voice-user interfaces (VUIs), such as smart speakers like Amazon Alexa or social robots like Jibo, enter multi-user environments like our homes, it is critical to understand how group members perceive and interact with these devices. VUIs engage socially with users, leveraging multi-modal cues including speech, graphics, expressive sounds, and movement. The combination of these cues can affect how users perceive and interact with these devices. Through a set of three elicitation studies, we explore family interactions (N = 34 families, 92 participants, ages 4–69) with three commercially available VUIs with varying levels of social embodiment. The motivation for these three studies began when researchers noticed that families interacted differently with the three agents when familiarizing themselves with them; we therefore sought to further investigate this trend in three subsequent studies designed as a conceptual replication study. Each study included three activities to examine participants’ interactions with and perceptions of the three VUIs: an agent exploration activity, a perceived personality activity, and a user experience ranking activity. Consistently across the studies, participants interacted significantly more with an agent with a higher degree of social embodiment, i.e., a social robot such as Jibo, and perceived the agent as more trustworthy, having higher emotional engagement, and having higher companionship. There were some nuances in interaction and perception with different brands and types of smart speakers, i.e., Google Home versus Amazon Echo, or Amazon Show versus Amazon Echo Spot, between the studies. In the last study, a behavioral analysis was conducted to investigate interactions between family members and with the VUIs, revealing that participants interacted more with the social robot and interacted more with their family members around the interactions with the social robot. This paper explores these findings and elaborates upon how they can direct future VUI development for group settings, especially in familial settings.